Abstract:
Question answering is a key area in Natural Language Processing and Information Retrieval, in which users pose queries in natural language and receive suitable answers in return. In the travel domain, most questions are "content questions", where the expected answer is not a "yes" or "no" but factual information. Answering a free-form factual question over a large collection of text is challenging. Previous research has shown that the accuracy of question answering systems can be improved by adding a classification phase based on the expected answer type. This paper presents a multi-level, multi-class question classification system for the travel domain. Existing work in this domain relies on language-specific features and traditional machine learning models. In contrast, this research employs state-of-the-art transformer-based deep contextualized word embedding models for question classification. The proposed method improves the coarse-class Micro F1-score by 5.43% over the baseline, and the fine-grained Micro F1-score by 3.8%. We also present an empirical analysis of the effectiveness of different transformer-based deep contextualized word embedding models for multi-level, multi-class classification.
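To make the high-level approach concrete, the sketch below shows how a pretrained transformer can be used as a multi-class question classifier via a sequence-classification head. The model name (bert-base-uncased), the label set, and the example question are assumptions for illustration only and are not the paper's exact configuration or results.

```python
# Minimal sketch of transformer-based question classification, assuming the
# Hugging Face Transformers library and a BERT-style pretrained model.
# The label set and example question below are hypothetical, not from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

COARSE_LABELS = ["LOCATION", "NUMERIC", "ENTITY", "DESCRIPTION", "HUMAN", "TEMPORAL"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(COARSE_LABELS)
)

question = "What is the cheapest way to travel from Colombo to Kandy?"
inputs = tokenizer(question, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

predicted = COARSE_LABELS[logits.argmax(dim=-1).item()]
print(predicted)  # classification head is untrained here; fine-tuning on labelled questions is required
```

In practice the classification head would be fine-tuned on a labelled question dataset (e.g. with two heads or two passes for coarse and fine-grained labels) before predictions are meaningful.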
Citation:
C. Weerakoon and S. Ranathunga, "Question Classification for the Travel Domain using Deep Contextualized Word Embedding Models," 2021 Moratuwa Engineering Research Conference (MERCon), 2021, pp. 573-578, doi: 10.1109/MERCon52712.2021.9525789.