In this era of digital technology, when people are busy with their daily lives, they look for ways to learn quickly with minimal effort. Today, people increasingly depend on machines to store and retrieve information. Soon, they will interact with machines to seek information conversationally, asking questions to establish a continuous dialogue based on the information gained through the conversation. This thesis studies existing models built to support such machines on a popular dataset, QuAC (Question Answering in Context) [1]. Furthermore, it aims to reduce the gap between the state-of-the-art F1 score of 64.1% (achieved by FlowQA [2] at the beginning of this thesis) and the human performance of 81.1%. We focused mainly on experimenting with FlowQA by (1) replacing its attention mechanism with multi-head attention and (2) integrating BERT (Bidirectional Encoder Representations from Transformers) [3]. Every experiment yielded a considerable increase in the F1 score, with the highest score of 66.4% achieved by a novel combination of FlowQA and BERT, together with a method of obtaining contextualized word embeddings using a combination of dialogue history and a moving window. Moreover, this thesis also developed a model using BERT alone that delivered an accuracy of 43.4% on the QuAC dataset.
Soumyadeep Mondal, Vishnu Raveendra Nadhan