DialoGPT is built on the GPT-2 transformer architecture and trained on a dataset scraped from Reddit comment threads. It was released by Microsoft Research's Natural Language Processing Group.
Fremont, CA: Microsoft has released the dialogue generative pre-trained transformer (DialoGPT), a pre-trained deep-learning natural language processing (NLP) model for automatic conversation response generation, developed by Microsoft Research's Natural Language Processing Group. Trained on over 147M dialogues, the model achieved state-of-the-art results on several benchmarks.
The team published the details of the system in a paper on arXiv. DialoGPT is built on the GPT-2 transformer architecture and trained on a dataset scraped from Reddit comment threads. The model was evaluated on two test datasets: the Dialog System Technology Challenges (DSTC-7) dataset and a new dataset of 6k examples, also extracted from Reddit.
For both datasets, the team used machine-translation metrics such as BLEU and METEOR to compare DialoGPT's performance with Microsoft's Personality Chat and with the winning entry of DSTC-7. DialoGPT outperformed all other models; the team also had human judges rank the outputs.
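To illustrate how a BLEU-style metric scores a generated response against a reference, here is a minimal single-reference sketch (simplified to unigrams and bigrams; full BLEU as used in the paper averages up to 4-grams over a corpus):

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: clipped n-gram precision
    plus a brevity penalty, against a single reference."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / len(cand))
    return bp * exp(sum(log(p) for p in precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))  # → 1.0
```

A perfect match scores 1.0; a candidate that matches the reference but is shorter is discounted by the brevity penalty.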
The Transformer architecture has become a popular deep-learning model for NLP tasks. These models are pre-trained with unsupervised learning on large datasets, such as the contents of Wikipedia. Through pre-training, the model learns the structure of natural language before being fine-tuned on a dataset for a particular task. The larger pre-trained models can achieve impressive results on NLP benchmarks even without fine-tuning. "Many of these models are notorious for generating bland, uninformative samples," the DialoGPT team pointed out. To address this issue, they implemented a maximum mutual information (MMI) scoring function, which re-ranks the model's outputs and penalizes bland responses. The group also examined using reinforcement learning to improve the model's results but found that the responses simply repeated the source sentence.
Pre-trained models are especially attractive for conversational systems because high-quality training datasets for dialogue tasks are scarce. A known risk, however, is that a model may learn offensive speech when trained on natural dialogue data from Internet sites like Reddit or Twitter. Microsoft's Personality Chat cloud service tries to address this issue, which its experimental chatbot Tay exposed after conversing with Twitter users, by running a series of machine-learning classifiers to filter out offensive input before auto-generating a response.
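The filter-then-generate pattern described above can be sketched in a few lines. This is not Personality Chat's actual implementation; the classifier and generation model here are hypothetical stand-ins (a keyword check and a canned echo) for trained components:

```python
def is_offensive(text, classifiers):
    """Run the input through a series of classifiers; flag if any fires."""
    return any(clf(text) for clf in classifiers)

def respond(user_input, generate, classifiers,
            fallback="Let's talk about something else."):
    """Screen the user's input before producing a reply; on a hit,
    return a safe canned fallback instead of generating."""
    if is_offensive(user_input, classifiers):
        return fallback
    return generate(user_input)

# Hypothetical stand-ins: a keyword blocklist instead of a trained
# classifier, and an echo function instead of a generation model.
blocklist_clf = lambda text: any(w in text.lower() for w in ("insult", "slur"))
echo_model = lambda text: f"You said: {text}"

print(respond("hello there", echo_model, [blocklist_clf]))    # generated reply
print(respond("an insult here", echo_model, [blocklist_clf]))  # safe fallback
```

Because the classifiers run before generation, offensive prompts never reach the response model at all.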