Experiments with GPT-2 Chatbots - Michael Clark


Details
NLP has made some big steps this year, but how big? Can we use the latest models for chatbots?
Many people use simple chatbots to gather customer data. Let's look at how good chatbots can get when built on the latest models and trained for variety at the expense of consistency.
Content warning: Some of these bots intentionally or unintentionally amplify biases present in the training data. This means they will say inappropriate or offensive things about people's personal attributes and things you hold dear. If you would be uncomfortable hearing or discussing this in a curious and technical context, you may not enjoy the talk. If it helps, I can make the slides available or discuss them beforehand - just DM Michael Clark on meetup.com or wassname on Slack.
You can try one yourself: I made a bot using data from reddit.com/r/techsupport and reddit.com/r/toastme (compliments) and put it online at https://webchat.freenode.net/##techsupport_bot. The content warning also applies here.
A technical + humorous talk looking at making chatbots using the latest NLP models and open data from Reddit.
Code: https://github.com/wassname/transfer-learning-conv-ai
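For a rough idea of what "variety at the expense of consistency" means in practice, here is a minimal sketch of generating a chat reply from a GPT-2 checkpoint with high-temperature nucleus sampling. It assumes the Hugging Face transformers library; the model name, prompt, and decoding settings are illustrative, not the exact configuration in the repo above.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Illustrative base checkpoint; the actual bots were fine-tuned on Reddit data.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

history = "User: My laptop won't turn on.\nBot:"
inputs = tokenizer(history, return_tensors="pt")

# Sampling with a higher temperature and nucleus (top-p) filtering favours
# varied, surprising replies over safe, repetitive (consistent) ones.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=1.2,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)

Lowering the temperature or top-p pushes the bot back toward consistent but blander answers; the talk looks at what happens when you lean the other way.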
The views expressed in this talk or by the chatbots are - of course - not the views of my work, PMLG, or any sponsors.
Please arrive before 6pm to ensure entry.
