Next Meetup

Easy life in NLP: only 100 examples to train a classifier with Transfer Learning
Note: you will also need to register for the event at Skills Matter.

NLP has become much easier this year: you can now train a classifier with just 100 annotated examples. All you need is a general pre-trained language model for your language, downloaded from the Internet. As the announcement puts it: "With only 100 examples they are able to reach the same error rate that the model reaches when trained from scratch on 20k examples!!"

This time the group discussion will be shorter, for a very good reason: at 20:15 we will have the opportunity to *ask our remaining questions to the author of the paper,* Sebastian Ruder, who has kindly agreed to be available.

**The least-effort way** to get familiar with ULMFiT is to watch the two-hour [Lesson 10](, jumping to the right timestamps using these [notes](. If you have more time, Lesson 4 covers the background on RNNs and language models: [video]( [notes](

For more formal learning, ULMFiT has two parts:

- Transfer learning
- The underlying language model that is transferred

## Transfer learning

- [Blog]( NLP's ImageNet moment has arrived, by Sebastian Ruder
- [Blog]( Introducing state-of-the-art text classification with universal language models, by Jeremy Howard and Sebastian Ruder
- [Paper]( Universal Language Model Fine-tuning for Text Classification, by Jeremy Howard and Sebastian Ruder
- [Notebook](
- [Video]( [Lesson 10](

## Language model

- [Blog]( from 2017, by Jeremy Howard
- [Paper]( the AWD-LSTM paper describing the regularizations
- [Paper]( introducing the 103-million-word English Wikipedia dataset
- [Code]( the English Wikipedia model used in the paper, explained in Lesson 10
- [Code]( models in many other languages

Finally, here are the slides that Lev will use in the first 5 minutes to seed the discussion.

A note about the Journal Club format:

1. There is no speaker at Journal Club.
2. There is NO speaker at Journal Club.
3. We split into small groups of six people and discuss the papers. For the first hour the groups are random, to make sure everyone is on the same page. Afterwards we split into blog/paper/code groups to go deeper.
4. Volunteers sometimes seed the discussion by guiding the group through the paper highlights for 5 minutes. You are very welcome to volunteer in the comments.
5. Reading the materials in advance is really helpful. If you don't have time, please come anyway. We need this group to learn together.
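If you want a concrete feel for two of the fine-tuning tricks discussed in the paper before the meetup, here is a small sketch of the slanted triangular learning-rate schedule and discriminative (per-layer) learning rates. The hyperparameter names (`lr_max`, `cut_frac`, `ratio`, and the 2.6 division factor) follow the paper's notation; the function names themselves are mine, not from any library.

```python
import math

def slanted_triangular_lr(t, T, lr_max=0.01, cut_frac=0.1, ratio=32):
    """Slanted triangular learning rate (STLR) from the ULMFiT paper.

    t: current iteration (0-based), T: total iterations.
    The rate increases linearly for the first cut_frac of training,
    then decays linearly back down; lr_max / ratio is the minimum rate.
    """
    cut = math.floor(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return lr_max * (1 + p * (ratio - 1)) / ratio

def discriminative_lrs(base_lr, n_layers, factor=2.6):
    """Discriminative fine-tuning: the last layer trains at base_lr,
    and each earlier layer's rate is the next layer's divided by
    `factor` (2.6 in the paper). Returned first-to-last layer."""
    return [base_lr / factor ** (n_layers - 1 - i) for i in range(n_layers)]
```

For example, with `T=100` iterations the schedule starts at `lr_max / 32`, peaks at `lr_max` after the first 10% of iterations, and decays back down; `discriminative_lrs(0.01, 3)` gives the top layer 0.01 and each lower layer 2.6x less.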

Skills Matter at CodeNode

10 South Place, London · EC2M 2RB



What we're about

Keeping up with the latest research is important for the data scientist, so let's work on this together. Every week or two, we will choose one or more articles to read and then meet up to discuss them. This group is open to data scientists of any experience level and speciality, but I expect the core group will be relatively small. I hope you are as excited about data science, machine learning, and statistics research as I am, and I look forward to meeting you!

This is a variant of the Silicon Valley Data Science Journal Club, and we will be closely following the papers that they read.

