
AI and NLP: Data Talks

Hosted By
Samuel D. and Mayumi B.


This event is co-hosted by Tokyo Machine Learning, Basis Technology, and amplified ai. Special thanks to for hosting us in their lab. Detailed directions to the lab are below.

18:30 Doors Open
19:00 Keynote: The Evolution and Adoption of Machine Learning
19:10 Talk 1 (by video conference 20 minutes + 10 minutes Q&A)
19:40 Talk 2 (20 minutes + 10 minutes Q&A)
20:10 Pizza and Beer Networking

--- KEYNOTE ---
The Evolution and Adoption of Machine Learning

A look at key milestones in the development of NLP and machine learning, and how that technology has been adopted by industry. Specifically, we will use legal tech as a case study to understand new-technology adoption and how fundamental advances in research can lead to new market opportunities.

--- TALK 1 ---
Deep Learning for Dense Document Representations

The past few years have seen a significant improvement in natural language understanding, thanks to the application of the distributional hypothesis to generate dense word embeddings, combined with the 'unreasonable effectiveness of recurrent neural networks'. Despite this massive leap forward at the word and sentence level, attempts at learning dense document embeddings have seen only modest improvements over older approaches. In this talk, I will discuss the evolution of neural document embedding models in the context of their pre-neural predecessors, highlighting where the cutting edge currently stands. We'll consider the intuition behind different approaches and architectures, as well as the technical difficulties and opportunities that document-level models present.
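To make the starting point concrete, a common baseline for a dense document representation is simply the average of the document's word embeddings. The sketch below illustrates this with invented 4-dimensional vectors; real systems would load pretrained embeddings, and the talk's neural models go well beyond this baseline.

```python
import numpy as np

# Hypothetical 4-dimensional word embeddings (values invented for illustration).
EMBEDDINGS = {
    "neural":   np.array([0.9, 0.1, 0.3, 0.0]),
    "networks": np.array([0.8, 0.2, 0.4, 0.1]),
    "learn":    np.array([0.2, 0.7, 0.1, 0.5]),
    "features": np.array([0.3, 0.6, 0.2, 0.4]),
}

def document_embedding(tokens, dim=4):
    """Average the embeddings of known tokens into one dense document vector."""
    vectors = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not vectors:
        return np.zeros(dim)  # no known words: fall back to the zero vector
    return np.mean(vectors, axis=0)

doc = ["neural", "networks", "learn", "features"]
vec = document_embedding(doc)
print(vec.shape)  # one dense vector for the whole document
```

This averaging baseline discards word order entirely, which is one reason document-level models remain harder than word- or sentence-level ones.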

--- TALK 2 ---
Deep Learning for Named Entity Recognition

Named Entity Recognition is one of the key tasks in commercial Natural Language Processing applications. Its objective is to identify named entity mentions, such as people, organizations, and locations, in running text. State-of-the-art approaches are purely data-driven, leveraging deep neural networks. In this talk, I will present a few of those works, followed by a description of our own deep NER implementation. We'll look at accuracy, speed, and memory footprint, while comparing some of the best known deep architectures with a basic statistical approach.
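As a point of reference for the task itself, the sketch below shows a toy dictionary (gazetteer) baseline that emits BIO tags by greedy longest-match; this is the kind of simple pre-neural approach that deep NER systems are compared against. The entity lists are invented for illustration.

```python
# Toy gazetteer mapping token spans to entity types (invented examples).
GAZETTEER = {
    ("New", "York"): "LOC",
    ("Basis", "Technology"): "ORG",
    ("Samuel",): "PER",
}

def tag_entities(tokens):
    """Return BIO tags by greedy longest-match against the gazetteer."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest span first, then shorter ones.
        for length in (2, 1):
            span = tuple(tokens[i:i + length])
            if span in GAZETTEER:
                label = GAZETTEER[span]
                tags[i] = "B-" + label          # beginning of the entity
                for j in range(i + 1, i + length):
                    tags[j] = "I-" + label      # inside the entity
                i += length
                matched = True
                break
        if not matched:
            i += 1
    return tags

tokens = ["Samuel", "works", "at", "Basis", "Technology", "in", "New", "York"]
print(tag_entities(tokens))
# ['B-PER', 'O', 'O', 'B-ORG', 'I-ORG', 'O', 'B-LOC', 'I-LOC']
```

Data-driven neural taggers replace the hand-built dictionary with learned features, which is where the accuracy, speed, and memory trade-offs discussed in the talk come in.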

■Access to the community lab
Tokyo-to, Shinagawa-ku Osaki, 4-1-2 Win Gotanda Building 3F
7mins from JR Gotanda Station, West Exit
1min from Tokyu Ikegami Line Osaki Hirokoji Station

東京都品川区大崎4-1-2  ウィン第2五反田ビル 3F
JR五反田駅: 西口改札より徒歩7分
東急池上線 大崎広小路駅: 改札口より徒歩1分
Goof Lab