Deep Learning on the JVM and Feeding the Second Screen


Details
It's been a while, but we are planning a next meetup. As speakers, we welcome Adam Gibson, who is a co-founder of the deep learning focused company Skymind and Daan Odijk, who works at the Information and Language Processing Systems group at the University of Amsterdam on his PhD research.
This meetup is hosted by Travelbird (http://jobs.travelbird.com) in their central Amsterdam office. Travelbird will also provide food and beverages for the evening. We thank them for sponsoring our event!
Agenda
• 18.00: Arrive, socialise, have a drink and eat
• 18.50: Short introduction by your humble organizers
• 19.00: Talk 1, by Adam Gibson, Co-founder of Skymind and Creator of Deeplearning4j
ND4J: A scientific computing framework driving deep learning on the JVM
In this talk, we will present the ND4J framework with an IScala notebook. Combined with Spark's dataframes, this makes real data science viable in Scala. ND4J is "Numpy for Java." It works with multiple architectures (or backends) that allow for runtime-neutral scientific computing as well as chip-specific optimizations -- all while writing the same code. Algorithm developers and scientific engineers can write code for Spark, Hadoop, or Flink clusters while keeping the underlying computations platform-agnostic. A modern runtime for the JVM with the capability to work with GPUs lets engineers leverage the best parts of the production ecosystem without having to pick which scientific library to use.
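To give a flavour of the backend-neutral idea the abstract describes, here is a minimal, hypothetical Java sketch (this is not the actual ND4J API): algorithm code is written once against an interface, and each backend (CPU, GPU, ...) supplies its own optimized implementation, so the same calling code runs anywhere.

```java
import java.util.Arrays;

// Hypothetical sketch of a backend-neutral array operation, in the spirit of
// ND4J's pluggable backends. Names (NdBackend, CpuBackend) are illustrative only.
interface NdBackend {
    double[] add(double[] a, double[] b);
    String name();
}

// A CPU backend; a GPU backend would implement the same interface
// with chip-specific kernels, while caller code stays unchanged.
class CpuBackend implements NdBackend {
    public double[] add(double[] a, double[] b) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i] + b[i];
        return out;
    }
    public String name() { return "cpu"; }
}

public class BackendDemo {
    public static void main(String[] args) {
        NdBackend backend = new CpuBackend(); // swap in another backend here
        double[] sum = backend.add(new double[]{1, 2, 3}, new double[]{4, 5, 6});
        System.out.println(backend.name() + " " + Arrays.toString(sum));
    }
}
```

The point of the pattern is that choosing a backend is a one-line runtime decision, not a rewrite of the scientific code.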
• 19.45: short break
• 20.00: Talk 2, by Daan Odijk, PhD Candidate in Information Retrieval @ ILPS, University of Amsterdam
Feeding the Second Screen: Machine Learning applied to TV Subtitles
While watching television, people increasingly consume additional content related to what they are watching. To support this type of functionality, I consider two information retrieval tasks in a live TV setting, for which we leverage the textual stream of subtitles associated with the broadcast. First, I present the task of linking a textual stream to Wikipedia. While link generation has received considerable attention in recent years, this task has unique demands that require an approach that (i) is high-precision oriented, (ii) performs in real time, (iii) works in a streaming setting, and (iv) does so with, typically, very limited context. I present a learning-to-rerank approach with very short processing times. Second, I present the task of finding video content related to a live television broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our method is highly efficient and can be used in a live television setting, i.e., in near real time.
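As a rough intuition for the reinforcement-learning part of the abstract, here is a small, hypothetical Java sketch (not the speaker's actual method): treat each candidate query generated from the subtitle stream as an action, and learn from observed reward which query retrieves the most relevant content. For simplicity this uses an epsilon-greedy bandit with a simulated reward signal; all names and numbers are illustrative.

```java
import java.util.Random;

// Illustrative epsilon-greedy learner over candidate subtitle queries.
// In a real system, the reward would come from retrieval effectiveness,
// not from the simulated values used here.
public class QueryBandit {
    final double[] value;   // running estimate of each query's effectiveness
    final int[] pulls;      // how often each query was tried
    final double epsilon;   // exploration rate
    final Random rng;

    QueryBandit(int numQueries, double epsilon, long seed) {
        this.value = new double[numQueries];
        this.pulls = new int[numQueries];
        this.epsilon = epsilon;
        this.rng = new Random(seed);
    }

    // Explore a random query with probability epsilon, else exploit the best one.
    int select() {
        if (rng.nextDouble() < epsilon) return rng.nextInt(value.length);
        int best = 0;
        for (int i = 1; i < value.length; i++) if (value[i] > value[best]) best = i;
        return best;
    }

    // Incremental mean update of the value estimate for the tried query.
    void update(int query, double reward) {
        pulls[query]++;
        value[query] += (reward - value[query]) / pulls[query];
    }

    public static void main(String[] args) {
        double[] simulatedReward = {0.2, 0.8, 0.5}; // hypothetical per-query quality
        QueryBandit bandit = new QueryBandit(3, 0.1, 42);
        for (int t = 0; t < 1000; t++) {
            int q = bandit.select();
            bandit.update(q, simulatedReward[q] + 0.05 * bandit.rng.nextGaussian());
        }
        int best = 0;
        for (int i = 1; i < bandit.value.length; i++) if (bandit.value[i] > bandit.value[best]) best = i;
        System.out.println("best query index: " + best);
    }
}
```

The actual work models the problem as a full Markov decision process; this sketch only illustrates the core loop of trying queries and reinforcing those that pay off.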
• 20.45: more drinks and social talks
• 21.30 or later: out!
