  • Serverless Inference; A New Stock Prediction Architecture

    Betahaus (New Location)

    Thanks to INNOQ for sponsoring food & drinks 🙏🍻 And thanks to Jose Quesada for hosting & MC'ing this event! 🏋️‍♀️ -- Talk 1: Running inference as a serverless cloud function (45 min) Speaker: Michael Perlin, INNOQ Abstract: When deploying your model into production, you have to take care of configuring and maintaining the runtime environment, scaling, monitoring and more – all tasks that are more related to DevOps than to ML. In some contexts, you can achieve your goals in a much simpler way by taking a "serverless" approach. We'll take a look at the cloud services "AWS Lambda", "Google Cloud Functions", and "Azure Functions" and show how they enable running ML inference. Bio: Michael Perlin is a Senior Consultant at INNOQ. For more than fifteen years he has been working on topics across software development, DevOps and machine learning. -- Talk 2: The Architecture of a Stock Prediction System Speakers: Stefan Savev & Rey Farhan Abstract: In this talk, we will share our experience of building a stock prediction system based on the recently released Deutsche Börse Public Dataset (https://registry.opendata.aws/deutsche-boerse-pds/). The architecture covers the following: 1) gaining insights about stock market behavior from the available data and validating existing predictive approaches; 2) encoding ML models with a Domain Specific Language (DSL) that explicitly targets the properties of financial data; 3) encoding trading strategies based on the results of the ML model using another DSL. We combine approaches from data science, engineering, and stock market trading. We believe this is a rare open-source attempt to use ML for stock prediction in combination with a trading strategy, and one that evaluates the predictions reliably. Code: https://github.com/Originate/dbg-pds-tensorflow-demo Roadmap: https://github.com/Originate/dbg-pds-tensorflow-demo/blob/ss-add-strategy-image/ROADMAP.md
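
    The serverless pattern in Talk 1 boils down to packaging the model together with a small handler function that the cloud provider invokes on demand. Below is a minimal sketch of an AWS Lambda-style handler in Python; the model file name, request format and use of scikit-learn/joblib are illustrative assumptions, not details from the talk.

        # Minimal serverless inference handler (illustrative sketch; assumes a
        # scikit-learn regressor serialized as model.joblib is shipped in the
        # deployment package next to this file).
        import json

        import joblib

        # Load once per container, outside the handler, so warm invocations
        # skip the deserialization cost.
        MODEL = joblib.load("model.joblib")

        def handler(event, context):
            # Expect a JSON body like {"features": [5.1, 3.5, 1.4, 0.2]}
            body = json.loads(event.get("body", "{}"))
            prediction = MODEL.predict([body["features"]])[0]
            return {
                "statusCode": 200,
                "body": json.dumps({"prediction": float(prediction)}),
            }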

  • Berlin ML Group - Conv Nets for ranking and improving images

    Thanks to Idealo for hosting this meetup, including food & drinks! Talk 1: Using Deep Learning to automatically rank millions of hotel images Speakers: Hao Nguyen and Christopher Lennan Abstract: At idealo.de (a leading price comparison website in Europe) we have a dedicated service to provide hotel price comparisons (hotel.idealo.de). For each hotel we receive dozens of images and face the challenge of choosing the most "attractive" image for each offer on our offer comparison pages, as photos can be just as important for bookings as reviews. Given that we have millions of hotel offers, we end up with more than 100 million images for which we need an "attractiveness" assessment. We addressed the need to automatically assess image quality by implementing an aesthetic and technical image quality classifier based on Google's research paper "NIMA: Neural Image Assessment". NIMA consists of two Convolutional Neural Networks (CNNs) that aim to predict the aesthetic and technical quality of images, respectively. The models are trained via transfer learning, where ImageNet-pretrained CNNs are fine-tuned for each quality classification task. In this talk we will present our training approach and the insights we've gained throughout the process. We will then try to shed some light on what the trained models actually learned. You can find a first write-up at https://medium.com/idealo-tech-blog/using-deep-learning-to-automatically-rank-millions-of-hotel-images-c7e2d2e5cae2 and the corresponding code at https://github.com/idealo/image-quality-assessment Bios: Hao is currently a Master's student at the Hasso Plattner Institute (Data Engineering) and a working student at idealo.de. His principal interests are machine learning and deep learning. Christopher is a Data Scientist at idealo.de where he works on computer vision problems to improve the product search experience. In previous positions he applied machine learning methods to fMRI as well as financial data. Christopher holds a Master's degree in statistics from Humboldt Universität Berlin. -- Talk 2: Reconstructing high-resolution images from their low-resolution counterparts Speaker: Francesco Cardinale Abstract: Single-image super resolution (ISR) addresses the problem of reconstructing high-resolution images from their low-resolution (LR) counterparts. ISR finds use in various computer vision applications, from security and surveillance imaging and medical imaging to object recognition. This ill-posed problem has multiple solutions for any LR input, and deep learning approaches, specifically convolutional neural networks (CNNs), have proved able to achieve better results than classic interpolation-based methods. Here I'll briefly introduce some of the recent literature and the main parts of our project, a Keras implementation of the Residual Dense Network CNN described in "Residual Dense Network for Image Super-Resolution" (Zhang et al. 2018). Then I will show the results of the "out-of-the-box" implementation, as well as some of our very early improvements tailor-made for our specific use case. For code and details, see GitHub: https://github.com/idealo/image-super-resolution and Medium: https://medium.com/idealo-tech-blog/a-deep-learning-based-magnifying-glass-dae1f565c359 Bio: Francesco is a machine learning trainee at idealo.de, currently taking his first steps into deep learning. Francesco's master's thesis in applied mathematics is about Bayesian optimization.
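
    The transfer-learning recipe behind NIMA in Talk 1 can be summarized as: take an ImageNet-pretrained CNN, swap its classification head for a 10-way softmax over quality scores, and fine-tune. Here is a sketch of that setup in Keras; the choice of MobileNet, the dropout rate and the stand-in loss are illustrative assumptions (the idealo repository linked above is the actual implementation, and the NIMA paper trains with an earth mover's distance loss).

        # Transfer-learning sketch in the spirit of NIMA (illustrative only).
        from tensorflow.keras.applications import MobileNet
        from tensorflow.keras.layers import Dense, Dropout
        from tensorflow.keras.models import Model

        # ImageNet-pretrained backbone with global average pooling instead of
        # the original classifier head.
        base = MobileNet(weights="imagenet", include_top=False, pooling="avg")

        # NIMA predicts a distribution over the scores 1..10 rather than a single value.
        x = Dropout(0.75)(base.output)
        scores = Dense(10, activation="softmax")(x)
        model = Model(inputs=base.input, outputs=scores)

        # Freeze the backbone first and train only the new head; unfreeze later
        # for full fine-tuning.
        for layer in base.layers:
            layer.trainable = False

        # Plain categorical cross-entropy is a stand-in here to keep the sketch
        # short; the paper uses an earth mover's distance loss.
        model.compile(optimizer="adam", loss="categorical_crossentropy")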

  • Sequential Learning for Personalization; Tutorial on Meta-Learning

    Thanks to Motion.Lab for hosting us! There will be drinks available. Talk 1: Sequential Learning for Personalization Speaker: Prof. Dr. Maurits Kaptein Abstract: In this talk, Maurits will detail the main data science challenges encountered when trying to learn “the right treatment, for the right person, at the right time” based on data. Using a Multi-Armed Bandit formalization of personalization, Maurits will discuss novel policies (e.g., Bootstrap Thompson Sampling), software (https://github.com/Nth-iteration-labs/streamingbandit & https://github.com/Nth-iteration-labs/contextual), and offline policy evaluation methods to effectively address personalization problems. Bio: Prof. Dr. Maurits Kaptein is the PI of the Computational Personalization lab at JADS research; he and his team work on (statistical) methods for treatment personalization. http://www.mauritskaptein.com/about/ -- Talk 2: Learn how to learn how to learn: A tutorial on Meta-learning Speaker: Joaquin Vanschoren Abstract: When we learn new skills, we rarely - if ever - start from scratch. We start from skills learned earlier in related tasks, reuse approaches that worked well before, and focus on what is likely worth trying based on experience. With every skill learned, learning new skills becomes easier, requiring fewer examples and less trial-and-error. In short, we 'learn how to learn' across tasks. Likewise, when building machine learning models for a specific task, we often build on experience with related tasks, or use our (often implicit) understanding of the behavior of machine learning techniques to help make the right choices. Meta-learning is the science of systematically observing how different machine learning approaches perform on a wide range of learning tasks, and then learning from this experience to learn new tasks much faster than otherwise possible. Not only does this dramatically speed up and improve the design of machine learning pipelines or neural architectures, it also allows us to replace hand-engineered algorithms with novel approaches learned in a data-driven way. In this tutorial, we provide an overview of the state of the art in this fascinating and continuously evolving field. [This is a shorter version of the Automatic Machine Learning tutorial given at NIPS 2018.] Bio: Joaquin Vanschoren is an Assistant Professor in Machine Learning at the Eindhoven University of Technology. His research focuses on machine learning, meta-learning, and understanding and automating learning. He founded and leads OpenML.org, an open science platform for machine learning. He has received several demo and open data awards, has been a tutorial speaker at NIPS and ECMLPKDD, and an invited speaker at ECDA, StatComp, AutoML@ICML, CiML@NIPS, DEEM@SIGMOD, AutoML@PRICAI, MLOSS@NIPS, and many other occasions. He was general chair at LION 2016, program chair of Discovery Science 2018, demo chair at ECMLPKDD 2013, and he co-organizes the AutoML and meta-learning workshop series at NIPS and ICML. He is also co-editor of the book 'Automatic Machine Learning: Methods, Systems, Challenges'.
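
    As background for the Multi-Armed Bandit formalization in Talk 1, here is a minimal Thompson Sampling sketch for a Bernoulli bandit; the simulated conversion rates are made up, and Bootstrap Thompson Sampling (mentioned in the abstract) replaces the Beta posterior below with bootstrap resampling.

        # Thompson Sampling for a Bernoulli multi-armed bandit (illustrative sketch).
        import numpy as np

        rng = np.random.default_rng(0)
        true_rates = [0.05, 0.11, 0.08]        # unknown per-treatment success rates (simulated)
        successes = np.ones(len(true_rates))   # Beta(1, 1) priors for each arm
        failures = np.ones(len(true_rates))

        for t in range(10_000):
            # Draw one plausible rate per arm from its posterior and play the best one.
            sampled = rng.beta(successes, failures)
            arm = int(np.argmax(sampled))
            reward = rng.random() < true_rates[arm]
            successes[arm] += reward
            failures[arm] += 1 - reward

        # Most plays should concentrate on the best arm as evidence accumulates.
        print("plays per arm:", successes + failures - 2)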

  • Similarity search inspired by fruit flies; Open-source NLP with Flair

    **Note: the location is Köpenicker Str. 154 (entrance A), like the Nov ML meetup, i.e. the "bridge" Betahaus location. It will be a little more cramped than usual. Soon we'll have the new Betahaus location. Thanks for your patience. ---- Thanks to Pix4D for sponsoring this event! ---- Lightning talks: Carlos Becker of Pix4D ---- Talk 1: Drosophila hits Machine Learning - A new algorithm for similarity search derived from the olfactory processing of fruit flies Speaker: Dr. Daniela Schmidt Bio: Daniela is a biologist who did her PhD in the field of insect olfaction and chemical ecology. Drawn to statistical analysis at the time, she left academia and discovered her passion for machine learning five years ago. Since then she has worked on diverse projects such as solvency prediction modelling, document classification, information extraction and image recognition. She has a broad and general interest in diverse ML topics. Currently she is working as a data scientist in the big data research group of Adolf Würth GmbH & Co. KG. Abstract: Only recently have ML specialists discovered how brains learn to smell as a source of inspiration for developing new methods. In this talk, I will introduce you to a new and promising algorithm for the nearest neighbor problem which was inspired by the olfactory processing of the fruit fly Drosophila. It was published last November in Science by Sanjoy Dasgupta, Charles F. Stevens and Saket Navlakha (http://science.sciencemag.org/content/358/6364/...). I will give you an introduction to the approximate nearest neighbor problem and how it can be solved by a special family of hash functions: locality sensitive hashing (LSH). Then I will explain how insects process odor information and how they tag odors with a sparse random projection. This sparse random projection represents a new type of LSH function and contrasts with the dense Gaussian projection used in traditional LSH. Moreover, the fly-LSH has been shown to be faster and more accurate than traditional LSH. ---- Talk 2: Intro to Flair - Open Source NLP Framework Speaker: Alan Akbik (Zalando) Bio: TBD Abstract: TBD
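
    The fly-inspired hashing scheme from the Science paper referenced in Talk 1 can be sketched in a few lines: project the input into a much higher-dimensional space with a sparse binary random matrix, then keep only the top-k most active units as the tag (winner-take-all). The dimensions and sparsity below are illustrative choices, not the paper's exact settings.

        # Fly-inspired locality sensitive hashing (illustrative sketch).
        import numpy as np

        rng = np.random.default_rng(0)
        d, m, k = 50, 2000, 32   # input dim, expanded dim, tag size (made-up values)

        # Each expanded unit samples a small random subset of the inputs
        # (sparse, binary), in contrast to the dense Gaussian projection of
        # classical LSH.
        projection = (rng.random((m, d)) < 0.1).astype(float)

        def fly_hash(x):
            activations = projection @ x
            tag = np.zeros(m, dtype=bool)
            tag[np.argsort(activations)[-k:]] = True   # winner-take-all: keep top-k units
            return tag

        # Similar inputs share many of their top-k units, so tag overlap
        # approximates similarity.
        a = rng.random(d)
        b = a + 0.01 * rng.random(d)
        print("shared tag bits:", int(np.sum(fly_hash(a) & fly_hash(b))))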

  • Distributed Training on MXNet & Gluon; Uncertainty in Deep Learning in Medicine

    Thanks to AWS for sponsoring this event! Talk 1: Distributed Training on Apache MXNet and Gluon Speaker: Cyrus Vahid (AWS Berlin) Abstract: MXNet is a highly optimized deep learning framework. MXNet's performance for distributed training can help organizations reduce their training cost (better bottom line) and capitalize better on opportunities (top line). In this session, the development and optimization of models for large datasets will be explored. Various optimization techniques are discussed, ranging from using larger batch sizes to using only the sign of the gradient. The code for the talk is developed in Gluon. Bio: Cyrus is Principal Evangelist for MXNet in the AWS Deep Engine team. Cyrus has been working in the IT industry for the past 20 years. He has studied mathematics, computer science, AI, and cognitive science. For the past two years he has been working at Amazon with a specific focus on deep learning. Currently, his work includes implementing state-of-the-art papers in MXNet, providing education for ML developers, and public speaking at conferences. Talk 2: Leveraging Uncertainty Information from Deep Neural Networks for Disease Detection Speaker: Christian Leibig (Merantix) Abstract: In medical imaging, algorithmic solutions based on Deep Learning (DL) have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. Bio: Christian is a physicist by training (Diploma from the University of Konstanz) and holds a PhD from the International Max Planck Research School for Neural Information Processing. Since the beginning of his PhD he has developed machine-learning-based methods and solutions for renowned companies and research institutions. After working as a scientist for the Zeiss Vision Science lab, Christian joined Merantix as a Machine Intelligence Engineer.
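
    The dropout-based uncertainty measures in Talk 2 follow the "MC dropout" idea: keep dropout active at prediction time, run many stochastic forward passes, and read uncertainty off the spread of the predictions. Below is a minimal sketch with a placeholder network; the models in the talk are CNNs applied to retinal images, so everything here is illustrative.

        # Monte Carlo dropout uncertainty sketch (illustrative placeholder model).
        import numpy as np
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])

        def predict_with_uncertainty(x, passes=50):
            # training=True keeps dropout stochastic even though no weights are updated.
            preds = np.stack([model(x, training=True).numpy() for _ in range(passes)])
            return preds.mean(axis=0), preds.std(axis=0)

        x = np.random.rand(8, 64).astype("float32")
        mean, std = predict_with_uncertainty(x)
        # Cases with a large std are candidates for referral to a human expert.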

  • ConvNets for understanding documents & RL for electricity systems

    Thanks to SAP for hosting and sponsoring this Meetup! Talk 1: Chargrid: Towards Understanding 2D Documents Speaker: Dr. Christian Reisswig, SAP Deep Learning Center of Excellence Abstract: We introduce a novel type of text representation that preserves the 2D layout of a document. It encodes document pages as a two-dimensional grid of characters. Based on this novel representation, we present a generic document understanding pipeline for structured documents. This pipeline makes use of a fully convolutional encoder-decoder network that predicts a segmentation mask and bounding boxes. We demonstrate its capabilities on an information extraction task from invoices and show that it significantly outperforms approaches based on sequential text or document images. Bio: Christian is a theoretical astrophysicist who left academia and hit the deep learning highway three years ago. After spending some time in industry working on deep learning algorithms for autonomous driving and medical imaging, he is now a senior data scientist at SAP where he is building machine learning prototypes for problems in natural language processing and computer vision. Talk 2: Reinforcement learning for electricity systems Speaker: Adam Green Abstract: This talk reviews two years of work on energy_py - a reinforcement learning (RL) library for energy systems (https://github.com/ADGEfficiency/energy_py). We will look at lessons learned while designing the library and experience using it with OpenAI Gym and energy_py environments. Also covered is the use of synthetic data generation in energy_py environments. Bio: Adam is an energy engineer who started his transition into data science two years ago. He now works at Tempus Energy, using supervised and reinforcement learning to control flexible electrical load that supports variable renewable energy. Lightning talks: Michael Arthur Bucko on a new computer vision meetup
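
    energy_py environments expose the same agent/environment interface as OpenAI Gym, so experiments follow the familiar observe-act-step loop. The sketch below shows that loop with CartPole and a random policy standing in for an energy environment and a trained RL agent, using the four-value step signature of the Gym versions current at the time.

        # Generic Gym-style interaction loop (illustrative; CartPole and the
        # random policy are placeholders).
        import gym

        env = gym.make("CartPole-v1")
        observation = env.reset()
        episode_reward = 0.0

        done = False
        while not done:
            action = env.action_space.sample()   # a real agent maps observation -> action
            observation, reward, done, info = env.step(action)
            episode_reward += reward

        print("episode reward:", episode_reward)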

  • Learning to Follow Language Instructions | Machine Learning the Product

    Thanks to GoEuro for hosting and sponsoring this meetup! *Talk 1: Learning to Follow Language Instructions with Adversarial Reward Induction* Speaker: Dzmitry Bahdanau Abstract: Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, for many real-world natural language commands that involve a degree of underspecification or ambiguity, such as "tidy the room", it would be challenging or impossible to program an appropriate reward function. To overcome this, we present a method for learning to follow commands from a training set of instructions and corresponding example goal-states, rather than an explicit reward function. Importantly, the example goal-states are not seen at test time. The approach effectively separates the representation of what instructions require from how they can be executed. In a simple grid world, the method enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show that the method allows our agent to adapt to changes in the environment without requiring new training examples. Bio: Dzmitry Bahdanau is a Ph.D. student at the Montreal Institute for Learning Algorithms under the supervision of Yoshua Bengio. He invented the content-based neural attention mechanism now widely used for language understanding and in particular for machine translation. His research interest is enabling intelligent assistants that can collaborate with humans and communicate with them in natural language. In view of this long-term research goal, he is interested in language acquisition, language grounding models, reinforcement learning and imitation learning. Dzmitry studied applied mathematics at Belarusian State University in Minsk and obtained his MSc degree in computer science at Jacobs University Bremen, Germany. *Talk 2: Machine Learning the Product* Speaker: Boxun Zhang Abstract: A/B testing is widely adopted by tech companies when introducing changes or rolling out new features. Since running A/B tests can often be both costly (e.g., developing cross-platform features) and risky (e.g., a new feature ruining the user experience), data scientists always try to maximize the learnings from each test. However, the learnings from conventional A/B testing techniques can be rather limited, as those techniques only reveal the effect of the treatment on the test statistic, but can't tell much about the underlying causes (besides the treatment) of that effect. In this talk, I will present a machine learning approach we used for analyzing A/B tests, aiming to maximize learnings and to identify the underlying causes of an effect. Bio: Boxun Zhang currently leads the Data Science team at GoEuro, developing machine learning capabilities to engineer the future of travel. Previously, he worked as a data scientist at Spotify for several years, focusing on user behavior modeling, machine learning, and experimentation. Before Spotify, he obtained his PhD in Computer Science in the Netherlands, studying user behavior in large-scale peer-to-peer systems.

  • ML-based personalisation at Bonial & AI to improve efficiency of Taxi-Drivers

    Thanks to Axel Springer for hosting and sponsoring! ************************************************************************** Talk 1: Machine learning based personalisation at Bonial Speakers: Benjamin Mohn & Philipp Johannis Bios: Ben did his bachelor's in mathematics in 2015 at HU Berlin, followed by a Master's in Scientific Computing which he is currently finishing at TU Berlin. He joined Bonial in 2014 as a QA working student and now works as a Data Scientist in Bonial's core data team. Philipp finished his diploma thesis in business mathematics in 2010 at HTWK Leipzig. After working 3 years as a Data Analyst for a telecommunications company, he joined Bonial in 2013 and built up the Business Intelligence department. He has been responsible for the core data team for the past year. Abstract: Bonial will share insights, learnings and challenges from more than a year of developing a personalized experience. We will discuss which algorithms we used, the features we took into account, how it impacted our user base and which lessons we learned. ************************************************************************** Talk 2: Using AI to improve efficiency of Taxi-Drivers Speaker: Florian Grüning Bio: Florian studied economics, business administration and market research in Germany, Sweden and Istanbul. He then worked as a data scientist and strategic consultant on data science projects at an international management consultancy. Since November, he has been working with a team of three to reduce infrastructure load by orchestrating mobility services according to demand instead of relying on organically grown systems. Abstract: Powerplace.io predicts the flow of users through cities. Our software helps mobility providers improve the availability and responsiveness of their fleets for their end customers. In other words, we adjust fleet management in real time to the demand situation in cities. We believe that only a well-distributed range of mobility services is capable of ensuring a seamless mobility experience, providing value to the consumer and reducing the number of private vehicles in the long term for a better future in a smart city. We provide mobility service providers with an Operations Manager Dashboard, which controls the fleet in real time and helps to strategically evaluate the demand for mobility. Fleet drivers have access to a driving assistance app (Navigator). The Navigator presents the demand situation to the driver as real-time navigation recommendations and thus prevents over-allocation. ************************************************************************** P.S.: We hate no-shows (well, who doesn't)! Please RSVP only if you are willing to attend the meetup. :)

  • Non-trivial at scale & Transfer Learning for Fun and Profit

    Thanks to Equinor for sponsoring this meetup. Talk 1: Non-trivial at scale: the mountains we need to climb to transform the energy sector with data-driven applications Abstract: We will discuss some of the most pressing and complex data science problems we have, and how we intend to solve them at scale: - Protecting our assets, colleagues and the environment: making machines in offshore installations operate autonomously - True competitive advantage: maximising production from hydrocarbon sources you can't see and can't measure - Protecting our colleagues: building a machine that understands the context of safety incidents - Protecting the environment: detecting micro-seismic activity from CO2 injection - Protecting our assets: baselining complex machines and detecting deviations - And how do we do all of this at scale? Bio: Ahmed Khamassi is VP Data Science at Equinor. Formerly at JP Morgan, Wipro Digital, SAS and Google. Talk 2: Transfer Learning for Fun and Profit Abstract: With very little setup time, we needed to track lots of dishes at Oktoberfest 2017. This talk explains how we mined those dishes from raw videos without prior data. We'll recap how to choose the right pretrained model for human-in-the-loop labelling. Next, we'll cover how occlusion, real-time constraints and motion blur affect the entire object tracking pipeline. The talk concludes with thoughts on the future of sharing visual datasets. Bio: Alexander Hirner is the founder and CTO of moonvision.io.
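
    One way to read the human-in-the-loop labelling idea in Talk 2: let an off-the-shelf pretrained model propose labels and route only low-confidence frames to a human annotator. The sketch below uses a generic ImageNet classifier for that purpose; the ResNet50 backbone and the confidence threshold are illustrative assumptions, not details from the talk.

        # Pre-labelling with a pretrained classifier (illustrative sketch).
        import numpy as np
        from tensorflow.keras.applications.resnet50 import (ResNet50,
                                                            decode_predictions,
                                                            preprocess_input)
        from tensorflow.keras.preprocessing import image

        model = ResNet50(weights="imagenet")

        def propose_label(path, threshold=0.5):
            img = image.load_img(path, target_size=(224, 224))
            x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
            _, label, score = decode_predictions(model.predict(x), top=1)[0][0]
            # Confident predictions are pre-filled; everything else goes to a human labeller.
            return label if score >= threshold else None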
