• Taming the Deep Learning Workflow / Distributed Deep Learning with Hopsworks

    Title: Taming the Deep Learning Workflow

    Abstract: We are entering the golden age of artificial intelligence. Model-driven, statistical AI has already been responsible for breakthroughs in applications such as computer vision, speech recognition, and machine translation, with countless more use cases on the horizon. But if AI is the linchpin of a new era of innovation, why does the infrastructure it's built upon feel trapped in the 20th century? Worse, why is advanced AI tooling locked within the walls of a handful of multi-billion-dollar tech companies, inaccessible to anyone else? In this talk, we will describe how deep learning workflows are impeded by the following circumstances: the necessary expertise is scarce, hardware requirements can be prohibitive, and current software tools are immature and limited in scope. Under such circumstances, there are promising opportunities to dramatically improve these workflows via novel algorithmic and software solutions, including resource-aware neural architecture search and fully automated GPU training-cluster orchestration. The talk draws on academic work at CMU, UC Berkeley, and UCLA, as well as our experiences at Determined AI, a startup that builds software to make deep learning engineers more productive.

    Bio: Evan R. Sparks is co-founder and CEO of Determined AI, a software company that makes machine learning engineers and data scientists dramatically more productive. While earning his Ph.D. in computer science in Berkeley's AMPLab, he contributed to the design and implementation of much of the large-scale machine learning ecosystem around Apache Spark, including MLlib and KeystoneML. Prior to Berkeley, Evan worked in quantitative finance and web intelligence. He holds an A.B. in computer science from Dartmouth College.

    Title: Distributed Deep Learning with Hopsworks

    Abstract: Distributed Deep Learning (DDL) can reduce the time it takes to train models and make your data scientists and AI researchers more productive. The inner loop in DDL involves utilizing a cluster of machines to train a model using gradient-averaging techniques, whereas the outer loop involves running parallel experiments to perform black-box optimization and find the best hyperparameters (a minimal sketch of the inner loop follows below). Algorithms for DDL are becoming commoditized, with support in all of the main machine learning frameworks. However, managing the operations of DDL is still a challenge, lacking simple abstractions for data scientists and engineers to use. In this talk we will present Hopsworks, a platform for horizontally scalable deep learning, with support for parallel and reproducible experiments, distributed training with GPUs, a feature store, and pipeline orchestration with Airflow. We will go over the lessons learned in providing platform support for DDL and present ongoing systems research on a framework for hyperparameter optimization.

    Bio: Kim Hammar is a software engineer at Logical Clocks AB, the main developers of Hops Hadoop (http://www.hops.io). He received his MSc in Distributed Systems from KTH Royal Institute of Technology in 2018. He has previously worked as an engineer at Ericsson, as a researcher at KTH, and as a data scientist at Allstate.
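    The gradient-averaging inner loop mentioned above is easy to sketch. Below is a minimal, framework-free illustration of synchronous data-parallel SGD on a toy regression problem. This is not Hopsworks' (or any real framework's) implementation, where the averaging step is typically a ring-allreduce across GPUs, but the core idea is the same.

        import numpy as np

        # Simulate synchronous data-parallel SGD with gradient averaging:
        # each "worker" holds a replica of the model and a shard of the data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))
        true_w = rng.normal(size=5)
        y = X @ true_w + 0.01 * rng.normal(size=1000)

        n_workers = 4
        shards = np.array_split(np.arange(len(X)), n_workers)
        w = np.zeros(5)      # replicated model: identical on every worker
        lr = 0.1

        for step in range(100):
            # Each worker computes the MSE gradient on its own shard.
            grads = []
            for idx in shards:
                Xs, ys = X[idx], y[idx]
                grads.append(2.0 * Xs.T @ (Xs @ w - ys) / len(idx))
            # "Allreduce": average the per-worker gradients ...
            avg_grad = np.mean(grads, axis=0)
            # ... and apply the identical update everywhere, keeping replicas in sync.
            w -= lr * avg_grad

        print("recovered weights close to truth:", np.allclose(w, true_w, atol=1e-2))

    Because every replica applies the same averaged update, the model stays consistent across workers without a central parameter server; in a real cluster each worker is a separate process or GPU and the averaging is a collective communication step.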

  • [Paid] Full Stack Deep Learning is a two-day weekend program

    University of California, Berkeley

    Full Stack Deep Learning is a two-day weekend program for those who already know the basics of deep learning and want to learn about the rest of the "stack": organizing projects, handling data, debugging training, managing experiments, deploying models at scale, etc. Here's an example of the type of content we cover. In addition to lectures from our instructors (this March in Berkeley we have Pieter Abbeel from Covariant.AI / UC Berkeley, Jeremy Howard from Fast.AI, Sergey Karayev from Turnitin, Richard Socher from Salesforce, Josh Tobin from OpenAI, and Raquel Urtasun from Uber ATG / UToronto), participants complete a hands-on project that culminates in deploying a computer vision / NLP system into production, and can take an optional exam to test their knowledge after the course.

    Attendees of our first weekend last August had a blast and are still tweeting nice things about the experience. We had tons of fun too and are really looking forward to more! This spring's program will take place March 2-3 in Berkeley (we also have one in NYC later in the month).

    Visit our website, fullstackdeeplearning.com, to learn more and complete a short application. Turnitin is offering a scholarship program this year for members of underrepresented groups; check out the details in the application if you are interested. The program is small and applications are reviewed on a rolling basis, so apply soon: https://fullstackdeeplearning.com/

  • [Online] Deep learning for non-engineers, hands-on platform.ai

    How can we put the power of deep learning in the hands of product managers, business analysts, educators, and researchers? The brainchild of Jeremy Howard (fast.ai, Kaggle, Enlitic), platform.ai aims to combine the latest research in human perception, active learning, transfer from pre-trained nets, and noise-resilient training, so that the labeler's time is used in the most productive way and the model learns from every aspect of the human interaction. Explore hands-on use cases in fashion, cars, food, and autonomous vehicles, or whatever dataset you decide to bring. Attend the event via Zoom: https://zoom.us/j/499031140

  • Deep learning for non-engineers, hands-on platform.ai

    Orange Silicon Valley

    How can we put the power of deep learning in the hands of product managers, business analysts, educators, and researchers? The brainchild of Jeremy Howard (fast.ai, Kaggle, Enlitic), platform.ai aims to combine the latest research in human perception, active learning, transfer from pre-trained nets, and noise-resilient training, so that the labeler's time is used in the most productive way and the model learns from every aspect of the human interaction. A brief tech talk will be followed by a hackathon using platform.ai's free public version. Explore hands-on use cases in fashion, cars, food, scene understanding, and illustrations, or whatever dataset you decide to bring.
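    Although platform.ai's internals are not public, the active-learning idea it builds on is easy to demonstrate. Here is a generic uncertainty-sampling loop on toy data (scikit-learn); it is only an illustration of why a model in the loop makes each labeling action count, not platform.ai's implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Toy setup: the hidden labels y stand in for "the human labeler".
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 10))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)

        # Start from a small labeled seed set containing both classes.
        labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
        unlabeled = [i for i in range(len(X)) if i not in labeled]

        for round_ in range(5):
            model = LogisticRegression().fit(X[labeled], y[labeled])
            # Query the points the current model is least sure about
            # (predicted probability closest to 0.5) ...
            probs = model.predict_proba(X[unlabeled])[:, 1]
            query = np.argsort(np.abs(probs - 0.5))[:10]
            # ... "ask the labeler" for exactly those, and fold them in.
            picked = [unlabeled[i] for i in query]
            labeled += picked
            unlabeled = [i for i in unlabeled if i not in picked]
            print(f"round {round_}: {len(labeled)} labels, "
                  f"accuracy on all data {model.score(X, y):.3f}")

    Each round, the labeler only sees the examples the model finds most ambiguous, so accuracy climbs much faster per label than with random annotation.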

  • Conversation between Jeremy Howard and Leslie Smith

    USF Data Institute

    Join noted researcher Leslie Smith (famous for discovering the super-convergence phenomenon, developing the 1cycle algorithm, and much more) and fast.ai co-founder Jeremy Howard for a conversation about recent research advances in training neural networks. Doors open at 6:00 pm, and the discussion starts at 6:30 pm.
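    For readers unfamiliar with the 1cycle algorithm mentioned above: it ramps the learning rate from a small value up to a large maximum and back down, ending well below the starting value, with momentum typically cycled inversely. A minimal sketch follows; exact ramp shapes (linear vs. cosine) and the momentum schedule vary across implementations.

        import numpy as np

        # One cycle of the learning rate over a whole training run:
        # base_lr -> max_lr (warmup), then max_lr -> tiny final_lr (decay).
        def one_cycle_lr(step, total_steps, max_lr=1.0, div=25.0,
                         final_div=1e4, pct_up=0.3):
            up_steps = int(total_steps * pct_up)
            base_lr = max_lr / div           # starting learning rate
            final_lr = max_lr / final_div    # tiny LR at the very end
            if step < up_steps:              # linear warmup phase
                t = step / up_steps
                return base_lr + t * (max_lr - base_lr)
            t = (step - up_steps) / (total_steps - up_steps)  # linear decay
            return max_lr + t * (final_lr - max_lr)

        total = 1000
        lrs = [one_cycle_lr(s, total) for s in range(total)]
        print(f"start {lrs[0]:.4f}, peak {max(lrs):.4f}, end {lrs[-1]:.6f}")

    The large mid-cycle learning rate is what drives the super-convergence effect Smith describes: training can finish in far fewer iterations than with a conventional decaying schedule.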

  • Learning to Learn by Pieter Abbeel

    Amazon Music

    Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In contrast, humans can pick up new skills far more quickly. To do so, humans might rely on a better learning algorithm or on a better prior (potentially learned from past experience), and likely on both. In this talk I will describe some recent work on meta-learning for action, where agents learn the imitation/reinforcement learning algorithms and learn the prior. This has enabled acquiring new skills from just a single demonstration or just a few trials. While designed for imitation and RL, our work is more generally applicable and has also advanced the state of the art in standard few-shot classification benchmarks such as Omniglot and mini-ImageNet.

    Agenda:
    6:30-7:30 PM: Check-in + networking
    7:30-8:30 PM: Learning to Learn, presented by Pieter Abbeel
    8:30-9:00 PM: Q&A session
    9:00-10:00 PM: Raffle giveaways + more networking

    About Pieter Abbeel: Pieter Abbeel (Professor at UC Berkeley [2008- ], Co-Founder of Embodied Intelligence [2017- ], Co-Founder of Gradescope [2014- ], Research Scientist at OpenAI [[masked]], Founder/Faculty Partner of the AI@TheHouse AI incubator, and advisor to many AI/robotics start-ups) works in machine learning and robotics; in particular, his research focuses on making robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA; early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE; and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.
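    One concrete algorithm from this line of work is MAML (Finn, Abbeel & Levine, 2017), which learns an initialization from which a few gradient steps adapt the model to a new task. The toy first-order sketch below uses linear-regression "tasks" of our own construction, so it only illustrates the inner/outer-loop structure, not any specific result from the talk.

        import numpy as np

        # First-order MAML on toy tasks: each task is a linear regression
        # whose weights are drawn near a shared (hidden) center. Meta-training
        # finds an initialization w0 from which ONE inner gradient step
        # already fits a brand-new task well.
        rng = np.random.default_rng(0)
        dim, inner_lr, meta_lr = 5, 0.3, 0.05
        center = rng.normal(size=dim)          # hidden task-distribution center

        def sample_task():
            w_task = center + 0.1 * rng.normal(size=dim)
            X = rng.normal(size=(50, dim))
            return X, X @ w_task

        def grad(w, X, y):                     # gradient of mean-squared error
            return 2.0 * X.T @ (X @ w - y) / len(y)

        w0 = np.zeros(dim)                     # meta-parameters: the initialization
        for _ in range(500):
            X, y = sample_task()
            w_adapted = w0 - inner_lr * grad(w0, X, y)   # inner loop: one step
            w0 -= meta_lr * grad(w_adapted, X, y)        # outer loop (first-order)

        # Evaluate: adapt to a brand-new task with a single gradient step.
        X, y = sample_task()
        w_new = w0 - inner_lr * grad(w0, X, y)
        print("loss before adaptation:", np.mean((X @ w0 - y) ** 2))
        print("loss after one step:   ", np.mean((X @ w_new - y) ** 2))

    The outer loop moves the initialization toward wherever post-adaptation loss is low, which is how "learning to learn" amortizes experience across tasks; the few-shot classification results mentioned above use the same structure with neural networks.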

  • Increasing data science productivity; founders of Prodigy & spaCy

    101 Howard St, University of San Francisco - Downtown Campus, San Francisco, CA 94105

    This meetup is co-hosted with the Data Institute at USF.

    Prodigy: An annotation tool designed for rapid iteration and developer productivity
    Ines Montani (Co-founder of Explosion AI)

    Most developers working with machine learning recognize that data quality and quantity are more important to the success of their project than the specifics of their statistical model. Despite this, it's common for inexperienced teams to make almost no investment in their data. Even amongst more experienced teams, developers often underestimate the extent to which annotation is a knowledge-based process that requires several iterations to perfect. As a solution, we suggest that machine learning developers perform initial annotations themselves, to help them refine the schema. To enable this workflow, we've developed Prodigy, an annotation tool with several features designed to improve productivity. In this talk I'll discuss what we've learned about annotation and show you how we've implemented these insights into Prodigy.

    spaCy: Multi-lingual natural language understanding with spaCy
    Matthew Honnibal (Co-founder of Explosion AI and author of spaCy)

    spaCy is a popular open-source Natural Language Processing library designed for practical usage. In this talk, I'll outline the new parsing model we've been developing to improve spaCy's support for more languages and text types. The parsing model takes an incremental approach, reading the words one by one and updating the parse state by pushing or popping words to a stack, creating arcs between them, inserting sentence boundaries, or splitting and merging tokens. This allows a single neural network model to determine the sentence segmentation, tokenization, and dependency parse of a whole document. This joint approach improves parse accuracy on many types of text, especially for languages such as Chinese. When the new model is complete, spaCy will be able to support a much wider variety of languages, with a better balance of efficiency, accuracy, and customisability.
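    The stack-based transition system described in the spaCy abstract is easy to sketch. The toy "arc-standard" parser below is fed gold actions by hand; spaCy's actual model predicts each action with a neural network and includes additional transitions for sentence boundaries and token splitting/merging.

        # Read words one by one, maintain a stack, and build dependency
        # arcs via discrete SHIFT / LEFT / RIGHT actions.
        def parse(words, actions):
            stack, buffer, arcs = [], list(range(len(words))), []
            for act in actions:
                if act == "SHIFT":              # move next word onto the stack
                    stack.append(buffer.pop(0))
                elif act == "LEFT":             # top word heads second-from-top
                    head, dep = stack[-1], stack.pop(-2)
                    arcs.append((words[head], words[dep]))
                elif act == "RIGHT":            # second-from-top heads top word
                    dep = stack.pop()
                    arcs.append((words[stack[-1]], words[dep]))
            return arcs

        words = ["economic", "news", "had", "little", "effect"]
        actions = ["SHIFT", "SHIFT", "LEFT",    # economic <- news
                   "SHIFT", "LEFT",             # news <- had
                   "SHIFT", "SHIFT", "LEFT",    # little <- effect
                   "RIGHT"]                     # had -> effect
        print(parse(words, actions))
        # [('news', 'economic'), ('had', 'news'), ('effect', 'little'), ('had', 'effect')]

    Because the parser only ever manipulates a stack and a buffer of upcoming words, it runs in roughly linear time over the document, which is what makes the joint segment-tokenize-parse approach practical.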

  • Fellowship.AI Third Anniversary / Deep learning for non-engineers

    As Fellowship.AI celebrates its third anniversary, some of our fellows would like to share their insights about making deep learning more accessible to non-engineers. Tools like PyTorch, Keras, and fastText are making it easier than ever to build and apply state-of-the-art models in vision and language.

  • Introducing Studio.ML: Simplifying ML Model Development

    Metis San Francisco

    Studio.ML is an early-stage ML model-management framework written in Python, developed to minimize the overhead involved in scheduling, running, monitoring, and managing the artifacts of your machine learning experiments. Most of its features are compatible with any Python ML framework, including Keras, TensorFlow, PyTorch, and scikit-learn (additional features are available for Keras and TensorFlow). The code is still in the early phases of development, but we want to share our work and encourage developers to try it, report problems, ask questions, provide feedback, and contribute.
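    As a purely hypothetical illustration (none of these names are Studio.ML's actual API; see the project docs for the real interface), the core service such a framework provides is making every run record its configuration, metrics, and artifacts in a durable, comparable place:

        import json, time, pathlib

        # Hypothetical minimal experiment tracker: every run gets an ID and
        # a directory holding its config, metric history, and saved artifacts,
        # so runs can later be compared and reproduced.
        class Run:
            def __init__(self, config, root="experiments"):
                self.id = f"run-{int(time.time())}"
                self.dir = pathlib.Path(root) / self.id
                self.dir.mkdir(parents=True, exist_ok=True)
                (self.dir / "config.json").write_text(json.dumps(config))
                self.metrics = []

            def log_metric(self, name, step, value):
                self.metrics.append({"name": name, "step": step, "value": value})
                (self.dir / "metrics.json").write_text(json.dumps(self.metrics))

            def save_artifact(self, name, data: bytes):
                (self.dir / name).write_bytes(data)   # e.g. model weights

        run = Run({"lr": 0.01, "batch_size": 32})
        for step in range(3):
            run.log_metric("loss", step, 1.0 / (step + 1))
        run.save_artifact("model.bin", b"\x00fake-weights")
        print("logged under", run.dir)

    Frameworks like Studio.ML automate this bookkeeping (plus remote scheduling and monitoring) so that it does not have to be reinvented inside every training script.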