• A.I. Day - Research : Prototyping : Production

    Google Asia Pacific

    Please sign up for free on Eventbrite using this link: http://bit.ly/ai-day-2019

    This event is aimed at an intermediate to advanced level, and is for people who have already been working on Deep Learning. For the first time in Singapore, we will have 4 Google Developer Experts in Machine Learning and 3 speakers from Google Brain all on the same platform.

    Speakers & Talks will include:

    "TensorFlow Lite: On-Device ML and the Model Optimization Toolkit" - Jason Zaman - Light
    Machine Learning at the edge is important for everything from user privacy to battery consumption. This talk will give an overview of the different strategies for optimizing models for on-device inference, such as pruning and integer quantization with the Model Optimization Toolkit. There will then be a demo of these techniques used together to run a model on an EdgeTPU. Jason is the community lead for TensorFlow SIG-Build and an ML GDE. He works as a machine learning engineer at Light, doing computational photography for mobile cameras. Along with speaking regularly, he is also active in Open Source as a Gentoo Linux developer and a maintainer of the SELinux Project.

    "Which image should we show? Neural Linear Bandit for Image Selection" - Sirinart Tangruamsub - Agoda
    Sirinart is a data scientist at Agoda. Before joining Agoda, she was a postdoctoral researcher at the University of Goettingen. She has extensive experience in the fields of computer vision and natural language processing at various startups and corporates. Her current areas of interest include personalization and recommendation systems.

    "XLNet - The Latest in Language Models" - Martin Andrews - Red Dragon AI
    Martin is a Google Developer Expert in Machine Learning based in Singapore - and was doing Neural Networks before the last AI winter... He is an active contributor to the Singapore data science community, and is the co-host of the Singapore TensorFlow and Deep Learning MeetUp (now with 3,700+ members).

    "Deep Learning on Graphs for Conversational AI" - Sam Witteveen - Red Dragon AI
    Sam is a Google Developer Expert for Machine Learning and a co-founder of Red Dragon AI, a deep tech company based in Singapore. He has extensive experience in startups and mobile applications, and is helping developers and companies create smarter applications with machine learning. Sam is especially passionate about Deep Learning and AI in the fields of Natural Language and Conversational Agents, and regularly shares his knowledge at events and trainings across the world, as well as being the co-organiser of the Singapore TensorFlow and Deep Learning group.

    "Swift for TensorFlow" - Paige Bailey - Google Brain

    "TFX and TF Ranking" - Robert Crowe - Google Brain

    We are also adding other talks over the next few days.

    The adoption of Artificial Intelligence is accelerating all over the world, in all types of industries and businesses. With pioneers such as Andrew Ng describing AI as the "new electricity", we are seeing a great deal of discussion around what AI can do and how this "new electricity" really works. In this one-day conference, a variety of experts will show some of the latest and greatest technology being used to develop and create real-world AI products. It's one thing to see people talk about the emergence of these technologies, and another to see industry experts break down how some of these products are being made, with tips and tricks of the trade. All the speakers are first-hand practitioners working in the field, rather than marketing and sales people.

    Speakers' topics will include:
    - Latest info on TensorFlow 2.0
    - Building recommendation systems that serve millions of people daily
    - Prototyping a Deep Learning product
    - Researching with a view to developing AI Products
    - Building ML Pipelines with TFX

    The event is supported by Red Dragon AI & Google
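    The integer quantization mentioned in the TensorFlow Lite talk can be sketched in plain Python. This is an illustrative affine (scale and zero-point) quantization of a float weight array - the idea the Model Optimization Toolkit builds on - not the actual toolkit API:

```python
# Illustrative sketch of affine integer quantization: map float weights to
# unsigned 8-bit integers with a scale and zero-point, so on-device inference
# can run in integer arithmetic. (Not the real Model Optimization Toolkit API.)

def quantize(values, num_bits=8):
    """Affine-quantize a list of floats to unsigned integers."""
    qmax = 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax or 1.0       # avoid div-by-zero for constant input
    zero_point = round(-lo / scale)       # integer that represents float 0.0
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.5]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
# Each recovered value differs from the original by at most about one scale step.
```

    The round-trip error is bounded by the quantization step, which is the trade-off the talk's pruning and quantization strategies are managing against model size and battery use.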

  • Advanced Computer Vision with Deep Learning (20 - 21 June 2019)

    If you are interested in attending, please sign up here: http://bit.ly/acv-j-2019

    Overview

    Together with Red Dragon AI, SGInnovate is pleased to present the second module of the Deep Learning Developer Series. In this module, we go beyond the basic skills taught in Module 1, such as Convolutional Neural Networks (CNNs). This will expand your ability to build modern image networks using a variety of architectures, and for applications beyond simple classification.

    About the Deep Learning Developer Series:

    The Deep Learning Developer Series is a hands-on and cutting-edge series targeted at developers and data scientists who are looking to build Artificial Intelligence (AI) applications for real-world usage. It is an expanded curriculum that breaks away from the regular 8-week full-time course structure, and allows for modular customisation according to your own pace and preference. In every module, you will have the opportunity to build your own Deep Learning models as part of your main project. You will also be challenged to use your new skills in an application that relates to your field of work or interest.

    About this module:

    Building on the learnings from the first module, we will go beyond just using TensorFlow and Keras. PyTorch and TorchVision, which are often used for research in computer vision and cutting-edge architectures, will be introduced. To understand the current state-of-the-art technologies, we will review the history of ImageNet-winning models, focusing on Inception and Residual models. We will also look at some of the newer models, such as NASNet and AmoebaNet, and explore how the field has gone beyond hand-engineered models. One key skill you will acquire is how to use these modern architectures as feature extractors, and apply them to create applications like image search and similarity comparisons.

    You will also discover how to do tasks such as object detection, and learn how models (like YOLO) are able to detect multiple objects in an image. You will learn about image segmentation and classification at the pixel level, using architectures like U-Nets and DenseNets, and how they are used in a variety of image segmentation tasks, from perception for self-driving cars to medical image analysis. As with the other Deep Learning Developer modules, you will have the opportunity to build multiple models yourself.

    In this course, participants will learn:
    * Advanced classification and object detection
    * Skills to create applications like image search and similarity comparisons
    * Image segmentation and classification at the pixel level with architectures like U-Nets and DenseNets, and how they are used in a variety of image segmentation tasks

    Recommended Prerequisites:
    * Ideally, you will have attended Module 1: Deep Learning Jump-start Workshop
    * Alternatively, if you believe that you have covered the material in the Jump-start course, please let us know - we have a process in place to enable suitable students to join the Advanced courses directly
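    The "feature extractor for image search" skill described above boils down to ranking images by the similarity of their feature vectors. A minimal sketch, assuming the vectors come from a pretrained CNN's penultimate layer (here they are made-up 4-dimensional features so the ranking logic is self-contained):

```python
# Image search sketch: rank an index of images by cosine similarity of their
# feature vectors to a query vector. In practice the vectors would be produced
# by a pretrained CNN (e.g. Inception/ResNet with the classifier head removed).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, index):
    """Return image names ranked from most to least similar to the query."""
    return sorted(index, key=lambda name: cosine_similarity(query_vec, index[name]),
                  reverse=True)

index = {
    "cat_1.jpg": [0.9, 0.1, 0.0, 0.2],   # made-up feature vectors
    "cat_2.jpg": [0.8, 0.2, 0.1, 0.1],
    "car_1.jpg": [0.0, 0.9, 0.8, 0.0],
}
ranking = search([1.0, 0.0, 0.1, 0.2], index)   # a "cat-like" query vector
```

    The same ranking loop works unchanged whichever architecture produces the features, which is why swapping in a stronger backbone is usually the easy part of an image-search system.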

  • TF&DL May : AutoML - The what, how and why

    Google Asia Pacific

    AutoML is a machine learning technique that has grown in popularity over the last two years. This month we take a look at the topic from a few different angles.

    Planned Talks:

    "AutoML on GCP" - Sam Witteveen
    Recently, at Google's Cloud Next event, Google announced a new suite of AutoML products which allow for making very high-quality ML models for Vision, Text and Tabular data. Sam will show you where these can be useful, and how to get started using the AutoML service.

    "AutoML with AutoKeras" - Timothy Liu
    AutoML is gaining popularity with the launch of various cloud services, all claiming to enable state-of-the-art ML without the need to hand-craft models. In this talk, Timothy will share the concepts behind AutoML, and how to use a library like AutoKeras to get started with AutoML without the use of proprietary services.

    "Single-Path Neural Architecture Search (and the Lottery Ticket Hypothesis)" - Martin Andrews
    Martin will dive into two interesting recent papers: one with a more efficient way to do AutoML, and the other with insights into Neural Network training, pruning and initialisation (in that order).

    "OpenAI DOTA Five Finals" - Olzhas Akpambetov
    Olzhas will explain the main ideas of Proximal Policy Optimization (PPO), and will include an overview of the OpenAI Five Finals event he attended on April 13 in San Francisco, where OpenAI's system, developed with a scaled version of PPO, played against the 2018 DOTA 2 world champions "OG".

    ---

    Talks will start at 7:00pm and end at around 9:00pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/
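    At its core, the AutoML idea the talks above cover is a search loop over model configurations. The sketch below uses plain random search over a toy configuration space with a stand-in scoring function - it is not the GCP AutoML or AutoKeras API, just the shape of the loop those systems automate (their real search strategies are far more sophisticated):

```python
# Toy AutoML loop: randomly sample configurations from a search space, score
# each candidate, keep the best. Real systems replace random sampling with
# smarter search and replace score_fn with actual model training/validation.
import random

def automl_search(space, score_fn, trials=20, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}  # sample a candidate
        s = score_fn(cfg)                                   # "train & evaluate"
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

space = {"layers": [1, 2, 3], "units": [16, 32, 64], "lr": [1e-2, 1e-3]}
# Toy objective: pretend deeper + wider is better, and lr=1e-3 helps.
score = lambda c: c["layers"] + c["units"] / 64 + (1 if c["lr"] == 1e-3 else 0)
best, best_score = automl_search(space, score)
```

    In a real setting each call to score_fn is a full training run, which is why the efficiency improvements in the Single-Path NAS paper matter so much.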

  • TF&DL April : Images and CNNs with TensorFlow2.0

    Google Asia Pacific

    After a few months of covering the "Bleeding Edge" (whether the news from NeurIPS or the new features in TensorFlow), we are returning to 'regular Deep Learning' this month. In particular, we'll have speakers covering the use of CNNs for images, from the ground up.

    Planned Talks:

    "First Steps in Deep Learning with TensorFlow 2.0: CNNs" - Martin Andrews
    This talk aims to cover the "something for beginners" part of our tagline - motivating the building blocks of CNNs, how they are trained, and how the resulting model can be applied to different datasets. Code examples will be provided in Colab notebooks.

    "Pre-Trained CNNs and tf.keras.applications" - Timothy Liu
    In this talk, Timothy will discuss (and demonstrate) the advantages of using pre-trained CNNs for vision tasks. An undergraduate at SUTD, Timothy has interned at Nvidia and is actively involved in setting up SUTD's GPU resources.

    "Tips for Image Classification in TensorFlow 2.0" - Sam Witteveen
    While knowing how to train a CNN is great, it doesn't guarantee that you will train a good model that generalizes well in a reasonably quick amount of time. In this talk, Sam will show some slightly more advanced techniques that beginners can use to improve their models, including how to use TensorFlow's new Datasets API to get data to train on, and how to do Image Augmentation in training, Test Time Augmentation, and Progressive Resizing of images.

    ---

    Talks will start at 7:00pm and end at around 9:00pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/
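    Of the techniques in Sam's talk, Test Time Augmentation (TTA) is the simplest to sketch: run the model on several augmented copies of an input and average the predicted probabilities. The "model" and "augmentations" below are toy stand-ins (a real pipeline would flip or crop image tensors and call a trained CNN):

```python
# Test Time Augmentation sketch: average class probabilities over several
# augmented views of the same input. Stand-in model and augmentations only.

def tta_predict(model, image, augmentations):
    """Average the model's probability vectors over all augmented copies."""
    preds = [model(aug(image)) for aug in augmentations]
    n = len(preds)
    return [sum(p[i] for p in preds) / n for i in range(len(preds[0]))]

identity = lambda img: img
flip = lambda img: list(reversed(img))   # toy "horizontal flip" on a 1-D image

def model(img):
    # Toy "classifier": probability of class 0 tracks the first pixel value.
    p = min(max(img[0], 0.0), 1.0)
    return [p, 1.0 - p]

image = [0.8, 0.2]
averaged = tta_predict(model, image, [identity, flip])
```

    Averaging over views smooths out predictions that depend on accidental details of one particular crop or orientation, which is why TTA often buys a small accuracy gain at inference time for the cost of a few extra forward passes.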

  • TensorFlow and Deep Learning : The latest from #TFDevSummit

    Since there's so much Open Source content in Singapore this week (with FOSSASIA being on), we thought that it would be a great time to update everyone on the Open Source Machine Learning project with the most stars on GitHub! This month's MeetUp is all about recapping the announcements and new products from the TensorFlow Dev Summit, which was on 6-7 March. So, if you're using TensorFlow (or hope to soon), this will be a great event to understand what's "Coming Soon" (which is particularly timely, since TF 2.0 is imminent).

    ---

    Planned Talks (in brief):

    "TensorFlow 2.0 Function Tracing and Autograph" - Aurélien Géron
    TensorFlow's upcoming 2.0 version will make it much easier to build efficient & portable computation graphs, thanks to automatic function tracing and autograph. In this talk, you will learn how to use these great features and understand how they work. Aurélien Géron is the author of the best-selling book "Hands-On Machine Learning with Scikit-Learn and TensorFlow" (O'Reilly, 2017). A former Googler, he led YouTube's video classification team from 2013 to 2016. He also founded Wifirst, a leading wireless ISP in France. Before this, he worked as a software engineer in a variety of domains: healthcare, media & telecommunications, finance, defense, manufacturing and more. He recently moved to Singapore, where he founded kiwisoft.io, a Machine Learning consulting & training firm.

    "TFLite, Microcontrollers and EdgeTPUs" - Martin Andrews
    Part of TensorFlow's appeal in commercial settings (compared to other frameworks) is its extensive ecosystem, including the ability to scale from huge clusters of TPUs down to tiny devices. TensorFlow Lite is a key part of this, and Martin will give an overview of what it is, who is using it, and some new devices that only became available at the TF Summit. Martin has a PhD in Machine Learning, and has been an Open Source developer since 1999. After a career in finance (based in London and New York), he decided to follow his original passion, and now works on Machine Learning / Artificial Intelligence full-time: a combination of consulting, research and running courses in Deep Learning in Singapore.

    "TensorFlow 2.0 API Changes and an Intro to the Future with Swift" - Sam Witteveen
    TensorFlow 2.0 has introduced a number of key changes to its APIs, including enabling Eager execution by default, and tf.keras as the standard for layers, losses and optimizers. This talk will cover these features, and how to upgrade your code from 1.x to be ready for use in 2.0. Lastly, we will take a quick look at the future of TensorFlow with Swift, and how you can try it out now. Sam is a Google Developer Expert for Machine Learning. He has extensive experience in AI and mobile startups, and is helping developers and companies across SEA create smarter applications with machine learning. Red Dragon AI is a leading AI solutions and research company based in Singapore.

    "TensorFlow SIGs, RFCs and Community" - Jason Zaman
    TensorFlow is where it is today because of many amazing open source contributions, through GitHub, Special Interest Groups (SIGs) and Requests for Comment (RFCs). In addition, we have communities of contributors focused on documentation, and on testing the forthcoming TensorFlow 2.0 release. This talk highlights the work of those groups, and explains how you can get involved. Jason is community lead for TensorFlow's SIG-Build and an ML GDE. He is also a Gentoo Linux developer, a maintainer on the SELinux Project, and an active member of the Open Source community, speaking regularly on TensorFlow, SELinux and Android.

    ---

    Talks will start at 7pm (A/V equipment permitting) and end at around 9:00pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/

  • TensorFlow and Deep Learning : NeurIPS and the Cutting Edge

    The focus of this MeetUp's main talks will be the interesting developments seen at the NeurIPS conference (formerly known as NIPS) last December. As always, we'll try to make this accessible + interesting.

    SHOUT OUT: If you were at NeurIPS with a poster/presentation, please email us at least 24 hours before the event to suggest yourself for a quick talk about your work! It would be great if some of the researchers at Singaporean institutions could show their work to the wider community here...

    ---

    Planned Talks:

    "Advances in Unsupervised and Self-Supervised Learning" - Sam Witteveen
    Sam's talk will cover some of the presentations and papers related to the current state of the art in unsupervised and self-supervised learning across multiple domains.

    "SLAYER for Deep Spiking Neural Networks" - Sumit Bam Shrestha, NUS
    Spiking Neural Networks (SNNs) are an exciting research avenue for low-power, spike-event-based computation. However, the spike generation function is non-differentiable, and therefore not directly compatible with the standard error backpropagation algorithm. At NeurIPS, Sumit's team introduced a new mechanism for learning synaptic weights, which uses a temporal credit assignment policy for backpropagating error to preceding layers.

    "NeurIPS Lightning Talks" - Martin Andrews
    Rather than give one big talk, Martin will talk briefly about several ideas from NeurIPS (and after):
    * Neural ODEs (awarded NeurIPS Best Paper)
    * Lightly supervised image correspondences
    * Learning ImageNet layer-by-layer

    ---

    Talks will start at 7pm (A/V equipment permitting) and end at around 8:45pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/

  • December Meetup - Dealing with tabular data

    Google Asia Pacific

    After a few months of covering new features in TensorFlow and cutting-edge papers, we are returning to some more basics of Deep Learning this month. We will look at dealing with structured tabular data, and some of the techniques for doing that in TensorFlow.

    Planned Talks:

    "Using Feature Columns with tf.keras" - Sam Witteveen
    Using tabular data is often tricky and painful in TensorFlow and Keras. In TensorFlow 1.12, Feature Columns were introduced for handling data from data frames such as Pandas. Sam will show how to use them to create new features, with tf.data for feeding into Keras layers.

    "Embed All The Things" - Martin Andrews
    Embeddings are extremely powerful, and their power goes beyond just using them for text. In this talk, Martin will show embeddings being used for a variety of objects, including embeddings for graph models.

    "MLBlocks Demo" - Rishabh Anand and Sarvasv Kulpati
    MLBlocks - which won Ideasinc 2018 and $10,000 in seed money from NTU - is a company that makes machine learning more accessible. MLBlocks is a tool that enables people to create and deploy image models in the cloud without touching a line of code.

    ---

    Talks will start at 7:00pm and end at around 9:00pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/
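    What Feature Columns do for tabular data can be sketched in plain Python: turn a row of mixed numeric and categorical fields into a flat numeric vector, passing numeric fields through and one-hot encoding categorical ones against a vocabulary. (The real tf.feature_column API builds this transformation into the model graph; the field names below are hypothetical examples.)

```python
# Feature-column-style encoding sketch: numeric fields pass through,
# categorical fields are one-hot encoded against a fixed vocabulary.

def make_encoder(numeric_keys, categorical_vocab):
    """Build a function that maps a row (dict) to a flat list of floats."""
    def encode(row):
        vec = [float(row[k]) for k in numeric_keys]
        for key, vocab in categorical_vocab.items():
            # One-hot: 1.0 in the position of the matching vocabulary entry.
            vec.extend(1.0 if row[key] == v else 0.0 for v in vocab)
        return vec
    return encode

encode = make_encoder(
    numeric_keys=["age", "fare"],                          # hypothetical fields
    categorical_vocab={"class": ["first", "second", "third"]},
)
vector = encode({"age": 29, "fare": 7.25, "class": "third"})
# vector is [29.0, 7.25, 0.0, 0.0, 1.0]
```

    A vector like this is what ultimately feeds the first Dense layer; Feature Columns let you declare the mapping once instead of hand-writing it per dataset.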

  • Happy 3rd Birthday TensorFlow, Google Brain, HUB-GANS and BERT

    This month we have a couple of new speakers in from Google Brain Mountain View. The focus of this MeetUp's main talks will be on full pipelines for TensorFlow in the cloud, but we'll also include Deep Learning content to make sure there's something for everyone.

    Planned Talks:

    "tf.data: Beyond the Basics" - Rachel Lim
    Rachel works on the tf.data team, and will show some useful techniques for dealing with data pipelines in TensorFlow.

    "TFX: Complete ML Pipelines" - Chuan Yu Foo
    Foo will go through the different components that have been open-sourced (TensorFlow Data Validation, TensorFlow Transform, TensorFlow Model Analysis, TensorFlow Serving), what they do, and how they can be used together to build machine learning pipelines.

    "Language Learning with BERT" - Martin Andrews
    In this talk for people just starting out, Martin will describe how Google's new BERT model can turbocharge your Natural Language Processing solutions.

    "TF Hub GANs" - Sam Witteveen
    Recently, a number of key GANs have been released, including BigGAN by DeepMind. While these GANs require a tremendous amount of compute and time to train, they have now become available for anyone to use through TF Hub. Sam will give a short talk about how you can access these models in TF Hub and use them to generate images.

    ---

    Talks will start at 7:00pm (A/V equipment permitting) and end at around 9:00pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/

  • TensorFlow and Deep Learning : TensorFlow 2.0 Introduction and Guide

    Due to room availability, we will start a little later than usual, at 7:30pm. The focus of this MeetUp's main talks will be on looking towards the future of TensorFlow, but we'll also include Deep Learning content to make sure there's something for everyone.

    Planned Talks:

    "TensorFlow 2.0 Introduction and Guide" - Frank Chen
    TensorFlow 2.0 is coming. We will give you a glimpse of the planned changes, and describe our plans to make the transition as painless as possible. We will also give you an overview of the many alternative platforms your TensorFlow models can train and run on, including Swift, JavaScript and mobile devices. Frank Chen is a Software Engineer at Google Brain, working to help make TensorFlow and TPUs faster and easier to use for everyone. Before Google, he worked on online education platforms as one of the founding software engineers at Coursera. When not working, Frank enjoys photography and musical theatre, and has seen over 30 Broadway shows. Frank has Bachelor's and Master's degrees in Computer Science from Stanford.

    "Raw Audio to Piano Transcription" - Martin Andrews
    Google's Magenta team has created a network to convert raw audio files to a MIDI piano roll, and has now released the Python backend, a Colab notebook, and an in-browser (local JavaScript) version. Martin will describe how their Deep Learning network is built, the special 'losses' required to make it perform so well, and demonstrate it in action on music sourced 'in the wild'.

    "Let Google Do the Pretraining for You: Exploring TF Hub" - Sam Witteveen
    Introduced earlier this year, TF Hub provides pretrained components and parts of graphs that you can drop into your models very quickly and easily. Sam will walk through some use cases, and show code for using TF Hub modules in your own projects.

    ---

    Talks will start at 7:30pm (A/V equipment permitting) and end at around 9:00pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/

  • TensorFlow and Deep Learning : Explainability and Interpretability

    The focus of this MeetUp's main talks will be on understanding what Deep Learning models are actually doing. Even if you're getting great results, being able to explain *why* may well be of interest. And if you haven't started Deep Learning yet - and may be being held back by doubts about 'black box behaviour' - then this MeetUp will also be relevant. FWIW, Pedro Domingos recently sent out the following 'scare tweet' about the GDPR: """Starting May 25, the European Union will require algorithms to explain their output, making deep learning illegal.""" (unless you attend this talk, of course).

    ---

    Planned Talks:

    "Explainable Models: A Primer" - Hardeep Arora
    Hardeep's talk approaches the explainability topic from more of a data science angle, so that we can see what the issues are, and contrast the techniques with those being discovered for Deep Learning.

    "Explainable AI: Shapley Values and Concept Activation Vectors" - Lee Xiong An
    Xiong An is a Senior Research Officer in the Imaging Informatics Division of A*STAR's Bioinformatics Institute. He is passionate about applying deep learning to unexplored areas, and is currently using deep learning and machine learning to improve crop yields in smart urban farms. His training is in computational biology, and he holds a bachelor's degree in Science from NUS.

    "Did the Model Understand the Question?" - Martin Andrews
    This talk should be pretty 'accessible', even though it describes the work from a recent paper. Specifically, Martin will describe how Question Answering for Images works, and then show how to 'deconstruct' what the network is doing - sometimes with surprising results.

    We also have a confirmed lightning talk on KubeFlow (and how it can be used to automate Jupyter deployments) - any other lightning talks will be warmly accepted: please check in with Martin at the start of the event.

    ---

    If you have something that you'd like to present in a welcoming environment, please let us know by suggesting yourself via the /suggestion/ link given below... We're very enthusiastic about Lightning Talks, which are a great way of showing people cool stuff that you've been working on, without the (imagined) pressure of the "Full Presentation".

    ---

    Talks will start at 7pm (A/V equipment permitting) and end at around 8:45pm, at which point people normally come up to the front for a bit of a chat with each other, and the speakers.

    ---

    As always, we're actively looking for more speakers for future events - both '30 minute long-form' talks, and lightning talks. For the lightning talks, we welcome folks to come and talk about something cool they've done with TensorFlow and/or Deep Learning for 5-10 mins (so, if you have slides, then #max=10). We believe that the key ingredient for the success of a Lightning Talk is simply the cool/interesting factor. It doesn't matter whether you're an expert or an enthusiastic beginner: given the responses we have had, we're sure there are lots of people who would be interested to hear what you've been playing with. Please suggest yourself here: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/suggestion/
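    The Shapley values in Xiong An's talk have a precise definition that fits in a few lines: a player's (or feature's) attribution is its average marginal contribution over all orderings of the players. The sketch below computes exact Shapley values for a tiny two-player cooperative game; explanation methods like SHAP apply the same idea with model predictions as the value function, and use approximations because the exact sum is exponential in the number of features:

```python
# Exact Shapley values from the permutation definition: average each player's
# marginal contribution over every ordering of the players.
from itertools import permutations

def shapley_values(players, value_fn):
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value_fn(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p, given who joined before it.
            totals[p] += value_fn(frozenset(coalition)) - before
    return {p: t / len(perms) for p, t in totals.items()}

# Toy game: feature "a" alone is worth 10, "b" alone 0, together 30 (synergy).
v = {frozenset(): 0, frozenset({"a"}): 10,
     frozenset({"b"}): 0, frozenset({"a", "b"}): 30}
phi = shapley_values(["a", "b"], v.__getitem__)
# phi is {"a": 20.0, "b": 10.0} - the synergy is split evenly on top of
# each player's solo value, and the attributions sum to v({"a", "b"}) = 30.
```

    That last property (attributions summing exactly to the total prediction difference) is the efficiency axiom that makes Shapley values attractive for explaining model outputs.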
