• Microsoft: deployment, indico: model adapters, Oriel Research Therapeutics: AML & ML

    Cambridge Innovation Center (CIC)

    Madison May - ML Architect - Indico Data

    Bio: Madison is the Machine Learning Architect at Indico, where he has played a key role in developing the company's enterprise AI platform for unstructured content. Prior to joining Indico in its early days, Madison designed and built an NLP system at Fetchnotes and was an active open source contributor to projects like Python3 and Pylearn2.

    Abstract: The past year and a half has seen the rise of language model finetuning as the de facto standard for natural language processing applications in low-data environments. However, the boost to model quality comes at a cost: modern language models have large disk-space footprints and are compute- and memory-intensive. Indico has deployed several unique strategies to make these model architectures more suitable for industry use. One such strategy, dubbed the "adapter" method, achieves accuracy comparable to full model finetuning while training only 5% of the weights. We'll discuss how we employ the adapter strategy in production and how custom model finetuning with the Indico platform allows subject matter experts to automate the boring aspects of their jobs.

    Eila Arich-Landkof (meetup organizer) - Founder and CEO of Oriel Research Therapeutics, a startup that uses big data and machine learning to diagnose disease and match therapy to patients. Over the last five years, Eila navigated her career from the high-tech industry to the biomedical research field. She worked as a wet-lab technician at the Whitehead Institute for Biomedical Research at MIT, and as a product manager at Mass General Hospital and the Broad Institute of Harvard and MIT, where she used her biological and computer science experience to build an application presenting therapies and genomic information. Her work won first place for clinical poster at the 2016 Broad Institute scientific contest.
    Machine learning @ healthcare panelist: https://www.youtube.com/watch?v=5uW2jJda4iM

    Abstract: I will introduce our work on screening for AML (acute myeloid leukemia) using RNA and clinical data and machine learning, along with the challenges of AI in the medical field and ways to move forward.

    Francesca Lazzeri, Ph.D. (@frlazzeri) is a Senior Machine Learning Scientist at Microsoft on the Cloud Advocacy team and an expert in big data technology innovations and the application of machine learning-based solutions to real-world problems. Her research has spanned machine learning, statistical modeling, time series econometrics and forecasting, and a range of industries: energy, oil and gas, retail, aerospace, healthcare, and professional services. Francesca periodically teaches applied analytics and machine learning classes at universities and research institutions around the world. She is a data science mentor for Ph.D. and postdoc students at the Massachusetts Institute of Technology and a speaker at academic and industry conferences, where she shares her knowledge and passion for AI, machine learning, and coding.

    Abstract: How to tackle the challenges in deploying machine learning models - Francesca Lazzeri, Ph.D. Model deployment is the method by which you integrate a machine learning model into an existing production environment in order to start using it to make practical business decisions based on data. Models only start adding value once they are deployed to production, which makes deployment a crucial step. However, deploying machine learning models is complex. In this talk you will learn how to deploy your machine learning models with Azure Machine Learning. The service fully supports open-source technologies such as PyTorch, TensorFlow, and scikit-learn and can be used for any kind of machine learning, from classical ML to deep learning, supervised and unsupervised.
Useful Resources for the Session: Azure Notebooks: https://aka.ms/AzureNB Python Microsoft: https://aka.ms/PythonMS
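    To make the adapter idea above concrete, here is a minimal sketch in NumPy. It is a generic illustration of the technique, not indico's production implementation, and the hidden and bottleneck sizes are assumed values:

```python
import numpy as np

# Sketch of the "adapter" method: freeze the pretrained weights and train
# only a small residual bottleneck module. Sizes below are assumptions.

HIDDEN = 768       # hypothetical transformer hidden size
BOTTLENECK = 16    # adapters project down to a much smaller dimension

rng = np.random.default_rng(0)
frozen_w = rng.normal(size=(HIDDEN, HIDDEN))          # pretrained, never updated

adapter_down = rng.normal(size=(HIDDEN, BOTTLENECK)) * 0.01  # trainable
adapter_up = np.zeros((BOTTLENECK, HIDDEN))                  # zero init => no-op at start

def adapter_layer(x):
    """Frozen layer followed by a residual ReLU-bottleneck adapter."""
    h = x @ frozen_w
    return h + np.maximum(h @ adapter_down, 0.0) @ adapter_up

trainable = adapter_down.size + adapter_up.size
print(f"trainable fraction: {trainable / (frozen_w.size + trainable):.1%}")
```

    Because the up-projection starts at zero, the adapter is initially a no-op, and finetuning only ever updates the two small projection matrices, which is how the trained-weight fraction stays in the single digits.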

  • Galaxy: augmentation, Sentenai: ML @ gigantic facilities, Gravyty: AI for funding

    Galaxy.ai

    Bio: Mr. Pranab Banerjee is currently the Chief Data Scientist at Galaxy AI. His primary areas of research are computer vision, pattern recognition, Bayesian probabilistic models, bio- and socio-inspired computation, autonomous systems, and scientific data visualization. In the past, Mr. Banerjee was a member of the computational sciences staff at NASA's Jet Propulsion Laboratory, a senior scientist at Space Dynamics Laboratory, principal research engineer at BAE Systems, and principal research scientist at Boston Fusion Corp. He is also an international award-winning photographer.

    Title: Augmentation of Deep Learning Object Classifiers with Random Field Models for Enhancing Accuracy

    Abstract: There are challenging real-life situations where object detection fails to identify an object with high confidence, such as low-contrast imagery under adverse weather conditions, objects under partial occlusion, and imperfect illumination (e.g., glare or shadows). Under such scenarios, a DL system may robustly identify a set of objects in the scene while classifying some of the other objects with low confidence. To address this problem, the DL system is augmented with a random field model that exploits contextual information from historical data to enhance the classification accuracy of the original low-confidence identifications. These contextual models are used to quantify the likelihood of an ill-identified object and determine the most likely label of the low-confidence target via probabilistic inference.

    Sentenai

    Bio: Brendan Kohler is an entrepreneur and investor focused on bringing AI solutions from the lab into production. He started his career working on large-scale distributed systems and supercomputing applications at Georgia Tech. Later he researched complex event processing and machine learning applications for cyber-physical systems at Yale. In 2015 he co-founded Hyperplane Venture Capital to help entrepreneurs working on "hard tech" solutions based on machine learning and artificial intelligence get the patient capital and support they need to be successful. Brendan is currently co-founder and CTO of Sentenai, a Boston startup that uses unsupervised machine learning to provide real-time data preparation for industrial IoT data.

    Abstract: Deep learning on the factory floor: an open problem. When people think of deploying advanced machine learning to the factory floor, the first thing that comes to mind is probably low-power prediction at the edge, but factories are gigantic facilities with power and physical footprints that make such considerations meaningless. Instead, the three challenges of data, dimensionality, and deployment are what prevent us from successfully implementing deep learning on the factory floor. In this talk, we explore the new techniques and technologies required to build solutions that work in this complex, hostile environment so that we can turn the "lights out factory" from fantasy into reality.

    Gravyty

    Abstract: As practitioners of vertical AI products, we'll show how we've utilized open source technologies including TensorFlow, NLTK, CRFsuite, scikit-learn, and others to determine who is most likely to give a donation to an organization, and to learn from communications to mimic the cognitive functions of fundraisers and expand their reach.

    Bio: Rich is the co-founder and CTO of Gravyty, the first company focused on applying artificial intelligence to the social good industry. Rich believes that technology and companies are at their best when they augment people and allow them to do things that were previously impossible in simple, cost-effective, and elegant ways. As a brain aneurysm survivor, Rich has a deeply rooted belief that technology should be used for the positive benefit of people. He strives to scale Gravyty to help nonprofits of all shapes and sizes gain the resources they need to achieve their missions.
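    As a toy illustration of the augmentation strategy in the Galaxy talk above (with invented numbers, not their actual random field model), a low-confidence detection can be re-scored by combining the detector's class probabilities with a contextual prior learned from historical scene statistics:

```python
import numpy as np

# Toy contextual re-scoring: the detector is uncertain, so we multiply its
# class probabilities by a prior from historical context and renormalize
# (Bayes' rule). All numbers here are made up for illustration.

classes = ["car", "truck", "bicycle"]

# Detector output for a partially occluded object: low confidence, nearly flat.
detector_probs = np.array([0.30, 0.30, 0.40])

# Hypothetical context prior: in a "highway" scene, bicycles are rare.
context_prior = np.array([0.50, 0.45, 0.05])

posterior = detector_probs * context_prior
posterior /= posterior.sum()

print(classes[int(np.argmax(posterior))], posterior.round(3))
```

    In the full approach described above, the prior would come from a random field model over neighboring objects and historical data rather than a single hand-written vector, but the inference step is the same in spirit: context overrides a weak, low-confidence label.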

  • ML on mobile devices (deep dive - 2 talks) & Deep Learning for NLP at HubSpot

    Cambridge Innovation Center (CIC)

    Topic: Machine Learning Inference on Mobile Devices

    Abstract: A lot of the cool new features we love on our phones, like face recognition in photos, Face ID for authentication, and Smart Reply in Gmail, rely on machine learning, most of which takes place on device. But it can be really challenging to deploy these machine learning models on edge devices like mobile phones and embedded processors. In this talk, we will discuss why we should do machine learning inference locally on mobile devices and explore what expertise a developer needs in order to deploy a machine learning model on one. We will also develop and train some machine learning models for different use cases, deploy the trained models in an Android/iOS app, and see how we can leverage the power of the CPU/GPU/AI chips on our mobile phones and embedded devices for doing inference locally.

    Bio: Anuj is a Machine Learning Researcher at Bose Corporation. His work involves exploring, innovating, and developing new machine learning models to create solutions that will amaze customers. His work also involves optimizing and deploying machine learning models on mobile devices (iOS/Android), on embedded devices like ARM Cortex-A/Cortex-M processors, and on blockchains for quick proof-of-concept development. Anuj is also a contributor to OpenMined, a project to develop tools for secure, privacy-preserving, value-aligned artificial intelligence.

    Requirements and Resource Links: You do not need to code along. All the code for the machine learning models, the iOS and Android app code, and the presentation for this talk will be available after the talk on GitHub [https://github.com/anujdutt9/Machine-Learning-Inference-on-Mobile-Devices].
    Topic: Deep Learning for NLP at HubSpot

    Bio: Vedant was founder and CEO of Kemvi, acquired by HubSpot (NYSE: HUBS), where he now works on deep learning for NLP at the intersection of research, engineering, and product. Kemvi leveraged novel models and a distributed web-scale pipeline for information extraction to personalize messaging at scale and accelerate sales and marketing teams. Vedant studied physics and math at Columbia University and has publications and patents across machine learning, human-computer interaction, theoretical black hole physics, and quantitative finance. His work has been covered by TechCrunch, Fortune, Wired, Technology Review, and others.

    Title: Advanced Topics in Deep Learning on Phones

    Abstract: As on-device deep learning matures, developers face significant challenges optimizing models and scaling their applications. This talk covers state-of-the-art techniques for running deep learning models on mobile devices. I will discuss network architecture choices, pruning techniques, and quantization methods to improve network speed and reduce size. I'll also discuss methods for monitoring models in production across millions of devices and share benchmarks from deep learning models in real-world use. Finally, I'll discuss model version control and deployment strategies as your applications scale.

    Bio: Dr. Jameson Toole is the CEO and co-founder of Fritz, helping developers teach devices how to see, hear, sense, and think. He holds undergraduate degrees in physics, economics, and applied mathematics from the University of Michigan as well as an M.S. and Ph.D. in Engineering Systems from MIT. His work in the Human Mobility and Networks Lab centered on applications of big data and machine learning to urban and transportation planning. Prior to founding Fritz, he built analytics pipelines for Google[X]'s Project Wing and ran the data science team at Jana Mobile, a Boston technology startup.
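    Quantization, one of the size-reduction techniques mentioned in the second talk, can be sketched as follows. This is a generic symmetric 8-bit scheme for illustration, not the exact method used by any particular mobile framework:

```python
import numpy as np

# Post-training 8-bit quantization sketch: store each weight tensor as int8
# values plus one float scale factor, and dequantize at inference time.

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.1, size=1000).astype(np.float32)  # fake layer weights

scale = np.abs(weights).max() / 127.0            # one scale for the whole tensor
q = np.round(weights / scale).astype(np.int8)    # 1 byte per weight instead of 4
dequantized = q.astype(np.float32) * scale       # recovered at inference time

max_err = float(np.abs(weights - dequantized).max())
print(f"stored: {q.nbytes}/{weights.nbytes} bytes, max error: {max_err:.5f}")
```

    Storing int8 values plus one scale cuts the weight payload to roughly a quarter of its float32 size, at the cost of a bounded rounding error of at most half the scale per weight; production schemes add refinements such as per-channel scales and quantization-aware training.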

  • Beginners session - BYOD - bring your own data & computer - hands on session

    Cambridge Innovation Center (CIC)

    Hello, I have realized that there is a huge gap between the data out there and its cloud adoption. This gap might be a knowledge gap that can be closed with hands-on sessions. This is going to be our first hands-on session; if it becomes popular, we will have more in the future. Please make sure to bring your computer and a CSV, Google Sheet, or txt file with a sample of your data. We will be using the Google platform for this session. Best, Eila

    We currently don't have sponsors for the beginners events. Snacks and water bottles will be offered for $5 (snack bag + water bottle).

  • Diabetes Recommendations / customer experience & AI / reinforcement learning

    Data-Driven Diabetes Recommendations Using Machine Learning

    Abstract: Approximately 9% of the U.S. population has diabetes. Every year, a significant fraction of Aetna's medical and pharmacy costs are from treating diabetes episodes. Behavior changes and treatment adherence can prevent the development of disabling and life-threatening complications. This talk will cover a technology that delivers data-driven diabetes recommendations using machine learning techniques. The work is based on a dataset with over 250,000 people described using over 2,000 variables (half a billion data points). Rule-based logic generates recommendations for members, and a quantile regression model then leverages the stratification to assign a bracket-specific cost savings for closing each gap in care.

    Bio: Dr. Mark L. Homer is the Director of Artificial Intelligence & Health Informatics at Aetna. He earned his B.S. and M.S. degrees from an accelerated program at MIT and a Ph.D. in biomedical engineering at Brown University. Most recently, he was awarded a fellowship by the National Library of Medicine, achieving a Master's in Medical Science in biomedical informatics at Harvard Medical School. Dr. Homer has led projects in the biotech, aerospace, and healthcare startup space; examples include software control for bioreactors, obstacle avoidance for Mars landers, and brain-computer interfaces for those with paralysis.

    Title: "A real-time tagged data loop (at scale) for improving customer experience via deep learning"

    Abstract: Though AI has made large strides over the past 10 years, the problem of intent and entity understanding, especially for the last 15%, remains a challenge. At Interactions, our omni-channel virtual assistant is backed by human intelligence (HI). This presentation is about how we use a continuous data loop that leverages AI and HI data with billions of tagged utterances to constantly improve deep learning outcomes in ASR and NL. The presentation will touch on the data, infrastructure, tools, and processes that Interactions is successfully combining to achieve proven results at scale in production applications.

    Speaker: Chris Buchino. Chris has been building software and leading teams for over 15 years. At Boston-based Grasshopper, he served as Director of Engineering and Architecture and was instrumental in helping Grasshopper grow and scale. As Director of Advanced Engineering at Interactions, Chris has been responsible for data infrastructure, automation, and software development of Interactions' AI services and technologies. Chris has co-founded two startups, and he enjoys entrepreneurship and bringing technology ideas to life.

    Title: The problem of exploration in reinforcement learning

    Bio & Abstract: My name is Francisco Garcia and I am a Ph.D. candidate in computer science at UMass Amherst, advised by Professor Philip Thomas. My research focuses on reinforcement learning, specifically on how an agent can use information gathered from past experiences to improve learning on novel tasks. Besides RL, my other research interests are in optimization, planning, and general machine learning techniques. In this talk, I will give a quick introduction to the problem of exploration in reinforcement learning and present a new approach for improving exploration. I propose separating the behavior of an agent into two distinct policies (an exploration policy and an exploitation policy) and propose an objective to optimize the exploration policy. It can then be shown that this problem can be formulated as a separate RL problem, in which the optimal policy corresponds to an optimal exploration policy.
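    The separation of exploration and exploitation policies in the last abstract can be illustrated with a heavily simplified multi-armed bandit. This toy uses a fixed least-tried-arm heuristic as the exploration policy; the talk's actual contribution is an objective for learning the exploration policy itself:

```python
import random

# Toy two-policy bandit: an exploitation policy picks the arm with the best
# reward estimate, while a separate exploration policy (here: least-tried arm)
# is consulted on a small fraction of steps.

random.seed(0)
true_means = [0.2, 0.5, 0.8]          # hidden reward probability of each arm
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]              # running mean reward estimate per arm

def exploitation_policy():
    return max(range(3), key=lambda a: values[a])

def exploration_policy():
    # stand-in heuristic; a learned exploration policy would go here
    return min(range(3), key=lambda a: counts[a])

for step in range(2000):
    arm = exploration_policy() if random.random() < 0.1 else exploitation_policy()
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

print("best arm:", exploitation_policy())
```

    Keeping the two behaviors in separate policies, as the talk proposes, makes it possible to optimize the exploration policy with its own objective instead of hard-coding a heuristic like the one above.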

  • Robustness of ML forecasts. Retailers' integration of ML. Transfer learning

    Cambridge Innovation Center (CIC)

    Accuracy and Robustness of Machine Learning Forecasts - Kenric P. Nelson

    Abstract: Machine learning algorithms are typically trained and tested based on classification or regression error. While the Kullback-Leibler divergence or other information-theoretic metrics may be utilized to assess the error in probability forecasts, these metrics often measure relative performance without a clear sense of what constitutes an absolute standard of success. The interpretation of information-theoretic metrics is clarified by translating them into a probability that can be compared with the classification metrics. Furthermore, via the weighted generalized mean of predicted probabilities, which is a translation of the Tsallis and Rényi generalizations of entropy, the contrast between decisive and robust algorithms can be measured. Some illustrative examples show how Gaussian models lead to overconfident probability forecasts when complexity in the source of uncertainty actually involves distributions with slower-decaying tails. Methods for improving the accuracy and robustness of machine learning models are discussed.

    Bio: Dr. Kenric Nelson is a Senior Principal Engineer with Raytheon Integrated Defense Systems and a Research Professor with Boston University Electrical & Computer Engineering. At Raytheon he leads projects on sensor management, tracking, discrimination, and debris mitigation. At Boston University he is developing a novel approach to information theory for complex systems. He has multiple inventions applying non-additive information theory to improve the robustness of radar processing and enable efficient probabilistic computation. His education in electrical engineering includes a B.S. degree summa cum laude from Tulane University, an M.S. degree from Rensselaer Polytechnic Institute, and a Ph.D. degree from Boston University. His education in program management includes an Executive Certificate from MIT Sloan and certification with the Program Management Institute. His research interests include machine learning, complex adaptive signals and systems, and sensor systems.

    Tim Moody

    Bio: I've spent the last several years working with Fortune 500 e-commerce teams on testing and optimization. The theme of our A/B testing program is often centered around deploying new technologies that rely on machine learning to provide some new competitive advantage.

    Abstract/Topic: We will discuss and show several examples of how online retailers and web applications can integrate machine learning into their production applications in order to generate ROI. We will also walk through a basic technical implementation.

    Kishan Supreet Alguri

    Bio: Supreet is a final-year Ph.D. student in the Department of Electrical and Computer Engineering, University of Utah, working with Dr. Joel B. Harley. His research interests include machine learning, transfer learning, complex wave propagation, and signal processing.

    Abstract: Getting more from less with transfer learning. Dictionary learning is a representation learning method that aims at finding a sparse representation of the input data (also known as sparse coding) in the form of a linear combination of basic elements, as well as those basic elements themselves. In this talk we discuss how dictionary learning has been used as a transfer learning method in solving problems of complex wave propagation in structures. We first briefly cover what compressive sensing and dictionary learning are, then discuss the challenges of physical wave propagation and the difficulties of obtaining wave propagation data. We then demonstrate how we used dictionary learning to solve wave-field reconstruction challenges with very little experimental data, by using knowledge gained from learning on synthetic data.
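    The transfer idea in the last abstract, reusing a dictionary learned from cheap synthetic data to reconstruct signals from scarce experimental samples, can be illustrated with a toy example. The sinusoidal atoms and sample counts here are invented for the sketch; the talk's actual dictionaries come from wave-propagation physics:

```python
import numpy as np

# Toy dictionary-transfer sketch: atoms built from "synthetic" data are reused
# to reconstruct an "experimental" signal observed at only a few samples.

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)

# "Synthetic" dictionary: sinusoidal atoms we can generate cheaply.
freqs = np.arange(1, 11)
dictionary = np.stack([np.sin(2 * np.pi * f * t) for f in freqs], axis=1)

# "Experimental" signal: a sparse combination of two atoms.
signal = 1.5 * dictionary[:, 2] - 0.7 * dictionary[:, 6]

# Observe only 30 of 200 samples, then solve for coefficients by least squares.
idx = rng.choice(200, size=30, replace=False)
coef, *_ = np.linalg.lstsq(dictionary[idx], signal[idx], rcond=None)

recon = dictionary @ coef
err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {err:.2e}")
```

    The dictionary is learned (here simply constructed) where data is abundant, and only a small coefficient vector must be estimated from the scarce experimental measurements; true sparse coding would add an L1 penalty or matching pursuit on top of this least-squares step.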

  • Google & ML - 6/26

    Google Cambridge

    Hi all, to allow a deeper dive into the platform, this meetup will have two talks of ~45 min each (and not three as usual). Following are the bios and abstracts for the talks:

    1. Abstract: Data is the lifeblood of machine learning. The ability to effectively manage and process data at scale is a critical component of doing machine learning in production. In this talk I will introduce some of the tools we have developed with Apache Beam and TensorFlow to do machine learning at scale, for both preparing data and evaluating models. I will also demonstrate how Beam's model of pluggable backends allows one to run the same pipeline both locally and on distributed backends like Flink and Google Cloud Dataflow.

    Bio: Robert Bradshaw is a software engineer at Google, developing tools for petabyte-scale data processing, most recently working on Apache Beam. He is also active in the open source community, leading the Cython project since its inception and as a long-time contributor to the open source mathematics software Sage. He received a Ph.D. in Mathematics from the University of Washington and currently resides in Seattle, Washington.

    2. Abstract: How to use cloud machine learning infrastructure to prototype and develop machine learning-based applications that can scale up quickly to worldwide usage. I will share the Oriel Research (OR) process from data to a trained model. This experience could be applied to any other industry ML-based solution that you might be working on. I will discuss high-level guidelines that were applied at OR, along with code samples. The main tools that will be demonstrated are Datalab, Dataflow, Apache Beam, and TF.hub.

    Bio: Eila Arich-Landkof is an entrepreneur with broad experience in the high-tech and life science industries. She has worked for Cisco, Microsoft, Mass General Hospital, the Whitehead Institute for Biomedical Research, and the Broad Institute of Harvard and MIT. She currently leads this meetup and the Oriel Research startup, which diagnoses disease based on patients' genomic and molecular information using machine learning methods. Looking forward to seeing you. Best, Eila

  • Optimal Economic Design through DL, Scaling ML App in Spark, Minimal images

    David Parkes, Nift's Chief Scientist

    Bio: David Parkes, Nift's Chief Scientist, is the Co-Director of Harvard's Data Science Initiative and the former Area Dean for Computer Science at Harvard University, where he leads research at the interface between economics and computer science, with a focus on electronic commerce, artificial intelligence, and machine learning, and where he founded the EconCS research group. David has served as Chair of the ACM Special Interest Group on Electronic Commerce [masked] and on several international scientific advisory boards. David received his Ph.D. in Computer and Information Science from the University of Pennsylvania in 2001 and an M.Eng. (First Class) in Engineering and Computing Science from the University of Oxford in 1995. Defining the field of modern marketplace engineering, David is currently writing the first comprehensive book on the science that underpins Uber and Airbnb, combining electronic commerce, artificial intelligence, machine learning, auctions, game theory, economic theory, and computer science.

    Abstract: Optimal Economic Design through Deep Learning - David C. Parkes. Designing an auction that maximizes expected revenue is an intricate task. Despite major efforts, only the single-item case is fully understood. We explore the use of tools from deep learning on this topic and show that we can recover the state-of-the-art analytical results as well as derive auctions for poorly understood problems, including settings with multiple items and budget constraints. Joint work with Paul Duetting (LSE), Zhe Feng (Harvard University), and Harikrishna Narasimhan (Harvard University). Working paper: https://arxiv.org/abs/1706.03459

    About Nift: Nift is an invitation-only network that helps neighborhood businesses get the right local customers to walk through the door. Launched in Boston in the summer of 2016, our start-up outperforms every other form of advertising and has helped over a million people discover great local businesses. Data science is the heart and core of Nift: algorithms and models are the foundation upon which our product is built, with data driving our key decisions. Our product represents a new kind of marketplace, and the science around it has yet to be defined.

    Ruslan Vaulin - Sr. Research Scientist at Amazon

    Bio: Ruslan Vaulin is a Senior Research Scientist at Amazon Web Services. Previously he was a Senior Data Scientist at Sqrrl Data Inc., which was acquired by Amazon in January 2018. He is an expert in machine learning, anomaly detection, and Bayesian statistics. At Amazon, he develops algorithms for detecting cyber-security threats and cyber attacks. Prior to joining Sqrrl Data Inc., Ruslan was a research scientist at the MIT-LIGO Laboratory, Massachusetts Institute of Technology, developing algorithms for detecting gravitational-wave signals. He is a co-author of LIGO's paper on the discovery of the first gravitational-wave signal (https://www.ligo.caltech.edu/news/ligo20160211, http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.241103).

    Guy Ben-Yosef

    Bio: A postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory at MIT, and an affiliate of MIT's Center for Brains, Minds and Machines. He studies human and machine vision, with a focus on visual recognition and image interpretation.

    Title: Scaling up your production machine learning applications in Spark

    Abstract: on the meetup board (we crossed the word limit ;-))

  • Deep Learning @ Spotify. Talks by Spotify, Credly, Boston University

    • What we'll do

    Richard Whitcomb - Spotify - see the meetup board (the word limit was crossed).

    Emma Nguyen - Credly Inc.

    Abstract: With all of the attention on machine learning, you are likely seeking a better understanding of this hot topic and the benefits it could provide to your organization. Machine learning, along with deep learning, natural language processing, and cognitive computing, is driving innovations in identifying images, personalizing marketing campaigns, genomics, and navigating the self-driving car. In this session, you will be introduced to several productivity tools that currently use AI to help surface relevant information to different types of teams (HR, developers, and sales), enabling them to be more proactive and effective.

    Bio: Emma Nguyen graduated from Bentley University in 2011. A product manager with an edtech and healthcare background, she has comprehensive knowledge and experience in enterprise software implementations, pre/post-sale support, and custom application deployment. She has successfully led various builds of custom-designed software products across North America and Australia, and has worked with customers and developers to build products like a digital badging system, a CRM, and an EMR. She is passionate about innovative solutions and process improvements, and has researched and worked with various AI products to help increase her team's productivity.

    Parisa Babaheidarian - Boston University

    Title: Explosive detection using feature engineering and machine learning methods

    Abstract: In this talk, I review the fundamentals of CT imaging and its potential capability in identifying threats and explosives in bag inspection. Then, I present a new feature extraction transform that can represent the X-ray attenuation property of different materials in a sparse subspace, followed by prediction algorithms that incorporate the extracted features in a MAP estimation scheme to classify the materials present in a bag. The proposed class of prediction algorithms utilizes both shallow and deep structures to model the data and prior distributions, respectively.

    Bio: Dr. Parisa Babaheidarian received the B.Sc. degree in electrical engineering from the University of Tehran in 2008. She graduated with the M.Sc. degree in electrical engineering from the Sharif University of Technology in 2011 and received her Ph.D. in electrical engineering from Boston University in 2017. She has conducted several research projects in information theory, security, machine learning, and computer vision. She received a graduate teaching fellowship from Boston University in 2013, was nominated for the program of excellence for the doctoral program in electrical engineering by KTH Royal Institute of Technology, Sweden, in 2012, and received the Iranian top researcher award in information security in 2011. She has delivered research talks on both academic and industrial platforms, including Harvard University, Intel Labs, and Uber. Currently, she is working on deep learning applications in healthcare in collaboration with Boston University.

    • Important to know

    Please sign up with your full name and email (the details are extracted from your profile); no aliases please. The names are passed on to Spotify security for entrance approval. Guests are welcome: please add their names to the RSVP or send me a message with their names and I will take care of the rest. Thanks, Eila

  • Hands on coding session - Multilayer perceptron using Tensorflow library

    Location visible to members


    Hello all, I will run a beginners' hands-on session using Google's TensorFlow library and Python. No background knowledge is needed! The meeting will be our first online training event, using Hangouts on Air with YouTube Live. Hangouts on Air allows all participants to actively contribute to the session by screen sharing, talking, and texting. To participate you will need: a computer with a microphone, and a Google Cloud account. Anyone can get $300 of credit to use over 12 months, and the class will cost you a few cents from that budget (maybe more, but not a lot; it depends on your usage). Additional details about the meeting will be published soon. If there are any unexpected technical challenges, the meeting might last longer. Best, Eila
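    As a preview of what the session will build, here is a tiny multilayer perceptron trained on XOR, written in plain NumPy so it runs anywhere. The session itself uses the TensorFlow library; the layer sizes, learning rate, and iteration count below are arbitrary demo choices:

```python
import numpy as np

# A 2-8-1 multilayer perceptron learning XOR with hand-written backprop.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer (8 units, tanh)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer (sigmoid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                            # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                              # sigmoid + cross-entropy gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)          # backprop through tanh
    W2 -= 0.3 * h.T @ d_out; b2 -= 0.3 * d_out.sum(0)
    W1 -= 0.3 * X.T @ d_h;   b1 -= 0.3 * d_h.sum(0)

print((out > 0.5).astype(int).ravel())           # predicted XOR truth table
```

    The TensorFlow version replaces the hand-written backpropagation with automatic differentiation, but the forward pass and the training loop follow the same structure.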