Attacking a Machine Learning Model - Data Science After Dark April 2020


Attacking a Machine Learning Model - Why we must protect ML models critical to our business:
Machine learning models are designed to analyze input data and produce the expected output. But what if an attacker can manipulate that output? Jason Klein will demonstrate how easily an image classification model can be attacked: he will feed an image of a specific animal into the model, then show how changing a single pixel of the original image can convince the model that it is looking at a different, attacker-chosen animal. A rough sketch of this kind of attack appears below.
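For the curious, here is a minimal sketch of the idea in Python. This is not Jason's demo code: it uses a simple greedy random search over single-pixel changes (published one-pixel attacks typically use differential evolution), and the model and function names here are purely illustrative stand-ins.

    import numpy as np

    def one_pixel_attack(image, predict_proba, target_class, iters=500, seed=0):
        """Greedy random search for a single-pixel change that raises
        the model's confidence in target_class."""
        rng = np.random.default_rng(seed)
        best, best_score = image, predict_proba(image)[target_class]
        h, w, c = image.shape
        for _ in range(iters):
            candidate = image.copy()          # perturb the ORIGINAL image,
            y, x = rng.integers(h), rng.integers(w)  # so only one pixel differs
            candidate[y, x] = rng.integers(0, 256, size=c)
            score = predict_proba(candidate)[target_class]
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    # Toy stand-in "classifier" that scores by mean brightness, just so
    # the sketch runs end to end; a real attack targets a trained model.
    def toy_predict_proba(img):
        p = img.mean() / 255.0
        return np.array([1.0 - p, p])

    img = np.zeros((8, 8, 3), dtype=np.uint8)
    adv, conf = one_pixel_attack(img, toy_predict_proba, target_class=1)
    print(f"confidence in target class after attack: {conf:.3f}")

The only requirement is query access to the model's confidence scores; the attacker never needs to see its weights.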
If you train any type of model for your organization, be aware that similar techniques can be used to bypass it if an attacker has direct access to the model. For example, an attacker could feed a fraudulent transaction into a fraud detection model and probe which transaction details must change for the model to label the transaction as NOT fraudulent (see the sketch below).
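A minimal sketch of that probing loop, again assuming only black-box access to the model's decision; the field names and the toy rule-based "model" are hypothetical:

    def find_evasion(tx, is_fraud, candidate_values):
        """Probe which single field change makes the model stop
        flagging the transaction."""
        for field, values in candidate_values.items():
            for v in values:
                probe = dict(tx, **{field: v})
                if not is_fraud(probe):
                    return field, v  # the detail an attacker would change
        return None

    # Toy stand-in model: flags large purchases from very new accounts.
    def toy_is_fraud(tx):
        return tx["amount"] > 900 and tx["account_age_days"] < 30

    tx = {"amount": 1200, "account_age_days": 5}
    print(find_evasion(tx, toy_is_fraud,
                       {"amount": [899], "account_age_days": [45]}))

This is why unmetered, unmonitored query access to a production model is itself an attack surface, independent of how well the model was trained.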
Details:
The event is now online via Zoom meeting. We will not post the meeting link in this description where hijackers could see it, but it will be made available to you if you RSVP.
Agenda:
5:30-6:00 – Arrival, Virtual Happy Hour, News, and Networking
6:00-7:00 – Presentation
About Our Presenter:
Jason Klein has been working with data for 15+ years. He takes a special interest in data analysis and machine learning in his role with an online restaurant reporting platform. Slides for this talk, along with recordings and slides from his past talks, are available on his Talks page (https://jrklein.com/talks).
