Attacking a Machine Learning Model - Data Science After Dark April 2020



405 N Jefferson Ave · Springfield, MO

How to find us

We will be in the BKD Room @ the eFactory (405 N Jefferson Ave). Exterior doors lock at 6pm for security - if you arrive later and are locked out text me at 469-406-4699 and I'll let you in.



Attacking a Machine Learning Model - Why we must protect ML models critical to our business:
Machine learning models are designed to map input data to a desired output. But what if an attacker can manipulate that output? I will demonstrate how easily an image classification model can be attacked: we will feed the model an image of a specific animal, then modify a single pixel in the original image to convince the model that the image is a different, attacker-chosen animal.
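The idea behind the demo can be sketched in a few lines. This is a hypothetical toy, not the presenter's actual demo: the "classifier" below labels a 4x4 grayscale image by mean intensity, and the attack is a simple random search over single-pixel changes (real one-pixel attacks typically use differential evolution against a trained CNN).

```python
import random

def toy_classifier(image):
    """Toy stand-in for an image classifier: label by mean intensity."""
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return ("dog", mean) if mean >= 0.5 else ("cat", 1.0 - mean)

def one_pixel_attack(image, target_label, tries=1000, seed=0):
    """Randomly perturb one pixel at a time until the model says target_label."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    for _ in range(tries):
        r, c = rng.randrange(h), rng.randrange(w)
        value = rng.random()
        candidate = [row[:] for row in image]  # copy; change exactly one pixel
        candidate[r][c] = value
        label, _ = toy_classifier(candidate)
        if label == target_label:
            return candidate, (r, c, value)
    return None, None  # attack failed within the budget

# A mid-gray image the toy model labels "cat" (mean 0.47 < 0.5).
cat = [[0.47] * 4 for _ in range(4)]
adversarial, change = one_pixel_attack(cat, "dog")
```

Because the toy image sits just below the decision boundary, a single sufficiently bright pixel is enough to flip the label, which is exactly the intuition the talk demonstrates against a real model.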

If you train any type of model for your organization, be aware that similar techniques can be used to bypass your model if an attacker can query it directly. For example, an attacker could feed a fraudulent transaction into a fraud detection model and determine which transaction details can be changed to fool the model into classifying the transaction as NOT fraudulent.
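That probing process can be sketched as follows. Everything here is hypothetical (the scoring rules, field names, and threshold are invented for illustration): an attacker with query access tweaks one transaction field at a time and keeps the first change that drops the score below the decision threshold.

```python
def toy_fraud_score(txn):
    """Invented fraud heuristic: large, late-night, foreign-card = risky."""
    score = 0.0
    if txn["amount"] > 1000:
        score += 0.5
    if txn["hour"] < 6:
        score += 0.3
    if txn["foreign_card"]:
        score += 0.3
    return score  # flagged as fraud when score >= 0.8

def find_evasion(txn, threshold=0.8):
    """Probe one field at a time; return the first change that evades."""
    candidates = {
        "amount": 999,        # slip just under the amount rule
        "hour": 12,           # move to business hours
        "foreign_card": False,
    }
    for field, value in candidates.items():
        probe = dict(txn, **{field: value})  # copy with one field changed
        if toy_fraud_score(probe) < threshold:
            return field, value
    return None

fraud = {"amount": 5000, "hour": 2, "foreign_card": True}
```

Here a transaction that scores well above the threshold evades detection after changing only its amount, which is why query access to a deployed model is itself an attack surface.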

We’ll be meeting at The eFactory (405 N. Jefferson Ave.) on Tuesday, November 19th, from 5:30p to 7:00p. We'll be in the BKD room (enter the front doors; first room on the left). Food and drink will be provided while everyone settles in.

RSVP now so we can plan food, drinks, and meeting space.

5:30-6:00 – Arrival, Food, Drinks, Networking
6:00-7:00 – Presentation

About Our Presenter:
Jason Klein has been working with data for 15+ years. He takes a special interest in data analysis and machine learning in his role with an online restaurant reporting platform. Find slides for this talk, as well as recordings and slides for his past talks, on his Talks page.
