Steering toward tomorrow

What we'll do

Talk 1 - Deep Learning for Self Driving at the Uber lab
Inmar Givoni - Senior Autonomy Engineering Manager at Uber

Talk 2 - Zara Syed

Talk 3 - Explainable AI for Enterprise Systems
Benny Cheung - Senior Technical Architect at Jonah Group


Talk 1
Deep Learning for Self Driving
Recent advancements from Uber's Toronto lab and the team's approach to taking them into production.

Bio: Inmar Givoni is a Senior Autonomy Engineering Manager at Uber Advanced Technology Group, Toronto, where she leads a team whose mission is to take cutting-edge deep-learning models for self-driving vehicles from research into production.

She received her Ph.D. in Computer Science in 2011 from the University of Toronto, specializing in machine learning, and was a visiting scholar at the University of Cambridge. She has worked at Microsoft Research, Altera (now Intel), Kobo, and Kindred in roles ranging from research scientist to VP, Big Data, applying machine learning techniques to various problem domains and taking concepts from research to production systems. She is an inventor on several patents and has authored numerous top-tier academic publications in machine learning, computer vision, and computational biology.

She is a regular speaker at AI events and is particularly interested in outreach activities for young women, encouraging them to choose technical career paths. For her volunteering efforts, she received the 2017 Arbor Award from the University of Toronto. In 2018 she was recognized as one of Canada's 50 inspiring women in STEM.

Talk 2 - Zara Syed (details coming soon)

Talk 3 - Explainable AI for Enterprise Systems
Even with the accuracy and precision of recent deep learning techniques, many enterprises are hesitant to deploy such solutions because the results lack explainability. Enterprises are subject to many regulations designed to protect human rights and fairness, and without explainable results they have a hard time justifying the use of these models in practice. Explainability was envisioned as an important feature of AI systems by the field's early pioneers, so we should not be blinded by the success of recent deep learning applications. Instead, we should step back and understand the challenges and potential techniques for designing explainability into our AI systems, which in turn can help drive wider adoption of AI in the enterprise world. This presentation will discuss the rationale for, and techniques of, producing explanations in a human-understandable form. We will cover both philosophical and historical points while keeping key modern advances in AI in mind.
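
As a rough illustration of what a human-understandable explanation can look like (this sketch is ours, not material from the talk), the short Python example below uses permutation feature importance, a common model-agnostic technique: it shuffles each input feature in turn and measures how much the model's accuracy drops, yielding a ranked list of the features the model relies on. The dataset and library choices here are assumptions for demonstration only.

    # A minimal sketch (not from the talk): permutation feature importance
    # as one model-agnostic way to explain which inputs an opaque model
    # relies on. Assumes scikit-learn is installed.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Fit an opaque model on a small tabular dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in accuracy:
    # a large drop means the model depends heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")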

Bio: In his role as Senior Technical Architect at the Jonah Group, Benny helped establish Jonah's AI Lab to expand the company's machine learning and deep learning business. Fully aware of blockchain's impact on business, Benny has built strong technical knowledge of Hyperledger and Ethereum by successfully designing and building systems for medical health records and token exchange.

Benny holds a B.Sc. and an M.Sc. in Computer Science from the University of Guelph, where his master's thesis was on AI expert system technologies. He regularly posts about AI, machine learning, and deep learning on his blog: http://bennycheung.github.io