IBM Cloud Data Science & AI #1 (CIMON / Trust & Explainability of AI)

IBM Germany GmbH

Beim Strohhause 17 · Hamburg

How to find us

If you're standing in front of the building, just ring the bell at the door on the left next to the two revolving doors, then head up to the first floor on foot (or by lift ;), go left through the door there, and check in at reception.

Details

Location:
IBM Hamburg
https://goo.gl/maps/xUs2YYktS6N2

Opens at:
6:30 pm

Agenda:
+ Networking
+ Small intro (Starts at 7 pm)
+ Tech Talk 1
CIMON – the first autonomously flying robot jointly created by Airbus, DLR and IBM – by Sophie Richter-Mendau, AI Specialist @ IBM

In 2018, German astronaut Alexander Gerst completed his second six-month mission to the International Space Station (ISS), serving as station commander in the second half of his stay. On this mission, Gerst had the chance to work with CIMON (Crew Interactive Mobile Companion): the five-kilogram, 3D-printed, medicine-ball-sized robot is the first astronaut assistant in space to be equipped with artificial intelligence and is a technology experiment for human-machine interaction in a cosmic setting.
In this talk, you will learn more about the first autonomously flying robot jointly created by Airbus, DLR and IBM and how AI technology is entering human spaceflight.

+ Break & Networking

+ Tech Talk 2
Trust and Explainability of AI algorithms – It’s time to start breaking open the black box of AI – by David Mesterhazy, Data Science Expert @ IBM

Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at systematic advantage and certain unprivileged groups at systematic disadvantage. Bias in training data, due to either prejudice in labels or under-/over-sampling, yields models with unwanted bias.
AI Fairness 360 (AIF360) is a comprehensive open-source toolkit of metrics for checking datasets and machine learning models for unwanted bias, together with state-of-the-art algorithms to mitigate such bias. We invite you to use it and contribute to it to help engender trust in AI and make the world more equitable for all.
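
If you want to experiment before the talk, here is a minimal sketch of what checking a toy dataset for bias and re-weighting it with AIF360 can look like. The column names, group encodings and the choice of the Reweighing algorithm are illustrative assumptions for this example, not details from David's talk.

# Minimal AIF360 sketch: measure bias in a toy dataset, then mitigate it by reweighing.
# The columns ('sex', 'age', 'income') and the group encodings are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group), 'income' is the label.
df = pd.DataFrame({
    "sex":    [1, 1, 1, 0, 0, 0, 1, 0],
    "age":    [34, 45, 29, 41, 38, 52, 27, 33],
    "income": [1, 1, 0, 0, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["income"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Metrics that flag unwanted bias in the raw data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# One of the toolkit's pre-processing mitigation algorithms: reweigh the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighed = rw.fit_transform(dataset)
print("Instance weights after reweighing:", dataset_reweighed.instance_weights)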

Looking forward to meeting you at this event 🙂

Sophie, David, Jochen