AI Night - Azure Machine Learning (Boelman) & Ethics & Risks of AI (Bugnion)
Public group

Erasmushogeschool Brussel

Quai de l'Industrie 170 · Anderlecht

How to find us


What we do

Practical info:

Welcome is from 6PM onwards
First session will start at 6:30PM


There is free parking behind the building

Public Transport:
There is a metro stop (Delacroix) a 5-minute walk from the venue.
Train station Midi/Zuid is a 20-minute walk away.


Come and listen to two experienced speakers. Both are Cloud Advocates for Microsoft, and don't hesitate to ask them questions about any AI or development topic. They know what they are talking about.


Henk Boelman (Cloud Advocate @ Microsoft, the Netherlands) @HBoelman
Getting started with Azure Machine Learning services
With machine learning increasingly becoming an engineering problem, the need to track experiments, collaborate, and easily deploy ML models with integrated CI/CD tooling is more relevant than ever.

In this session we take a deep dive into Azure Machine Learning service, a cloud service you can use to build, train, deploy, and manage models. We zoom in on the building blocks provided and show, through some demos, how to use them.

By the end of this session you will have a good grasp of the technological building blocks of Azure Machine Learning service, ready to be used in your own projects.

Laurent Bugnion (Cloud Advocate @ Microsoft, Switzerland) @LBugnion
The ethical implications and risks of Artificial Intelligence and Deep Learning
There is no question that Artificial Intelligence and Deep Learning will play an important role in the future (and the present!) of humanity.

Taking advantage of faster and faster computers and larger and larger databases, we are able to run very complex algorithms against humongous amounts of data. This allows the creation of tools that can help us in complex areas of our lives. From autonomous vehicles to image and speech recognition, from assisting impaired humans to saving lives in critical situations, from inspecting industrial installations to sending machines into deep space or deep waters, the possibilities are amazing.

But this power comes with great responsibility. How do we take steps to minimize flaws in the data we use for our models? How do we build machines that act for the greater good? What are the risks? In this session, Laurent Bugnion, a Senior Cloud Advocate for Microsoft, will talk about what could happen, and what we can do to try to prevent it.