Past Meetup

The impact of bias on society and how to fix it with AI.


99 people went

Johnny River

Joan Muyskenweg 22 · Amsterdam

How to find us

It's in the black building named 'Johnny River'. You can find the main entrance in front of the 'Van der Valk' hotel. Our employees are there to welcome you and let you in.



Once more, we are getting together to talk about Artificial Intelligence. This time, though, we will focus on how AI-powered systems can affect society due to problems with biased data. Our goal is to bring general awareness together with concrete examples!

We hope to see many of you at our second Toon Tech Talks.

== Talk #1 ==

Technology for everyone, by Marion Mulder

We all see the great potential AI is bringing us. But is it really bringing it to everyone? How are we ensuring under-represented groups are included and vulnerable people are protected? What do we do when our technology is unintentionally biased and discriminates against certain groups? And what if the data and the AI are correct, but the side effect is that some groups are put at risk?

All questions we need to think about when we are advancing technology for the benefit of humanity.

Marion will be sharing what she’s learned with you.

#AIethics #ResponsibleData #ResponsibleAI

= About Marion =

Marion Mulder (MuldiMedia) helps organisations leverage digital technology, especially Chatbots, Voice Assistants, AI & AR, for optimal Customer Service*.

Marion has a background in intranet, digital workplace, app, and customer service portal development, and works as a Product Manager, consultant, and flow designer.

Besides making digital technology work for you, Marion has a passion for diversity and inclusion. She worked as a diversity manager for ING globally and is a co-founder and board member of Workplace Pride, a foundation for LGBTI inclusion at work.

* Customer Service = how we serve our customers, not just the help-desk!

== Talk #2 ==

A Practical Approach to Remove Bias from Word Embeddings, by Wilder Rodrigues

The spread of Artificial Intelligence, mostly in the form of representation learning, has introduced an issue that, at first glance, seems difficult to fix: bias.

Over the last few years, more and more examples have emerged: image classifiers mistaking people for animals, chatbots turning racist, and machine learning solutions amplifying gender segregation, chiefly in recruitment and translation services.

In this talk, we will see the impact bias has on people and how to fix it without having to dive deep into the data and remove it manually.
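To give a flavour of what "fixing bias without manually scrubbing the data" can mean: one well-known approach (from Bolukbasi et al., not necessarily the exact method Wilder will present) removes the component of each word vector that lies along a learned bias direction. A minimal NumPy sketch, with a toy 3-dimensional embedding standing in for a real embedding space:

```python
import numpy as np

def neutralize(vectors, bias_direction):
    """Remove each embedding's component along the bias direction
    (the 'neutralize' step of hard debiasing)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    # Subtract each vector's projection onto the unit bias direction.
    return vectors - np.outer(vectors @ b, b)

# Toy example: in practice the bias direction would be derived from
# word-pair differences such as vec("he") - vec("she").
bias_direction = np.array([1.0, 0.0, 0.0])
words = np.array([[0.8, 0.3, 0.5],    # a profession word leaning one way
                  [-0.6, 0.4, 0.2]])  # another leaning the other way
neutral = neutralize(words, bias_direction)
# Both vectors now have zero component along the bias axis.
```

The appeal of this approach is exactly what the talk description promises: it operates on the learned embeddings themselves, so no one has to dive into the training corpus and clean it by hand.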

= About Wilder =

Wilder Rodrigues works as a Machine Learning Engineer at Quby, the company behind the Toon Smart Thermostat. When he is not spending time with his family, he is probably studying/teaching something about AI or trying to come up with new algorithms or architectures for Deep Learning.

He is also a City AI Ambassador, an IBM Watson AI XPRIZE Contestant, a Dean of the School of AI community and Committer and PMC member of the Apache Software Foundation. He was a guest attendee at AI for Good Global Summit at the United Nations.

If you know Keras, you should probably have a look at the activation function he created: the SineReLU.
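For the curious, a rough NumPy sketch of what SineReLU computes, as described in Wilder's own write-ups: it behaves like ReLU for positive inputs, but replaces the flat zero region with a small sinusoidal wave so gradients never die. The epsilon default below is an illustrative assumption, not copied from any Keras release:

```python
import numpy as np

def sine_relu(x, epsilon=0.0025):
    """SineReLU activation: identity for x > 0, and a small wave
    epsilon * (sin(x) - cos(x)) for x <= 0, which keeps gradients
    non-zero where plain ReLU would be flat."""
    return np.where(x > 0, x, epsilon * (np.sin(x) - np.cos(x)))

x = np.array([-2.0, -0.5, 0.0, 1.5])
y = sine_relu(x)
```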

== Agenda ==

18:00 - Doors open; drinks and snacks.
18:30 - Introduction to Quby.
18:40 - First talk, by Marion Mulder.
19:10 - Break for refreshments.
19:30 - Second talk, by Wilder Rodrigues.
20:00 - Networking with drinks until 20:30.

RSVP will close on Thursday 27 September at 15:00.