What we're about

A meetup to LEARN machine learning. This is a hands-on meetup with both theory and practice, and time to learn and work together.

Join us on Slack: wlml.slack.com (https://join.slack.com/t/wlml/shared_invite/enQtNDE2NzAwNDU5MzY3LTc4MjI5MDEzNmVhOWJiNGQyNDBhOTYyMzVkOWU1ZDQ1YmEzYjg2MmJjMWFiMmUzYzg5YzhkNTI3YjM1MzQ0ZjE)

Check out previous sessions on YouTube (https://www.youtube.com/channel/UCkw-xtPTXWc0P23yHEBpmjg)

This meetup is not recommended for absolute beginners, as some level of math and programming is required.

If you are a doer and ready to learn, we are waiting for you :)

Come if:

- You want to learn
- You are not afraid of learning difficult stuff
- You want to see real examples
- You like to share your skills with others and be challenged
- You have some cool ML project you are working on and want to share it

Feel free to fill out this very short survey (https://goo.gl/FgKZAX) to help us prepare meetups catered to your interests!

Upcoming events (1)

Hands-on: Correcting social bias in AI training, strategies and techniques.

In this meetup, Stefan Van den Borre, data scientist at IBM, will explore the 'dangers of AI': bias, (lack of) explainability, and robustness issues. We will also explore the AI Fairness 360 toolkit and IBM's Trust & Transparency service as a simple way to introduce methods and algorithms that reduce social bias in ML. Hands-on examples will be available, so bring your laptop for maximum fun!

MORE INFO: As enterprises build and deploy artificial intelligence systems, it's important to understand the ethical considerations of our work. Ethics are not a separate business objective bolted on after an AI system has been deployed; they are part of business performance. Only by embedding ethical principles into AI applications and processes can we build systems that people can trust. As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust.

To encourage the adoption of AI, we must ensure it does not take on and amplify our biases, and knowing how an AI system arrives at an outcome is key to trust, particularly for enterprise AI. IBM Research has open-sourced AI Fairness 360 (http://aif360.mybluemix.net), a comprehensive open-source toolkit of metrics and algorithms to check for and mitigate unwanted bias in AI, to help the community engender trust in AI. IBM also launched its Trust & Transparency service as part of Watson OpenScale (https://www.ibm.com/cloud/watson-openscale). This service provides explanations of how AI decisions are made, and automatically detects and mitigates bias to produce fair, trusted outcomes.

About the speaker: Stefan Van den Borre has been active in data management for the biggest part of his career. He started as a database and data warehouse specialist, then moved to the big data world, where he worked with several big-data-inspired technologies. With this sound data engineering background, he is now slowly moving into data science territory. He is currently working as a technical specialist for IBM's data science product set.
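For a taste of what the hands-on part could look like, here is a minimal Python sketch using the open-source aif360 package: it measures disparate impact on the classic UCI Adult dataset and applies the Reweighing pre-processing algorithm to mitigate it. The choice of dataset and of 'sex' as the protected attribute is our illustrative assumption, not necessarily what the session will use, and AdultDataset() expects the raw UCI data files to have been downloaded beforehand.

    # Minimal bias check-and-mitigate sketch with AIF360 (pip install aif360).
    # Assumes the UCI Adult CSV files are already placed where aif360 expects
    # them; 'sex' as protected attribute is an illustrative choice only.
    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    dataset = AdultDataset()        # income > 50K is the favorable label
    privileged = [{'sex': 1}]       # group encodings follow AdultDataset defaults
    unprivileged = [{'sex': 0}]

    # Measure bias before mitigation: disparate impact far from 1.0
    # (or statistical parity difference far from 0.0) signals unwanted bias.
    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print('Disparate impact before:', metric.disparate_impact())

    # Mitigate with Reweighing, which reweights training examples so that
    # favorable outcomes become independent of the protected attribute.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    dataset_transf = rw.fit_transform(dataset)

    metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                             unprivileged_groups=unprivileged,
                                             privileged_groups=privileged)
    print('Disparate impact after:', metric_transf.disparate_impact())

Reweighing is just one of several mitigation algorithms in the toolkit; a disparate impact close to 1.0 after the transform indicates the reweighted data is less biased with respect to the chosen protected attribute.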

Past events (18)

Hands-On: Object Detection with OpenCV

DataCamp
