Ethics f(ai)ls

Details
How can we design a fairer future without visibility into the myriad and sometimes mundane ways new technologies can fall down? Our collective desire to avoid public calamities can lead to unhelpful information asymmetries. “Like a pony wearing blinkers, tech companies reproduce the same ethical blind spots across the entire industry through a process rooted in what DiMaggio and Powell (1983) call ‘institutional isomorphism.’”[1] Let’s add some visibility to these small, everyday product decisions, and the ways they may have succeeded and failed.
In this Chatham House Rule-style event, we’ll open up the floor to share stories, unpick root causes, and consider alternate endings for our technology projects. Ethics success stories are also welcome, and no f(ai)l is too small or too large to share!
https://www.chathamhouse.org/chatham-house-rule
Our focus will be on live products implementing Machine Learning and Data Science, but feel free to broaden the scope with related stories about security, privacy, consent or deployment.
We just ask that whatever you share is true, and that you avoid revealing any identifying details about the company or product you’re discussing.
For those who want to share a story anonymously for someone else in the group to read out on the night, we welcome submissions through this form: http://bit.ly/2NSbVid
[1] Jacob Metcalf, Emanuel Moss and danah boyd (2019), “Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics,” Data & Society. https://datasociety.net/wp-content/uploads/2019/09/Owning-Ethics-PDF-version-2.pdf