Computer scientists at Harvard built an Artificial Intelligence (AI) system that could identify gang crimes. Science Magazine noted that this development in predictive policing has immense potential to "ignite an ethical firestorm." Yet, when asked about potential misuses, one researcher responded, "I'm just an engineer."
Do developers have an ethical responsibility when the tech they build ultimately becomes a tool of state violence? Should they have a say in how that tech is used once it's delivered to the client?
In this talk, I will examine how efforts to push tech forward are often built on statistical biases, and why the people who build these systems should care. I will also discuss what a code of ethics could look like when it is shaped by developers themselves.