Artificial intelligence (AI) plays an increasing role in our daily lives. Computers are being trained to do many things, including making medical diagnoses. For example, AI can diagnose skin cancer from skin images as reliably as dermatologists, and this clever software is only going to get better. But how do we know whether a diagnosis we get is accurate? If we are given it by a human doctor, we can ask for an explanation. However, the most advanced AI systems do not just act according to pre-defined rules but continue to “learn”, and it may not be possible to explain to a patient how the computer reached its diagnosis.
If you were given a diagnosis by a computer and were given the choice, would you always prefer an explanation of how the computer reached its diagnosis, even if that meant the diagnosis was likely to be a little less accurate?
There are many ethical questions like this arising from advances in AI. How should such questions be answered? Is it the job of independent experts, politicians, or public bodies like the Information Commissioner’s Office? Should the public have a say, and if so, whose voices should be heard, and how?
Dr Malcolm Oswald, Director of Citizens Juries c.i.c. and Honorary Research Fellow in Law at the University of Manchester, will explore these questions on 12th December. He will describe how “citizens’ juries” are to be used to explore what the public think about how AI performance and AI “explainability” should be balanced. The results of the two citizens’ juries, to be held in February 2019 and commissioned by the University of Manchester and the Information Commissioner’s Office, will feed directly into the Information Commissioner’s Office’s national guidance.
The talk will start at 6.45, with plenty of time for your questions and for you to share your thinking on this issue. We hope you can come along.