Dr Raymond Sheh, Intelligent Robots Group, Department of Computing, Curtin University
Machine Learning (ML) in general, and Deep Learning in particular, have seen incredible advances in predictive accuracy and task complexity over the last few years. As these advances find their way into more mission-, safety- and socially critical applications, there is an increasing demand for these systems to be explainable, transparent and accountable. But what does this mean? In this talk, we will discuss what it means for an AI system to be explainable, the differing definitions and requirements of various applications, and the capabilities of various AI techniques to meet these requirements. We will also discuss how recent changes in government regulations worldwide and in social sentiment might affect the way we design our AI systems, now and in the future.
Dr Sheh is a Senior Lecturer at Curtin University and has been working in AI research since 2002. His research area is trusted autonomous systems, with a focus on performance testing and explainability. His active collaborators include the US National Institute of Standards and Technology (NIST), the US Naval Research Laboratory (NRL) and the University of New South Wales (UNSW). His work is currently funded by NIST, the US Air Force Office of Scientific Research (AFOSR) and the Japan Science and Technology Agency (JST).