Explainable Artificial Intelligence (XAI) is a broad field of Artificial Intelligence (AI) concerned with the ability of AI systems not only to make decisions but also to explain them. As AI becomes ever more embedded in the decision making that affects our lives, we face a dangerous situation: we can no longer determine why crucial decisions are being made, verify that they are being made for the right reasons, or have any confidence in our ability to effectively detect and correct errors. The study of XAI develops solutions to these problems, ensuring that AI can be confidently deployed in critical systems.
Dr. Raymond Sheh is a Senior Lecturer at the Department of Computing, Curtin University, where he leads the Intelligent Robots Group, and a Guest Researcher at the Intelligent Systems Division of the U.S. National Institute of Standards and Technology (NIST). He specialises in Trusted Autonomous Systems, focusing on developing standard test methods for robots (trusted abilities), investigating new forms of explainable artificial intelligence (trusted decisions), and working at the interface between artificial intelligence and cyber security (trusted integrity). Current collaborators include NIST, the U.S. Naval Research Laboratory, the Japan Science and Technology Agency, and the Japan Atomic Energy Agency.
Venue: Curtin University, Building[masked]AB:EX
Free event, hosted by the IEEE Computer Society