Testing for Trust: A New Method for Building End User Reliance in AI/ML models


Details
Hey team, we are bringing a timely and valuable topic to this meetup.
There is a fundamental difference between AI and ML that requires us to shift how we think about our testing practices. When someone is going to rely on your model to share in or take on a decision, testing becomes our principal way of building trust, but only if we can link it to the qualities of reliability that actually matter. In this talk, we will explore what it means to identify the qualities a model must demonstrate before it can take on a decision, how to test those qualities and leverage those tests to build trust, and why trust is the essential ingredient for making an impact with AI.
Speaker:
Joshua Fourie is one of the founders of Decoded.AI, an AI research lab specialising in developing and applying AI risk mitigation strategies and controls. Decoded.AI has received multiple grants from the Australian government as well as private equity investment to fund its work, and Josh has led research grants for defence initiatives.
Josh is a speaker, researcher and contributor in the AI community whose work explores a post-modern theory of AI development to build better AI systems, faster. Prior to Decoded.AI, Josh worked on building cryptographically secure AI systems for high-security applications and strict regulatory environments.