How to design ML Observability for high-risk AI use cases
Hosted by Big Data Demystified - Washington
Details
MLOps has simplified the baseline processes, making it easy to build models at scale today. But there has been little or no focus on ML acceptance: any AI/ML model can fail, models are not explainable by design, their use in production carries risk, and model auditing is complex. Deploying AI for mission-critical use cases requires additional layers, such as explainability, monitoring, auditability, data privacy, and risk mitigation, to ensure the AI solution is acceptable to all stakeholders.
Agenda:
- Introducing ML Observability.
- Using ML Observability for model monitoring, model explainability, and auditing.
- Designing the policy layers to manage model usage risk in ML Observability (see the sketch below).
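To make the agenda concrete, here is a minimal Python sketch of what an observability wrapper around a deployed model might look like: it captures inputs for monitoring, records a stand-in feature attribution for explainability, appends an audit record for every decision, and applies a confidence-threshold policy that routes risky predictions to human review. All names here (ObservedModel, toy_credit_model, the 0.8 threshold) are hypothetical illustrations for discussion, not Arya.ai's or AryaXAI's implementation.

```python
import json
import time
import uuid

class ObservedModel:
    """Hypothetical observability wrapper around any (features -> prediction, confidence) model."""

    def __init__(self, model, model_version, min_confidence=0.8):
        self.model = model
        self.model_version = model_version
        self.min_confidence = min_confidence  # policy layer: usage-risk threshold
        self.audit_log = []                   # in practice, an append-only audit store

    def predict(self, features):
        prediction, confidence = self.model(features)
        record = {
            "request_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": self.model_version,
            "features": features,             # monitoring: capture inputs for drift checks
            "prediction": prediction,
            "confidence": confidence,
            # explainability: a stand-in attribution; a real system would use
            # SHAP, integrated gradients, or a similar method
            "explanation": {k: round(v * 0.1, 3) for k, v in features.items()},
            # policy layer: low-confidence predictions are flagged for human review
            "action": "auto" if confidence >= self.min_confidence else "human_review",
        }
        self.audit_log.append(record)          # auditability: full decision trail
        return record

# Toy model: approves a loan when income comfortably covers the requested amount.
def toy_credit_model(features):
    score = min(1.0, features["income"] / (features["loan_amount"] * 3))
    return ("approve" if score >= 0.5 else "decline"), score

observed = ObservedModel(toy_credit_model, model_version="credit-v1.2")
result = observed.predict({"income": 90000, "loan_amount": 40000})
print(json.dumps(result, indent=2))
```

Running the example prints an audit record in which the 0.75-confidence approval is routed to human review rather than auto-approved, which is the kind of usage-risk policy the third agenda item covers.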
Lecturer: Vinay Kumar Sankarapu, Co-Founder and CEO of Arya.ai. He earned his Bachelor's and Master's in Mechanical Engineering at IIT Bombay. He started Arya.ai in 2013, along with Deekshith, while still in college. Vinay leads R&D for the AryaXAI product. He has written multiple guest articles on 'Responsible AI', 'AI usage risks in BFSIs', and 'AI governance frameworks'. He has given technical and industry presentations at conferences worldwide, including Nvidia GTC, ReWork, Cypher, Nasscom, and TEDx. He was the youngest member of the 'AI task force' set up by the Indian Ministry of Commerce and Industry in 2017 to provide inputs on policy and to support AI adoption as part of Industry 4.0. He was listed in Forbes Asia's 30 Under 30 in the technology category, and he represented India in the finals of the Worldcup Technology Challenge in 2015, among 54 other countries.
