How can a good ML Observability framework deliver acceptance and trust in mission-critical AI solutions?
AI used in mission-critical use cases – autonomous cars, financial institutions, healthcare and the like – carries
substantial financial and societal risk. It must address the requirements of all stakeholders:
customers, business owners, product owners, the Data Science team, risk managers and regulators.
Typical questions include: How did the model arrive at a given prediction? Was there bias? Did the team follow all
necessary steps before pushing the model into production? Can I provide satisfactory evidence of model
functioning to regulators? How can I audit my AI models? Was the model consistent in production? When should I
retrain my model? What caused an error in production?
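The retraining question, for example, is commonly answered with a data-drift metric such as the Population Stability Index (PSI). The sketch below is purely illustrative and not part of the session material; the function name and the rule-of-thumb thresholds in the comments are assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time (expected) and production (actual)
    sample of one feature. Common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 consider retraining."""
    # Bin edges come from the training distribution; widen the outer
    # edges so production values outside the training range still count.
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, flooring to avoid log(0)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature as seen at training time
prod = rng.normal(0.5, 1.0, 10_000)   # same feature, shifted in production
psi = population_stability_index(train, prod)
```

A monitoring job would compute this per feature on a schedule and raise an alert when the value crosses the chosen threshold.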
A good ML Observability framework can answer these questions at scale for businesses. In this session, you'll learn
more about the components of good ML Observability tools for mission-critical AI solutions and successful case
studies.