Why Building Trust in AI Begins in Development

by Sek Chai | Posted Apr 14, 2023

As AI becomes further integrated into all aspects of our lives, there is a strong need for us to be able to trust its decisions. Trust is central to our relationship with AI. That trust can be established when an AI system operates competently, interacts appropriately, and delivers reliably in an ethical manner.

Consider the latest advancements in Large Language Models (LLMs) such as ChatGPT. They can generate essays and hold natural-sounding conversations, among many other things. But can we trust them with work we would entrust to a human being? Many have been enamored by how LLMs can write fact-packed news articles and even pass standardized tests. However, many have also noted their failures on the most basic common-sense questions and prompts. While LLMs will continue to evolve and improve, the truth of the matter is that even if an LLM can give a more accurate medical summary than a human, we are still more likely to go with our own doctor's advice. Most of the population believes humans are generally more capable, empathetic, and responsive.

It is easy to evaluate an AI model based on its intended functionality. Measurable metrics such as accuracy and latency provide evidence that the AI model remains trustworthy. However, trust extends beyond what the model is trained and functionally able to do. Implicitly, trust also comes from the inherent expectation that the AI system behaves consistently with our moral beliefs, natural law, and our understanding of common sense. To that end, trust is a feeling of confidence that the AI system will operate dependably even in unknown situations.
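To make the functional side of this concrete, accuracy and latency can be measured with only a few lines of code. The sketch below is illustrative, not Latent AI tooling; `model_fn` and the toy parity task stand in for a real model and labeled dataset.

```python
import time

def evaluate(model_fn, samples):
    """Measure accuracy and mean per-sample latency for a prediction function.

    `model_fn` is any callable that maps an input to a prediction;
    `samples` is a list of (input, label) pairs.
    """
    correct = 0
    latencies = []
    for inputs, label in samples:
        start = time.perf_counter()
        prediction = model_fn(inputs)
        latencies.append(time.perf_counter() - start)
        correct += int(prediction == label)
    accuracy = correct / len(samples)
    mean_latency_ms = 1000 * sum(latencies) / len(latencies)
    return accuracy, mean_latency_ms

# Toy usage: a trivial "model" that predicts the parity of an integer.
samples = [(n, n % 2) for n in range(100)]
accuracy, latency_ms = evaluate(lambda n: n % 2, samples)
```

Metrics like these establish the measurable half of trust; the paragraphs above argue that the other half, behaving sensibly in unknown situations, cannot be captured by a test harness alone.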

At Latent AI, we advocate that building trust in an AI system begins in development, as early as the supply chain, and continues into the model's deployment. Trust must be established for both the developer and the operator, and that starts with establishing data and model provenance. We believe explainability, interpretability, and auditability are key factors in establishing trust in an AI model. Uncertainty about a model comes from a lack of understanding of, and expectations for, what the model is trained and functionally able to do. Our goal is to provide trusted tools for building trustworthy AI within an MLOps (Machine Learning Operations) framework that optimizes and secures the AI model. Our tools give developers options to choose operating points among accuracy, performance, and security.

For more information, visit latentai.com or contact info@latentai.com
