Trusted AI sounds quite romantic – but what is trust?

11-03-2024
Trust in AI – towards a possible definition and how to cultivate trust in AI systems.

Trust is crucial, as data quality and integrity are essential for viable AI systems. In the realm of data science, trust is a precarious balance between reliance and skepticism. This blog post proposes a potential definition of trust and outlines how it can be cultivated in AI systems.

Trusted AI sounds quite romantic – but what is trust?

In the realm of data science, the concept of trust is of vital importance, as data quality and integrity are decisive for creating viable AI (Artificial Intelligence) approaches. Yet, trust in artificial intelligence is not merely a technical matter; it is a deeply human one, resonating with almost romantic notions of reliability, understanding and confidence.

Trust – a potential definition

At its core, AI represents a paradox of trust. For humans, trust is woven into our social fabric and forms the basis for relationships, cooperation and progress [1]. When it comes to AI, however, trust becomes a precarious balance between reliance and skepticism. While the capabilities of new AI approaches are astonishing, many of these systems are inherently opaque: it is often not possible to explain exactly how an AI has arrived at a decision.

Trust in interactions between humans and AI (human-AI interaction) involves the belief that the AI will act in the user's best interest despite the risks involved. According to Jacovi et al. [2], trust presupposes the presence of risk: only because an unfavorable outcome is possible is there a foundation for trust between users and the AI tool.

Additionally, the authors discuss the concept of distrust, which arises when users perceive risk and aim to avoid unfavorable outcomes. It is important to understand that distrust is not merely the absence of trust; rather, it means holding negative beliefs about how the AI behaves. Both trust and distrust involve anticipating the AI's behavior, and the absence of one does not necessarily imply the presence of the other [3].

Trust in AI systems – how can it be cultivated?

Before attempting to increase trust in AI models themselves, trust in the underlying data must be established. Trust in data refers to confidence in its accuracy, reliability, and ethical correctness, which is crucial in a variety of domains such as finance and academia. One way to foster it is a concept from the field of human-computer interaction (HCI) called “provenance”. Provenance is closely related to trust and involves giving users the means to explore the lineage and history of data so they can judge its trustworthiness [4]. Understanding the provenance of data increases trust by providing transparency and accountability. Trusting the data used to train and validate an AI system is the first step towards building trust in the AI approach itself.
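To make provenance concrete, the sketch below shows how a data pipeline might record lineage so that users can inspect a dataset’s history before trusting it. This is a minimal illustration with hypothetical class and field names, not a standard API; production systems would typically rely on dedicated lineage tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One step in a dataset's history: where it came from and what was done."""
    source: str          # e.g. upstream system or file
    transformation: str  # e.g. "deduplicated", "joined with CRM extract"
    actor: str           # person or pipeline responsible
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class Dataset:
    name: str
    lineage: list[ProvenanceRecord] = field(default_factory=list)

    def record(self, source: str, transformation: str, actor: str) -> None:
        self.lineage.append(ProvenanceRecord(source, transformation, actor))

    def history(self) -> str:
        """Human-readable lineage so users can judge trustworthiness."""
        return "\n".join(
            f"{r.timestamp} | {r.actor} | {r.source} -> {r.transformation}"
            for r in self.lineage
        )

# Usage: every pipeline step leaves an auditable trace.
transactions = Dataset("transactions_2024")
transactions.record("core_banking_export.csv", "loaded raw extract", "etl_pipeline_v2")
transactions.record("transactions_2024", "removed duplicates and null amounts", "etl_pipeline_v2")
print(transactions.history())
```

The readable history produced at the end is exactly the kind of transparency that lets a user decide whether the data deserves their trust.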

Transparency and explainability are the cornerstones of a trustworthy AI solution. Users need to understand how AI algorithms work and the general rationale behind their decisions. Ethical considerations are paramount as well. Developers must navigate the ethical minefield and ensure that AI systems adhere to moral standards and respect human values.
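As one concrete illustration of explainability, the sketch below uses permutation feature importance from scikit-learn: it shuffles each input feature and measures how much the model’s accuracy degrades, revealing which features drive its decisions. The dataset and model are stand-ins chosen for brevity, and this is one technique among many rather than a complete explainability strategy.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops:
# features whose shuffling hurts most drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(X.columns, result.importances_mean), key=lambda pair: -pair[1]
)[:5]:
    print(f"{name}: {score:.3f}")
```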

Augmenting human judgment with AI insights holds great promise, yet skepticism towards AI remains, as algorithms lack a “human touch”. Establishing trust in AI-driven recommendations requires robust validation, transparency, and a human-centric approach. Building trust in AI inevitably presents several challenges. Biases and prejudices can be encoded into algorithms, perpetuating inequalities and eroding trust by generating biased results. Addressing these issues demands vigilance, accountability, and a commitment to fairness. These challenges can be overcome through the collaborative efforts of developers, regulators and users, paving the way for a future where trust in AI is not just a lofty ideal but a tangible reality.
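To give a minimal sense of what such vigilance can look like in practice, the sketch below computes a demographic parity difference, i.e. the gap in positive-outcome rates between two groups, on synthetic data. Both the data and the choice of metric are illustrative assumptions; a real fairness audit would consider several metrics and the context in which the system operates.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                 # protected attribute (0 or 1)
predictions = rng.random(1000) < (0.4 + 0.2 * group)  # biased toy predictions

rate_a = predictions[group == 0].mean()
rate_b = predictions[group == 1].mean()
parity_gap = abs(rate_a - rate_b)

print(f"positive rate, group A: {rate_a:.2f}")
print(f"positive rate, group B: {rate_b:.2f}")
print(f"demographic parity difference: {parity_gap:.2f}")
# A large gap flags potential bias that warrants investigation
# before the system's outputs can be trusted.
```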

Looking ahead, the future of trust in AI holds promise, possibility and potential. As technology evolves, so will our understanding of trust in AI. Emerging trends in AI research and development offer glimpses into a future where trust is not just a theoretical concept but a cornerstone of AI innovation.

In conclusion, fostering trust in AI may sound romantic, but its implications are far-reaching and profound. It is our task to ensure not only the technical quality of AI models, but also their ethical integrity and their accessibility to the people who use them.

[1] Misztal, Barbara. Trust in Modern Societies: The Search for the Bases of Social Order. John Wiley & Sons, 2013.
[2] Jacovi, Alon, et al. "Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI." Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021.
[3] McLeod, Carolyn. "Trust." The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Metaphysics Research Lab, Stanford University, 2006.
[4] Buneman, Peter, Sanjeev Khanna, and Wang-Chiew Tan. "Why and Where: A Characterization of Data Provenance." Database Theory – ICDT 2001: 8th International Conference, London, UK, January 4–6, 2001, Proceedings. Springer Berlin Heidelberg, 2001.

Thierry Kellerhals

Director, Financial Services, Digital Innovation

KPMG Switzerland

Isabel Piljek

Expert, Data Science

KPMG Switzerland