How to get started: Your first actions toward Trusted AI

Building trust in AI is crucial as it integrates into business. Let's move toward Trusted AI now.

As artificial intelligence becomes a fundamental part of organizational processes, building a strong foundation of trust in these AI systems is increasingly important. This blog post offers a comprehensive guide on the key steps necessary to build trust in your AI initiatives, covering everything from initial analysis to continuous enhancement.

What is trusted AI?

Trusted AI refers to artificial intelligence systems that are designed and operated in a manner that is ethical, transparent, reliable and respectful of user privacy and human rights. The EU Commission’s ethical guidelines for trustworthy AI include the following key elements:

  • Lawful: AI must respect all applicable laws and regulations. This includes adhering to privacy laws such as the General Data Protection Regulation (GDPR) as well as other sector-specific regulations.
  • Ethical: AI should adhere to ethical principles and values. This means ensuring fairness, transparency and accountability in AI systems.
  • Robust: AI should be robust from both a technical and a social perspective. This involves ensuring that AI systems are secure, reliable and resilient to errors and attempts at manipulation.

Steps towards trusted AI

The following steps will guide you through the foundational actions necessary to build and maintain trust in your AI initiatives, from analyzing your current setup to continuous monitoring and improvement.

  1. Analyze current and planned AI portfolio
    Begin by conducting a comprehensive analysis of your current AI setup. This involves assessing existing data sources, algorithms and infrastructure. Understand how and where AI is currently used in your organization, the accuracy and efficiency of these systems and any potential biases or ethical concerns they may be subject to. Identifying strengths, weaknesses and areas for improvement will create a solid foundation for building a more trusted AI environment.
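As a minimal sketch of what such a portfolio analysis might start from (the system names, fields and risk criterion below are invented for illustration), a structured record per AI system makes it easy to flag candidates for closer review:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    purpose: str
    data_sources: list
    processes_personal_data: bool
    bias_review_done: bool

def needs_attention(system: AISystem) -> bool:
    # Flag systems that process personal data but have no bias review yet.
    return system.processes_personal_data and not system.bias_review_done

# Illustrative portfolio entries, not real systems
portfolio = [
    AISystem("churn-predictor", "predict customer churn",
             ["CRM records", "support tickets"], True, False),
    AISystem("doc-classifier", "route internal documents",
             ["document archive"], False, False),
]

flagged = [s.name for s in portfolio if needs_attention(s)]
```

In practice the inventory would capture more dimensions (accuracy, owner, regulatory scope), but even a simple structure like this turns a vague "where do we use AI?" question into an auditable list.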

  2. Establishing AI governance
    Establishing strong AI governance is crucial. This step involves creating policies and guidelines that dictate how AI should be developed, used and maintained in your organization. Effective governance ensures accountability and compliance with laws and ethical standards, and aligns AI strategies with your organization's values and objectives. It also involves setting up a framework for decision-making and oversight, including the roles and responsibilities of different stakeholders.

  3. Building a multidisciplinary team 
    Trusted AI is not just a technical challenge; it requires a holistic approach. Assemble a multidisciplinary team that consists of data scientists, ethicists, legal experts and business leaders. Collaboration among these diverse professionals will foster innovation and responsible AI development while maximizing business benefits.

  4. Implementing transparency and explainability
    Transparency and explainability are key pillars in building trust in AI systems. Implementing these principles means ensuring that stakeholders can understand and trust the processes and outcomes of AI applications:
    1. Transparency involves openly communicating about the design, development and deployment processes of AI systems, including the data sources, algorithms used and decision-making criteria.
    2. Explainability goes a step further, focusing on making AI decisions understandable to end-users and stakeholders. This involves developing AI models that are interpretable and provide clear, comprehensible explanations for their outputs, especially in high-stakes areas such as healthcare or finance.
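To make explainability concrete, here is a deliberately simple sketch: a linear scoring model whose output can be decomposed into per-feature contributions. The features and weights are invented for illustration and do not represent a real scoring model:

```python
# Illustrative weights for a transparent, interpretable scoring model
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score(applicant: dict) -> float:
    # Overall score is a weighted sum of the applicant's features.
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    # Each feature's contribution to the final score, for the end user.
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
contributions = explain(applicant)
# e.g. the "debt" entry shows how much debt lowered the score
```

Because the contributions sum exactly to the score, a stakeholder can see not only the decision but why it was made, which is the essence of an interpretable model.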

  5. Prioritizing privacy and security
    Privacy and security are critical concerns in the realm of AI, necessitating their prioritization in every AI initiative. In an era where AI systems process vast amounts of data, including sensitive personal information, ensuring the privacy and security of this data is paramount. Companies must implement strong data governance practices to manage data throughout its lifecycle, from collection to disposal. By prioritizing privacy and security, companies not only protect their customers and their own reputations but also ensure the long-term sustainability and acceptability of their AI systems. 
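One common building block of such data governance is pseudonymization: replacing direct identifiers before data is used for analysis. The sketch below uses a salted hash for this; the salt handling is deliberately simplified (a real system would manage secrets properly), and note that pseudonymized data generally still counts as personal data under the GDPR:

```python
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    # Replace a direct identifier with a salted hash: records remain
    # linkable for analysis, but the identifier itself is not exposed.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

record = {"customer_id": "C-1042", "spend": 199.0}
safe_record = {
    **record,
    "customer_id": pseudonymize(record["customer_id"], salt="s3cret"),
}
```

The same input always maps to the same hash, so analysts can still join records belonging to one customer without ever seeing the raw identifier.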

  6. Continuous monitoring and iterative improvement 
    Finally, Trusted AI is not a one-time achievement but an ongoing process. Continuous monitoring of AI systems is necessary to ensure that they function as intended and do not deviate from their ethical guidelines. This includes regular checks for biases, inaccuracies, and unintended consequences. Feedback from users and stakeholders should be actively sought and used to make iterative improvements to the AI systems. This approach ensures that AI systems remain reliable, effective and ethical in the long run, adapting to new challenges and evolving societal norms.
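A regular bias check can be as simple as comparing decision rates across groups and raising an alert when the gap exceeds a threshold. The decision logs and the 0.2 threshold below are invented for illustration; real monitoring would use fairness metrics and thresholds appropriate to your domain:

```python
def positive_rate(outcomes):
    # Share of positive (e.g. "approved") decisions in a group.
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a, group_b):
    # Absolute difference in positive-decision rates between two groups.
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Illustrative decision logs (1 = approved, 0 = rejected)
group_a = [1, 1, 0, 1, 1, 1, 1, 0]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved

THRESHOLD = 0.2  # alert threshold, chosen for illustration
alert = parity_gap(group_a, group_b) > THRESHOLD
```

Run against live decision logs on a schedule, a check like this turns the abstract commitment to "regular checks for biases" into a measurable, repeatable control.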

Additionally, documenting AI processes and decisions as well as providing training and support for users are essential for making AI systems more transparent and explainable.

Wrap up

In conclusion, the journey towards Trusted AI in companies is multifaceted and ongoing, requiring a strategic and thoughtful approach. By analyzing your AI portfolio, establishing a robust AI governance framework, building a multidisciplinary team, implementing transparency and explainability, prioritizing privacy and security, and committing to continuous monitoring and iterative improvement, organizations can realize the full potential of AI while maintaining ethical integrity and public trust. These steps are not just a blueprint for risk mitigation but a pathway to fostering innovative, responsible AI practices that align with both corporate values and societal expectations.

Bobby Zarkov

Partner, Financial Services

KPMG Switzerland

Jan Bieser

Expert, Digital Innovation

KPMG Switzerland