
      Highlights:

• For AI systems, reliability, explainability, transparency and responsibility are of great importance, particularly with regard to external impact and liability.
• The ethics guidelines for trustworthy AI commissioned by the European Commission provide a framework of three basic principles and seven core requirements.
• The Institute of Public Auditors in Germany is developing the auditing standard IDW PS 861 for the audit of AI systems.
• Explainable AI is an important field of research concerned with developing AI systems whose decisions and processes are understandable to humans.

The use of artificial intelligence (AI) has brought benefits and successes in many areas; at the same time, risks and challenges are becoming apparent. While AI systems can optimise or even redesign business processes, performance is not the only thing that matters in sensitive use cases. Aspects such as reliability, explainability, transparency and responsibility are just as important, particularly with regard to external impact and liability.

      Ethical guidelines for trustworthy AI

The European Commission has set up a High-Level Expert Group on Artificial Intelligence. The group recently published ethics guidelines in which it derives seven core requirements for trustworthy AI from three basic principles:

      The basic principles:

1. It should be lawful, complying with all applicable laws and regulations.
2. It should be ethical, ensuring adherence to ethical principles and values.
3. It should be robust, both technically and socially, since AI systems can cause unintended harm even when based on good intentions.

      The core requirements:

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability

Implementing these requirements calls for closer cooperation between government, companies, research institutions and society, as well as regulation and monitoring of AI systems.

      Audit standards for AI systems

As artificial intelligence becomes more widespread, the auditing and assessment of AI systems is gaining importance as part of risk management. Against this background, the Institute of Public Auditors in Germany (IDW) is developing the auditing standard IDW PS 861, which builds on the International Standard on Assurance Engagements (ISAE) 3000 (Revised) and addresses the audit of AI systems.

In addition, the Artificial Intelligence Risk Management Framework offers an international approach to assessing and improving the trustworthiness of artificial intelligence. The framework comprises three areas (governance and ethics, risk assessment and management, and transparency) and is designed to help companies and organisations identify and manage the risks associated with AI systems.

To ensure the security of AI systems in the cloud, the German Federal Office for Information Security (BSI) has published the AI Cloud Service Compliance Criteria Catalogue (AIC4). This catalogue defines security requirements for AI cloud services and is intended to help companies select suitable cloud providers and implement appropriate security measures.

      Explainable AI (XAI)

Explainable AI is an important area of artificial intelligence that focuses on creating systems whose decisions and processes humans can understand. When decisions are explainable, users can better follow the behaviour of AI systems and develop trust in the decisions made. Many AI systems currently in use make complex decisions based on machine learning algorithms whose decision-making processes are difficult or impossible for humans to trace. This is problematic, because users need to be able to trust the decisions of AI systems in order to justify their use and foster acceptance.
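To make this concrete, the following sketch shows one widely used, model-agnostic explainability technique: permutation feature importance, as implemented in scikit-learn. The dataset and model are placeholders chosen for illustration; the article does not prescribe any particular method or library.

    # A minimal sketch of permutation feature importance: shuffle each
    # input feature in turn and measure how much the model's accuracy
    # drops. Large drops indicate features the model relies on heavily.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble model whose individual decisions are hard to inspect.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Estimate each feature's contribution on held-out data.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Report the five most influential features.
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

Output of this kind gives users and auditors a first, coarse answer to the question of what the model bases its decisions on, which is exactly the gap described above.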

Integrating explainable AI into a wide variety of systems can also help to maximise the potential of artificial intelligence while ensuring that its impact on society is positive. In medicine, for example, explainable AI can make diagnoses more accurate and easier to understand, enabling better treatments and outcomes for patients. In other application areas, the explainability of decisions and processes can help to avoid or reduce bias and discrimination by AI systems.

      Our solution: AI in Control

To minimise risks and enable the effective management of AI solutions, we have developed the "AI in Control" framework, which we are continuously expanding. The framework covers the management of risks and the control of AI solutions across the areas of company, solution & data, technology and project. In addition, we offer technical expertise and analysis tools for the comprehensive management and control of AI solutions that can be integrated into existing system environments. Our AI experts support customers in designing and implementing their own framework, tailored to their requirements, to make their individual AI solutions transparent and comprehensible.
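Purely as an illustration of what an area-based control structure can look like in practice (the actual "AI in Control" framework is not specified here, and all names, fields and entries below are hypothetical), such risks and controls could be captured in a simple register:

    # Hypothetical sketch of an area-based AI risk register, loosely
    # modelled on the four areas named above (company, solution & data,
    # technology, project). Not the actual "AI in Control" framework.
    from dataclasses import dataclass
    from enum import Enum

    class Area(Enum):
        COMPANY = "company"
        SOLUTION_AND_DATA = "solution & data"
        TECHNOLOGY = "technology"
        PROJECT = "project"

    @dataclass
    class Control:
        area: Area
        risk: str
        mitigation: str
        owner: str
        reviewed: bool = False

    register = [
        Control(Area.SOLUTION_AND_DATA, "bias in training data",
                "document data lineage and run fairness checks", "data steward"),
        Control(Area.TECHNOLOGY, "model drift in production",
                "monitor prediction distributions, define retraining triggers",
                "ML operations"),
    ]

    # Simple audit view: list each control with its review status.
    for control in register:
        status = "ok" if control.reviewed else "OPEN"
        print(f"[{status}] {control.area.value}: {control.risk} -> {control.mitigation}")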


      Your contact

      Andreas Steffens

      Director, Audit, Regulatory Advisory, Digital Process Compliance

      KPMG AG Wirtschaftsprüfungsgesellschaft