- In AI systems, reliability, explainability, transparency and accountability are of great importance, particularly with regard to external impact and liability issues.
- The Ethics Guidelines for Trustworthy AI, commissioned by the European Commission, set out the basic principles and core requirements as a framework.
- The Institute of Public Auditors in Germany (IDW) is developing the auditing standard IDW PS 861 for the audit of AI systems.
- Explainable AI (XAI) is an important field of research concerned with developing AI systems whose decisions and processes can be made understandable and traceable.
The use of artificial intelligence (AI) has brought benefits and successes in many areas. At the same time, however, risks and challenges are becoming apparent. Although AI systems can optimise or redesign business processes, in sensitive use cases their performance is not the only concern: aspects such as reliability, explainability, transparency and accountability are equally important, especially for external impact and liability issues.