
The AI journey – is there an overview?

07-05-2024
Commentary on the Stanford AI Index report: Key takeaways, balancing the opportunities and risks of AI, and exploring evolving technology trends.

We are inundated with use cases for Artificial Intelligence (AI). Every company is now exploring and showcasing what it can achieve with the help of AI. This often involves improving the client experience, achieving efficiency gains and overcoming the challenges of limited internal resources.

As always, new technologies require a balanced view of opportunities and risks. In principle this is business as usual; however, given the huge technological push behind AI, making that assessment properly can be difficult.

Stanford University, with the help of key partners, has published the seventh edition of its Artificial Intelligence Index Report[1]. The report offers a comprehensive look at where AI stands today and where its future lies, drawing on in-depth academic research and contributions from business partners. This year, its scope has been broadened to cover key trends such as technical advances in AI, public perceptions of the technology and the geopolitical dynamics surrounding its development.

The report highlights the following top 10 takeaways:

  1. AI beats humans at some tasks, but not at all of them.
  2. Industry continues to dominate frontier AI research.
  3. Frontier models get much more expensive.
  4. The United States outpaces China, the EU and the UK as the leading source of top AI models.
  5. There is a serious lack of robust and standardized assessments of LLM accountability.
  6. Investments into generative AI are skyrocketing.
  7. The data is in: AI makes workers more productive and leads to higher-quality work.
  8. Scientific progress accelerates even further, thanks to AI.
  9. The number of AI regulations in the United States is sharply increasing.
  10. People across the globe have become more aware of AI’s potential impact—and more nervous.

Let’s focus on responsible AI, which is getting a lot of attention with the new EU AI Act. The Stanford report contains a full chapter on this topic, providing insights into the risks arising from the use of AI. As the report states, 123 incidents were reported in 2023, a 32.3% increase from 2022. Since 2013, AI incidents have increased more than twenty-fold. The continuous rise in reported incidents likely arises from both greater integration of AI into real-world applications and heightened awareness of its potential for ethical misuse. However, it is important to note that as awareness grows, incident tracking and reporting also improve, which suggests that earlier incidents may have been underreported.

The risk survey conducted as part of the 2024 index report shows that data governance and privacy are identified as key risks, especially in Europe and Asia. In the US, these risks are perceived as less serious. The report contains many more details that are well worth exploring.

In our practice, we use the responsible AI model intensively ourselves, not only to help businesses but also to review our own AI tools and developments. As you can imagine, our services and solutions are significantly affected by AI. Audit, tax and advisory services will change, and we have only just begun this journey ourselves. Clearly, the human touch remains essential to stay successful and to manage this journey properly.

[1] Stanford University, Human-Centered Artificial Intelligence, Artificial Intelligence Index Report 2024

Prof. Dr. Rob Fijneman

Partner, Audit

KPMG Switzerland