You’ve probably noticed that there are many definitions of AI today. One widely accepted definition comes from the Oxford Dictionary: “Artificial Intelligence (AI) is the capacity of computers or other machines to exhibit or simulate intelligent behaviour, and the field of study concerned with this.”
Regulatory frameworks, however, often use more technical definitions. The EU AI Act, for example, defines an AI system as: “A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
This definition can feel overwhelming – and for good reason. Under such wording, almost any modern automation solution could be classified as an AI system. This creates challenges for compliance and legal interpretation. For instance, when deploying a tool in a project, determining which regulations apply can become a complex exercise. The takeaway? Always go the extra mile to ensure your definitions are precise and aligned with the relevant context. Misalignment can lead to costly misunderstandings.
Author
Alexander Zagnetko
KPMG Global AI Initiative Coordinator
The Pillars of AI
AI systems rely on three fundamental components: algorithms (or the logic and models that enable learning and decision-making), data (the raw material from which patterns and insights are derived), and computing power (or the engine that makes large-scale processing possible).
While algorithms and data quality often receive the most attention, computing power is frequently underestimated. Yet it plays a critical role in enabling advanced AI capabilities. No doubt, the principle of “garbage in, garbage out” still applies to most automation systems – poor data leads to poor outcomes. However, without sufficient computational resources, even the best algorithms and datasets cannot deliver meaningful results.
How Are ML Systems Trained?
There are three primary approaches to ML training:
- Supervised learning: models learn from labelled examples that pair inputs with known correct outputs.
- Unsupervised learning: models discover structure, such as clusters or patterns, in unlabelled data on their own.
- Reinforcement learning: models learn by trial and error, guided by rewards and penalties for their actions.
These approaches to some extent resemble different parenting styles – each with its own strengths and limitations. In many cases, labelled data and the logic derived from it can be more valuable than all the physical assets a modern company owns. Complex solutions, such as Generative AI (GenAI), often leverage all three learning types at different stages of development.
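To make the distinction a little more tangible, here is a minimal sketch that trains a toy model under each of the three paradigms. The data, reward values, and parameter choices are invented for illustration, and real systems are vastly more complex.

```python
import random

# --- Supervised learning: labelled pairs (x, y) guide the model ---
# Fit a slope w so that y is roughly w * x, using least squares.
labelled = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]   # invented data
w = sum(x * y for x, y in labelled) / sum(x * x for x, _ in labelled)
print(f"Supervised: learned slope is about {w:.2f}")

# --- Unsupervised learning: no labels, the model finds structure itself ---
# One-dimensional 2-means clustering: split points into two groups.
points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]               # invented data
c1, c2 = min(points), max(points)                      # initial centroids
for _ in range(10):
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(f"Unsupervised: cluster centres near {c1:.1f} and {c2:.1f}")

# --- Reinforcement learning: actions are shaped by rewards over time ---
# A two-armed bandit: the agent learns which arm pays off more often.
true_payout = {"A": 0.3, "B": 0.7}                     # hidden from the agent
estimates, counts = {"A": 0.0, "B": 0.0}, {"A": 0, "B": 0}
for step in range(500):
    if random.random() < 0.1:                          # explore occasionally
        arm = random.choice(["A", "B"])
    else:                                              # otherwise exploit
        arm = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_payout[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
print(f"Reinforcement: preferred arm is {max(estimates, key=estimates.get)}")
```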
Generative and Agentic AI: Next Frontiers
GenAI remains one of the hottest topics in technology today. Models like ChatGPT are trained on massive datasets of text (or other content types). Large Language Models (LLMs) are trained to “understand” and generate human-like text by learning statistical patterns in language. At their core, these models operate by predicting the next token in a sequence, given the preceding context.
Unlike traditional rule-based software, modern AI systems are probabilistic, not deterministic. This means that even with identical inputs, the outputs can vary significantly because responses are generated based on probabilities.
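A minimal sketch of that idea: the toy “model” below assigns probabilities to a handful of candidate next tokens and samples one of them, so repeated runs with the same prompt can produce different continuations. The vocabulary and probabilities are invented; a real LLM computes such a distribution over tens of thousands of tokens using a neural network.

```python
import random

# Toy next-token distribution for the prompt "The audit report was".
# In a real LLM these probabilities come from a neural network, not a table.
next_token_probs = {
    "approved": 0.40,
    "delayed": 0.25,
    "thorough": 0.20,
    "unexpected": 0.15,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The audit report was"
for run in range(3):
    # Identical input, potentially different output on each run.
    print(f"Run {run + 1}: {prompt} {sample_next_token(next_token_probs)}")
```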
Another emerging trend is Agentic AI. Many organizations are already experimenting with it. In essence, AI agents are software entities that achieve goals by planning and taking actions. They use reasoning engines like LLMs and integrate with digital tools to act on your behalf. While automation has been around for years, the rise of GenAI and Agentic AI makes human-in-the-loop approaches more critical than ever. For core or mission-critical processes, subject matter experts must monitor, validate, and assess AI-driven outcomes responsibly.
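To make the “plan, act, keep a human in the loop” idea concrete, here is a deliberately simplified agent loop. The goal, the step list, and the approval prompt are invented placeholders; in a real system the planning step would call an LLM and the actions would call external tools or APIs.

```python
# A simplified agent loop with a human approval gate.
# In practice, plan_next_action would call an LLM and execute_action
# would invoke real tools (email, ticketing, databases, ...).

def plan_next_action(goal: str, done: list[str]) -> str | None:
    """Stand-in for an LLM planner: pick the next step toward the goal."""
    steps = ["collect input data", "draft summary", "send summary to client"]
    remaining = [s for s in steps if s not in done]
    return remaining[0] if remaining else None

def execute_action(action: str) -> None:
    print(f"  executing: {action}")

def run_agent(goal: str) -> None:
    done: list[str] = []
    while (action := plan_next_action(goal, done)) is not None:
        # Human-in-the-loop: a person validates each consequential step.
        answer = input(f"Agent proposes '{action}' for '{goal}'. Approve? [y/n] ")
        if answer.strip().lower() != "y":
            print("  step rejected; stopping for human review.")
            break
        execute_action(action)
        done.append(action)

if __name__ == "__main__":
    run_agent("prepare the monthly status summary")
```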
The Current State of AI and the AGI Debate
About 15 years ago, some visionaries predicted the emergence of Artificial General Intelligence (AGI) by 2040, followed by Artificial Superintelligence (ASI) soon after. However, today all existing AI systems remain narrow AI (ANI). They excel at specific tasks and can learn and improve within a defined domain, but they cannot autonomously expand their functional scope.
Training an AI model for even a relatively small task requires enormous amounts of data. In contrast, theoretical AGI would need to perform complex tasks with minimal data input. Most AI algorithms are conceptually simple, but achieving high productivity demands staggering computational resources – trillions of operations and exabytes of storage. Until around 2010, IT infrastructure was prohibitively expensive, making advanced AI development unattainable for most organizations.
Yet training an LLM from scratch remains out of reach for all but a handful of tech giants. Most organizations don’t actually “train” LLMs, even if they claim to – they rely on pre-trained models, fine-tuning only the outer layers while the core architecture remains a black box. The possibility of Artificial Superintelligence is still more a matter of philosophy than engineering. We lack a complete understanding of human consciousness, making it hard to predict when – or if – quantity will turn into quality.
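As a rough sketch of what “fine-tuning only the outer layers” looks like in practice, the snippet below freezes a small pre-trained language model and trains only a task-specific head on top of it. The model name, label set, and example texts are illustrative choices, not a prescription, and real fine-tuning pipelines differ in many details.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Illustrative only: a small pre-trained encoder stands in for the "core".
backbone = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Freeze the pre-trained core: its weights stay untouched, a black box.
for param in backbone.parameters():
    param.requires_grad = False

# Train only a small task-specific head on top of the frozen representation.
head = nn.Linear(backbone.config.hidden_size, 2)              # e.g. 2 classes
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

def classify(texts: list[str]) -> torch.Tensor:
    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():                                      # frozen backbone
        hidden = backbone(**inputs).last_hidden_state[:, 0]    # [CLS] embedding
    return head(hidden)

# One illustrative training step on made-up labelled examples.
logits = classify(["great service", "poor experience"])
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 0]))
loss.backward()
optimizer.step()
```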
It’s also important to keep expectations realistic. Just a year ago, some experts, including CEOs of major tech firms, predicted that by now 90% of all code would be generated by AI. Reality check: we’re nowhere near that level.
Challenges and Risks
The rapid rise of AI brings numerous challenges – social, professional, ethical, and even existential. Ensuring responsible AI use worldwide is extremely difficult. Companies often consider only profits, not consequences, while some regimes and malicious actors exploit AI for harmful purposes.
One critical issue is bias. Bias can stem from incomplete or non-representative data, flawed algorithms, or intentional manipulation. A related problem is hallucination: GenAI can produce outputs that sound confident and plausible but are factually incorrect. Hallucinations can be highly convincing because they often align with common assumptions. For example, many would accept a claim that Einstein won the Nobel Prize for the Theory of Relativity, since he is strongly associated with it – despite the fact that his Nobel was awarded for work on the photoelectric effect.
This is why healthy skepticism is essential when working with GenAI. Always assess AI outputs carefully. For routine tasks, weigh the time saved against the time needed for validation. For critical processes, reports, or deliverables, thorough checks are non-negotiable. In some cases, it’s better not to use AI at all – especially when generating content on topics you’re unfamiliar with and cannot easily verify.
Another major topic is adversarial AI. The AI boom has expanded the attack surface – the number of exploitable vulnerabilities – dramatically over the past five years. Here are four major types of adversarial attacks, with a simplified illustration of the first one after the list:
- Poisoning Attacks: Manipulate training data to produce incorrect results.
- Evasion Attacks: Alter input data to trick models into wrong outputs.
- Inference Attacks: Extract sensitive information from model outputs.
- Extraction Attacks: Reverse-engineer models to steal functionality.
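To make the first category less abstract, here is a deliberately simple sketch of label poisoning: a handful of mislabelled training examples is enough to flip a naive spam filter’s behaviour. The data, words, and scoring rule are toy inventions; real attacks target far larger datasets and far more sophisticated models.

```python
from collections import Counter

def train_word_scores(examples: list[tuple[str, str]]) -> dict[str, float]:
    """Toy 'model': score = (times a word appears in spam) - (times in ham)."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, label in examples:
        target = spam_counts if label == "spam" else ham_counts
        target.update(text.lower().split())
    return {w: spam_counts[w] - ham_counts[w]
            for w in set(spam_counts) | set(ham_counts)}

def is_spam(scores: dict[str, float], text: str) -> bool:
    return sum(scores.get(w, 0) for w in text.lower().split()) > 0

clean_data = [
    ("claim your free prize now", "spam"),
    ("win a prize today", "spam"),
    ("meeting agenda attached", "ham"),
    ("quarterly report draft", "ham"),
]

# Poisoning attack: the attacker injects mislabelled examples so the model
# learns that obvious spam wording is harmless.
poisoned_data = clean_data + [
    ("free prize prize prize", "ham"),
    ("claim free prize now now", "ham"),
    ("prize prize free claim", "ham"),
]

test_message = "claim your free prize"
clean_model = train_word_scores(clean_data)
poisoned_model = train_word_scores(poisoned_data)
print("Clean model flags it as spam:   ", is_spam(clean_model, test_message))
print("Poisoned model flags it as spam:", is_spam(poisoned_model, test_message))
```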
Currently, most companies lack comprehensive policies and solutions to identify and mitigate risks associated with adversarial AI, even in mature markets. Many organizations rely on third-party service providers when it comes to AI security and incident prevention. However, adversarial attacks are neither a buzzword nor a hypothetical threat – they occur daily, cost billions, and can potentially cripple businesses if not properly addressed.
For example, generative AI models powered by LLMs learn from diverse data sources across the internet. Malicious actors have already created thousands of resources specifically designed to poison these models with fabricated information, aiming to spread fake news, misinformation, and disinformation. This is why fact-checking and information validation are more critical today than ever before.
Certainly, not all AI risks involve malicious actors. Many stem from poor data quality, flawed model design, lack of expertise, inadequate supervision, regulatory uncertainty, and misuse – intentional or accidental. The guiding principle should be clear: AI must be designed and implemented with safeguards to prevent harm to people and property.
Key Takeaways for Safe and Effective AI Use
In our blog series prepared by Alexander Zagnetko, KPMG Global AI Initiative Coordinator, we provide an overview of frameworks and solutions designed to help organizations harness the full potential of modern AI – while ensuring its safe and responsible use.
Contact us
Should you require more information on how we can help your business, or wish to arrange a meeting for a personal presentation of our services, please contact us.