
Artificial Intelligence (AI) is no longer a distant concept – it’s already embedded in our daily lives. Most people interact with AI tools regularly, often without realizing it. Even if you think you don’t use AI, you likely do – almost every modern application or platform incorporates some level of AI capability.

You’ve probably noticed that there are many definitions of AI today. One widely accepted definition comes from the Oxford English Dictionary: “Artificial Intelligence (AI) is the capacity of computers or other machines to exhibit or simulate intelligent behaviour, and the field of study concerned with this.”

Regulatory frameworks, however, often use more technical definitions. For example, the EU AI Act defines an AI system as: “A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

      This definition can feel overwhelming – and for good reason. Under such wording, almost any modern automation solution could be classified as an AI system. This creates challenges for compliance and legal interpretation. For instance, when deploying a tool in a project, determining which regulations apply can become a complex exercise. The takeaway? Always go the extra mile to ensure your definitions are precise and aligned with the relevant context. Misalignment can lead to costly misunderstandings.

      Author


      Alexander Zagnetko

      KPMG Global AI Initiative Coordinator

      The Pillars of AI

AI systems rely on three fundamental components: algorithms (the logic and models that enable learning and decision-making), data (the raw material from which patterns and insights are derived), and computing power (the engine that makes large-scale processing possible).

While algorithms and data quality often receive the most attention, computing power is frequently underestimated. Yet, it plays a critical role in enabling advanced AI capabilities. No doubt, the principle of “garbage in, garbage out” still applies to most automation systems – poor data leads to poor outcomes. However, without sufficient computational resources, even the best algorithms and datasets cannot deliver strong results.




      How Are ML Systems Trained?


      Machine Learning (ML)

A subset of AI that has powered most of the recent breakthroughs. Interestingly, AI is not a new concept – the term was coined back in 1956 at the Dartmouth Conference. By the late 1950s, many foundational principles of ML had already been described in theory.


      Deep Learning

      Refers to ML techniques based on artificial neural networks (ANNs). These networks loosely mimic the structure of the human brain. We call it “deep” learning because these systems consist of multiple layers of interconnected nodes (artificial neurons) involved in the training process. The first prototype of such a network – the perceptron – was also created in the mid-20th century.


      There are three primary approaches to ML training:



      • Supervised Learning

        The model is trained on labelled data – datasets that include both inputs and correct outputs. Think of it like a teacher providing math problems with solutions so students can learn patterns and apply them to new problems.

      • Unsupervised Learning

        Here, the data has no labels. The model identifies patterns or structures on its own, such as grouping similar items together. It’s like giving students a box of puzzle pieces without the final picture and asking them to figure out how they fit.

      • Reinforcement Learning

        This method involves an agent interacting with an environment and learning through feedback – rewards or penalties. It’s similar to how children learn to ride a bike: falling, adjusting, and eventually mastering the skill through experience.

To some extent, these approaches resemble different parenting styles – each with its own strengths and limitations. In many cases, labelled data and the logic derived from it can be more valuable than all the physical assets a modern company owns. Complex solutions, such as Generative AI (GenAI), often leverage all three learning types at different stages of development.
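To make the three paradigms concrete, here is a minimal sketch in Python, using NumPy and scikit-learn on synthetic toy data. It is purely an illustration of the ideas above, not a production recipe.

```python
# Toy illustration of the three ML training paradigms (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))

# Supervised: inputs come with correct labels ("problems with solutions").
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # labels from a known rule
clf = LogisticRegression().fit(X, y)             # learn the input-to-label mapping
print("supervised prediction:", clf.predict([[1.0, 1.0]])[0])

# Unsupervised: no labels; the model groups similar points on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))

# Reinforcement: an agent learns from rewards. A two-armed bandit where arm 1
# secretly pays off more often; the agent discovers this by trial and error.
payout = [0.3, 0.7]                              # hidden reward probabilities
values, counts = np.zeros(2), np.zeros(2)
for _ in range(500):
    arm = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(values))
    reward = float(rng.random() < payout[arm])   # feedback from the environment
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running average
print("reinforcement value estimates:", values.round(2))
```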




      Generative and Agentic AI: Next Frontiers

GenAI remains one of the hottest topics in technology today. Models like those behind ChatGPT are trained on massive datasets of text (or other content types). Large Language Models (LLMs) learn statistical patterns in language in order to “understand” and generate human-like text. At their core, these models operate by predicting the next token in a sequence, given the preceding context.


      • Tokenization

        Text is broken down into discrete units called tokens. A token can be a word, subword, or even a character, depending on the model's design. 

      • Token Embeddings

        Each token is mapped to a high-dimensional vector, known as an embedding. These embeddings capture semantic and syntactic relationships between tokens, enabling the model to understand context and meaning.

      • Sequence Modeling

        Using deep neural networks – typically transformer architectures – the model processes the sequence of token embeddings to learn dependencies and patterns across the text.

      • Autoregressive Generation

        Text is generated one token at a time. For each step, the model predicts the most likely next token based on the previous ones. This continues until a predefined stop condition is met, such as a special end-of-sequence token or a maximum token limit.
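As a rough sketch of this pipeline, the snippet below tokenizes a prompt and then generates a continuation one token at a time. It uses the small GPT-2 model via the Hugging Face transformers library purely for illustration – any causal LLM behaves analogously – and greedy decoding for simplicity.

```python
# Sketch: tokenize a prompt, then generate greedily, one token at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Artificial intelligence is", return_tensors="pt").input_ids
print(tokenizer.convert_ids_to_tokens(ids[0].tolist()))  # step 1: tokenization

for _ in range(10):                        # step 4: autoregressive generation
    with torch.no_grad():
        logits = model(ids).logits         # steps 2-3: embeddings and sequence
                                           # modeling happen inside the model
    next_id = logits[0, -1].argmax()       # greedy: pick the most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```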


Unlike traditional rule-based software, modern AI systems are probabilistic, not deterministic. Even with identical inputs, the outputs can vary significantly, because each output token is sampled from a probability distribution rather than produced by a fixed rule.
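Here is a tiny illustration of that variability, assuming a made-up next-token distribution: scores are converted to probabilities and a token is sampled, so repeated runs on the same input can yield different continuations. The temperature parameter, common in practice, controls how flat that distribution is.

```python
# Why identical inputs can yield different outputs: tokens are sampled.
import numpy as np

rng = np.random.default_rng()
tokens = ["reliable", "probabilistic", "surprising"]
logits = np.array([2.0, 1.5, 0.5])         # hypothetical next-token scores

def sample(logits, temperature=1.0):
    scaled = logits / temperature           # higher temperature -> flatter
    probs = np.exp(scaled - scaled.max())   # softmax (numerically stable)
    return rng.choice(tokens, p=probs / probs.sum())

print([sample(logits) for _ in range(5)])   # varies from run to run
```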

      Another emerging trend is Agentic AI. Many organizations are already experimenting with it. In essence, AI agents are software entities that achieve goals by planning and taking actions. They use reasoning engines like LLMs and integrate with digital tools to act on your behalf. While automation has been around for years, the rise of GenAI and Agentic AI makes human-in-the-loop approaches more critical than ever. For core or mission-critical processes, subject matter experts must monitor, validate, and assess AI-driven outcomes responsibly.
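The sketch below shows the general shape of such an agent loop in deliberately simplified form. The call_llm function, the tool registry, and the “FINAL:” protocol are hypothetical stand-ins, not any vendor’s API; the input() prompt plays the role of a human-in-the-loop approval gate.

```python
# Hypothetical agent loop: plan with an LLM, call tools, keep a human in the loop.
from typing import Callable

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; a deployed agent would query a model here.
    return "FINAL: demo answer (no model attached)"

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"top result for {query!r}",  # stub tool
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        # The reasoning engine proposes either a tool call or a final answer.
        step = call_llm(context + "\nReply 'tool: input' or 'FINAL: answer'.")
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        tool, _, arg = (s.strip() for s in step.partition(":"))
        # Human-in-the-loop: critical actions require explicit approval.
        if input(f"Approve {tool}({arg})? [y/N] ").lower() != "y":
            return "Stopped by human reviewer."
        context += f"\nObservation: {TOOLS[tool](arg)}"
    return "Step limit reached."

print(run_agent("Summarize today's AI news"))
```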



      The Current State of AI and the AGI Debate

About 15 years ago, some visionaries predicted the emergence of Artificial General Intelligence (AGI) by 2040, followed soon after by Artificial Superintelligence (ASI). Today, however, all existing AI systems remain narrow AI (ANI): they excel at specific tasks and can learn and improve within a defined domain, but they cannot autonomously expand their functional scope.

Training an AI model for even a relatively small task requires enormous amounts of data. In contrast, a theoretical AGI would need to perform complex tasks with minimal data input. Most AI algorithms are conceptually simple, but achieving high performance demands staggering computational resources – trillions of operations and exabytes of storage. Until around 2010, IT infrastructure was prohibitively expensive, making advanced AI development unattainable for most organizations.

Yet training an LLM from scratch remains out of reach for all but a handful of tech giants. Most organizations don’t actually “train” LLMs even if they claim to – they rely on pre-trained models, fine-tuning only the outer layers while the core architecture remains a black box. The possibility of Artificial Superintelligence is still more a matter of philosophy than engineering. We lack a complete understanding of human consciousness, making it hard to predict when – or if – quantity will turn into quality.
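As a minimal sketch of that pattern – again using GPT-2 via the transformers library purely as a stand-in – the snippet below freezes the pre-trained core and unfreezes only the outermost layers:

```python
# Sketch: freeze a pre-trained model's core, fine-tune only the outer layers.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

for param in model.parameters():
    param.requires_grad = False                 # the core stays untouched

for param in model.transformer.h[-1].parameters():
    param.requires_grad = True                  # last transformer block
for param in model.transformer.ln_f.parameters():
    param.requires_grad = True                  # final layer norm

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable: {trainable:,} of {model.num_parameters():,} parameters")
```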

It’s also important to keep expectations realistic. Just a year ago, some experts – including CEOs of major tech firms – predicted that by now, 90% of all code would be generated by AI. Reality check: we’re nowhere near that level.




      Challenges and Risks

      The rapid rise of AI brings numerous challenges – social, professional, ethical, and even existential. Ensuring responsible AI use worldwide is extremely difficult. Companies often consider only profits, not consequences, while some regimes and malicious actors exploit AI for harmful purposes.

One critical issue is bias, which can stem from incomplete or non-representative data, flawed algorithms, or intentional manipulation. A related problem is hallucination: GenAI can produce outputs that sound confident and plausible but are factually incorrect. Hallucinations can be highly convincing because they often align with common assumptions. For example, many would accept a claim that Einstein won the Nobel Prize for the Theory of Relativity, since he is strongly associated with it – despite the fact that his Nobel was awarded for work on the photoelectric effect.

      This is why healthy skepticism is essential when working with GenAI. Always assess AI outputs carefully. For routine tasks, weigh the time saved against the time needed for validation. For critical processes, reports, or deliverables, thorough checks are non-negotiable. In some cases, it’s better not to use AI at all – especially when generating content on topics you’re unfamiliar with and cannot easily verify.

      Another major topic is adversarial AI. The AI boom has expanded the attack surface – the number of exploitable vulnerabilities – dramatically over the past five years. Here are four major types of adversarial attacks:

      • Poisoning Attacks: Manipulate training data to produce incorrect results.
      • Evasion Attacks: Alter input data to trick models into wrong outputs (sketched below).
      • Inference Attacks: Extract sensitive information from model outputs.
      • Extraction Attacks: Reverse-engineer models to steal functionality.
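To illustrate the second category, here is a minimal sketch of the classic fast gradient sign method (FGSM) evasion attack in PyTorch, applied to a placeholder classifier: a small perturbation of the input, aligned with the loss gradient, can flip the model’s prediction.

```python
# Sketch of an evasion attack (FGSM): perturb the input to raise the loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Return x nudged in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()   # small adversarial distortion

# Demo with a placeholder linear "classifier" (an assumption for illustration).
model = torch.nn.Linear(4, 3)
x = torch.randn(1, 4)
y = torch.tensor([0])
x_adv = fgsm_perturb(model, x, y)
print("clean:", model(x).argmax().item(),
      "adversarial:", model(x_adv).argmax().item())  # predictions may differ
```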

Currently, most companies – even in mature markets – lack comprehensive policies and solutions to identify and mitigate the risks associated with adversarial AI. Many organizations rely on third-party service providers for AI security and incident prevention. However, adversarial attacks are neither a buzzword nor a hypothetical threat – they occur daily, cost billions, and can cripple businesses if not properly addressed.

      For example, generative AI models powered by LLMs learn from diverse data sources across the internet. Malicious actors have already created thousands of resources specifically designed to poison these models with fabricated information, aiming to spread fake news, misinformation, and disinformation. This is why fact-checking and information validation are more critical today than ever before.

      Certainly, not all AI risks involve malicious actors. Many stem from poor data quality, flawed model design, lack of expertise, inadequate supervision, regulatory uncertainty, and misuse – intentional or accidental. The guiding principle should be clear: AI must be designed and implemented with safeguards to prevent harm to people and property.


      Key Takeaways for Safe and Effective AI Use


      • Keep learning and stay curious

        Foster a culture of learning and experimentation. The world of AI is evolving fast – curiosity and openness to new approaches are key ingredients for success.

      • Stay on top of new technologies and trends

New tools and solutions are emerging every day. Stay alert and be ready to turn innovation to your advantage.

      • Understand that AI results are probabilistic

        AI systems don’t follow fixed rules – they generate outputs based on probabilities. That’s why results can differ even when you use the same inputs.

      • Be part of the decision-making process

        Human oversight is essential. AI should be a helpful assistant – not the one making decisions for you.

      • Keep a healthy dose of skepticism

        Approach AI outputs with critical thinking. Check the facts, ask questions, and don’t be afraid to challenge assumptions. That’s how you’ll achieve the best results.




      In our blog series prepared by Alexander Zagnetko, KPMG Global AI Initiative Coordinator, we provide an overview of frameworks and solutions designed to help organizations harness the full potential of modern AI while ensuring its safe and responsible use.




      Contact us

If you would like more information on how we can help your business, or to arrange a meeting for a personal presentation of our services, please contact us.


      Alexander Zagnetko

      KPMG Global AI Initiative Coordinator

      KPMG in Slovakia



