Artificial intelligence (AI) has rapidly become an everyday companion for professionals across all sectors. Whether drafting reports, analyzing data, preparing presentations, or sparking new ideas, AI now supports a wide range of tasks. Because these systems respond directly to the prompts we give them, the way we formulate our questions, structure our context, and guide the model increasingly determines the quality, fairness, and reliability of the outputs we obtain. Prompting has, in effect, become a new and essential professional skill that shapes the effectiveness and quality of the work we deliver.
With AI so deeply embedded in daily workflows, poor prompting becomes an organizational risk, not just an individual mistake. Inconsistent prompting practices can lead to operational inefficiencies, loss of trust, reputational damage, and even compliance challenges. These risks are not inevitable: they can be reduced through simple, structured prompting techniques that every professional can learn.
While AI usage has grown dramatically, understanding of the technology has not kept pace. What stands out is that people genuinely care about using AI responsibly and want to learn how. According to Trust, attitudes and use of artificial intelligence: A global study 2025 from KPMG, the majority of people have not received formal AI training. Only 39% report having any training at all; nearly half (48%) say they have limited knowledge of AI, and just 21% feel highly knowledgeable. At the same time, interest is high: in many emerging economies, over 90% of people want to learn more about AI.
This gap between heavy usage and low literacy contributes to widespread worries about AI’s reliability. The same study shows that 68% of people are concerned about bias, 69% worry about the environmental impact of AI, 82% fear misinformation, and 77% are concerned about inaccurate outcomes. These concerns show up in daily work: there are now numerous cases of fabricated citations appearing in corporate documents, AI-generated statements being mistaken for fact, and public figures sharing hallucinated content.
That is where the Responsible Prompting framework comes in: a practical, accessible approach that helps people produce AI-generated outputs that are accurate, fair, and resource-efficient. At KPMG, we care about building the skills and confidence people need to use AI responsibly.
Our tips to reduce the limitations of AI systems through prompting
To appreciate why responsible prompting matters, it is important to understand why (generative) AI behaves the way it does. AI does not “think” like humans. It does not know and retrieve facts like a database, nor does it understand meaning or have emotions. Instead, AI generates predictions based on patterns. Large Language Models (LLMs), a subset of AI, have been trained on huge amounts of data and use patterns to predict the words most likely to follow. This pattern-based generation creates three recurring risks:
- Hallucinations (confident but false answers)
- Bias (reproduction of unfair patterns)
- Ecological impact (high computational and energy usage)
These risks and the good practices that address these are visualized in the figure below, which introduces the three pillars of our Responsible Prompting framework: Design for Accuracy, Design for Fairness, and Design for Efficiency.
Hallucinations: When AI sounds confident but is completely wrong
Hallucinations occur when AI produces information that looks polished and credible but is factually incorrect or entirely invented. Because AI predicts the likely next word rather than retrieving facts, it can generate convincing explanations, invented statistics, or fabricated citations.
Hallucinations often appear when prompts:
- lack context
- include assumptions
- push toward a preferred answer
- request information that does not exist
When trusted without verification, hallucinations can lead to inaccurate analyses, misleading summaries, and flawed decision-making. In professional contexts where precision is essential, this risk must not be ignored.
The solution is found in the Design for Accuracy pillar:
- Be clear and specific: Provide the context the model needs by adding a file or pasting extra information in the chat.
- Avoid leading questions: Keep prompts objective and avoid pushing for a desired answer.
- Ask the model to admit uncertainty and request sources to validate: “Knowing yourself is the beginning of all wisdom.” Reward chatbots that admit when they do not know the answer, and when they cite sources, spot-check them.
When applied consistently, these practices reduce hallucinations and lead to outputs that are more grounded and transparent.
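For readers who work with models through an API or scripted workflow rather than a chat window, the same accuracy practices can be baked into a reusable prompt template. Below is a minimal sketch in Python using the OpenAI client; the model name and the context placeholder are illustrative assumptions, not a prescribed setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Paste the source material directly into the prompt instead of relying
# on the model's memory (the placeholder below is yours to fill in).
context = "<paste the relevant report excerpt or data here>"

system_prompt = (
    "You are a careful analyst. Base your answer only on the context provided. "
    "If the context does not contain the answer, say you do not know instead of guessing. "
    "For every claim, point to the passage you relied on."
)

# Neutral phrasing: ask for an assessment, not a defense of a preferred answer.
user_prompt = f"Context:\n{context}\n\nQuestion: What are the strengths and weaknesses of Vendor X?"

response = client.chat.completions.create(
    model="gpt-5-nano",  # placeholder; use whichever model your organization provides
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

The system prompt carries the accuracy practices: it grounds the model in supplied context, rewards admitting uncertainty, and asks for traceable sources, while the user prompt stays neutral instead of leading.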
Figure 1: Avoid leading questions: “Why is Vendor X the best solution?” forces the model to defend a premise it cannot validate. Instead of challenging the assumption, it will construct a plausible argument supporting it.
Note: This section has focused solely on improvements through prompting techniques; to reduce hallucinations more drastically, you need more advanced set-ups such as fine-tuned models, retrieval-augmented generation (RAG) systems, or data agents.
Bias: Patterns in data that reinforce unfair outcomes
AI systems learn from large datasets filled with human language, culture, and history, and therefore inherit the biases present in these sources. This implies that, without guidance, AI may default to skewed perspectives, reinforce stereotypes, or overlook underrepresented groups.
Bias can also manifest subtly in tone, sentiment, or the examples the model chooses in its answers. Left unchecked, these patterns can influence how ideas, roles, or groups are represented in the output used in your documents, reports, or analyses.
The solution lies in the Design for Fairness pillar:
- Use neutral language to avoid suggesting assumptions. Language bias is real!
- Ask for diverse and inclusive perspectives, which explicitly pushes the LLM to broaden its reasoning.
- Avoid stereotypes by adding sufficient detail and context. The less information you give, the more assumptions the AI will make.
Another helpful method is role flipping: asking the AI to adopt the perspective of an atypical persona. Instead of mimicking an expert, ask it to answer as a beginner. Instead of a professor, ask for a student; instead of a recruiter, a candidate. And so on. Doing this broadens the model’s response space and leads to more representative output.
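For those who script their prompts, role flipping is easy to automate: pose the same question from several contrasting personas and compare the answers. A minimal sketch, where the personas and question are purely illustrative:

```python
# Role flipping: pose the same question from contrasting personas and
# compare the answers. Personas and question are illustrative examples.
personas = [
    "an experienced recruiter",
    "a candidate preparing for their first interview",
    "a hiring manager new to the role",
]
question = "What makes a job interview fair?"

for persona in personas:
    prompt = f"Answer as {persona}. {question} Keep it under 100 words."
    print(prompt)
    # Send each prompt to the model separately; comparing the answers
    # surfaces assumptions that a single "expert" framing would hide.
```

The differences between the answers are the signal: they expose the assumptions a single default persona would have baked in silently.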
By applying fairness-focused prompting, AI becomes a tool that supports inclusion rather than reinforcing past patterns.
Figure 2: Use neutral language to avoid suggesting assumptions. Describing roles without context may trigger narrow assumptions.
Ecological impact: AI is powerful but energy-intensive
Every AI interaction requires computational power, and advanced models consume significant energy. Although invisible to the user, prompting behavior directly affects how much energy the system uses.
Energy inefficient behaviors include:
- starting a new chat for every question, or staying so long in one chat that the model must re-process the entire history with every message
- requesting extremely long outputs
- using the most powerful model for simple tasks
The solution is the Design for Efficiency pillar:
- Choose when to start a new chat to use threads effectively
- Specify output constraints such as length or format
- Choose the right model for the job
These simple actions support more sustainable AI usage without compromising output quality.
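In scripted or automated workflows, these efficiency habits translate into a simple routing rule: send routine tasks to a small model, reserve the larger one for genuinely complex work, and always cap the output length. The sketch below uses assumed model names; substitute whatever your provider offers.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def generate(task: str, complex_task: bool = False) -> str:
    """Route routine tasks to a small model and cap the output length."""
    # Model names are placeholder assumptions, not a fixed recommendation.
    model = "gpt-5" if complex_task else "gpt-5-nano"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": task}],
        max_completion_tokens=300,  # explicit output cap: roughly a 150-word email
    )
    return response.choices[0].message.content

# A routine confirmation email does not need the most powerful model.
print(generate("Draft a 150-word reply confirming Thursday's 2 pm meeting."))
```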
Figure 3: GPT-5 nano successfully solved three challenges that GPT-4 models struggled with… using the same prompt! And… at 25x lower cost than GPT-5!
While small in isolation, these behaviors quickly add up across larger organizations. Aside from the sustainability impact, this also makes a big difference financially. Consider the single use case of email generation. With GPT-5 nano being 25x cheaper than the default GPT-5, an organization of 1,000 employees can save between US$2,500 and US$10,000 every year by choosing the nano model and adding a 150-word limit (assuming 10 generated emails per employee every workday, each typically around 400 words long). Figures are based on OpenAI’s publicly listed API prices.
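The estimate is straightforward to reproduce. The back-of-the-envelope calculation below uses OpenAI’s listed output prices at the time of writing (US$10 per million output tokens for GPT-5 versus US$0.40 for GPT-5 nano, the 25x gap mentioned above) and a rough rule of thumb of 1.3 tokens per word; every parameter is an assumption to replace with your own usage data.

```python
# Back-of-the-envelope cost comparison for email generation.
# All figures are assumptions; substitute your organization's own numbers.
EMPLOYEES = 1_000
EMAILS_PER_DAY = 10
WORKDAYS_PER_YEAR = 250
TOKENS_PER_WORD = 1.3  # rough rule of thumb for English text

GPT5_PRICE = 10.00 / 1_000_000  # US$ per output token (listed API price)
NANO_PRICE = 0.40 / 1_000_000   # US$ per output token (25x cheaper)

emails = EMPLOYEES * EMAILS_PER_DAY * WORKDAYS_PER_YEAR  # 2.5 million per year

baseline = emails * 400 * TOKENS_PER_WORD * GPT5_PRICE   # 400-word emails, GPT-5
optimized = emails * 150 * TOKENS_PER_WORD * NANO_PRICE  # 150-word emails, nano

print(f"Baseline:  US$ {baseline:,.0f} per year")   # ~US$ 13,000
print(f"Optimized: US$ {optimized:,.0f} per year")  # ~US$ 195
print(f"Savings:   US$ {baseline - optimized:,.0f} per year")
```

Under these deliberately generous assumptions the savings land just above the quoted range; more conservative usage (fewer workdays, shorter drafts, or accounting for input-token costs differently) brings them into the US$2,500-10,000 band.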
Let’s start your AI literacy journey
Responsible prompting is just one part of a broader shift toward AI literacy. At KPMG, we believe that equipping professionals with the right knowledge and habits is essential to thrive in an AI-driven environment. Our AI Literacy program empowers individuals to use AI more confidently, more effectively, and more responsibly.
By integrating responsible prompting into your daily work, you not only enhance your own productivity, but you also help build a culture of ethical, consistent, and trusted AI use across the organization.
Ready to continue your journey?
If you're interested in strengthening your AI literacy or want support applying Responsible Prompting within your team, feel free to reach out. We’re here to help you take the next step toward making AI work responsibly and confidently across your organization.
Authors:
Edouard De Caluwe, Junior Advisor & Olivier Mees, Manager Advisor