The AI-empowered worker isn’t a future concept—the disruption is already here. And rather than waiting to see how generative artificial intelligence (GenAI) plays out, companies need to be in it to win it. Whatever your business vision—to boost competitiveness, to drive productivity, to increase sales—you need to be thinking about how you can leverage AI, and GenAI in particular, to drive that vision.
Employees are already using the technology to boost their productivity, whether or not their workplace has a formal policy. In fact, new research from KPMG in Canada found that one in five Canadians are using GenAI tools to help them with their work or studies. Of those, 67 per cent say GenAI allows them to take on additional work they otherwise wouldn’t have capacity for.
Workers equipped with AI tools have seen their productivity skyrocket, with gains comparable to those sparked by the introduction of the personal computer. White-collar workers are among the biggest beneficiaries, streamlining tasks like research, planning and data analysis. For organizations just getting started with GenAI, providing guardrails through a responsible AI framework can help to protect the business while empowering workers.
Empowering workers, not replacing them
There’s fear that AI will replace us, but the reality is more nuanced. AI models provide a means to generate creative, insightful and novel outputs in a variety of fields. Rather than replacing jobs, AI is augmenting them, allowing us to amplify production by accomplishing tasks at a scale previously unimaginable.
AI-empowered workers will also play a critical role as the “human in the loop.” For example, they need to evaluate the AI’s output against their original goal, checking whether it meets their requirements and safeguarding against risks such as intellectual property violations, inaccuracies and biases. Then, depending on the evaluation’s outcome, they may need to modify the context or their instructions.
This powerful human-AI relationship helps to facilitate meaningful results and ensures AI’s responsible use. The “human in the loop” provides details and context to the AI model so it can generate relevant outputs, while understanding that certain prompts could create inherent bias. That means experienced workers—those who have a rich understanding of context—are now invaluable in this new landscape.
I’ve been paying attention to trends and developments in this space for more than 20 years, and it’s clear to me that the latest breakthroughs signify a profound shift in computational accessibility. Our survey found that more than half of workers (55 per cent) say GenAI tools save them between one and five hours per week, and 65 per cent say GenAI is now essential in helping them manage their workloads.
That leaves more time for innovation. While predictive AI offers a data-driven lens to anticipate future trends, GenAI acts as a “creative artist” within the AI domain, using deep learning algorithms to create novel and sophisticated content.
Yet, the possibilities remain largely untapped and untested. While there’s a treasure trove of creativity and innovation waiting to be explored, companies are grappling with issues of governance, security, ethics and data residency as they look to integrate AI into their operations.
Making your framework work
The AI-empowered worker isn’t just an employee who uses AI. Rather, it’s an employee who interacts with AI models and leverages their benefits while keeping information secure and private. But without a responsible, adaptive and comprehensible AI framework that builds trust into the equation and allows employees to make the most of GenAI’s potential, and therefore their own, some workers may end up using the models in ways that put the organization at risk.
In our survey, 10 per cent of respondents who use GenAI said they include private financial data about their company in GenAI prompts, and 13 per cent use information about customers, including their names. What this means for employers is that there’s no turning back. Whether or not companies have formally adopted GenAI, it’s important to start creating policies and building guardrails to ensure its responsible use.
This process starts by developing and operationalizing a responsible AI framework. This framework should be based on transparency, ethics and good governance, and should include policies, practices and tools to manage risk. It should also include a robust governance plan that addresses data integrity, privacy, reliability, accountability, fairness and security—along with mechanisms to ensure workers stick to it.
Along with a responsible AI framework, upskilling your people is essential. After all, technology is only as good as the people who use it. That should include training on the principles of responsible AI and how to use AI frameworks and tools effectively—and safely—in the workplace. At the same time, it’s important to foster a culture of curiosity and innovation, where experimenting with trusted and secure new technologies is encouraged and rewarded.
Companies shouldn’t push off decisions about GenAI until tomorrow. Workers are already embracing GenAI and using it in the workplace, with or without their employer’s knowledge. Companies that furnish their workers with AI tools and a responsible AI framework will be in a much better position to boost their current performance—and set themselves up for continuous improvement for years to come.