The adoption of artificial intelligence ("AI") in the workplace is increasing rapidly. A 2023 survey by KPMG in Canada found that 65% of companies in the United States ("U.S.") regularly use ChatGPT to improve operations, compared to 37% in Canada.1 As Canadian businesses catch up, employers will need to plan ahead to ensure legal compliance. This article provides insight into the key impacts and relevant considerations for employers leveraging AI technologies in the workplace.
AI's role in the hiring process
Generative AI, which can produce various forms of original content in response to prompts, is revolutionizing the hiring and recruitment landscape. Historically, technology in the recruitment space was limited to job boards and applicant tracking systems. Today, AI provides recruiters with a range of more sophisticated and impactful tools to connect employers with potential employees. Below are some prominent uses and key considerations around incorporating AI into these processes.
Accessibility, utilization, and broad impact on jobs
AI technology is now widely accessible and relatively easy to use. In recent months, AI has become more mainstream as many companies and start-ups invest in building a trustworthy, open source AI ecosystem. Open source refers to software that is readily available to the public, allows users to modify the code, and is typically free or inexpensive, as compared to closed source software, which is limited to a restricted audience and generally expensive.2 In an open source environment, anyone, regardless of their background, can use AI tools such as ChatGPT and other chatbots to generate practical and creative outputs. This democratization of AI tools has accelerated advancements in the technology within a short span of time. The pace of adoption has also been unlike anything previously seen in AI technology: ChatGPT had over a million users within five days of OpenAI releasing it.3
Generative AI's capabilities extend across various roles and industries. According to research conducted by OpenAI, approximately 80% of jobs can integrate generative AI into their workflows.4 This broad applicability of AI represents an important shift in how talent and jobs are managed, moving beyond traditional methods.
Reshaping recruitment and accelerating onboarding
AI has reshaped the recruitment process in two primary ways—by improving job descriptions and allowing for increased candidate personalization. Generative AI helps managers draft precise job requirements by identifying essential skills, although human oversight is still necessary. Furthermore, AI enhances candidate engagement by personalizing communications based on the applicant's profile, which can streamline the recruitment process for large organizations.
Moreover, generative AI can accelerate onboarding by providing new employees with instant access to an organization's institutional knowledge. For performance reviews, AI can assist in drafting evaluations by synthesizing data from multiple sources, though it relies on consistent data entry by managers. This aids in creating comprehensive performance assessments and facilitating constructive feedback conversations.
Globally, AI promotes a shift from traditional credentials to skills-based assessments. By tagging unstructured data, AI can identify candidates with relevant experience and skills, even if they do not possess formal degrees. This opens opportunities for individuals with practical, on-the-job learning, which contributes to making the hiring process more inclusive.
AI in candidate screening and employment contract generation
Beyond recruitment and onboarding, AI is increasingly being integrated into candidate screening and the creation of employment contracts.
Indeed, AI screening tools are becoming more prevalent in the hiring process. These tools can efficiently sift through large volumes of applications, identifying the most suitable candidates based on predefined criteria. The use of AI to draft employment contracts is another significant development. AI can streamline the contract creation process, ensuring consistency and reducing the time required for legal professionals to draft documents.
Risks and challenges of AI in hiring
The use of AI in the hiring process does not come without its fair share of risks. One major concern is "hallucinations," where AI might produce outputs that seem logical but are factually incorrect.5 Another issue is the potential loss of creativity. For instance, a journalist's reliance on AI to increase the number of articles they publish may have the unintended consequence of reducing the time they spend on creative thinking, which often occurs during downtime.
While AI is an effective recruitment tool for well-defined roles with a large talent pool, it struggles with novel or significantly altered jobs. Validating criteria for new roles requires extensive data, which is often challenging to gather without the risk of AI tools infringing on proprietary information. In addition, ensuring that AI algorithms are trained on diverse and representative data sets is key to avoiding the perpetuation of existing biases.
It is also important to keep in mind that some regulatory environments, particularly in Europe, mandate human oversight in high-risk areas such as employment to ensure transparency and fairness.6 Thus, while AI can augment the recruitment process, the involvement of human judgment remains necessary to mitigate the risks associated with this nascent technology.
In the legal field particularly, the reliance on AI-generated contracts must be balanced with thorough review by trained lawyers. As will be discussed below, this is critical to ensure legal compliance and the consideration of unique contractual nuances.
Legal considerations and guidance for legal professionals
The integration of AI in legal processes has prompted significant changes in professional conduct standards. For instance, in British Columbia, a lawyer faced penalties over the inclusion of fictitious AI-generated cases in legal documents.7 Although there was no intent to deceive, this incident highlighted the need for technological competence among legal professionals. Consequently, various provincial codes of professional conduct now mandate technological competence as part of the standard for competent legal practice.
The Law Society of Ontario ("LSO") recently published a white paper providing guidance on the use of generative AI for its licensees.8 This document emphasizes the importance of understanding the risks associated with AI, such as potential inaccuracies and ethical considerations. Legal professionals must remain informed and educated about these risks to maintain the integrity of the legal system while embracing new technologies.
Moreover, in the realm of arbitration, AI has already established a significant presence. This includes its application in evidence production, such as AI-generated performance evaluations in employee disciplinary decisions, and the reliance of arbitrators on AI for decision-making.9 However, many risks associated with these advancements remain unaddressed.
In particular, the accuracy and reliability of evidence, which are paramount in the arbitral setting, may be challenged if that evidence was produced by AI. A party may very well attempt to resist enforcement of an award on the grounds that the opposing party's use of AI should not be permitted. The issue of forged digital evidence, such as AI-generated images, also poses a unique challenge in arbitration. It may soon become necessary for lawyers to provide statements of authenticity to ensure the genuineness of exhibits.10
Privacy, data, and human rights concerns
The integration of generative AI into the workplace also introduces important concerns regarding privacy, data security, and human rights. These issues necessitate careful consideration and proactive measures on the part of the employer to mitigate risks and uphold ethical standards.
Data privacy and intellectual property
First, generative AI models can inadvertently disclose sensitive information from user inputs. Employees using such AI tools risk breaching privacy statutes and confidentiality obligations if they input confidential or personal information into the tool. This can result in the unintended disclosure of sensitive data in responses to subsequent users. Similarly, employers risk infringing on individuals' privacy rights or the intellectual property of other organizations when using AI-generated outputs.
To mitigate this risk, employers should review the privacy policies and data practices of generative AI developers to ensure compliance with applicable laws, and undertake privacy impact assessments before implementing AI tools in the workplace. In some organizations, such as Canada's federal public sector, these assessments are mandatory, as they play an important role in evaluating AI's potential impact on individuals while ensuring algorithmic transparency.
Production of biased outputs
Second, generative AI can produce biased outputs that adversely affect individuals based on protected grounds under human rights legislation, such as the Canadian Human Rights Act or the Canadian Charter of Rights and Freedoms. This can be particularly problematic in customer service, social media, marketing, employee performance reviews, and candidate screening during hiring processes.
Mitigation strategies include reviewing public-facing AI outputs to guard against bias, or limiting the use of AI to lower-risk use cases where the decision-making process is transparent and explainable. Reliance on generative AI for significant decisions should be minimized unless its outputs can be thoroughly understood and justified. Specific strategies employers can implement to avoid overreliance on AI include requiring employees who use generative AI to explain the answers generated, presenting first-person expressions of uncertainty alongside any outputs, and framing prompts as questions that promote critical thinking.11
Complexities with workplace investigations
Third, generative AI's ability to create convincing imitations using "deepfake" technology poses risks for workplace investigations. This technology can be misused to fabricate evidence, which complicates the investigative process. For example, electronic evidence submitted by an employee in support of their allegations, such as screenshots, could easily be fabricated by AI.12
Consequently, workplace investigators should be trained to recognize and mitigate the potential misuse of generative AI. Implementing detection techniques and protocols to identify and address deepfake content can help maintain the integrity of workplace investigations. Providing the accused with an opportunity to respond to the allegations, reviewing alternative sources of information to verify claims, and resorting to live verification of electronic evidence to confirm an individual's credibility are key steps workplace investigators should take to safeguard the reliability of their conclusions.
Cybersecurity-related risks
Lastly, generative AI enhances the sophistication of phishing attacks, malware campaigns, and social engineering tactics. Used to create convincing impersonations, these tools can enable more effective and frequent cybersecurity breaches.
Employers looking to mitigate this risk should adopt policies and protocols to verify the identities of individuals in virtual communications, which reduces the risk of social engineering attacks. Frequent training on cybersecurity controls and awareness should also be provided, to help employees recognize and respond to sophisticated cyber threats.
Employee replacement concerns
Generative AI can accomplish tasks quickly and effectively, often at an equivalent or superior quality compared to human labour. This potential for improved productivity and reduced costs makes AI an attractive option for employers. However, given that its implementation can affect job security and working conditions, employers should keep in mind some legal considerations for both unionized and non-unionized workplaces.
Non-unionized workplaces
In non-unionized settings, the legal implications of using generative AI to replace employees are governed by standard termination-related considerations. Employers must ensure that replaced employees receive their appropriate statutory, contractual, and common law entitlements.
- Termination considerations: Employers must conduct termination decisions in good faith, avoiding arbitrary or discriminatory practices. Special care should be taken to avoid age discrimination, such as assuming that older employees cannot learn or use generative AI.
- Constructive dismissal risks: If generative AI significantly alters job duties and responsibilities, it may lead to claims of constructive dismissal, which occurs when an employer unilaterally changes fundamental terms or conditions of employment. Prudent employers should include contractual provisions allowing unilateral changes to job duties and provide reasonable notice in advance of any contractual change to mitigate the risk of constructive dismissal.
Unionized workplaces
In unionized settings, employers generally have the ability to alter job duties under their management rights, subject to the language of the collective agreement and any applicable statutes. Nevertheless, they are often required to engage in discussions with the unions and issue advance notice before introducing any alterations related to AI that could potentially affect the terms of employment.
- Collective agreement restrictions: Employers may face limitations on using non-bargaining unit employees or contractors trained in generative AI to perform tasks traditionally done by unionized employees, as this could be considered outsourcing or contracting out.
- Technological change provisions: Collective agreements often require employers to consult with unions and provide notice before implementing technological changes that affect job security or working conditions. Employers may also need to pay premiums at the time of termination for employees affected by such changes.
- Statutory requirements: The Canada Labour Code and other labour statutes require notice of technological changes likely to affect employment terms and security.13 Whether these requirements apply to generative AI depends on factors like the definition of technological change and the impact on employees.
General considerations and best practices
When it comes to best practices, employers should update contracts, policies, and procedures to address AI-related issues in the workplace, including human rights, privacy, and contractual obligations. They should also provide the necessary training and support to employees as AI technologies are rolled out within the organization. Finally, to avoid violating federal and provincial employment and labour laws, employers should implement guardrails when introducing AI.
Updates on AI-related legislation
As AI technology rapidly evolves and integrates into various sectors, legislative frameworks are adapting to address the unique challenges and implications posed by these advancements. Staying informed about these updates is crucial for employers to ensure compliance and effectively navigate the evolving legal landscape.
Bill C-27: Digital Charter Implementation Act
Bill C-27, also known as Canada's first AI legislation, aims to introduce significant changes to Canada's privacy and AI regulatory framework by replacing parts of the Personal Information Protection and Electronic Documents Act (PIPEDA) with the Consumer Privacy Protection Act and enacting the Artificial Intelligence and Data Act (AIDA).14 If passed, this bill would fill existing gaps in the legislation and directly regulate AI across specific sectors.
Artificial Intelligence and Data Act (AIDA)
The AIDA, which is currently under consideration by the Canadian Parliament as part of Bill C-27, seeks to establish comprehensive regulations for high-impact AI systems.15 The primary goal of AIDA is to ensure that the implementation of AI technologies does not compromise privacy, fairness, and transparency.
The initial text of AIDA left the definition of "high-impact" AI systems to be determined by future regulations. This lack of clarity prompted stakeholders to request more explicit criteria from the Canadian government, and the proposed amendments to AIDA now outline seven classes of high-impact systems. These include AI systems used for employment-related decisions, such as recruitment, hiring, promotion, and termination. As discussed, these regulations aim to address the concern that AI can perpetuate existing biases, affecting crucial employment decisions.
Ontario's Bill 149: Working for Workers Four Act, 2024
Bill 149, which received royal assent on March 21, 2024, introduces new requirements for pay transparency and the use of AI in job postings. One of the key provisions mandates that employers disclose the use of AI in publicly advertised job postings.16 This measure aims to ensure transparency, protect worker privacy, and prevent technological biases from excluding candidates. It is worth noting, however, that there are specific criteria under which the disclosure requirement may not apply. These criteria will be defined by future regulations.
Bill 149 also anticipates the release of clarifying regulations to provide detailed guidance on pay transparency and AI disclosure requirements. Employers should stay updated on these regulations to ensure readiness when they come into effect. In the interim, they should proactively review job posting and candidate screening practices.
What jobs can AI do?
Generative AI technology is poised to revolutionize the workforce, with research from OpenAI estimating that 80% of jobs can incorporate AI capabilities into current activities.17 KPMG recently conducted a high-level comparative analysis which depicts the strengths of AI around certain key capabilities (see figure 1). This significant potential for impact necessitates a strategic approach from leaders to modernize talent capabilities and manage workforce transitions effectively.
Figure 1: KPMG LLP Presentation "The emergence of a right-brain economy"
| Capability | People | Generative AI | Comparative example |
| --- | --- | --- | --- |
| Analytical abilities | Excellent | Good | Analyzing complex financial data for investment decisions |
| Language processing | Excellent | Good | Engaging in natural conversations and understanding context |
| Critical thinking | Excellent | Good | Identifying and resolving issues in a complex project |
| Creativity and imagination | Excellent | Good | Painting an emotionally evocative masterpiece |
| Spatial abilities | Excellent | Good | Navigating through complex, unfamiliar terrain |
| Emotional processing | Excellent | Good | Recognizing subtle emotional cues in a conversation |
Modernizing capabilities and managing the workforce
Indeed, leaders play a critical role in modernizing their functions and managing the workforce shift driven by AI integration. With the potential for 80% of the workforce to be affected, leaders must guide the transition to ensure it benefits both the organization and its employees.
In human resources ("HR"), AI offers departments the chance to broaden access to opportunities for a wide segment of the workforce. It can help managers achieve higher performance levels by reducing administrative tasks and providing quicker, more accurate insights. This shift allows HR professionals to focus on strategic initiatives and improve overall workforce productivity.18
Risks in creative fields and workflow design impacts
While AI can automate many tasks, its use in creative fields poses the risk of diminished creativity. The reliance on AI-generated content might lead to a loss of originality and uniqueness, impacting the overall quality and appeal of creative outputs.
Additionally, while AI integration presents an opportunity for change management by reshaping job roles, workflows, and collaboration models, organizations should carefully consider how technology changes the nature of work. In doing so, they should aim to ensure that any additional time saved is directed only toward value-added activities.
Concluding thoughts and KPMG use case
While AI presents many opportunities for increased productivity and streamlining operations, employers must carefully navigate the associated risks and ensure that the integration of AI technologies within their organizations enhances rather than diminishes the quality of work and employee satisfaction.
For specific examples of leveraging AI in the workplace, KPMG's Kleo provides a recent use case of how AI can drive transformation within an organization. In further support of this point, the following paragraph was generated entirely using Kleo.
"KPMG's Kleo features a generalized chatbot (built on GPT-4), along with specific functional chatbots for HR, IT and Risk Management designed to assist employees and their managers in the workplace. Kleo provides a range of services, including answering questions about company policies, providing information on employee benefits, and assisting with HR-related tasks. For managers, Kleo can provide insights into team performance, help with talent management, and assist in decision-making processes. In addition, Kleo can automate routine tasks, freeing up employees and managers to focus on more complex and strategic tasks. This not only improves efficiency but also enhances the overall productivity of the team."
KPMG Law's Employment and Labour Law team is ready to help your organization manage the implementation of AI chatbots in your workplace. Please contact our human lawyers for more information about our services.
Special thank you to Camille Arseneault for her contributions to this article.
- U.S. outpacing Canada in business adoption of AI, KPMG Canada, April 19, 2024.
- Open Source vs Closed Source: What's the Difference?, Kinsta, May 29, 2024.
- Generative AI and the future of HR, Hancock, B. et al., McKinsey & Company, June 5, 2023.
- Supra, note 3.
- What are AI hallucinations, IBM, March 29, 2023.
- AI Act enters into force, Directorate-General for Communication, European Commission, August 1, 2024.
- Court hits B.C. lawyer with costs over fake AI-generated cases, despite no intent to deceive, Little, S., Global News, February 26, 2024.
- Licensee's use of generative artificial intelligence, Law Society of Ontario, April 11, 2024.
- Artificial intelligence and performance management, Varma, A. et al., Organizational Dynamics, February 28, 2024.
- Judicial Errors: Fake Imaging and the Modern Law of Evidence, Alon, G. et al., UIC Review of Intellectual Property Law, 2022.
- Appropriate reliance on GenAI: Research synthesis, Passi, S. et al., March 21, 2024.
- AI and Deepfakes Complicate Evidence in Workplace Investigations, Dill, J., Bloomberg Law, February 27, 2024.
- RSC 1985, c L-2, s 52.
- Bill C-27: Canada's first artificial intelligence legislation has arrived, Medeiros, M. & Beatson, J., Norton Rose Fulbright, June 23, 2022.
- The Great Attrition is making hiring harder. Are you searching the right talent pools?, De Smet, A. et al., McKinsey & Company, July 13, 2022.
- Ontario's Bill 149 proposes new requirements for pay transparency, use of AI in job postings and other changes, Silverman, J., Osler, November 28, 2023.
- Supra, note 3.
- Supra, note 1.