Australian CEOs have identified risks associated with emerging technologies as one of the greatest threats to growth. Yet many organisations are pushing ahead blindly, without any way to assess what problems may arise.

Organisations are embracing emerging technologies like never before. New and disruptive technologies such as artificial intelligence (AI), machine learning, Internet of Things (IoT), and automation are helping to deliver better outcomes and experiences for employees and customers alike, and at lower costs.

But the flipside of emerging technology is a changing risk profile. The rapid increase in the trialling and use of emerging technologies can quickly magnify small issues and create unforeseen obstacles that cause considerable damage to an organisation.

CEOs recognise the potential risks of utilising emerging technologies. In KPMG's CEO Outlook Survey 2019, emerging and disruptive technology was ranked the second biggest threat to growth by Australian CEOs, and the second biggest globally.

Yet despite this being front of mind in many boardrooms, risk managers are rarely in a position to effectively prepare the organisation. Few risk managers have the capabilities or the understanding of the underlying algorithms to properly assess where the risks lie and how they can be managed, and too often they are engaged only once the technology has been developed or implemented.

With regulators focusing greater attention on the use of these technologies, and new waves of legislation around the corner, it is imperative that CEOs and boards act to ensure they have a risk framework in place that is fit for purpose and reflects the organisation's culture, conduct and ethics.

Kevin Smout

Partner, Risk Strategy & Technology | Global Leader, Governance, Risk & Assurance

KPMG Australia

Five key questions every board and C-suite should be asking about their approach to emerging technology risk.

1. Is our risk framework up-to-date and fit for purpose?

The biggest challenge is that many organisations don't have a risk framework fit for purpose to manage the implementation and use of emerging technology. Just because a technology is available to use doesn't mean an organisation wants to, or should, use it.

If a company is thinking of introducing a new technology, such as AI in a call centre, how will that affect the organisation's risk profile? And does it sit within the risk appetite statement the board has set? These two critical questions need to be assessed at the outset to understand whether an organisation wants to go down that path, and to determine whether it will actually get a better risk outcome.

A risk framework that can’t stand up

A large technology firm engaged with a vehicle manufacturer to supply algorithms to power a car's internal computer system. However, without the appropriate risk management framework around installing the technology, serious security vulnerabilities were overlooked.

A firmware update left a security flaw that allowed hackers to remotely hijack the car's operations. The brand reputations of both firms suffered as a result, with the regulator calling them out for not being compliant with safety regulations.

2. Does our risk team have the appropriate skills?

The role of risk professionals is not only to oversee and advise on the implementation of technologies like AI and machine learning, but also to find holes in an organisation's plans and be able to challenge them. Currently, few professionals in Australia know how to effectively implement AI and understand the associated risks.

It's clear that risk and audit teams require the same level of investment in upskilling as the rest of the organisation. To fulfil their role effectively, they need to be able to challenge, advise on and monitor the implementation of emerging technologies across the business.

Errors can be costly

A company developing self-driving cars had one of its fleet fail to recognise a red light and drive through it while a pedestrian was present. A year later, a pedestrian was killed by one of the company's self-driving cars, an incident the company attributed to human error.

Regulators have since imposed strict guidelines, and public confidence in the technology has waned. Self-driving cars have the potential to revolutionise the transportation industry, but this frightening mistake shows the consequences of failing to account for all of the associated risks.

3. How do we ensure risk management allows maximum returns?

How do organisations make sure their emerging technology investments will achieve the right returns and serve as long-term, sustainable solutions? Because many of these technologies are so new, measuring their potential impact can be very complicated, and organisations are regularly unable to demonstrate the return on investment.

A proof of concept may show that a company is spending a certain amount of money to achieve a customer outcome, but has it taken into account everything needed to maintain the technology and monitor it on an ongoing basis?

Think before you leap

A large technology company launched a chatbot on a social media account that was designed to learn from its users. However, within just 16 hours, the bot became a reflection of the negative behaviours and inappropriate language of its users.

The bot began posting inflammatory and offensive messages. The company suffered reputational damage, and the incident raised difficult questions about responsibility and accountability.

4. What data points do we have to provide oversight?

Data is critical to emerging technology; to implement AI, machine learning or IoT, organisations need data that is accurate and reliable. However, many companies don't have data sets that are sufficiently complete, accurate and up to date to be fully integrated into and used by emerging technologies.

But using data is not just about collecting as much as possible. Regulators around the world are increasingly scrutinising the use of people's personal data, and organisations need to ensure data is collected and used in a way that reflects their culture, conduct and ethics.

Third parties and ethical use of personal data

A political consultancy used a social media platform to harvest the personal data of millions of people, without their consent, for political advertising purposes. An app on the platform allowed data to be collected not only from people who had given consent, but from everyone in their networks, without their knowledge.

As a result, the social media platform's market value dropped significantly, it was fined billions of dollars, and it came under intense scrutiny from legislators, the media and the public. The brand's reputation also suffered considerable damage.

5. Have we assessed the risk exposure from our third parties?

When implementing emerging technologies, organisations generally rely on third parties in some form. But many organisations neglect to assess third-party risk before engaging a vendor to partner on trialling or implementing emerging technology.

What risk does the vendor bring if engaged? Organisations need to assess the third party's risk framework, data privacy practices and risk management. If a company relies on a third party that is not up to the same standards, it is the company's own risk that will be exposed.

The benefits and value of emerging technology to customer, supplier, employee, financial and risk outcomes are clear and indisputable, but taking a pragmatic risk lens to the use of emerging technology from the outset is a must.