
The rapid evolution of technology including artificial intelligence (AI) is enabling banks to continually refine their services to customers and improve the efficiency of their back-office operations. But with evolving technology comes evolving risk, and banks must ensure that they are protected as they roll out digitalisation programmes and adopt new tech.

One of the most high-profile developments recently has been generative AI, such as ChatGPT, which can create new content including text, images and videos in response to prompts. ChatGPT was only released to the public in November 2022 but has already attracted a huge amount of attention. Generative AI is more sophisticated in responding to requests than previously available AI technology and can be customised more easily. For banks, it has potential use cases in many areas including chatbots, marketing, anti-money laundering (AML) and cybersecurity.

Banks in Hong Kong have already been using AI in their customer service chatbots. The more advanced AI that is emerging means that the chatbots will be able to answer more complicated questions and provide more precise and detailed answers. Banks in the United States and some other jurisdictions have already been using AI in various areas, and it has been estimated that this can reduce operational costs by as much as 40%. 

Managing risk

From a technology risk perspective, one of the key topics around generative AI is how to integrate responsible AI practices as new use cases are adopted.

For example, data privacy issues could arise in the conversations between customers and chatbots. The question is whether, and to what extent, these conversations will be used to train the AI models, and what that means for the customer’s data privacy. More generally, banks must consider the risk implications as they increasingly incorporate AI in areas including customer communication, data analytics, and the streamlining and automation of manual processes.

To use AI effectively while protecting against risk, there are some key areas banks should be focusing on: governance, data quality, explainability and transparency, compliance, and cybersecurity.

Governance and oversight:

Proper governance and oversight are essential to ensure that AI systems are making decisions that align with the bank's values and goals.


Data quality:

Banks need to ensure that the data used to train and operate AI systems is reliable. AI systems are only as good as the data they are trained on, and poor-quality data can lead to inaccurate predictions, which can have serious consequences.


Explainability and transparency:

AI models should be transparent, and aspects such as how the training algorithm works should be explainable, to build trust with stakeholders.


Compliance:

AI systems must comply with relevant regulations, including those on data privacy, to avoid legal and financial consequences.


Cybersecurity:

Robust cybersecurity protocols are needed to protect AI systems from cyber threats. Hackers could exploit vulnerabilities in AI systems to gain access to sensitive data or cause other damage.

The regulators and government in Hong Kong have been keeping track of technology developments and the risks involved, and have provided guidance to banks and other businesses. 

The Office of the Government Chief Information Officer (OGCIO), Hong Kong SAR Government, has also released the Ethical Artificial Intelligence Framework, to provide guidance to businesses and sectors on key areas to consider when organising AI projects. These areas include privacy, safety and accountability, among others.

For banks, the Hong Kong Monetary Authority (HKMA) issued a circular, titled High-level Principles on Artificial Intelligence, in 2019. The circular provides an overview of the key concerns around the use of AI and proposed solutions. These include areas such as board and senior management oversight of AI applications, the required expertise, periodic reviews and ongoing monitoring, and contingency plans in case of any identified issues.

Emerging use cases

While AI offers new opportunities and potential use cases in a number of areas beyond the chatbot, it is important to ensure that the potential risks associated with its use are properly managed.

For example, AI can identify unusual patterns in transactions that may be suspicious when reviewing fund transfers. Currently, AML involves a lot of manual handling and time spent dealing with exceptions. Reducing these exceptions by adopting AI models in place of pre-defined rule-sets will improve productivity and efficiency.
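To illustrate the idea of flagging unusual transfers, the toy sketch below uses a simple statistical outlier check (amounts more than a chosen number of standard deviations from a customer's historical mean). This is a hypothetical stand-in for the kind of learned anomaly model the text describes, not any bank's actual AML system; the function name and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_unusual_transfers(amounts, threshold=2.0):
    """Flag transfer amounts that deviate from the historical mean
    by more than `threshold` standard deviations.

    A toy statistical stand-in for a learned anomaly-detection model;
    real AML systems would use far richer features than amount alone.
    """
    if len(amounts) < 2:
        return []
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Routine monthly transfers plus one large outlier
history = [120, 95, 110, 130, 105, 98, 115, 9500]
print(flag_unusual_transfers(history))  # → [9500]
```

In practice, a model-based approach replaces the fixed threshold with patterns learned from labelled or historical data, which is what lets it cut down on the false-positive exceptions that rule-sets generate.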

Another promising use case is robo-advisory in wealth management, which can provide advice to customers on reaching their wealth goals. This market has been expanding in recent years so it is a good time for banks to explore how AI can help create more product options for clients. 

Regarding cybersecurity, banks are automating more of their processes, including using technology to respond to threats automatically. In the future, AI might reduce the risk further by identifying hacker behaviour.

It is highly likely that AI and other emerging technologies will become even more important to banks across a variety of areas of operation in the next few years. Banks should be preparing now to ensure that they have the risk frameworks in place, as well as the right talent to support them as AI becomes even more critical to banking operations and services.