Marilyn Abate, Author | 4 min read

Imagine this scenario: You are on an overseas business trip. It’s the holiday season and you’re excited to soon return home to spend time with friends and family. Your elderly parents, who haven’t seen you in ages, are especially excited, and your mother is planning all your favourite meals and desserts. Then, one evening, the phone rings in your parents’ home. They answer, and the voice on the other end says that you’ve been detained and the only way for you to be released is for your parents to wire $10,000 to an offshore account. Bewildered and afraid, they comply. After you arrive home, safe and sound, your parents learn the call was a scam. They call the bank to report the fraud only to learn that the money is already gone.

Here’s a more common (and more expensive) scenario: You’re in the accounts payable department of a large corporation. In the otherwise normal course of business, a vendor emails an invoice noting that they’ve updated their banking information and asking you to submit payment with the new details. Nothing else about these routine exchanges seems to have changed, so you comply. Then you learn that a successful phishing attack compromised the vendor and you’ve just “paid” a fraudster.

These kinds of things happen all the time, but the good news is they don’t have to. The banking industry already uses several methods to detect anomalous behaviour on their customers’ accounts. But as threat actors’ tactics grow more sophisticated (especially in light of the pandemic), banks need to meet the expectations of both consumers and regulators by identifying and stopping fraudulent payments like the ones I’ve just described.

The road most travelled
Advanced data analytics techniques for detecting financial crime have progressed over the years. Rules and models have focused on spotting behaviour that doesn’t resemble the customer, such as logins from unusual locations or devices. Many solutions focus on the customer’s login behaviour, typical phone use habits, and even their typing patterns to help determine whether the customer is legitimately signing into their online banking account.

However, consumers continue to fall victim to romance scams, business email compromise, investment scams, cryptocurrency scams and more. There is a real need for banks to extend their customer behaviour analytics to other customer activities, such as payments. Case in point: had those parents in the first scenario ever wired money to an offshore account before? Let’s assume not. Banks should increasingly be paying attention to payment-level signals like this, because doing so makes a real difference in protecting their customers and their money, and in protecting the banks themselves.
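To make that concrete, here is a minimal sketch in Python of the kind of payment-level check described above: flag a wire transfer when the customer has never sent money to the destination country before and the amount is large. The data structures, the $5,000 threshold and the sample history are illustrative assumptions, not any bank’s actual detection rules.

```python
# Illustrative only: flag a wire when the customer has never paid this
# destination country before and the amount exceeds an assumed threshold.
from dataclasses import dataclass


@dataclass
class WirePayment:
    customer_id: str
    destination_country: str
    amount: float


def is_first_time_destination(payment: WirePayment,
                              history: list[WirePayment]) -> bool:
    """True if this customer has never wired funds to this country before."""
    past = {p.destination_country for p in history
            if p.customer_id == payment.customer_id}
    return payment.destination_country not in past


def should_hold_for_review(payment: WirePayment,
                           history: list[WirePayment],
                           amount_threshold: float = 5_000.0) -> bool:
    """Hold for manual review when the destination is new and the amount is large."""
    return (is_first_time_destination(payment, history)
            and payment.amount >= amount_threshold)


# The opening scenario: a first-ever $10,000 wire to an offshore account.
history = [WirePayment("cust-42", "CA", 120.0), WirePayment("cust-42", "CA", 85.0)]
print(should_hold_for_review(WirePayment("cust-42", "KY", 10_000.0), history))  # True
```

In practice a bank would combine many such signals, but even this simple rule would have paused the wire in the opening scenario long enough to ask a question.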

The fact is, threat actors are constantly adapting to the new tools, strategies and technologies that banks implement to protect their customers and transactions. With the rise of digital banking, threat actors have moved beyond simple online account takeovers from their own devices and IP addresses. Now, they typically mask their login activity to impersonate the customer, infiltrate devices through malware, or take over the customer’s phone number to circumvent multi-factor authentication. Phishing also remains common: customers are sent suspicious links through which threat actors harvest the confidential information needed to access online banking, or install malware onto the customer’s computer.

As my two introductory scenarios hopefully make clear, customers are sometimes manipulated into voluntarily sending payments they believe to be legitimate, only to learn they’ve fallen victim to a scam. In many cases, customers are convinced to share their one-time passcodes with threat actors who claim to be calling on behalf of the bank.

Paradigm shift time
There are increasing expectations that banks should identify fraudulent payments that appear anomalous for the customer, even when the customer has voluntarily initiated and authorised the payment. The UK courts have established the “Quincecare duty,” under which a bank can be held liable for executing a customer’s instructions when it is “put on enquiry” that doing so might facilitate a fraud on that customer. The application of the Quincecare duty is shifting more of this liability onto banks, placing greater expectations on them to prevent their customers from making fraudulent payments, even when the customer is the victim of a scam.

I believe two key updates to current approaches are critical: (1) providing customers with training and resources, and (2) using behaviour analytics to better manage relationships. On the former, banks should continue to educate customers about the various scams that are out there and how to protect themselves. On the latter, while banks have rightly focused on Know Your Customer (KYC) principles as part of their regulatory regime, that focus needs to broaden. Know Your (Customer’s) Business (KYB) should be the new standard, achieved through advanced analytics on the customer’s transaction behaviours.

The KYC rigour applied at onboarding needs to be maintained throughout the customer lifecycle, but institutions also need to keep learning about their customers and their customers’ business. This can be done by monitoring customers’ spending habits and payment behaviours to help determine whether a given payment is anomalous. The regulators will appreciate it, your commercial clients will thank you, and those elderly parents will thank you perhaps most of all.
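As a rough illustration of what knowing the customer’s business could look like in code, the sketch below scores a payment against the customer’s own history: how far the amount deviates from their usual payments, and whether the beneficiary is one they have paid before. The weights, cap and account names are made-up assumptions for illustration, not a production model.

```python
# A simple, illustrative anomaly score for a payment, based only on the
# customer's own history. Weights, caps and field names are assumptions.
import statistics


def payment_anomaly_score(amount: float,
                          past_amounts: list[float],
                          beneficiary: str,
                          past_beneficiaries: set[str]) -> float:
    """Higher scores mean the payment looks less like the customer's usual behaviour."""
    score = 0.0
    if len(past_amounts) >= 2:
        mean = statistics.mean(past_amounts)
        stdev = statistics.pstdev(past_amounts) or 1.0   # guard against zero variance
        score += min(abs(amount - mean) / stdev, 10.0)   # capped amount deviation
    if beneficiary not in past_beneficiaries:
        score += 3.0                                     # never paid this account before
    return score


# The vendor scenario: a buyer who normally pays around 2,000 per invoice
# suddenly sends 48,000 to a "new" account after an emailed change of details.
history = [1_900.0, 2_100.0, 2_000.0, 2_050.0]
known_accounts = {"vendor-account-001"}
print(payment_anomaly_score(48_000.0, history, "vendor-account-992", known_accounts))
```

A production system would of course weigh far more context, such as payee risk, device and session signals, but the principle is the same: the customer’s own transaction history is the baseline.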
