As featured on PhilStar: Defending against deceptive advances

If people living in the year 2000 were to see the technology that exists today, they would likely be astonished by the advancements of the past two decades: outsourcing math to calculators and various applications, spelling to spell-check, memory to the internet and the cloud, manual labor to automation, and, to a certain extent, writing to ChatGPT. While some of these advancements may have been difficult to imagine at the time, they have since become integral parts of many people's daily lives.

What is ChatGPT?

As many of you may have heard or seen on social media, ChatGPT has become increasingly popular as an easily accessible and user-friendly application capable of engaging in conversation and generating contextually appropriate responses, albeit with some limitations. ChatGPT is an Artificial Intelligence (AI) language model developed by OpenAI, a company that has attracted billions of dollars in funding. Its ability to mimic human conversation and writing style is nothing short of impressive.

How much buzz is ChatGPT generating in the Tech World?

It is a big deal because it is a potentially disruptive technology that carries both immense promise and inherent risks.

"(The) recent advances in AI will surely usher in a period of hardship and economic pain for some whose jobs are directly impacted and who find it hard to adapt," said Ajay Agrawal et al. of Harvard Review Press. 

e-Commerce was disruptive in many ways, too. It provided a convenient online marketplace where goods could be bought and sold in new and efficient ways, and its extensive distribution network contributed to the decline of traditional retail stores and shopping malls.

"ChatGPT is scary good. We are not far from dangerously strong AI," said Elon Musk via a tweet on Twitter, and which Sam Altman, OpenAI's chief, responded by saying, "I agree on being close to dangerously strong AI in the sense of an AI that poses, e.g., a huge cybersecurity risk. And I think we could get to real AGI (Artificial General Intelligence) in the next decade, so we have to take the risk of that extremely seriously too."

While ChatGPT can bring benefits as a potentially disruptive technology, it has also been associated with certain security threats, including increases in phishing, spear-phishing, spam, malware attacks, scams and fraud.

ChatGPT is changing the playing field for Phishing

According to data from Statista, a research company based in Germany, the Philippines saw a significant increase in phishing attacks during the first half of 2022, exceeding the total recorded for the entire year of 2021: over 1.8 million attacks were detected in that period, compared with 1.34 million in 2021. Moreover, during the fourth quarter of 2022, 51 percent of Filipino respondents who had experienced digital fraud attempts were targeted with phishing attacks. Notably, e-commerce shops, payment systems and local banks were among the most targeted entities in the country.

In the past, social engineering and phishing attacks were often characterized by known red flags, such as confusing requests or offers, urgent or high-priority messaging, misspelled names and poor grammar. These red flags were considered typical signs of a phishing email, and most users could quickly spot them and delete or ignore the message. Recent advances in AI, however, have removed these clear markers, making it increasingly difficult for regular users and technical experts alike to distinguish a legitimate email from a phishing campaign, and leaving everyone more susceptible to deceptive schemes aimed at collecting sensitive information.

At the core of successful phishing campaigns lies the art of persuasive communication. Cybercriminals today need not rely solely on technical proficiency to be effective; they also manipulate human psychology, leveraging persuasive language and manipulative tactics to create a sense of urgency, fear and legitimacy. Consequently, ChatGPT is becoming a one-stop shop for cybercriminals who lack those communication and persuasion skills and want to enhance their phishing emails. This AI-powered chatbot enables even inexperienced threat actors to elevate their game by producing convincing emails that are indistinguishable from legitimate ones. Where misspellings and poor grammar once raised immediate doubts, ChatGPT eliminates such giveaways. Although the chatbot has safeguards to prevent misuse, threat actors can easily work around them.

Navigating through other Security Threats of ChatGPT

Recently, cybersecurity professionals have observed instances where users managed to exploit ChatGPT, bypassing its ethical filters to generate code for malicious software. This practice, known as jailbreaking, enables users to manipulate ChatGPT's responses for potentially unethical purposes. To our slight relief, however, the current threat posed by AI-written malware remains minimal, given the significant flaws in, and rudimentary nature of, the malicious code generated so far.

Have you ever found yourself in a situation where you received an email seemingly from someone you knew well, such as your boss, a colleague or even an estranged friend, whose contents sounded urgent and insisted on immediate action? You checked all the indicators and believed it was legitimate, only to discover that you had unwittingly opened suspicious links, potentially exposing yourself or your company's network to significant harm. Congratulations, you fell into a spear-phishing trap!

Shockingly, ChatGPT can be exploited to assume various roles and personas in scams that are personalized to target a specific person, a technique known as spear-phishing. Whether it's the notorious Nigerian Prince scheme or romance scams that play on emotions, ChatGPT can be prompted to adopt convincing personas, further enhancing the effectiveness of these fraudulent activities.

Bolster your Digital Fortifications

ChatGPT undoubtedly ushers us into a new era in the digital world, particularly in social engineering. We may have to say goodbye to much of what we have learned about spotting phishing emails, as this chatbot enables a new level of sophistication in deceptive communication. More than ever, we should be poised to evolve alongside these new capabilities and adapt to the security threats they pose. Stay ahead of the curve as we navigate this ever-changing digital and cybersecurity landscape.

• Phishing Campaigns:

Be more vigilant than ever. Despite the absence of traditional red flags, be cautious and skeptical of all incoming emails, especially those requesting sensitive information or urging immediate action. Think before you click links or download attachments from unfamiliar emails, even if they are written in perfect English. Remember, phishing emails leverage psychological tactics to manipulate recipients into hasty actions.

• AI-written Malware:

Keep your devices and software up to date with the latest security patches. Install reputable anti-malware software and conduct regular scans to detect and eliminate potential threats. Remember, prevention is always better than cure. 

• Spear-Phishing Traps:

Be more cautious than ever because AI can impersonate anyone. Treat unexpected emails or messages with skepticism, even if they appear to come from familiar sources and seem tailored to your interests or address you by name. Verify the sender's identity through alternative channels before clicking links or sharing sensitive information. Remember, stay vigilant for requests that seem out of the ordinary and take the time to verify their authenticity.

• Fraud:

Keep an eye out for unsolicited calls, texts, emails or direct messages asking for personal information, financial investment or assistance. Always verify the legitimacy of any request through official channels, and never share sensitive data without confirming the recipient's identity. Remember, don't become a statistic in the world of fraud.

• Romance Scams:

Always keep in mind that scammers prey on people's emotions. Internet romance can cloud your judgment, but sweet messages don't last forever. AI still isn't always the smoothest operator when it comes to human language: it often falls back on short responses, reuses the same set of words, and generates a lot of content without saying much at all, a telltale lack of substance. Remember, always put your mind over matters of the heart.

Additionally, strengthen your digital hygiene by maintaining robust passwords for each online account, enabling two-factor authentication whenever possible and utilizing a virtual private network (VPN) when connecting to public Wi-Fi networks.

Knowledge and vigilance remain our most potent weapons in the face of ever-evolving security threats. By staying informed about the latest threats and practicing vigilance in our online activities, we can confidently embrace the benefits of AI while safeguarding our digital lives. 

Timothy John C. Paz
Advisory Manager
KPMG in the Philippines

Timothy John C. Paz is an Advisory Manager from KPMG in the Philippines (R.G. Manabat & Co.), a Philippine partnership and a member firm of the KPMG global organization of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. The firm has been recognized as a Tier 1 in Transfer Pricing Practice and in General Corporate Tax Practice by the International Tax Review. For more information, you may reach out to Advisory Manager Timothy John C. Paz or Technology Consulting Partner Jallain S. Manrique through ph-kpmgmla@kpmg.com, social media or visit www.home.kpmg/ph.

This article is for general information purposes only and should not be considered as professional advice to a specific issue or entity. The views and opinions expressed herein are those of the author and do not necessarily represent KPMG International or KPMG in the Philippines.

https://cfonewshubb.com/2022/12/12/chatgpt-and-how-ai-disrupts-industries/
https://twitter.com/elonmusk
https://www.statista.com/statistics/1349352/philippines-number-of-phishing-attacks/
https://www.csoonline.com/article/3685368/study-shows-attackers-can-use-chatgpt-to-significantly-enhance-phishing-and-bec-scams.html