Early “Nigerian prince” emails were treated more as a topic of popular humor than as a genuine threat. In this early iteration of telecom fraud, characterized by English that was clearly non-native and sometimes difficult to understand, victims were promised large sums of money once they paid certain fees. As the name suggests, this model was typically based out of Africa, a fact that was difficult to conceal from targets due to the large culture gap.

In the recent wave of romance scams, the scammers likewise promise large sums in exchange for small ones, although with the addition of much more sophisticated social engineering. This model is often based out of South Asia, and the personae are often Chinese, even when targeting other parts of the world, in order to explain the scammers’ non-native language ability.

Social engineering is a complex area that seems ripe for improvement through the application of large language models (LLMs). The Global Anti-Scam Asia Summit, held in Taipei in November, focused to a large extent on the emerging role of AI in scam ecosystems. The situation will get worse before it gets better, not just for telecom fraud but also for more sophisticated cyberattacks. Nevertheless, despite all this bad news, a number of approaches were also presented at the conference for leveraging new technologies on the defensive side. One promising approach is to use social engineering back against attackers.

Grim tidings

LLMs can not only help scammers cover up their geographical origin, but also potentially help automate the early stages of the process, which involve frequent rejection and significant repetition. They could eventually displace the “opener” role in the emerging division of labor amid ongoing industrialization of the scam business, helping focus human resources on the higher-value “closer” role. In fact, recent reports indicate that in addition to these positions, scam complexes will often have entire departments devoted to AI integration, similar to the way that corporations might have a digitalization department.

Early hopes for strong AI regulation – to the extent they were ever credible in the first place – have probably been dashed for good with the December release of Mistral, an open-source model with capabilities between those of GPT-3.5 and GPT-4. From now on, all efforts to deal with the effects of AI in an adversarial context will take place on the defensive side.

LLMs currently lack the capacity for the type of psychological manipulation needed to fully automate long-term “pig butchering” schemes. They do possess emotional intelligence and can respond to the tone of a conversation, which could be useful in therapeutic contexts. Scamming, however, means driving, rather than responding to, the emotional tone of an interaction, which is a very different skill set. It is also, fortunately, not something legitimate AI researchers have put much effort into training their models for.

That said, the lack of this specific capability is not a major obstacle, since operators can encode their intentions in their prompts. Social engineering encompasses a broad category of tactics, including attacks that fall under the more traditional definition of cybersecurity. Phishing, often used for initial penetration, must pass through a human actor, aided by research on public data sources like LinkedIn. Ransomware could be called an ‘upmarket’ version of telecom scams, and it also contains social elements: the ransom negotiation process is more effective when guided by real background information on a company’s financial status, which may be gleaned from hacked sources in addition to the open internet. Each of these steps has significant potential for automation.

Who has time to respond to scam solicitations?

At the summit, law enforcement agencies reported a significant improvement in the quality of phishing emails. Gone are the days of badly formed sentences and far-fetched backstories. We can expect that long-running interactions will not be far behind. The scam model has continued to spread as its economics are increasingly understood by criminals; an operation announced by Interpol on December 8 uncovered human trafficking in both directions between Asia and locations such as South America and Africa.

As a general intuition in cybersecurity, the attacker only needs to succeed once, while the defender can never let its guard down. Generative AI – the LLMs that have taken the world by storm over the past year – is well-suited to offense, while classification models are the mainstay of defense. Development of the former has far outpaced the latter in recent years. What hope is there for respite from the rapidly growing sophistication and reach of these operations?

Participants in the Summit mentioned several good ideas for application of generative AI on the defensive side, including for what might be termed reverse social engineering. Sean Lyons, Chief Online Safety Officer of Netsafe New Zealand, described a project called Re:scam which created a bot to engage with scammers who had been reported by the public, thereby wasting their time. This idea marks a more professional implementation of the internet phenomenon of ‘scambaiting,’ which has been around at least since the “Nigerian prince” era.

The bot engaged scammers in long conversations, even accusing the scammers themselves of being bots. According to its statistics, besides wasting a cumulative five years of scammers’ time, it also helped New Zealand police gather information on the tactics, techniques, and procedures (TTPs) used in real attacks.

Defensive social engineering has the benefit of lower requirements for emotional affect: the bot only needs to convincingly mirror the tone set by the adversary. One could imagine using AI to extend this technique to higher-tech attacks as well. It is essentially a honeypot technique, although honeypots have generally not been considered within the realm of social engineering. Perhaps effort could be put into scaling this approach, using generative AI to create increasingly realistic decoys. Hackers have human needs too, reflected in their time constraints.
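The tone-mirroring core of such a bot is simple to sketch. The following is a hypothetical, rule-based illustration – not Re:scam’s actual implementation, whose details are not public – in which the adversary’s tone is classified from keywords and answered with a stalling reply in kind; a real deployment would substitute an LLM for these lookup tables.

```python
# Hypothetical sketch of a time-wasting "scambait" responder.
# The tone labels, keywords, and replies below are illustrative only.

TONE_KEYWORDS = {
    "urgent": ["urgent", "immediately", "act now", "last chance"],
    "friendly": ["dear", "friend", "hello", "how are you"],
    "threatening": ["police", "arrest", "legal", "consequences"],
}

STALL_REPLIES = {
    "urgent": "Oh no, I don't want to miss this! My internet is slow, can you resend the details?",
    "friendly": "So nice to hear from you! Tell me more about yourself first.",
    "threatening": "I'm very worried. Which office should I call to sort this out?",
    "neutral": "Sorry, I don't quite understand. Could you explain again?",
}

def detect_tone(message: str) -> str:
    """Return the first tone whose keywords appear in the message."""
    lowered = message.lower()
    for tone, keywords in TONE_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return tone
    return "neutral"

def stall_reply(message: str) -> str:
    """Mirror the adversary's tone with a reply that invites further effort."""
    return STALL_REPLIES[detect_tone(message)]
```

The point of the design is that the bot never needs to set an emotional agenda of its own; every reply is a function of the tone the scammer has already established, which is exactly the lowered bar that makes the defensive case easier than the offensive one.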

A personal secretary

Many of the largest defensive benefits of AI will also come from more straightforward methods that integrate it into various platforms. Jeff Kuo is President of Gogolook, which calls itself a “TechTrust company,” makes the Whoscall app, and co-organized the Summit. He spoke of the potential of AI personal assistants: while positioned to consumers as organizational tools, they could also warn users about fraudulent patterns in their conversations.

Similarly, on the more technical side, AI has found many uses in combination with the cloud, such as helping guide data governance during cloud migration. It can help manage attack surfaces, an increasingly important facet of security as companies find themselves using more and more external services.

In the end, some degree of “proper friction” may be necessary through deeper and more stringent know your customer (KYC) procedures, suggested Jorij Abraham, Managing Director of the Global Anti-Scam Alliance (GASA), also a co-organizer of the Summit. This idea would slow or even reverse progress in connectivity and inclusion made by fintech and other areas, but it may prove reasonable as the threat continues to grow and diversify.

An AI arms race dynamic is foreseeable. The EU recently passed the AI Act, the world’s first attempt at comprehensive legislation on the technology. Clearly AI will have a wide array of effects on various regulated activities. Regarding social engineering, however, rather than attempting to stop generation of malicious content at the source, it may well be more worthwhile to simply create defensive applications directly. It is not entirely obvious which parts of the scam process should be attacked, but experimentation may yield fruitful results. Certainly adversaries have shown no lack of creative thinking.