ChatGPT has been accused of many things: creating a new way for students to cheat, killing jobs, churning out a cheap supply of bad copy, and, my personal favorite, being a dangerous and unholy evil that threatens the moral fabric of society.
But people have been doing, or been paid to do, all of those things for decades. OpenAI and other generative artificial intelligence providers just make them cheaper. Mostly, the technology promises to eliminate a huge amount of the drudgery in white-collar work, just as looms, assembly lines, and robots did for blue-collar work over the last century.
Like any tool or technological innovation, generative AI can be used for great benefit or to do great damage.
What Are Large Language Models (LLMs) Good At?
The use of ChatGPT, Google Bard, and other tools has made it a lot easier to glean useful insights from forecasting… but the uses go beyond just finance.
LLMs are persuasive. They know, with mathematical precision, what arrangements of words people find most compelling, and they’re trained to color inside those lines.
They have vast knowledge of facts and details.
They can summarize and rephrase text in any style you choose.
They aren’t, however, good at telling the truth. Computers are famous for rigid logic, yet LLMs paradoxically can’t tell truth from fiction: they are fully capable of fabricating an answer to a question, then seamlessly reinforcing that fabrication with complete confidence.
What This Means for Us
I don’t want to get too heady about it all, but, at this point in history, things are happening.
Technical, political, and economic developments have combined to confound many of the indicators people use to assess credibility and assign trust. Just as fake photos on social media confused people, ChatGPT and other LLMs are now a part of the mix. As a society, we will once again have to adjust, learning new ways to discern truth from misinformation.
Same old people, same old story.
How Cyberattacks and Phishing Scams Work Today
High philosophy aside, the rise of LLMs poses a particular threat to businesses: it lowers the cost of high-quality phishing attacks.
“People Getting Tricked” is the single biggest source of practical cybersecurity loss. While the industry focuses on technical threats and solutions, the overwhelming majority of successful cybercrime relies on fooling someone at some point in the process.
Today, there is a thriving industry of call centers dedicated to fraud and cybercrime. They have suppliers and vendors, procedures and HR, and probably occasional leadership retreats to celebrate a job well done. It’s a whole thing.
Some parts of the fraud pipeline are automated: the initial emails looking for love or telling you about a great business opportunity are recycled templates. You’ll find the same email in your neighbor’s inbox, maybe tweaked in their endless A/B testing for better conversion rates.
Once you or your neighbor clicks or responds, someone in the call center takes over, following procedures and using skills honed over years to move the “lead” (you) through the “sales pipeline” (scam) and “close the sale” (defraud you) as efficiently as possible.
These employees have performance evaluations, key metrics, bonus structures, and sick days. They are the biggest cost of running a phishing call center. Just like in any business, higher-performing firms and individuals yield better conversion rates and revenues, and thus command higher rates.
There’s a bit of natural segmentation that happens in the fraud “market” based on the expected “yield” (dollars you can scam) of targets:
- Low-cost, low-performance firms usually pursue a higher number of low-yield targets. This is the fraud equivalent of Instagram ads for gimmicky low-cost knick-knacks; try to get it in front of as many people as possible and you might get a few sales.
- High-performance teams pursue higher-yield targets and invest more in each one. These are the large B2B accounts of the fraud world; conduct extensive research and get your best sales and customer service reps on it.
It makes poor business sense to assign low-yield leads to high-performance teams, so low-yield targets (i.e., me and most of you) rarely receive sophisticated attempts run by skilled fraudsters.
How LLMs Will Change the Game
Let’s go over what we know already:
- AI tools are good at sounding persuasive.
- AI models are good at digesting a bunch of detailed information from a dataset and spitting out something unique, specific, and plausible.
- Generative AI has near-perfect fluency in English and encyclopedic background knowledge of nearly every subject on the internet.
- GPT’s algorithms can be tuned to write in the style of any demographic.
What’s more, AI’s disadvantages are inconsequential to the fraud industry: truth doesn’t matter. As long as the mark falls for the scam, management doesn’t care what baloney it takes.
One approach not working out? Don’t worry: machine learning has no ego, and it’ll switch tactics without a second thought (or a first, for that matter…).
The cherry on top? AI tools cost almost nothing compared to a call center full of con artists.
Timeline to Launch
Unfortunately, the crime world moves fast. Criminals don’t have to worry about reputation, there’s less red tape, and it pays to be early in this “market”.
One crime-focused LLM product, WormGPT, a malware-friendly AI chatbot, has already become popular enough to make mainstream news. Others are doubtless available too, and it won’t be long before competitive pressure and the socialization of best practices drive operational costs down enough to make LLM-backed phishing the dominant way of getting things done.
Let’s take a moment for the soon-to-be-unemployed Scam Operations Associate.
As the phishing industry undergoes this transformation over the next several years, the cost of running a sophisticated phishing operation will drop like a rock, and the now-automated “top salesmen” and other cybercriminals will start targeting lower-yield opportunities. Many more of us will be exposed to a flood of high-quality phishing, and many will fall for it.
Time to Change Authentication Protocols
Because of phishing scams and account takeovers, many of us have learned to take messages received over email and text with a grain of salt. Like other experts, I often recommend using a phone call to verify confidential information needed for risky activities such as high-value transactions; title companies and law firms have used this strategy successfully for years.
I believe that will stop working. Believable LLM-generated phone calls are not far off. Existing products can already use voice samples to create audio of any person saying anything. As of this writing, that doesn’t work in real time; it takes a while to generate each clip. Luckily, most of us will not be fooled by five- or ten-second delays in a conversation, no matter how much the caller sounds like Johnny Cash.
But computers and tech have a funny way of getting faster. I have every confidence that incremental progress in AI voice generation tools and processing speed will make LLM-backed calling a commodity offering within a decade. We will soon lose the ability to know a person is legitimate by voice alone, and that is an opportunity bad actors can’t pass up.
Seeing Might Not Be Believing
Voice synthesis is not the only impersonation software in the hopper; tools that generate believable video of a person from past recordings already exist (and are improving). Video is substantially harder than voice: believable clips usually take long hours, significant skill, and expert artistry.
But just like audio synthesis, an automated, real-time version is only a matter of time. It may take a couple of decades, especially to become cheap enough for crime rings, but it will come.
Next Steps for Organizational Safety
Are you prepared for a world where every virtual communication could be fake? Is data privacy dead? Are we headed for a collapse of civilization and a return to the stone age?
Not quite.
Math to the Rescue
Today, we use voice, appearance, and mannerisms to identify someone. In cybersecurity terms, these factors authenticate them to us; we use them to determine whether the person speaking to us is really who they say they are. We depend on this strategy without thinking about it, and for the last 7,000+ years it’s worked great.
This new reality of LLMs and AI-generated impersonation promises to eliminate that ability. We will need to replace it with another way to authenticate when communicating online.
Here, there is hope: we have other techniques that work; they are just not yet popular. The math-first authentication tech developed over the last two decades will not be affected by these changes, and it is already cheap and easy. Many systems already depend on it without users realizing it. The loss of virtual recognition may push many toward cryptographically rigorous authentication and away from weak methods like a regular phone call or Zoom.
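To make the contrast concrete, here is a minimal sketch of what “math-first” authentication looks like: a challenge-response exchange using digital signatures, written in Python with the open-source cryptography package. The scenario and names are hypothetical, but that’s the point; a convincing voice or face contributes nothing here, only possession of the right private key does.

```python
# Minimal sketch of challenge-response authentication with digital signatures.
# The scenario is illustrative, not a specific product's protocol.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# One-time setup: the person you may need to verify later generates a key pair
# and shares only the public key with you (e.g., in person during onboarding).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Later, when "the CFO" calls asking you to wire money:
challenge = os.urandom(32)               # you send a fresh random challenge
signature = private_key.sign(challenge)  # only the real key holder can sign it

try:
    public_key.verify(signature, challenge)
    print("Signature valid: the caller controls the right private key.")
except InvalidSignature:
    print("Signature invalid: treat the request as fraudulent.")
```

In practice, you would lean on existing products that package this math for you (hardware security keys, passkeys, your identity provider) rather than rolling your own, but the underlying guarantee is the same.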
Tell Me Something Good
For better or worse, society will figure itself out. Luckily, you can take a more active role in your firm’s future and prevent this threat from affecting your business. It’s not even that hard.
First, move all sensitive information behind a federated and cryptographically rigorous identity provider. Then, make it clear across your company that people are to depend on that provider rather than stop at recognizing someone by writing style or voice.
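To illustrate what “depending on the identity provider” can mean in code, here is a hedged sketch of a service verifying a signed ID token from an OIDC-style provider using the PyJWT library. The issuer, audience, and JWKS URL are hypothetical placeholders; your provider’s documentation is the real reference.

```python
# Sketch: allow a sensitive action only if the identity provider's signed
# token verifies. Issuer, audience, and JWKS URL are hypothetical placeholders.
import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://login.example-idp.com/"
AUDIENCE = "finance-portal"
JWKS_URL = "https://login.example-idp.com/.well-known/jwks.json"

def verify_caller(id_token: str) -> dict:
    """Return the verified identity claims, or raise if the token is forged or expired."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )

# Usage: gate the action on the verified identity, not on whether the request
# "sounds like" the CFO.
# claims = verify_caller(request_token)
# if claims.get("email") == "cfo@example.com":
#     approve_wire_transfer()  # hypothetical helper
```

The design point is that the signature check, not human recognition, establishes who is on the other end; writing style, voice, and video never enter into it.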
And, lastly, you can focus on what you’re best at and let others, like me, read about the rest. Subscribe to The CFO Club’s newsletter if you want to be the first to know when I see the next threat to society.