AI and Phishing: What to Watch for in 2025


2024 was a great year for hackers. 

Healthcare organizations reported a surge in attacks, including one so severe it disrupted UK NHS hospital services for weeks.

Bad actors from China ramped up attacks on the U.S. and other countries. 

Even cities like Columbus, Ohio, endured relentless ransomware attacks. 

Phishing remained the most popular threat vector last year, with most attacks originating from web apps, social media and email. 

But thanks to AI, things are about to get a whole lot more interesting in 2025. 

How AI empowers bad actors

Hackers are crafty. They’re always looking for new, more creative ways to get into our inboxes and accounts. 

In recent months, cybersecurity experts have seen a dramatic spike in hackers using AI and machine learning to make their attacks easier and more effective. 

It’s gotten so bad that the FBI issued a warning for individuals and businesses to be on the alert for more sophisticated AI-powered phishing/social engineering attacks and voice/video cloning scams. 

Hackers can use AI to collect and analyze online personal data in greater detail, especially over social media. They can also use AI-automated brute force attacks to guess username and password combinations. AI can even help them create malware designed to evade detection.

And that’s not all. 

AI-powered large language models (LLMs) can analyze massive amounts of data to create more believable phishing emails, translate languages and even engage in dialogue with the victim. 

Then there’s AI-powered voice phishing, which pairs voice cloning with mobile device takeover tools to let hackers impersonate everyone from IT techs to banking customer service reps. 

A closer look at AI-powered voice-based phishing

Here’s an example of how a voice-based phishing scam works:

A hacker sends a text message to a mobile phone customer informing them of a supposed overcharge. The text includes a phone number the customer can call for a refund. 

When the customer dials the number, the call is routed to an AI agent that uses an artificial voice generator to pose as a service representative, ready to take the customer’s bank account number or credit card information. 

On top of that, the hacker might also send a QR code that gives them access to the user’s phone and personal data.  

Terrifying. 

No user or business is safe

Carefully targeted AI-powered voice-based phishing campaigns can make the person on the other end of the call seem very real. 

For example, a Facebook search might reveal that the person the hacker wants to impersonate is a cat lover. When they call one of the person’s colleagues or vendors, the hacker adds the sound of meowing in the background to put the victim at ease. 

These detail-rich cons can fool even those who are relatively tech-savvy. C-level executives, with their access to sensitive data and ability to approve transactions, are especially prime targets.

Using AI to fight back

In 2024, we already saw some companies use AI to fight back against hackers.

Virgin Media O2, a UK broadband and mobile provider, created a human-like AI bot called “Daisy” to waste hundreds of hours of scammers’ time. Daisy answers calls in real time, keeping hackers on the phone (and away from sensitive data) for hours. 

Daisy sounds like an elderly lady, fooling scammers into thinking they’ve found a perfect target when, really, she’s beating them at their own twisted game.  

We hope to see more of this in 2025. 

How to keep your users safe

As hackers and scammers up their game with AI, IT pros will have to do the same. But don’t worry: you don’t have to go as far as developing an AI-powered bot like Daisy to keep end users safe. 

Even with all of the advancements in technology, old-school training is still one of the best ways to protect end users. But don’t rely solely on the annual company-wide session. Instead, host quarterly sessions tailored to each department or business unit. 

These are more relevant and interesting, which means people will actually pay attention.

You can also use phishing simulators to impersonate hackers and simulate real-world attacks. (Be kind to the users who fall for them.)
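To make the simulator idea concrete, here’s a minimal Python sketch of how a simulation campaign might generate its test emails. Every name, URL and field here is a hypothetical placeholder — real simulators (and the tracking pages behind them) are far more involved — but the core idea is the same: each message carries a unique, harmless tracking token so you can see who clicked without ever collecting credentials.

```python
import uuid

def build_simulation_email(recipient: str, campaign_id: str, base_url: str) -> dict:
    """Build one simulated phishing email with a unique tracking link.

    The per-recipient token is stored server-side so clicks can be
    attributed to users for follow-up training, not punishment.
    """
    token = uuid.uuid4().hex  # unique per recipient
    tracking_link = f"{base_url}/landing?c={campaign_id}&t={token}"
    body = (
        "Hi,\n\n"
        "We noticed unusual sign-in activity on your account. "
        f"Please review it here: {tracking_link}\n\n"
        "IT Security Team"
    )
    return {
        "to": recipient,
        "subject": "Action required: unusual sign-in activity",
        "body": body,
        "token": token,
    }

# Generate one message per user in a quarterly campaign
emails = [
    build_simulation_email(user, "q1-2025", "https://training.example.com")
    for user in ["alice@example.com", "bob@example.com"]
]
```

The “landing” page a clicker reaches would simply log the token and redirect to a short training reminder — the simulation never handles real passwords.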

Cover your SaaS with SaaS Alerts

Sadly, no matter how much training you provide, all it takes is one errant click on one malicious link to bring down an entire organization.

SaaS Alerts provides 24/7 monitoring to help identify potential breaches and keep your end users safe.

We also use AI against hackers. Our machine learning pattern detection can automatically lock down impacted accounts and stop dangerous end-user file sharing activity, saving you time and stress.

Sign up for a free demo to see how we make it easy to cover your SaaS — in 2025 and beyond.
