Remember when you could spot a phishing email because it had terrible grammar or came from a weird email address?
Those days are over.
Research from Hoxhunt showed that by March 2025, AI-generated phishing attacks had become more effective than those created by elite human security experts. The AI didn’t just catch up; it surpassed the best humans at social engineering.
Let that sink in. The people whose entire job is creating realistic phishing simulations to test your employees? AI is better at it than they are.
The Scale of the AI Phishing Problem
According to the World Economic Forum, phishing and social engineering attacks increased 42% in 2024. That was before AI really hit its stride.
The attacks aren’t just better written anymore. They’re contextual and arrive at the exact right time. They reference real projects, real people in your organization, and real deadlines.
Google’s 2026 forecast warns that attackers are using AI to create emails that are essentially indistinguishable from legitimate communication.
This is what that looks like in practice:
You receive an email from your CFO requesting an urgent invoice payment. It uses her exact writing style. It references the specific vendor you’ve been working with. It arrives right when you’d expect such a request. The email address looks right. The signature looks right. Everything looks right.
Except it’s not from your CFO. It’s from an AI that studied 50 of her previous emails and generated a perfect forgery.
Voice Cloning: The New Frontier
Email isn’t even the scariest part anymore.
A tech journalist recently demonstrated that she could clone her own voice using cheap AI tools and fool her bank’s phone system – both the automated system and a live agent – in a five-minute call.
Think about what that means for your business. Your CFO gets a call that sounds exactly like your CEO: voice, cadence, the way they clear their throat, everything. The voice asks for an urgent wire transfer to close a time-sensitive deal.
How do you defend against that?
Why Traditional Phishing Training Fails Against AI
Your annual security training tells employees to look for:
- Spelling and grammar errors (AI doesn’t make these mistakes)
- Generic greetings (AI personalizes everything)
- Suspicious sender addresses (AI uses compromised legitimate accounts)
- Urgent requests (legitimate urgent requests also sound urgent)
- Links that don’t match the display text (AI uses legitimate-looking domains)
Every single indicator you’ve trained people to watch for? AI bypasses them.
What Actually Works Against AI-Generated Phishing
The old training about “look for spelling errors” is dead. Your employees need to understand that verification matters more than urgency.
Use these practices to protect yourself and your team:
Slow down when things feel urgent. Urgency is the weapon. If someone’s asking for sensitive information or money transfers, that urgency should trigger caution, not immediate compliance.
Verify through a different channel. Email says it’s from your CEO? Call them on a known number. Text message from your bank? Call the number on your card, not the one in the message. Voice call asking for a transfer? Hang up and call back.
Trust your judgment about whether requests make sense. Does your CEO normally ask for wire transfers via text? Does your IT department usually request password resets through email? If the method doesn’t match the request, verify.
Create a culture where questioning is safe. Your employees need to know they won’t get fired for double-checking whether the CEO really sent that request. These attacks exploit hierarchy and time pressure.
The Reality for Professional Services Firms
The accounting firms, law offices, and property management companies we work with are particularly vulnerable to these attacks because:
- They handle sensitive financial information
- They regularly process wire transfers
- They work with clients who expect fast responses
- They have hierarchical structures that discourage questioning authority
One immigration law firm we work with almost lost $180,000 to an AI-generated email that perfectly mimicked its managing partner’s communication style, requesting an urgent retainer transfer. The only thing that saved them was an associate who thought the request was weird enough to verify in person.
That associate didn’t stop the attack because they spotted technical indicators. They stopped it because something felt off, and they were empowered to question it.
What This Means for Your Business
You need to update your security training immediately. Not next quarter. Not when the budget allows. Now.
The training needs to focus on:
- Verification procedures that work regardless of how legitimate something appears
- Creating psychological safety for employees to question urgent requests
- Understanding that AI can fake anything visual or auditory
- Practicing what to do when something seems both urgent and suspicious
You need to practice these procedures regularly. Not once a year during security awareness month. Monthly at minimum.
Because the attacks are getting better every single day. Criminals using them no longer need your employees to click a suspicious link. They need your employees to trust their eyes and ears when they shouldn’t.
The Quick and Easy: AI-generated phishing attacks now outperform human security experts, with attacks increasing 42% in 2024. AI generates emails and phone calls that are indistinguishable from legitimate communication, bypassing traditional phishing indicators such as spelling errors, generic greetings, and suspicious links. Voice cloning technology can fool both automated systems and live humans. Traditional training focusing on spotting errors no longer works. Instead, businesses need verification procedures that work regardless of appearance, cultures where questioning authority is safe, and regular practice with realistic scenarios. Professional services firms are particularly vulnerable due to their hierarchical structures and regular financial transactions. The key defense is slowing down when things feel urgent and verifying through different channels.
The uncomfortable truth is your employees are using AI tools you don’t know about. Right now. Today.
IBM’s latest research found that 20% of organizations already suffered a breach due to what they’re calling “shadow AI” – employees using unauthorized AI tools without IT’s knowledge. The kicker is that those breaches added an average of $200,000 to remediation costs.
Think about that for a second. The issue is not the technology failing or hackers breaking through your firewall. The cause is your own people, trying to do their jobs faster, pasting proprietary information into ChatGPT, Gemini, or whatever AI tool made their work easier that day.
Why Shadow AI Happens (And Why You Can’t Stop It)
Varonis found that 98% of employees use unsanctioned apps. That’s not a typo. Ninety-eight percent. If you think your company is the exception, you’re wrong.
Why does this happen? Because your employees are struggling. They’re being asked to do more with less, and they’re exhausted. Then they discover this magical tool that can summarize a 50-page document in 30 seconds or write that email they’ve been dreading. Of course, they’re going to use it.
The problem isn’t that they’re lazy or malicious. The problem is that they have no idea what happens to the data they feed into these systems. Some AI services train their models on your inputs. Some store everything you type. Some have security controls. Most don’t.
Why Banning AI Tools Doesn’t Work
So just ban these tools outright, right? It doesn’t work. Gartner predicts that by 2027, 75% of employees will acquire or create technology outside IT’s visibility. Bans just push people to hide what they’re doing better.
This happens constantly with the accounting firms and law offices we work with. A partner bans ChatGPT, but an associate uses it on their phone anyway. Now, instead of managing the risk, you’ve just lost visibility into it entirely.
The Real Cost of Shadow AI
The financial impact goes beyond the $200,000 average breach cost. Consider what happens when:
- Your proprietary client data gets fed into a public AI model
- Your trade secrets become part of an AI training dataset
- Your confidential legal strategy gets stored on servers you don’t control
- Your financial projections end up accessible to your competitors
These aren’t theoretical risks. These are things happening right now to businesses that thought their employees would never do something that careless.
What You Actually Need to Do About Shadow AI
You need an actual policy about AI use. Not a ban. A policy.
This is what works:
Identify which AI tools are safe for your business. Not every AI tool is a security nightmare. Some have proper data handling. Some don’t train on your inputs. Figure out which ones meet your requirements.
Make approved tools easy to access. If your employees need AI to do their jobs effectively, give them a way to use it safely. The property management firms we work with that have implemented approved AI tools see almost zero shadow AI usage.
Train people on what they can and cannot share. Most people don’t realize that pasting client information into ChatGPT might expose it. They’re not trying to cause a breach. They’re trying to work faster. Teach them the difference between safe and unsafe usage.
Create a culture where people can ask questions. Your employees should feel comfortable asking, “Is this AI tool safe to use?” instead of just using it and hoping for the best.
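If you’re wondering where to start with that first step, figuring out which AI tools are actually in use, a little visibility goes a long way. Below is a minimal sketch, in Python, that counts requests to a handful of well-known public AI services from a DNS or web-proxy log export. The file name, column names, and domain list are illustrative assumptions, not a finished tool; your firewall or secure web gateway will export something different, so adapt accordingly.

```python
import csv
from collections import Counter

# Illustrative list of public AI services; extend it for your own environment.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def summarize_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI domains per user from a CSV log export.

    Assumes the export has 'user' and 'domain' columns; adjust to match
    whatever your proxy or DNS filter actually produces.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = (row.get("domain") or "").strip().lower()
            user = row.get("user") or "unknown"
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[user] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder for your own export.
    for user, count in summarize_ai_usage("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to AI services")
```

Even a rough report like this tells you which teams are already leaning on AI before you write a single line of policy, which is far more useful than pretending the usage isn’t happening.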
The Bottom Line on Shadow AI
This isn’t going away. The only question is whether you’re managing it or pretending it doesn’t exist.
The firms sleeping well at night aren’t the ones who banned AI. They’re the ones who acknowledged it exists and created safe pathways for using it.
Because your employees are already using these tools. You just don’t know about it yet.
The Quick and Easy: Shadow AI, unauthorized AI tool usage by employees, has already caused breaches in 20% of organizations, costing an average of $200,000 each. With 98% of employees using unsanctioned apps and 75% projected to acquire technology outside IT visibility by 2027, banning AI tools doesn’t work. Instead, businesses need clear AI usage policies, approved tools that are easy to access, employee training on safe data sharing, and a culture that allows people to ask questions before using new tools. The technology itself isn’t the risk; using it without oversight or understanding the consequences is.
It seems rare to find feel-good information about artificial intelligence lately, so when I spot it, I like to share it, especially when it has to compete for attention with discouraging stories where AI technology is enabling the spread of hate and misinformation. After a young woman in Rhode Island lost her voice due to the removal of a brain tumor, a team of doctors and ChatGPT-maker OpenAI used a short voice recording to create a specialized app that allows her to speak again with an accurate recreation of her own voice.
Stop there if you want to keep feeling good.
Still reading? OK, I warned you. While I’m not going to go into details because I don’t want to give them any more publicity, there are plenty of other AI startups that aren’t limiting their platforms to medical applications where everyone involved provides ongoing consent for the use of their voice. While I’m sure they claim their software can only be used in completely consensual situations, we’ve already seen deepfakes spread for political propaganda, extortion, and reputation destruction.
Given AI’s potential, many companies may have started out with stars in their eyes for all the imagined possibilities the technology could enable, but once buyers with suitcases of money started showing up, ethics seem to have been relegated to a secondary (or lower) concern, if they were ever a constraint in the first place. As always, money and politics complicate matters, and we humans are not known for approaching things cautiously, especially when being first to the prize means establishing dominance. It’s heartening to know that at least some folks are intent on developing genuinely helpful applications of AI technology, even if in the end their efforts will most assuredly be monetized. Our hope, as with 3-D printing, is that the technology becomes so widespread that it levels the playing field for (most) everyone.
In 2019 I wrote about the arrival of deepfakes and posited that it might take an election being stolen before anyone in the country took it seriously. Welcome to 2024, where someone engineered a robocall in New Hampshire designed to suppress the vote in that state’s January 23rd primary elections. The call featured what appeared to be an artificial intelligence-generated clone of President Biden’s voice telling recipients that their votes mattered more in November than in today’s primary. To put a nice ironic cherry on top, the robocallers seem to have spoofed a phone number from a Democratic PAC that supports Biden’s efforts in New Hampshire. Here is the actual release from the NH Department of Justice website that signals the official investigation, in case you are skeptical of the above website’s veracity.
What this means for you
I imagine that regardless of which side of the political spectrum you sit on, this presents a very scary future where we cannot trust our eyes or ears or practically anything on the internet at a time when truth and objective reasoning are crucial. The technology to do the above is readily available and accessible, and it seems a small but influential number of us cannot be trusted to act responsibly with powerful technology. If you are thinking, “well, let them duke it out in their political battles over there, I don’t need to worry about AI fakes affecting me,” let me spin a “fanciful” situation for you to consider. Let’s say you have a disgruntled ex-employee who is looking to strike back at you or your company and decides to use the same kind of voice-cloning tool to fake a harassing phone call from someone in company leadership to someone else in your organization. Do I even have to tell you that this service is likely already on offer in questionable corners of the internet? What can you do?
Make your voice heard in the upcoming elections by voting for leaders who represent your values (which are hopefully based on lifting people up instead of pushing them down). How do you know who that might be? Time to step up and ask directly. Don’t rely on third parties to put words in their mouths. It’s time for direct accountability, for you, me and them.
Register to vote. Get out and vote.
I know some of you are Trekkies, and even if you aren’t a fan, you’ve more than likely heard the phrase, “You will be assimilated. Resistance is futile,” chanted by Star Trek’s hive-mind aliens, the Borg. Though they pale in comparison to some of the franchise’s most iconic nemeses, like Khan and the omnipotent Q, their constant drive to absorb beings and technology to improve the collective is proving hauntingly prescient when compared to certain modern-day companies seemingly bent on assimilating the internet to feed the AI beast.
“I am the beginning, the end, the one who is many. I am the Borg.”
When the Borg appeared for the first time on Star Trek in 1989, revulsion at their “otherness” came from our culture’s inherent dislike of seeing individuality and freedom made subservient to a collective will. While AI was not new to science fiction at the time – it had already become infamous decades before in the sci-fi classic 2001: A Space Odyssey – it was viewed as something maybe possible in the distant future. Luckily, we got Y2K instead of HAL when the new millennium rolled around, but now, just 20-ish years later, we are faced with the reality of web-crawling bots hoovering up everything on the internet to fuel “large language model” AI platforms. It’s hard not to draw comparisons to the Borg in this regard. Human content creators are already having to resort to legal measures against various companies for “assimilating” their original work into AI-generated copycat products that are sold on platforms like Amazon (a company often compared to the Borg), appear in YouTube videos (another very Borg-like company), or show up as sound-alike songs on Spotify.
“We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us.”
Star Trek: First Contact (1996)
The FBI held a press conference last week to confirm what we figured was already a thing the moment open-source AI projects started surfacing: threat actors are using artificial intelligence to write malware, build ransomware websites, and put more teeth in their phishing campaigns. As if we needed more nightmare fuel, the FBI also shared this little nugget: terrorist groups are using AI to research deadlier “projects” like more potent chemical attacks.
If you can dream it, you can build it.
Unfortunately for us, dreams aren’t limited to those of us who are just trying to make our way through life without hurting anyone while having some fun along the way. Criminals aren’t hampered by ethics or compassion, and neither are AIs, even when the programmers try to put in safeguards. As I’ve always maintained, anything built by humans will be subject to our flaws, and I don’t know that I’m willing to trust that any AI that becomes self-aware will be able to differentiate between good and evil, given the amount of garbage we have piled onto the internet. At this point, unless you happen to be a multi-billionaire with ethics and a hotline to folks in power, the best you can do is let your congress-critter know that we should be pumping the brakes on this runaway AI truck. While there have been some relatively feeble attempts from the established technology titans to put together something akin to a digital watermark that will help the rest of the world identify content created by an AI, there are probably hundreds of throne-contenders willing to ignore the rules for a chance at the top, humanity be damned, and you can bet that many of them already have their hands in the pockets of any government powerful enough to even try to regulate this technology.
Am I saying it’s time to start looking for bunker-friendly real estate in an under-developed country with robot-unfriendly terrain? Not yet. But could we confidently say we would know when that moment has arrived? Maybe we’ve already crossed that threshold. Most of us can only cross our fingers and hope the future is more like Star Trek and nothing like Terminator.