Remember when you could spot a phishing email because it had terrible grammar or came from a weird email address?
Those days are over.
Research from Hoxhunt showed that by March 2025, AI-generated phishing attacks had become more effective than those created by elite human security experts. The AI didn’t just catch up; it surpassed the best humans at social engineering.
Let that sink in. The people whose entire job is creating realistic phishing simulations to test your employees? AI is better at it than they are.
The Scale of the AI Phishing Problem
According to the World Economic Forum, phishing and social engineering attacks increased 42% in 2024. That was before AI really hit its stride.
The attacks aren’t just better written anymore. They’re contextual and arrive at the exact right time. They reference real projects, real people in your organization, and real deadlines.
Google’s 2026 forecast warns that attackers are using AI to create emails that are essentially indistinguishable from legitimate communication.
This is what that looks like in practice:
You receive an email from your CFO requesting an urgent invoice payment. It uses her exact writing style. It references the specific vendor you’ve been working with. It arrives right when you’d expect such a request. The email address looks right. The signature looks right. Everything looks right.
Except it’s not from your CFO. It’s from an AI that studied 50 of her previous emails and generated a perfect forgery.
Voice Cloning: The New Frontier
Email isn’t even the scariest part anymore.
A tech journalist recently demonstrated that she could clone her own voice using cheap AI tools and fool her bank’s phone system – both the automated system and a live agent – in a five-minute call.
Think about what that means for your business. Your CFO gets a call that sounds exactly like your CEO: voice, cadence, the way they clear their throat, everything. The caller is asking for an urgent wire transfer for a time-sensitive deal.
How do you defend against that?
Why Traditional Phishing Training Fails Against AI
Your annual security training tells employees to look for:
- Spelling and grammar errors (AI doesn’t make these mistakes)
- Generic greetings (AI personalizes everything)
- Suspicious sender addresses (AI uses compromised legitimate accounts)
- Urgent requests (legitimate urgent requests also sound urgent)
- Links that don’t match the display text (AI uses legitimate-looking domains)
Every single indicator you’ve trained people to watch for? AI bypasses them.
What Actually Works Against AI-Generated Phishing
The old training about “look for spelling errors” is dead. Your employees need to understand that verification matters more than urgency.
Use this to protect you and your team:
Slow down when things feel urgent. Urgency is the weapon. If someone’s asking for sensitive information or money transfers, that urgency should trigger caution, not immediate compliance.
Verify through a different channel. Email says it’s from your CEO? Call them on a known number. Text message from your bank? Call the number on your card, not the one in the message. Voice call asking for a transfer? Hang up and call back.
Trust your judgment about whether requests make sense. Does your CEO normally ask for wire transfers via text? Does your IT department usually request password resets through email? If the method doesn’t match the request, verify.
Create a culture where questioning is safe. Your employees need to know they won’t get fired for double-checking whether the CEO really sent that request. These attacks exploit hierarchy and time pressure.
The Reality for Professional Services Firms
The accounting firms, law offices, and property management companies we work with are particularly vulnerable to these attacks because:
- They handle sensitive financial information
- They regularly process wire transfers
- They work with clients who expect fast responses
- They have hierarchical structures that discourage questioning authority
One immigration law firm we work with almost lost $180,000 to an AI-generated email that perfectly mimicked its managing partner’s communication style, requesting an urgent retainer transfer. The only thing that saved them was an associate who thought the request was weird enough to verify in person.
That associate didn’t stop the attack because they spotted technical indicators. They stopped it because something felt off, and they were empowered to question it.
What This Means for Your Business
You need to update your security training immediately. Not next quarter. Not when the budget allows. Now.
The training needs to focus on:
- Verification procedures that work regardless of how legitimate something appears
- Creating psychological safety for employees to question urgent requests
- Understanding that AI can fake anything visual or auditory
- Practicing what to do when something seems both urgent and suspicious
You need to practice these procedures regularly. Not once a year during security awareness month. Monthly at minimum.
Because the attacks are getting better every single day. Criminals using them no longer need your employees to click a suspicious link. They need your employees to trust their eyes and ears when they shouldn’t.
The Quick and Easy: AI-generated phishing attacks now outperform human security experts, with attacks increasing 42% in 2024. AI generates emails and phone calls that are indistinguishable from legitimate communication, bypassing traditional phishing indicators such as spelling errors, generic greetings, and suspicious links. Voice cloning technology can fool both automated systems and live humans. Traditional training focusing on spotting errors no longer works. Instead, businesses need verification procedures that work regardless of appearance, cultures where questioning authority is safe, and regular practice with realistic scenarios. Professional services firms are particularly vulnerable due to their hierarchical structures and regular financial transactions. The key defense is slowing down when things feel urgent and verifying through different channels.
The uncomfortable truth is your employees are using AI tools you don’t know about. Right now. Today.
IBM’s latest research found that 20% of organizations have already suffered a breach due to what they’re calling “shadow AI” – employees using unauthorized AI tools without IT’s knowledge. The kicker is that those breaches added an average of $200,000 to remediation costs.
Think about that for a second. The issue is not the technology failing or hackers breaking through your firewall. The cause is your own people, trying to do their jobs faster, pasting proprietary information into ChatGPT, Gemini, or whatever AI tool made their work easier that day.
Why Shadow AI Happens (And Why You Can’t Stop It)
Varonis found that 98% of employees use unsanctioned apps. That’s not a typo. Ninety-eight percent. If you think your company is the exception, you’re wrong.
Why does this happen? Because your employees are struggling. They’re being asked to do more with less, and they’re exhausted. Then they discover this magical tool that can summarize a 50-page document in 30 seconds or write that email they’ve been dreading. Of course, they’re going to use it.
The problem isn’t that they’re lazy or malicious. The problem is that they have no idea what happens to the data they feed into these systems. Some AI services train their models on your inputs. Some store everything you type. Some have security controls. Most don’t.
Why Banning AI Tools Doesn’t Work
So why not just ban these tools outright? Because it doesn’t work. Gartner predicts that by 2027, 75% of employees will acquire or create technology outside IT’s visibility. Bans just push people to hide what they’re doing better.
This happens constantly with the accounting firms and law offices we work with. A partner bans ChatGPT, but an associate uses it on their phone anyway. Now, instead of managing the risk, you’ve just lost visibility into it entirely.
The Real Cost of Shadow AI
The financial impact goes beyond the $200,000 average breach cost. Consider what happens when:
- Your proprietary client data gets fed into a public AI model
- Your trade secrets become part of an AI training dataset
- Your confidential legal strategy gets stored on servers you don’t control
- Your financial projections end up accessible to your competitors
These aren’t theoretical risks. These are things happening right now to businesses that thought their employees would never do something that careless.
What You Actually Need to Do About Shadow AI
You need an actual policy about AI use. Not a ban. A policy.
This is what works:
Identify which AI tools are safe for your business. Not every AI tool is a security nightmare. Some have proper data handling. Some don’t train on your inputs. Figure out which ones meet your requirements.
Make approved tools easy to access. If your employees need AI to do their jobs effectively, give them a way to use it safely. The property management firms we work with that have implemented approved AI tools see almost zero shadow AI usage.
Train people on what they can and cannot share. Most people don’t realize that pasting client information into ChatGPT might expose it. They’re not trying to cause a breach. They’re trying to work faster. Teach them the difference between safe and unsafe usage (there’s a small illustrative sketch after these steps).
Create a culture where people can ask questions. Your employees should feel comfortable asking, “Is this AI tool safe to use?” instead of just using it and hoping for the best.
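For firms with a little in-house technical help, a lightweight guardrail can reinforce that training. What follows is a minimal, hypothetical Python sketch – not a product recommendation, and no substitute for real data-loss-prevention tooling – that redacts obvious client identifiers such as email addresses and long account-style numbers before a block of text gets pasted into any AI tool. The patterns and sample text are illustrative assumptions; the point is simply the “scrub before you share” habit.

```python
import re

# Hypothetical patterns for obvious identifiers; real DLP tooling goes much further.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account": re.compile(r"\b\d{8,16}\b"),  # long digit runs: account or card numbers
}

def scrub(text: str) -> str:
    """Replace likely client identifiers with placeholders before sharing text with an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    draft = "Client Jane Roe (jane.roe@example.com, acct 4532015112830366) needs the retainer wired Friday."
    print(scrub(draft))
    # Client Jane Roe ([REDACTED EMAIL], acct [REDACTED ACCOUNT]) needs the retainer wired Friday.
```

Even a crude filter like this catches the most common accidental disclosures, and it gives employees a concrete step to take instead of a vague warning.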
The Bottom Line on Shadow AI
This isn’t going away. The only question is whether you’re managing it or pretending it doesn’t exist.
The firms sleeping well at night aren’t the ones who banned AI. They’re the ones who acknowledged it exists and created safe pathways for using it.
Because your employees are already using these tools. You just don’t know about it yet.
The Quick and Easy: Shadow AI (unauthorized AI tool usage by employees) has already caused breaches in 20% of organizations, costing an average of $200,000 each. With 98% of employees using unsanctioned apps and 75% projected to acquire technology outside IT visibility by 2027, banning AI tools doesn’t work. Instead, businesses need clear AI usage policies, approved tools that are easy to access, employee training on safe data sharing, and a culture that allows people to ask questions before using new tools. The technology itself isn’t the risk; using it without oversight or understanding the consequences is.
I’ve written about this topic before, but it’s nice when major publications back your viewpoint. One of my favorite authors has a new book forthcoming, and as a sign of the times, the title – which may have been scandalous in a previous, perhaps more innocent age – gets straight to the point: “Enshittification: Why Everything Suddenly Got Worse and What To Do About It”. And because everything these days is meta and Mr. Doctorow’s book isn’t even out, I read an advance review of the book that contained praise as well as some criticisms, which I think are valid and troubling to consider when asking the most important question.
What can we do about it?
In case you didn’t read my previous blog about this or don’t remember it (because we all have enough to worry about already, so I get it), “enshittification” is the concept that all good online services and websites will eventually be ruined by our society’s relentless pursuit of profit. The advance review, as it appears on the Current Affairs website, does a pretty good job of explaining the concept, and if you don’t intend to purchase the book, I think the article provides enough of an overview for you to spot this trend in the world around you, which may or may not improve how you feel about it.

I’m going to read the book for myself before I render my own praise or criticism, but I have similar concerns to the reviewer’s when it comes to answering the question that you have all asked: “What can we do about it?” It sounds like Mr. Doctorow is calling for grassroots efforts and government intervention to counteract future enshittifications (the author seems to think it’s already too late for the likes of Amazon, Facebook, Netflix, etc., and I agree), but from where I’m sitting, it seems like getting help from the government isn’t on the menu at the moment, and our grassroots are divided as we fight to maintain healthcare, livelihoods and just basic human decency. So what is my recommendation to you if your technology feels “shitty”?
Take matters into your own hands. If you have the option to use something else, do so, and make sure you tell the losing platform why you moved (even if they will probably never read your feedback). If changing the technology isn’t an option, take a moment to clearly identify the crappy part and determine whether it’s something you have control or agency over (maybe a new setting or a change in the interface), or whether it’s out of your hands, such as the price going up. If it’s out of your control, focus your energy on working around or through it, or on changing something else so that you can eliminate it altogether. Using technology is unavoidable for most of us, but there is no reason to feel like you are a hostage to it, and the best way to manage this is to change the things that you can control and to ask for help or sympathy (or both!) about the things you can’t.
I’ve been working in tech long enough to remember when “automation” meant macros in Excel and AI was still the stuff of sci-fi. Today, artificial intelligence is everywhere—from customer service chatbots to advanced data analytics, predictive modeling, and content creation. It’s no longer a niche tool; it’s a foundational layer in how businesses operate. And while this explosion of AI capability is exciting, it’s also incredibly risky—especially for those who treat it like a shortcut instead of a tool.
Let me be clear: AI is not magic. It’s not intelligent in the human sense. It’s powerful, but it’s only as good as the data it learns from and the intent behind its use. I’ve watched companies implement AI without understanding how it works, leading to biased outcomes, false insights, or compliance violations. They feed it flawed data, make strategic decisions based on unverified outputs, or worse, let it replace human judgment entirely.
The danger lies not in the technology, but in the overconfidence that often accompanies it.
AI should augment decision-making, not replace it. When misused, it can erode trust, amplify existing inequalities, and expose companies to significant legal and reputational risk. If you’re using generative AI to write content, ask yourself—how do you verify it’s accurate? If you’re using AI to screen job candidates, are you confident it’s not introducing bias?
As a consultant, I encourage clients to treat AI the same way they would a junior employee: train it, supervise it, and never let it act without oversight.
The future of AI is promising, but only if we use it responsibly. Those who blindly chase efficiency without understanding the tool may find themselves solving one problem and creating five more. So take the time to understand what AI is—and more importantly, what it isn’t.
Want help making AI work for your business—safely and strategically? Reach out for a consultation.
Author’s Note: This blog post was written by ChatGPT using the following prompt, “Write a short blog from the perspective of an experienced technology consultant about the rising use of AI and the dangers it poses for those that use the tool incorrectly.” I did not touch up or edit the text provided by that prompt in any way, shape or form other than to copy and paste it into this website. Anyone who’s followed my blog for a while or knows me personally might have smelled something fishy, or maybe not. In reading the above, I can definitely say that I have written plenty of articles just as bland. Interestingly, ChatGPT included the last, italicized bit – it’s clearly been trained on plenty of marketing blogs like this one. I know that many of you actually read my blogs for my personal take on technology. If I were to feed my own AI engine the past 10 years of my articles so that it could perhaps get a sense for my writing style and personality, do you think it could produce more blogs that would be indistinguishable from what I wrote with my own two hands and one brain?
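If you’re curious what that experiment might actually look like, here is a minimal sketch using the OpenAI Python client (openai 1.x). The file names and the model are placeholders, and a serious attempt would use far more samples (or true fine-tuning), but the basic idea is just to hand the model a pile of past posts as style references and ask it for a new one.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Hypothetical file names: a few past posts used purely as style references.
sample_files = ["post_2016_ransomware.txt", "post_2020_wfh.txt", "post_2023_mfa.txt"]
style_samples = "\n\n---\n\n".join(
    open(path, encoding="utf-8").read() for path in sample_files
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are ghostwriting for a technology consultant. Match the tone, "
                "vocabulary, humor, and structure of the sample posts below.\n\n"
                + style_samples
            ),
        },
        {
            "role": "user",
            "content": "Write a 500-word blog post about password managers for small businesses.",
        },
    ],
)

print(response.choices[0].message.content)
```

Stuffing samples into the system prompt is the quick-and-dirty approach; feeding a full ten-year archive would call for fine-tuning instead.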
Image courtesy of TAW4 at FreeDigitalPhotos.net
We’ve discussed in previous blogs how technology seems to be getting worse from just about every angle, whether it’s cost, quality or security. We can attribute a large chunk of this downward trend to the increasing profitability of cybercrime, which is itself a vicious, amplifying spiral of escalation. The more we try to keep ourselves safe, the more complicated it becomes to do so, and most regular folks don’t have the training or endurance to keep up, especially if you are part of the growing elderly generations that are forced to use technology they barely understand just to stay alive and keep in contact with friends and family. With the recent (in my opinion ill-advised) downsizing of the Cybersecurity and Infrastructure Security Agency (CISA), much of this country’s organizational strength and operational efficiency in cataloging and combating cybersecurity threats will be abandoned.
What this means for all of us
Regardless of whether your organization is big or small, CISA’s leadership and work provided foundational guidance on all existing cybersecurity threats while constantly researching, investigating and publishing information on new threats as they were discovered. One of the main reasons that governments exist is to provide funding, resources and scaled force for tasks that cannot (and should not) be handled by smaller groups or for-profit institutions, such as military defense, mail delivery, and national security. As has been demonstrated time and time again, for-profit companies cannot be trusted to put people before profits, and security oversight is definitely not something you want to enshittify. And yet, that is exactly where we are. In the absence of CISA leadership, organizations – whether they be ad-hoc coalitions of state-level agencies or, most likely, for-profit companies in the security industry – are now scrambling to fill the gigantic, CISA-shaped hole in our nation’s cybersecurity. Let’s be clear: security for small businesses was already well on its way to becoming difficult, expensive and onerous. Eliminating national leadership will lead to a fracturing of an already complicated security framework, and that burden will weigh most heavily on those who can least afford to shoulder what was formerly carried by people trained, equipped and funded to do so.
Two years ago, in 2023, Microsoft announced that over 36 million people were still using Skype daily to communicate via video and chat. The app was 20 years old at that time, and had been in Microsoft’s hands since 2011, when the company bought it for $8.5 billion to replace its own popular (but less capable) Live Messenger service. On May 5, after 14 years in the trenches, Microsoft shut down the service and gave users 60 days to move their content (contacts and messages) to the free version of Teams, or lose the data forever.
What this means for you
If you were a diehard Skype user hoping that Microsoft wasn’t going to make good on its February promise to close Skype permanently on May 5th, you are probably wondering what to do next. Fortunately, it seems that logging into Teams with your Skype credentials will ease the transition by automatically bringing over your chat history and contacts, because, in case you didn’t know, your Skype account was actually a full-blown Microsoft (personal) account all along. Unfortunately for many, the Teams replacement for Skype is not a feature-for-feature substitute, with the main loss being the ability to make phone calls to landlines and mobiles that don’t have internet access. This well-known “life-hack” was assuredly what kept Skype popular in the face of the various other video chat apps that have come to dominate the space, and probably one of the main reasons Microsoft decided to shut down Skype in the end. If only a fraction of the 36 million Skype users were using Skype to make cheap or free long-distance calls, Microsoft was leaving a large amount of money on the table, even by their standards. Rest in power, Skype. You were a handy bit of software for many people.
I’m sure it’s still a thing for students today, but one of the phrases that always caused a groan in any class that involved solving equations was, “Make sure you show your work.” Whether it was pre-Algebra or Advanced Calculus, the only way you could prove that you actually understood the topic well enough to solve the problem was for you to write out each step of the solution. We had graphing calculators when I was going through high school, but even if we were allowed to use them during tests, more often than not there was going to be at least one instance where the calculator was only there to confirm the answer we arrived at after lines and lines of chicken scratch and piles of eraser crumbs.
There’s a point to this nostalgic indulgence
If you are a business owner or part of the executive team, you will likely be familiar with the technology security questionnaires that accompany your organization’s insurance renewals. Up until perhaps 2023, checking “yes” boxes on the questions or tossing in vague answers was typically enough to get you through the approval or renewal process, and I’m fairly certain that the application reviewers were just as cross-eyed as you were when filling them out. I’m (not really) sorry to say this “relaxed” approach to evaluating your security standards is in the rear-view mirror for everyone, regardless of the industry you are in or the size of your organization. Insurance carriers are reading your responses and are not taking “N/A” or “No” as an answer when asking if you have various security safeguards in place. At best, you may be encouraged by your insurance agent to “reconsider some of your responses,” and at worst it may lead to an outright denial of coverage and a mad scramble to find another carrier for your insurance needs. The insurance industry is already taking a beating on natural disaster claims (something not likely to abate given the world’s general dismissal of climate change), so they are definitely not going to be generous with the next most popular claim: cyberattacks. Don’t give them any excuse to deny a cyber liability claim by just checking a box. Show your work by actually implementing the security standards they are asking about, and if you don’t know where to start, get a professional like C2 on the job as soon as possible.