Remember when you could spot a phishing email because it had terrible grammar or came from a weird email address?
Those days are over.
Research from Hoxhunt showed that by March 2025, AI-generated phishing attacks had become more effective than those created by elite human security experts. The AI didn’t just catch up; it surpassed the best humans at social engineering.
Let that sink in. The people whose entire job is creating realistic phishing simulations to test your employees? AI is better at it than they are.
The Scale of the AI Phishing Problem
According to the World Economic Forum, phishing and social engineering attacks increased 42% in 2024. That was before AI really hit its stride.
The attacks aren’t just better written anymore. They’re contextual and arrive at the exact right time. They reference real projects, real people in your organization, and real deadlines.
Google’s 2026 forecast warns that attackers are using AI to create emails that are essentially indistinguishable from legitimate communication.
This is what that looks like in practice:
You receive an email from your CFO requesting an urgent invoice payment. It uses her exact writing style. It references the specific vendor you’ve been working with. It arrives right when you’d expect such a request. The email address looks right. The signature looks right. Everything looks right.
Except it’s not from your CFO. It’s from an AI that studied 50 of her previous emails and generated a perfect forgery.
Voice Cloning: The New Frontier
Email isn’t even the scariest part anymore.
A tech journalist recently demonstrated that she could clone her own voice using cheap AI tools and fool her bank’s phone system – both the automated system and a live agent – in a five-minute call.
Think about what that means for your business. Your CFO gets a call that sounds exactly like your CEO: voice, cadence, the way they clear their throat, everything. It’s asking for an urgent wire transfer for a time-sensitive deal.
How do you defend against that?
Why Traditional Phishing Training Fails Against AI
Your annual security training tells employees to look for:
- Spelling and grammar errors (AI doesn’t make these mistakes)
- Generic greetings (AI personalizes everything)
- Suspicious sender addresses (AI uses compromised legitimate accounts)
- Urgent requests (legitimate urgent requests also sound urgent)
- Links that don’t match the display text (AI uses legitimate-looking domains)
Every single indicator you’ve trained people to watch for? AI bypasses them.
What Actually Works Against AI-Generated Phishing
The old training about “look for spelling errors” is dead. Your employees need to understand that verification matters more than urgency.
Use this to protect you and your team:
Slow down when things feel urgent. Urgency is the weapon. If someone’s asking for sensitive information or money transfers, that urgency should trigger caution, not immediate compliance.
Verify through a different channel. Email says it’s from your CEO? Call them on a known number. Text message from your bank? Call the number on your card, not the one in the message. Voice call asking for a transfer? Hang up and call back.
Trust your judgment about whether requests make sense. Does your CEO normally ask for wire transfers via text? Does your IT department usually request password resets through email? If the method doesn’t match the request, verify.
Create a culture where questioning is safe. Your employees need to know they won’t get fired for double-checking whether the CEO really sent that request. These attacks exploit hierarchy and time pressure.
The Reality for Professional Services Firms
The accounting firms, law offices, and property management companies we work with are particularly vulnerable to these attacks because:
- They handle sensitive financial information
- They regularly process wire transfers
- They work with clients who expect fast responses
- They have hierarchical structures that discourage questioning authority
One immigration law firm we work with almost lost $180,000 to an AI-generated email that perfectly mimicked its managing partner’s communication style, requesting an urgent retainer transfer. The only thing that saved them was an associate who thought the request was weird enough to verify in person.
That associate didn’t stop the attack because they spotted technical indicators. They stopped it because something felt off, and they were empowered to question it.
What This Means for Your Business
You need to update your security training immediately. Not next quarter. Not when the budget allows. Now.
The training needs to focus on:
- Verification procedures that work regardless of how legitimate something appears
- Creating psychological safety for employees to question urgent requests
- Understanding that AI can fake anything visual or auditory
- Practicing what to do when something seems both urgent and suspicious
You need to practice these procedures regularly. Not once a year during security awareness month. Monthly at minimum.
Because the attacks are getting better every single day. Criminals using them no longer need your employees to click a suspicious link. They need your employees to trust their eyes and ears when they shouldn’t.
The Quick and Easy: AI-generated phishing attacks now outperform human security experts, with attacks increasing 42% in 2024. AI generates emails and phone calls that are indistinguishable from legitimate communication, bypassing traditional phishing indicators such as spelling errors, generic greetings, and suspicious links. Voice cloning technology can fool both automated systems and live humans. Traditional training focusing on spotting errors no longer works. Instead, businesses need verification procedures that work regardless of appearance, cultures where questioning authority is safe, and regular practice with realistic scenarios. Professional services firms are particularly vulnerable due to their hierarchical structures and regular financial transactions. The key defense is slowing down when things feel urgent and verifying through different channels.
The uncomfortable truth is your employees are using AI tools you don’t know about. Right now. Today.
IBM’s latest research found that 20% of organizations already suffered a breach due to what they’re calling “shadow AI” – employees using unauthorized AI tools without IT’s knowledge. The kicker is that those breaches added an average of $200,000 to remediation costs.
Think about that for a second. The issue is not the technology failing or hackers breaking through your firewall. The cause is your own people, trying to do their jobs faster, pasting proprietary information into ChatGPT, Gemini, or whatever AI tool made their work easier that day.
Why Shadow AI Happens (And Why You Can’t Stop It)
Varonis found that 98% of employees use unsanctioned apps. That’s not a typo. Ninety-eight percent. If you think your company is the exception, you’re wrong.
Why does this happen? Because your employees are struggling. They’re being asked to do more with less, and they’re exhausted. Then they discover this magical tool that can summarize a 50-page document in 30 seconds or write that email they’ve been dreading. Of course, they’re going to use it.
The problem isn’t that they’re lazy or malicious. The problem is that they have no idea what happens to the data they feed into these systems. Some AI services train their models on your inputs. Some store everything you type. Some have security controls. Most don’t.
Why Banning AI Tools Doesn’t Work
Banning these tools outright should work, right? Wrong. Gartner predicts that by 2027, 75% of employees will acquire or create technology outside IT’s visibility. Bans just push people to hide what they’re doing better.
This happens constantly with the accounting firms and law offices we work with. A partner bans ChatGPT, but an associate uses it on their phone anyway. Now, instead of managing the risk, you’ve just lost visibility into it entirely.
The Real Cost of Shadow AI
The financial impact goes beyond the $200,000 average breach cost. Consider what happens when:
- Your proprietary client data gets fed into a public AI model
- Your trade secrets become part of an AI training dataset
- Your confidential legal strategy gets stored on servers you don’t control
- Your financial projections end up accessible to your competitors
These aren’t theoretical risks. These are things happening right now to businesses that thought their employees would never do something that careless.
What You Actually Need to Do About Shadow AI
You need an actual policy about AI use. Not a ban. A policy.
This is what works:
Identify which AI tools are safe for your business. Not every AI tool is a security nightmare. Some have proper data handling. Some don’t train on your inputs. Figure out which ones meet your requirements.
Make approved tools easy to access. If your employees need AI to do their jobs effectively, give them a way to use it safely. The property management firms we work with that have implemented approved AI tools see almost zero shadow AI usage.
Train people on what they can and cannot share. Most people don’t realize that pasting client information into ChatGPT might expose it. They’re not trying to cause a breach. They’re trying to work faster. Teach them the difference between safe and unsafe usage.
Create a culture where people can ask questions. Your employees should feel comfortable asking, “Is this AI tool safe to use?” instead of just using it and hoping for the best.
The Bottom Line on Shadow AI
This isn’t going away. The only question is whether you’re managing it or pretending it doesn’t exist.
The firms sleeping well at night aren’t the ones who banned AI. They’re the ones who acknowledged it exists and created safe pathways for using it.
Because your employees are already using these tools, you just don’t know about it yet.
The Quick and Easy: Shadow AI – unauthorized AI tool usage by employees – has already caused breaches in 20% of organizations, costing an average of $200,000 each. With 98% of employees using unsanctioned apps and 75% projected to acquire technology outside IT visibility by 2027, banning AI tools doesn’t work. Instead, businesses need clear AI usage policies, approved tools that are easy to access, employee training on safe data sharing, and a culture that allows people to ask questions before using new tools. The risk isn’t the technology itself; it’s using it without oversight or understanding the consequences.
We’ve discussed in previous blogs how technology seems to be getting worse from just about every angle, whether it’s cost, quality or security. We can attribute a large chunk of this downward trend to the increasing profitability of cybercrime, which is itself a vicious, amplifying spiral of escalation. The more we try to keep ourselves safe, the more complicated it becomes to do so, and most regular folks don’t have the training or endurance to keep up, especially if you are part of the growing elderly generations forced to use technology they barely understand just to stay alive and keep in contact with friends and family. With the recent (in my opinion ill-advised) downsizing of the Cybersecurity and Infrastructure Security Agency (CISA), much of this country’s organizational strength and operational efficiency in cataloging and combatting cybersecurity threats will be abandoned.
What this means for all of us
Regardless of whether you are a big or small organization, CISA’s leadership and work provided foundational guidance on all existing cybersecurity threats while constantly researching, investigating and publishing information on new threats as they were discovered. One of the main reasons that governments exist is to provide funding, resources and scaled force for tasks that cannot (and should not) be handled by smaller groups or for-profit institutions, such as military defense, mail delivery, and national security. As has been demonstrated time and time again, for-profit companies cannot be trusted to put people before profits, and security oversight is definitely not something you want to enshittify. And yet, that is exactly where we are. In the absence of CISA leadership, organizations – whether they be ad-hoc coalitions of state-level agencies or, most likely, for-profit companies in the security industry – are now scrambling to fill the gigantic, CISA-shaped hole in our nation’s cybersecurity. Let’s be clear: security for small businesses was already well on its way to becoming difficult, expensive and onerous. Eliminating national leadership will most definitely fracture an already complicated security framework, and that burden will weigh most heavily on those who can least afford to shoulder what was formerly carried by people trained, equipped and funded to do so.
Per a recently updated report from the FBI and CISA, the telecom hacks previously announced (and most likely missed amidst the election and holidays) are now regarded as much worse than previously thought, and there is no anticipated ETA as to when the hackers can be evicted from the various compromised infrastructures. As such, the FBI and CISA are recommending everyone avoid unencrypted communication methods on their mobile devices, which includes SMS messaging between Android and Apple phones, and carrier-based cellular voice calls (which have never been encrypted).
What this means for you
If you are like 95% of the world, you are probably thinking, “Well, if China wants to know about the grocery list I texted to my spouse, they are welcome to it,” or “I’ve got nothing to hide,” or even more naively, “I’ve got nothing worth stealing.” Most people do not consider just how much they communicate via unsecured text – banking two-factor codes, prescription verifications, medical complaints to doctors, passwords to coworkers, driver’s license pictures, credit card PINs – the list is endless, and extremely valuable to threat teams like Salt Typhoon, the APT allegedly behind this huge compromise. The reason this is a big deal is that we as a society (at least in America) have grown overly comfortable with this lack of privacy, and on top of that, the market has encouraged a fractured and flawed approach to communications between the various community silos we have created for ourselves online. What you might not know is that messages from iPhone to iPhone and from Android to Android are fully encrypted, as are messages in WhatsApp, Facebook Messenger and Signal. But as you consider your circle of family and friends, how many of them are on the same platform and use the same messaging apps to communicate? How many of your two-factor codes arrive via SMS?
To address this latter issue, you should move any multi-factor codes to an app like Microsoft or Google Authenticator (if the platform even allows it – many banks do not yet support apps). This process will be painful and tedious, but it is probably the most important step you can take to improve your personal safety. The messaging problem is not so “easily” solved, at least from a friends-and-family perspective, but for business communications, you should consider moving everything to a platform like Microsoft Teams, Google Workspace, Slack, etc. And stop sharing passwords via text. More information to come as we learn more about the severity of this telco hack.
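For the curious: the reason authenticator apps are safer than SMS is that the codes never travel over a network at all. Each code is computed on your device from a shared secret and the current time, per the TOTP standard (RFC 6238). Here is a minimal Python sketch of what the app does every 30 seconds:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # prints "287082"
```

Because the secret lives only on your phone and the server, an attacker who has hijacked your phone number (or is sitting inside a telecom carrier) sees nothing.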
Image Courtesy of Stuart Miles at FreeDigitalPhotos.net
Ever since it was hacked in 2023, genetics and ancestry website 23andMe has been more or less moribund, with its stock falling from a high of $16 per share to $0.29 today and its entire board of directors resigning last month. When we last wrote about the beleaguered DNA testing company in December of last year, it had to revise its initial statement about only getting a “little” hacked (1.4M records) to admitting that it got majorly hacked (6.9M records). As you can imagine, this didn’t bode well for its marketability.
Why are we talking about them again?
It’s been nearly a year since the initial data breach, and judging by the lack of faith the recently departed board of directors had in the company’s founder, 23andMe isn’t likely to return to full potential any time soon, if ever. If you were one of the millions of people who sent them your DNA to analyze, you’ve probably already reaped whatever benefits (positive and negative) you’re likely to get from 23andMe, but they may not be done making money from your data. The company claims that much scientific good has been generated from customers who consented to let their de-personalized data be used by researchers, but you may want to consider the consequences of letting a company whose security practices led to its current downfall continue to have access to your data. Because you do have the option of asking them to delete it. Seeing as you paid them for the privilege of providing your data, it seems rather mercenary for them to then take that data and continue to sell it without compensating you. Instead, they got hacked, exposed your confidential information, and then continued to (somewhat) operate. If you’d like to see some consequences, you can do your part by asking them to delete your data, which can be done merely by logging into your account on their website and submitting that request. Do it. If a majority of their customers were to do this, perhaps it would send a warning to competitors to do a better job with your precious data, and a message to our government about doing a better job protecting our privacy.
Image courtesy of geralt at Pixabay
The past few days I’ve been working with several clients who are in various stages of being compromised or having their online accounts attacked. The recent surge of activity is possibly related to the RockYou2024 “publication,” wherein a file containing nearly 10 billion passwords was posted to a popular online hacking forum on July 4th. Analysis of the new file demonstrated that the bulk of the data is a compilation of other breaches, including the previous release of this compilation, RockYou2021, which contained over 8 billion passwords at the time. Regardless of whether it’s old or new, many people will continue to use old passwords across multiple accounts for years if they aren’t forced to change them, so it’s a good bet that a large majority of the information in this file is quite usable, adding significant firepower to any hacker’s arsenal.
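If you want to check whether one of your own passwords appears in dumps like this, services such as Have I Been Pwned’s Pwned Passwords API use a k-anonymity scheme: you send only the first five characters of the password’s SHA-1 hash and compare the rest locally, so the password itself never leaves your machine. A sketch of the local half of that check (the actual network call is only described in a comment):

```python
import hashlib

def breach_query_parts(password):
    """Split the password's SHA-1 hex digest into the 5-char prefix
    that gets sent to the lookup service and the 35-char suffix that
    is matched locally against the returned candidate hashes."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = breach_query_parts("password")
# You would then GET https://api.pwnedpasswords.com/range/<prefix>, which
# returns every known breached-hash suffix starting with that prefix; if
# <suffix> appears in the response, the password is in the breach corpus
# and should be retired everywhere it was used.
```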
Passwords alone aren’t safe enough
While I was working to restore some semblance of security for my clients, one of the things I noticed was that the various bank accounts they accessed via the web or their phones did not have multi-factor security enabled – nor were my clients aware that it wasn’t actually turned on, or in some cases even available to be enabled. I was always under the impression that banks were forcing this on everyone, as it was a constant struggle for many of my clients who are accountants or financial professionals, but for at least one of my clients, all four banking accounts lacked the full multi-factor login process. On top of this, it was sometimes a struggle to actually enable multi-factor, as each bank buries the settings in its gloriously bad interface, and the instructions to turn it on aren’t always clear. And if someone like ME struggles with enabling this type of security, imagine what your elderly parents might be facing. Do yourself a favor: if you don’t know for a fact that you have multi-factor enabled for your banking accounts, log in and check, or call the number on the back of your credit or debit card to find out. You might be surprised at how insecure you were.
Image by Manuela from Pixabay
One of the most appalling practices in the current world of online hacking and phishing is the constant attacks on our elderly friends and family, because the attackers know they are easy targets. Unfortunately, I don’t see technology becoming any easier for anyone, especially the elderly, so if they are going to continue using technology for things like shopping, paying bills and handling various elements of their health and property, see if you can get them to abide by some simple but critical rules when they get into unfamiliar situations. This may mean more calls to you about trivial things, but if you are like me, I’d rather that than getting the “I’ve been hacked” call.
Rule Number One: “Never trust popups on your devices that warn you about something scary and ask you to call a number.” No legitimate malware protection software on the market will do this. This is nearly always a scam. If they get something like this on their computer, tell them to take a picture of it and then just power off the device – manually if it won’t shut down normally, and physically by unplugging the cord if that doesn’t work. These fake popups are meant to be frightening, disorienting and sometimes incredibly annoying. If the popup comes back after powering the device back up (and it may, as many are designed to do just this), it may require some additional technical expertise to get rid of it. For tech-savvy users, it’s a quick fix, but it may be hard to explain over the phone if the recipient is flustered or otherwise frightened. If you can’t go yourself, it may require a visit from a local technician.
Rule Number Two: “Don’t ‘google’ the contact number or email for important services.” All of the popular search services put ad results at the top of the actual search results, and those ads are often hard to distinguish from the legitimate information you were seeking. Bad actors are paying for ads that pretend to provide support for commonplace companies. They will answer the phone as that service – including pretending to be Microsoft, Amazon, Apple, or Google – in order to trick callers into giving them access to their devices. If your loved one is fond of using the phone in this manner, provide them with a printed list of known-good numbers for their most-used services, like their banks and pharmacies, and include lines like “FACEBOOK: NO NUMBER TO CALL – DO NOT TRY” as a reminder that certain services are never available via phone.
Rule Number Three: “Always call someone you trust about anything you are uncertain about.” Our loved ones often refuse to call us because they don’t want to be a bother. Frequent calls may seem like a nuisance, but they pale in comparison to the absolute disaster you will both have to handle if they get hacked. I’d rather have dozens of calls of “Is this OK?” than the single, “I may have done something bad.” Reinforce their caution with approval, and if you have the time, explore with them what clued them into making the call. If it boils down to them just applying these three rules, then score one for the good guys!
Image by Fernando Arcos from Pixabay
Depending on how long you’ve been using computers, you may well remember a time when “Have you tried turning it off and back on?” was the first thing you heard when troubleshooting any issue. In the ’90s and into the ’00s this was the go-to first step of tech support. And then we entered what some of you might call the golden age of business computing, ushered in by Windows 7, somewhat tarnished by Windows 8, and then, with Windows 10, an era that even I can look back on as a bastion of stability compared to what we have now.
What the heck happened?
Two words: Internet and Cybercrime. I know, I know, both of those things have been around for a lot longer than Windows 10 and even Windows 7, but up until maybe 2012 or 2013, technology companies like Symantec, McAfee and Microsoft had the upper hand in that war. In 2013, with the arrival of the widely successful CryptoLocker-powered attacks, criminals understood what sort of money was at stake and poured all of their resources into cybercrime infrastructure that has evolved into a never-ending escalating battle of security breaches, software updates and increasingly complicated security rituals. All the while, technology itself has permeated every facet of our lives, resulting in things that we would have considered absurd 10 years ago, such as doorbells that require a two-factor login. Everything requires a password because everything is connected to the internet, and because of the ongoing arms race in cybersecurity, everything around us is constantly being updated in this frantic race with no finish line anywhere in sight. Long story short: expect to reboot your devices frequently going forward. There was a time when I could say, “Hey, reboot your computer every other week and you will be fine.” Nowadays, that guidance is, “Reboot your computer at least every 3 days, if not daily.” Microsoft Windows is being updated weekly, as are the major office productivity apps like Office and Acrobat, and not all of their updates are well tested – resulting in more crashing and rebooting until someone notices and issues yet another update to fix the previous update. If it feels excessive, it’s because it is excessive, but for the moment, we don’t have much choice. Right now, cybercrime has the edge, and it’s running everyone ragged.
Long-time readers will notice that it is pretty rare for me to post good news to this blog. I’m sure good technology things happen every day, but we don’t get called when something is working properly, and the mainstream media usually don’t report on anything but bad news. Fortunately for us – because let’s face it, we are sorely in need of “W’s” in the fight against cybercrime – a prominent hacking group responsible for thousands of cyberattacks worldwide resulting in more than $120M in ransom payments has been dismantled by a joint law enforcement operation led by the UK and US. The action resulted in what they are calling a complete dismantling of the APT (advanced persistent threat) known as Lockbit.
What this means for you
On top of seizing control of nearly all of Lockbit’s operational assets – including 34 servers and 200 cryptocurrency accounts – and arresting two Russian nationals, the task force actually converted Lockbit’s own dark website into a “reverse” leak site that touted the takedown of the APT and posted countdowns to when additional data on the Lockbit crew would be leaked to the internet, turning a commonly used cybercrime tactic back on the criminals. Before the site was “pwned” by authorities, Lockbit used it to publish a list of its victims and ransom countdown timers.
This was no small effort – it required coordination between 10 countries and at least three major law enforcement agencies. It will hopefully result in some of the victims being able to recover encrypted data, and maybe discourage some portion of the cybercriminal element from continuing operations, but let’s be realistic – this APT was one head of a massive hydra, and the assets neutralized were a fraction of the compromised computers and accounts used as zombies or command-and-control servers across the globe. In the operation, dubbed “Operation Cronos,” 14,000 rogue accounts were shut down. For perspective, a cybercrime botnet discovered in 2009 comprised nearly two million computers. That number has likely been dwarfed many times over by now. It’s too early to declare victory by a long shot, but as the old proverb instructs, “How do you eat an elephant? One bite at a time.”
Image by Schäferle from Pixabay
Back in October of this year, we wrote about DNA testing company 23andMe’s reported data breach. Initially thought to “only” impact 1.4 million people, 23andMe has revised that estimate to a whopping 6.9 million impacted users whose exposed data included names, birthdays, locations, pictures, addresses, and related family members – but not, as the company has strenuously emphasized, actual genetic data. I’m fairly certain that little nugget is not providing the relief they might hope for.
Why this should matter to you
Even if neither you nor any immediate family member is a 23andMe customer, it’s important to understand why this data breach is particularly noteworthy. 23andMe wasn’t hacked in the manner more commonplace for large companies – compromised or stolen credentials for someone inside the company with privileged access – but rather through a mass breach of 14,000 customer accounts that were secured by passwords found in dark web databases; i.e., these stepping-stone customers were reusing passwords that had been exposed in other breaches and leaks. The hackers used those compromised accounts to essentially automate a mass cross-referencing data harvest that, in the end, exposed data on nearly 7 million 23andMe customers. This last data exposure is on 23andMe – it would seem they didn’t anticipate that the built-in cross-referencing services the genetics testing company offers could be turned against it. Also, there was the minor omission of not enforcing multi-factor authentication to secure everyone’s accounts, which might have compensated for the poor password discipline of its customers. The two takeaways? Unique passwords and multi-factor authentication are the minimum security requirements you should expect from any service that holds your valuable data.
Image courtesy of geralt at Pixabay