I’ve been doing this for over three decades, and I can tell you with absolute certainty that most small business backup strategies are garbage. Not because people don’t care about their data. They do. But because backups are one of those things that everyone assumes is working fine until the moment they desperately need it, and then they discover it’s been broken for six months.
According to Veeam’s 2024 Data Protection Trends Report, 85% of organizations experienced at least one ransomware attack in the past year, but only 23% were able to recover all of their data from backups. Think about that. Three-quarters of companies that got hit couldn’t fully restore from their backups. That’s not a technology problem. That’s a broken backup strategy problem.
The Backups That Don’t Actually Work
Let me tell you what I see constantly in professional services firms. Someone set up a backup years ago. Maybe it was the previous IT person. Maybe it was the office manager who watched a YouTube video. Maybe it was even a reputable IT company that did it right at the time. But then nobody ever tested it. Nobody verified it was running. Nobody checked that the backup software still had a valid license. Nobody noticed when the external hard drive filled up and stopped backing up new files eight months ago.
I’ve walked into law offices where their “backup” was someone copying files to a USB drive every Friday and taking it home for the weekend. I’ve seen accounting firms whose cloud backup hadn’t successfully completed in two years, but nobody noticed because it wasn’t throwing error messages anymore, it just quietly failed in the background.
What Actually Breaks
Backups fail in predictable ways. The backup software loses its connection to the cloud service and nobody notices. The external hard drive gets unplugged when someone needed the USB port and never gets plugged back in. The cloud storage account hits its limit and stops backing up new data. The backup runs, but it’s not actually capturing the open database files that contain all your critical information.
Gartner research shows that 77% of backup failures are only discovered when an organization attempts to restore data. You don’t find out your backup is broken until you need it, which is exactly when you can’t afford to discover that problem.
Or the backup works perfectly, but when you go to restore, you discover that the data is corrupted. Or the restore process is so slow that it would take three weeks to get your data back, and your business can’t survive three weeks of downtime. Or the backup included your files but not the configuration settings you need to actually run your software again.
Data Loss Prevention That Actually Works
Real business backup services for professional services firms need three things. First, they need to be automated and monitored. If your backup depends on someone remembering to do something, it will fail. Humans forget. Humans get busy. Humans quit and nobody tells the new person about the Friday backup routine. Automation removes the human failure point, and monitoring catches it when the automation breaks.
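Monitoring doesn't have to be fancy. At its core, it can be a scheduled script that checks whether anything new has actually landed in the backup target and raises a flag when it hasn't. Here's a rough sketch of that idea; the directory layout and the 26-hour threshold are illustrative assumptions, not a product recommendation:

```python
import time
from pathlib import Path

# Illustrative threshold: a daily backup plus a couple hours of grace.
STALE_AFTER_HOURS = 26

def newest_backup_age_hours(backup_dir: str) -> float:
    """Return the age in hours of the most recently modified file."""
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        return float("inf")  # no backups at all is the worst case
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 3600

def backup_is_stale(backup_dir: str) -> bool:
    """True when nothing new has hit the backup target recently."""
    return newest_backup_age_hours(backup_dir) > STALE_AFTER_HOURS
```

A real managed service wires a check like this into an alerting system so a human gets paged; the point is that "is the backup actually running?" is a question software can ask every day, so a person doesn't have to remember to.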
Second, backups need to be tested regularly. Not once when you set them up. Regularly. At least quarterly, you or your IT provider should be doing test restores. Pick a random file and restore it. Pick a random user account and verify you can recover their email. According to Infrascale’s Small Business Backup Report, businesses that test their backups quarterly have a 95% success rate in actual disaster recovery situations, compared to 22% for those who never test.
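A test restore only proves something if you confirm the restored file is byte-for-byte identical to the original. A minimal sketch of that verification step, comparing SHA-256 checksums (the file paths would be whatever your backup tool restores to):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_matches_original(original: str, restored: str) -> bool:
    """True when the restored file is identical to the known-good copy."""
    return sha256_of(original) == sha256_of(restored)
```

If the checksums don't match, you've just learned on a quiet Tuesday what you would otherwise have learned during a disaster.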
Third, you need redundancy. A single backup isn't a backup; it's a single point of failure. You need multiple copies in multiple locations using multiple methods (the classic 3-2-1 rule: three copies, on two different types of media, with one kept offsite). This is where disaster recovery planning intersects with backup strategy.
What Professional Backup Services Actually Do
Professional backup services for businesses aren’t just about the technology. They’re about having someone whose job is to make sure your backups are working. Someone who gets alerted when a backup fails. Someone who verifies that restores are possible. Someone who updates the backup strategy as your business changes.
For most professional services firms, this means managed backup services where your IT provider is actively monitoring your backups, not just “providing” backup software and hoping you figure it out. You need someone watching the logs. You need someone expanding storage when you’re running low. You need someone testing restores before you have an emergency.
And you need proper disaster recovery planning, which is more than just backups. It’s having documented procedures for what happens when disaster strikes. Who do you call? What gets restored first? How do you communicate with clients during downtime? These aren’t questions you want to be figuring out while your office is on fire or your systems are encrypted by ransomware.
Quick and Easy
Most backup strategies fail because they’re never tested, not properly monitored, or lack redundancy. Professional business backup services include automated monitoring, regular restore testing, and disaster recovery planning to ensure your data is actually recoverable when you need it.
Look, I get it. Multi-factor authentication is a pain in the butt. It slows you down when you’re trying to get work done, it interrupts your flow with prompts at the worst possible times, and yes, it makes you feel like technology doesn’t trust you anymore. Your team is going to complain about it. Some will actively try to find workarounds. And honestly, I don’t blame them.
The thing about ransomware, though, is that it’s worse.
I’ve been managing IT for professional services firms for over three decades, and I can tell you that the conversation we have after a breach is exponentially more painful than the conversation about implementing MFA. One is an inconvenience. The other is a catastrophe.
The Uncomfortable Truth About Endpoint Security
The professional services industry is getting hammered by ransomware. Accounting firms, law offices, and property management companies are prime targets because you have exactly what criminals want: sensitive financial data, confidential client information, and typically just enough technology to be vulnerable but not enough to be fortress-like.
According to the FBI’s Internet Crime Complaint Center, ransomware complaints increased 18% in 2024, with losses exceeding $59.6 million. However, those numbers only capture reported incidents. Most small and mid-sized firms never report attacks because they’re embarrassed, worried about reputation damage, or they just paid the ransom quietly and moved on.
When someone gets ransomware into your network, it doesn’t just encrypt your files. It steals them first, then encrypts them, then threatens to publish your clients’ private information if you don’t pay. Even if you have backups, which you should, you still have a data breach on your hands. You still have to report it. Your clients still find out. Your reputation still takes a hit.
You know what the entry point is in most of these attacks? Stolen credentials. Microsoft’s Digital Defense Report found that password-based attacks increased 146% in 2024, with more than 7,000 password attacks happening every second across their platforms. Someone phished an employee’s password, logged in as them, and waltzed right through your front door like they owned the place.
What MFA Actually Does (And What It Doesn’t)
Multi-factor authentication isn’t perfect. I’m not going to pretend it’s some silver bullet that makes you invincible. Criminals have already figured out ways around it, like cookie-stealing, where they trick you into authenticating through a legitimate-looking service just to capture your session token.
Here’s what it does: it makes the cheap, easy attacks fail. The automated bot that tries 10,000 stolen passwords against your email server. The script kiddie who bought a dump of credentials on the dark web. The lazy criminal who isn’t willing to put in the extra effort. According to research from Google, implementing any form of MFA blocks 99.9% of automated attacks. Even the most basic SMS-based authentication stops the vast majority of credential stuffing attacks cold.
Think of it like locking your car doors. Will it stop a professional car thief with the right tools and motivation? No. But it will stop the opportunistic criminal who’s just walking through the parking lot trying door handles. Most cybercrime is exactly that: opportunistic.
Why Your Cyber Insurance Company Cares
Something that might make the MFA conversation easier with your team: it’s not really optional anymore. In 2026, cyber insurance requirements have gotten strict enough that most carriers won’t even quote you coverage without multi-factor authentication on all your critical systems. Email, remote access, financial systems, client portals. All of it.
I’ve seen insurance companies do post-breach audits and deny claims because MFA wasn’t implemented properly. Not partially implemented, not “we were planning to roll it out.” Actually implemented and actually used. They will look at your authentication logs, and if they see that the compromised account didn’t have MFA enabled, that’s it. Claim denied. You’re on your own for the six-figure recovery costs.
Making It Less Terrible
The good news is that MFA in 2026 is better than it used to be. Not good, but better. You’re not stuck with those horrible SMS codes that never arrive when you need them. Modern authentication apps are faster. Hardware security keys work better. Some services even use passwordless authentication now, which sounds scarier but is actually more convenient once you get used to it.
The key is implementing it intelligently. You don’t need to make people authenticate every single time they access their email if they’re on a trusted device on your network. You can set reasonable timeout periods. You can use conditional access policies that only trigger extra authentication when something looks suspicious, like a login from an unfamiliar location.
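Conditional access boils down to a simple decision: skip the extra prompt only when every signal looks familiar, and require a second factor the moment anything doesn't. Products like Entra ID, Duo, or Okta express this as policy configuration rather than code, but here's a toy model of the logic; the signal names are my own illustration, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    """Illustrative risk signals a conditional access policy might weigh."""
    trusted_device: bool      # company-managed, enrolled device?
    known_network: bool       # office network or VPN?
    familiar_location: bool   # matches the user's usual sign-in geography?

def requires_mfa(attempt: LoginAttempt) -> bool:
    """Step up to a second factor unless every signal is low-risk."""
    return not (attempt.trusted_device
                and attempt.known_network
                and attempt.familiar_location)
```

The design point: your team only sees the prompt when something is genuinely unusual, which is exactly when the prompt is earning its keep.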
You need to train your people not just on how to use MFA, but also on why it matters. Not with scare tactics, but with reality. The Verizon 2024 Data Breach Investigations Report found that 68% of breaches involved a human element, whether that’s stolen credentials, social engineering, or simple mistakes. Tell your team about the law firm down the street that got hit with ransomware because someone clicked a phishing link. Tell them about the accounting practice that had client tax returns published online because their insurance claim got denied. Make it real, because it is real.
The Reality of Small Business Ransomware Protection
Look, if I’m being completely honest with you, which I always am, no security measure is going to stop a determined, sophisticated attacker who specifically targets your firm. But you’re probably not going to get specifically targeted. What you’re trying to protect against is being the easy target, the firm that criminals hit because you’re vulnerable and they know it.
Multi-factor authentication is one piece of a larger endpoint security solution. You also need proper backups, security monitoring, email filtering, security awareness training for your team, and someone who actually knows what they’re doing managing all of it. But MFA is the piece that insurance companies look for first, and for good reason.
If you haven’t implemented multi-factor authentication yet, start now. Check with your cyber insurance carrier about their specific requirements, because they vary. Get your critical systems secured first: email, financial software, anything that touches client data, and any way your team accesses your network remotely.
And when your team complains, which they will, remember that their annoyance is temporary. A ransomware attack isn’t.
Quick and Easy
Multi-factor authentication blocks 99.9% of automated attacks and is now required by most cyber insurance policies. While your team will find it annoying, the alternative of ransomware attacks and denied insurance claims is far worse for professional services firms.
Remember when you could spot a phishing email because it had terrible grammar or came from a weird email address?
Those days are over.
Research from Hoxhunt showed that by March 2025, AI-generated phishing attacks had become more effective than those created by elite human security experts. The AI didn’t just catch up; it surpassed the best humans at social engineering.
Let that sink in. The people whose entire job is creating realistic phishing simulations to test your employees? AI is better at it than they are.
The Scale of the AI Phishing Problem
According to the World Economic Forum, phishing and social engineering attacks increased 42% in 2024. That was before AI really hit its stride.
The attacks aren’t just better written anymore. They’re contextual and arrive at the exact right time. They reference real projects, real people in your organization, and real deadlines.
Google’s 2026 forecast warns that attackers are using AI to create emails that are essentially indistinguishable from legitimate communication.
This is what that looks like in practice:
You receive an email from your CFO requesting an urgent invoice payment. It uses her exact writing style. It references the specific vendor you’ve been working with. It arrives right when you’d expect such a request. The email address looks right. The signature looks right. Everything looks right.
Except it’s not from your CFO. It’s from an AI that studied 50 of her previous emails and generated a perfect forgery.
Voice Cloning: The New Frontier
Email isn’t even the scariest part anymore.
A tech journalist recently demonstrated that she could clone her own voice using cheap AI tools and fool her bank’s phone system – both the automated system and a live agent – in a five-minute call.
Think about what that means for your business. Your CFO gets a call that sounds exactly like your CEO: voice, cadence, the way they clear their throat, everything. It’s asking for an urgent wire transfer for a time-sensitive deal.
How do you defend against that?
Why Traditional Phishing Training Fails Against AI
Your annual security training tells employees to look for:
- Spelling and grammar errors (AI doesn’t make these mistakes)
- Generic greetings (AI personalizes everything)
- Suspicious sender addresses (AI uses compromised legitimate accounts)
- Urgent requests (legitimate urgent requests also sound urgent)
- Links that don’t match the display text (AI uses legitimate-looking domains)
Every single indicator you’ve trained people to watch for? AI bypasses them.
What Actually Works Against AI-Generated Phishing
The old training about “look for spelling errors” is dead. Your employees need to understand that verification matters more than urgency.
Use this to protect you and your team:
Slow down when things feel urgent. Urgency is the weapon. If someone’s asking for sensitive information or money transfers, that urgency should trigger caution, not immediate compliance.
Verify through a different channel. Email says it’s from your CEO? Call them on a known number. Text message from your bank? Call the number on your card, not the one in the message. Voice call asking for a transfer? Hang up and call back.
Trust your judgment about whether requests make sense. Does your CEO normally ask for wire transfers via text? Does your IT department usually request password resets through email? If the method doesn’t match the request, verify.
Create a culture where questioning is safe. Your employees need to know they won’t get fired for double-checking whether the CEO really sent that request. These attacks exploit hierarchy and time pressure.
The Reality for Professional Services Firms
The accounting firms, law offices, and property management companies we work with are particularly vulnerable to these attacks because:
- They handle sensitive financial information
- They regularly process wire transfers
- They work with clients who expect fast responses
- They have hierarchical structures that discourage questioning authority
One immigration law firm we work with almost lost $180,000 to an AI-generated email that perfectly mimicked its managing partner’s communication style, requesting an urgent retainer transfer. The only thing that saved them was an associate who thought the request was weird enough to verify in person.
That associate didn’t stop the attack because they spotted technical indicators. They stopped it because something felt off, and they were empowered to question it.
What This Means for Your Business
You need to update your security training immediately. Not next quarter. Not when the budget allows. Now.
The training needs to focus on:
- Verification procedures that work regardless of how legitimate something appears
- Creating psychological safety for employees to question urgent requests
- Understanding that AI can fake anything visual or auditory
- Practicing what to do when something seems both urgent and suspicious
You need to practice these procedures regularly. Not once a year during security awareness month. Monthly at minimum.
Because the attacks are getting better every single day. Criminals using them no longer need your employees to click a suspicious link. They need your employees to trust their eyes and ears when they shouldn’t.
Quick and Easy
AI-generated phishing attacks now outperform human security experts, with attacks increasing 42% in 2024. AI generates emails and phone calls that are indistinguishable from legitimate communication, bypassing traditional phishing indicators such as spelling errors, generic greetings, and suspicious links. Voice cloning technology can fool both automated systems and live humans. Traditional training focusing on spotting errors no longer works. Instead, businesses need verification procedures that work regardless of appearance, cultures where questioning authority is safe, and regular practice with realistic scenarios. Professional services firms are particularly vulnerable due to their hierarchical structures and regular financial transactions. The key defense is slowing down when things feel urgent and verifying through different channels.
The uncomfortable truth is your employees are using AI tools you don’t know about. Right now. Today.
IBM’s latest research found that 20% of organizations already suffered a breach due to what they’re calling “shadow AI” – employees using unauthorized AI tools without IT’s knowledge. The kicker is that those breaches added an average of $200,000 to remediation costs.
Think about that for a second. The issue is not the technology failing or hackers breaking through your firewall. The cause is your own people, trying to do their jobs faster, pasting proprietary information into ChatGPT, Gemini, or whatever AI tool made their work easier that day.
Why Shadow AI Happens (And Why You Can’t Stop It)
Varonis found that 98% of employees use unsanctioned apps. That’s not a typo. Ninety-eight percent. If you think your company is the exception, you’re wrong.
Why does this happen? Because your employees are struggling. They’re being asked to do more with less, and they’re exhausted. Then they discover this magical tool that can summarize a 50-page document in 30 seconds or write that email they’ve been dreading. Of course, they’re going to use it.
The problem isn’t that they’re lazy or malicious. The problem is that they have no idea what happens to the data they feed into these systems. Some AI services train their models on your inputs. Some store everything you type. Some have security controls. Most don’t.
Why Banning AI Tools Doesn’t Work
So why not just ban these tools outright? Because bans don’t work. Gartner predicts that by 2027, 75% of employees will acquire or create technology outside IT’s visibility. Bans just push people to hide what they’re doing better.
This happens constantly with the accounting firms and law offices we work with. A partner bans ChatGPT, but an associate uses it on their phone anyway. Now, instead of managing the risk, you’ve just lost visibility into it entirely.
The Real Cost of Shadow AI
The financial impact goes beyond the $200,000 average breach cost. Consider what happens when:
- Your proprietary client data gets fed into a public AI model
- Your trade secrets become part of an AI training dataset
- Your confidential legal strategy gets stored on servers you don’t control
- Your financial projections end up accessible to your competitors
These aren’t theoretical risks. These are things happening right now to businesses that thought their employees would never do something that careless.
What You Actually Need to Do About Shadow AI
You need an actual policy about AI use. Not a ban. A policy.
This is what works:
Identify which AI tools are safe for your business. Not every AI tool is a security nightmare. Some have proper data handling. Some don’t train on your inputs. Figure out which ones meet your requirements.
Make approved tools easy to access. If your employees need AI to do their jobs effectively, give them a way to use it safely. The property management firms we work with that have implemented approved AI tools see almost zero shadow AI usage.
Train people on what they can and cannot share. Most people don’t realize that pasting client information into ChatGPT might expose it. They’re not trying to cause a breach. They’re trying to work faster. Teach them the difference between safe and unsafe usage.
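Training sticks better with something concrete. Here's a crude sketch of a "think before you paste" check that flags obviously sensitive patterns in text headed for an AI tool. The patterns are deliberately simplistic illustrations; a real data loss prevention product does far more:

```python
import re

# Illustrative patterns only: a real DLP tool uses far richer detection.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

Even a crude check like this makes the training point vividly: "the tool just stopped you from pasting a Social Security number into a public chatbot" lands harder than a slide deck ever will.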
Create a culture where people can ask questions. Your employees should feel comfortable asking, “Is this AI tool safe to use?” instead of just using it and hoping for the best.
The Bottom Line on Shadow AI
This isn’t going away. The only question is whether you’re managing it or pretending it doesn’t exist.
The firms sleeping well at night aren’t the ones who banned AI. They’re the ones who acknowledged it exists and created safe pathways for using it.
Because your employees are already using these tools. You just don’t know about it yet.
Quick and Easy
Shadow AI (unauthorized AI tool usage by employees) has already caused breaches in 20% of organizations, costing an average of $200,000 each. With 98% of employees using unsanctioned apps and 75% projected to acquire technology outside IT visibility by 2027, banning AI tools doesn’t work. Instead, businesses need clear AI usage policies, approved tools that are easy to access, employee training on safe data sharing, and a culture that allows people to ask questions before using new tools. The technology isn’t the risk; using it without oversight or understanding the consequences is.
We’ve discussed in previous blogs how technology seems to be getting worse from just about every angle, whether it’s cost, quality or security. We can attribute a large chunk of this downward trend to the increasing profitability of cybercrime, which is itself a vicious, amplifying spiral of escalation. The more we try to keep ourselves safe, the more complicated it becomes to do so, and most regular folks don’t have the training or endurance to keep up, especially members of the growing elderly generations who are forced to use technology they barely understand just to stay alive and keep in contact with friends and family. With the recent (in my opinion ill-advised) downsizing of the Cybersecurity and Infrastructure Security Agency (CISA), much of this country’s organizational strength and operational efficiency in cataloging and combating cybersecurity threats will be abandoned.
What this means for all of us
Regardless of whether you are a big or small organization, CISA’s leadership and work provided foundational guidance on all existing cybersecurity threats while constantly researching, investigating and publishing information on new threats as they were discovered. One of the main reasons that governments exist is to provide funding, resources and scaled force for tasks that cannot (and should not) be handled by smaller groups or for-profit institutions, such as military defense, mail delivery, and national security. As has been demonstrated time and time again, for-profit companies cannot be trusted to put people before profits, and security oversight is definitely not something you want to enshittify. And yet, that is exactly where we are. In the absence of CISA leadership, organizations, whether they be ad-hoc coalitions of state-level agencies or, most likely, for-profit companies in the security industry, are now scrambling to fill the gigantic, CISA-shaped hole in our nation’s cybersecurity. Let’s be clear: security for small businesses was already well on its way to becoming difficult, expensive and onerous. Eliminating national leadership will most definitely fracture an already complicated security framework, and that burden will weigh most heavily on those who can least afford to shoulder it, a burden formerly carried by people trained, equipped and funded to do so.
Per a recently updated report from the FBI and CISA, the telecom hacks that had been previously announced (and most likely missed amidst the election and holidays) are now regarded as much worse than previously thought, and there is no anticipated ETA for when the hackers can be evicted from the various compromised infrastructures. As such, the FBI and CISA are recommending everyone avoid unencrypted communication methods on their mobile devices, which includes SMS messaging between Android and Apple phones, and carrier-based cellular voice calls (which have never been encrypted).
What this means for you
If you are like 95% of the world, you are probably thinking, “Well, if China wants to know about the grocery list I texted to my spouse, they are welcome to it,” or “I’ve got nothing to hide,” or even more naively, “I’ve got nothing worth stealing.” Most people do not consider just how much they communicate via unsecured text: banking two-factor codes, prescription verifications, medical complaints to doctors, passwords to coworkers, driver’s license pictures, credit card PINs. The list is endless, and extremely valuable to threat teams like Salt Typhoon, the APT allegedly behind this huge compromise. The reason this is a big deal is that we as a society (at least in America) have grown overly comfortable with this lack of privacy, and on top of that, the market has encouraged a fractured and flawed approach to communications between the various community silos we have created for ourselves online. What you might not know is that messaging from iPhone to iPhone, and from Android to Android, is fully encrypted, as are messages in WhatsApp, Facebook Messenger and Signal. But as you consider your circle of family and friends, how many of them are on the same platform and use the same messaging apps to communicate? How many of your two-factor codes arrive via SMS?
To address the latter issue, you should move any multi-factor codes to an app like Microsoft Authenticator or Google Authenticator (if the platform even allows it; many banks do not yet support apps). This process will be painful and tedious, but it is probably the most important step you can take to improve your personal safety. The messaging problem is not so “easily” solved, at least from a friends-and-family perspective, but for business communications you should consider moving everything to a platform like Microsoft Teams, Google Workspace, Slack, etc. And stop sharing passwords via text. More information to come as we learn more about the severity of this telco hack.
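If you're curious why app-based codes are safer than SMS: the app never receives anything over the air. It computes a time-based one-time password (TOTP, RFC 6238) locally from a shared secret and the current clock. Here's a minimal sketch of that computation; the secret shown in the test is the RFC's published test key, never a real credential:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    now = int(for_time if for_time is not None else time.time())
    counter = now // step                      # 30-second time window
    msg = struct.pack(">Q", counter)           # counter as big-endian 64-bit
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code is derived from a secret that never leaves your phone, there is nothing for a compromised carrier network to intercept, which is exactly the weakness the telco hack exposes in SMS codes.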
Image Courtesy of Stuart Miles at FreeDigitalPhotos.net
Ever since they were hacked in 2023, genetics and ancestry website 23andMe has been more or less moribund, sliding from a high of $16 per share to $0.29 today and losing their entire board of directors to resignation last month. When we last wrote about them in December of last year, the beleaguered DNA testing company had to revise their initial statement about only getting a “little” hacked (1.4M records) to admitting that they got majorly hacked (6.9M records). As you can imagine, this didn’t bode well for their marketability.
Why are we talking about them again?
It’s been nearly a year since the initial data breach, and judging by the lack of faith the recently departed board of directors had in the company’s founder, they aren’t likely to return to full potential any time soon, if ever. If you were one of the millions of people who sent them your DNA to analyze, you’ve probably already reaped whatever benefits (positive and negative) you are likely to get from 23andMe, but they may not be done making money from your data. They claim that much scientific good has been generated from the data of customers who consented to let researchers use their de-personalized information, but you may want to consider the consequences of letting a company whose security practices led to its current downfall continue to have access to your data. Because you do have the option of asking them to delete it. Seeing as you paid them for the privilege of providing your data, it seems rather mercenary for them to then take that data and continue to sell it without compensating you. Rather, they got hacked, exposed your confidential information, and then continued to (somewhat) operate. If you’d like to see some consequences, you can do your part by asking them to delete your data, which can be done merely by logging into your account on their website and submitting that request. Do it. If a majority of their customers were to do this, perhaps it would send a warning to competitors to do a better job with your precious data, and a message to our government about doing a better job protecting our privacy.
Image courtesy of geralt at Pixabay
For the past few days, I’ve been working with several clients who are in various stages of being compromised or having their online accounts attacked. The recent surge of activity is possibly related to the RockYou2024 “publication,” wherein a file containing nearly 10 billion passwords was posted to a popular online hacking forum on July 4th. Analysis of the new file demonstrated that the bulk of the data is a compilation of other breaches, including the previous release of this compilation, RockYou2021, which contained over 8 billion passwords at the time. Regardless of whether it’s old or new, many people will continue to use old passwords across multiple accounts for years if they aren’t forced to change them, so it’s a good bet that a large majority of the information in this file is quite usable, adding significant firepower to any hacker’s arsenal.
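One safe way to find out whether a password of yours appears in dumps like this is Have I Been Pwned's "Pwned Passwords" check, which uses k-anonymity: you hash the password locally and send only the first five hex characters of the SHA-1 hash, so the service never sees the password or even its full hash. Here's a sketch of the local half of that exchange (the range-API endpoint mentioned in the comment is the service's documented one):

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix would be sent to
    https://api.pwnedpasswords.com/range/<prefix>; the suffix is
    compared locally against the list of suffixes the API returns.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```

The design is worth appreciating: roughly a thousand real hashes share any given five-character prefix, so the server learns almost nothing about which password you were checking.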
Passwords alone aren’t safe enough
While I was working to restore some semblance of security for my clients, one of the things I noticed was that the various bank accounts they accessed via the web or their phones did not have multi-factor security enabled, and my clients weren’t aware that it wasn’t actually turned on, or in some cases wasn’t even available to be enabled. I was always under the impression that banks were forcing this on everyone, as it was a constant struggle for many of my clients who are accountants or financial professionals, but for at least one of my clients, none of their four banking accounts had the full multi-factor login process enabled. On top of this, it was sometimes a struggle to actually enable multi-factor, as each bank buries the settings in their gloriously bad interfaces, and the instructions to turn it on aren’t always clear. And if someone like ME struggles with enabling this type of security, imagine what your elderly parents might be facing. Do yourself a favor: if you don’t know for a fact that you have multi-factor enabled for your banking accounts, log in and check, or call the number on the back of your credit or debit card to find out. You might be surprised at how insecure you were.
Image by Manuela from Pixabay
One of the most appalling practices in the current world of online hacking and phishing is the constant targeting of our elderly friends and family, because the attackers know they are easy marks. Unfortunately, I don’t see technology becoming any easier for anyone, especially the elderly, so if they are going to continue using technology for things like shopping, paying bills and handling various elements of their health and property, see if you can get them to abide by some simple but critical rules when they get into unfamiliar situations. This may mean more calls to you about trivial things, but if you are like me, I’d rather have those than get the “I’ve been hacked” call.
Rule Number One: “Never trust popups on your devices that warn you about something scary and ask you to call a number.” None of the legitimate malware protection software on the market will do this. This is nearly always a scam. If they get something like this on their computer, tell them to take a picture of it and then just power off the device, manually if it won’t shut down normally, and physically by unplugging the cord if that doesn’t seem to be working. These fake popups are meant to be frightening, disorienting and sometimes incredibly annoying. If the popup comes back after powering up their device (and it may, as many are designed to do just this), it may require some additional technical expertise to get rid of it. For tech-savvy users, it’s a quick fix, but it may be hard to explain over the phone if the recipient is flustered or otherwise frightened. If you can’t go yourself, it may require a visit from a local technician.
Rule Number Two: “Don’t ‘google’ the contact number or email for important services.” All of the popular search services offer ad results at the top of actual search results that are often hard to distinguish from the legitimate information you were seeking. Bad actors are paying for ads that pretend to provide support for various commonplace companies. They will answer the phone as that service, including pretending to be Microsoft, Amazon, Apple, or Google, in order to trick callers into giving them access to their devices. If your loved one is fond of using the phone in this manner, provide them with a printed list of known-good numbers for their most used services like their banks, pharmacies, etc., and include lines like “FACEBOOK: NO NUMBER TO CALL, DO NOT TRY” as a reminder that certain services are never available via phone.
Rule Number Three: “Always call someone you trust about anything on which you are uncertain.” Our loved ones will often refuse to call us because they don’t want to be a bother. Frequent calls may seem like a nuisance, but they pale in comparison to the absolute disaster you will both have to handle if they get hacked. I’d rather have dozens of calls asking “Is this OK?” than the single “I may have done something bad.” Reinforce their caution with approval, and if you have the time, perhaps explore with the caller what clued them into making the call. If it boils down to them just applying these three rules, then score one for the good guys!
Image by Fernando Arcos from Pixabay