AI Generated Phishing: Why Your Security Training Is Already Outdated

Christopher Woo
Tuesday, 27 January 2026 / Published in Woo on Tech
[Image: Person typing on laptop with email showing and AI symbol]

Remember when you could spot a phishing email because it had terrible grammar or came from a weird email address?

Those days are over.

Research from Hoxhunt showed that by March 2025, AI-generated phishing attacks had become more effective than those crafted by elite human security experts. The AI didn’t just catch up; it surpassed the best humans at social engineering.

Let that sink in. The people whose entire job is creating realistic phishing simulations to test your employees? AI is better at it than they are.

The Scale of the AI Phishing Problem

According to the World Economic Forum, phishing and social engineering attacks increased 42% in 2024. That was before AI really hit its stride.

The attacks aren’t just better written anymore. They’re contextual and arrive at the exact right time. They reference real projects, real people in your organization, and real deadlines.

Google’s 2026 forecast warns that attackers are using AI to create emails that are essentially indistinguishable from legitimate communication.

This is what that looks like in practice:

You receive an email from your CFO requesting an urgent invoice payment. It uses her exact writing style. It references the specific vendor you’ve been working with. It arrives right when you’d expect such a request. The email address looks right. The signature looks right. Everything looks right.

Except it’s not from your CFO. It’s from an AI that studied 50 of her previous emails and generated a perfect forgery.

Voice Cloning: The New Frontier

Email isn’t even the scariest part anymore.

A tech journalist recently demonstrated that she could clone her own voice using cheap AI tools and fool her bank’s phone system – both the automated system and a live agent – in a five-minute call.

Think about what that means for your business. Your CFO gets a call that sounds exactly like your CEO: voice, cadence, the way they clear their throat, everything. It’s asking for an urgent wire transfer for a time-sensitive deal.

How do you defend against that?

Why Traditional Phishing Training Fails Against AI

Your annual security training tells employees to look for:

  • Spelling and grammar errors (AI doesn’t make these mistakes)
  • Generic greetings (AI personalizes everything)
  • Suspicious sender addresses (AI uses compromised legitimate accounts)
  • Urgent requests (legitimate urgent requests also sound urgent)
  • Links that don’t match the display text (AI uses legitimate-looking domains)

Every single indicator you’ve trained people to watch for? AI bypasses them.
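To see why, here is a minimal sketch, in Python, of the kind of rule-based checks that traditional training drills into people. Everything in it (the domain, the typo list, the patterns) is made up for illustration; it's the bullet list above turned into code, not a real filter.

```python
import re

# Illustrative only: the surface-level indicators traditional training
# teaches people to check. All domains and patterns are made up.
LEGIT_DOMAIN = "yourcompany.com"
GENERIC_GREETINGS = ("dear customer", "dear sir or madam", "hello user")
COMMON_TYPOS = ("recieve", "acount", "verifcation", "urgnet")

def looks_suspicious(sender: str, body: str) -> bool:
    """Flag an email using the classic indicators from the list above."""
    text = body.lower()
    if any(typo in text for typo in COMMON_TYPOS):        # spelling errors
        return True
    if text.startswith(GENERIC_GREETINGS):                # generic greeting
        return True
    if not sender.lower().endswith("@" + LEGIT_DOMAIN):   # odd sender address
        return True
    # Link display text names one domain while the href points elsewhere.
    for href_domain, display in re.findall(
            r'<a href="https?://([^/"]+)[^"]*"[^>]*>([^<]+)</a>', body):
        if "." in display and href_domain not in display:
            return True
    return False
```

An AI-written message sent from a compromised internal account returns False on every one of these checks. That is exactly the point.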

What Actually Works Against AI Generated Phishing

The old training about “look for spelling errors” is dead. Your employees need to understand that verification matters more than urgency.

Use these practices to protect yourself and your team:

Slow down when things feel urgent. Urgency is the weapon. If someone’s asking for sensitive information or money transfers, that urgency should trigger caution, not immediate compliance.

Verify through a different channel. Email says it’s from your CEO? Call them on a known number. Text message from your bank? Call the number on your card, not the one in the message. Voice call asking for a transfer? Hang up and call back.
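If it helps to make that procedure concrete, here is a tiny sketch of the rule, with hypothetical names and numbers: the callback contact comes from your own records, never from the message itself.

```python
# Hypothetical directory: contact details gathered out of band
# (HR records, the number printed on your card), never from a message.
KNOWN_GOOD_CONTACTS = {
    "cfo": {"name": "A. Rivera", "phone": "+1-555-0100"},
    "bank": {"name": "First Example Bank", "phone": "+1-555-0199"},
}

def callback_number(claimed_sender: str) -> str:
    """Return who to call back, sourced independently of the request itself."""
    contact = KNOWN_GOOD_CONTACTS.get(claimed_sender.lower())
    if contact is None:
        return "No known-good contact on file; escalate before acting."
    return f"Call {contact['name']} at {contact['phone']} (number on file)."

# The urgent email's signature block and reply-to address are never consulted.
print(callback_number("CFO"))
```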

Trust your judgment about whether requests make sense. Does your CEO normally ask for wire transfers via text? Does your IT department usually request password resets through email? If the method doesn’t match the request, verify.

Create a culture where questioning is safe. Your employees need to know they won’t get fired for double-checking whether the CEO really sent that request. These attacks exploit hierarchy and time pressure.

The Reality for Professional Services Firms

The accounting firms, law offices, and property management companies we work with are particularly vulnerable to these attacks because:

  • They handle sensitive financial information
  • They regularly process wire transfers
  • They work with clients who expect fast responses
  • They have hierarchical structures that discourage questioning authority

One immigration law firm we work with almost lost $180,000 to an AI-generated email that perfectly mimicked its managing partner’s communication style, requesting an urgent retainer transfer. The only thing that saved them was an associate who thought the request was weird enough to verify in person.

That associate didn’t stop the attack because they spotted technical indicators. They stopped it because something felt off, and they were empowered to question it.

What This Means for Your Business

You need to update your security training immediately. Not next quarter. Not when the budget allows. Now.

The training needs to focus on:

  • Verification procedures that work regardless of how legitimate something appears
  • Creating psychological safety for employees to question urgent requests
  • Understanding that AI can fake anything visual or auditory
  • Practicing what to do when something seems both urgent and suspicious

You need to practice these procedures regularly. Not once a year during security awareness month. Monthly at minimum.

Because the attacks are getting better every single day. Criminals using them no longer need your employees to click a suspicious link. They need your employees to trust their eyes and ears when they shouldn’t.

The Quick and Easy: AI-generated phishing attacks now outperform human security experts, with attacks increasing 42% in 2024. AI generates emails and phone calls that are indistinguishable from legitimate communication, bypassing traditional phishing indicators such as spelling errors, generic greetings, and suspicious links. Voice cloning technology can fool both automated systems and live humans. Traditional training focusing on spotting errors no longer works. Instead, businesses need verification procedures that work regardless of appearance, cultures where questioning authority is safe, and regular practice with realistic scenarios. Professional services firms are particularly vulnerable due to their hierarchical structures and regular financial transactions. The key defense is slowing down when things feel urgent and verifying through different channels.

Tags: ai, cybersecurity, phishing, security

Shadow AI – The Security Risk Already Inside Your Company

Christopher Woo
Tuesday, 13 January 2026 / Published in Woo on Tech
[Image: Employees in a meeting, AI is present]

The uncomfortable truth is your employees are using AI tools you don’t know about. Right now. Today.

IBM’s latest research found that 20% of organizations already suffered a breach due to what they’re calling “shadow AI” – employees using unauthorized AI tools without IT’s knowledge. The kicker is that those breaches added an average of $200,000 to remediation costs.

Think about that for a second. The issue is not the technology failing or hackers breaking through your firewall. The cause is your own people, trying to do their jobs faster, pasting proprietary information into ChatGPT, Gemini, or whatever AI tool made their work easier that day.

Why Shadow AI Happens (And Why You Can’t Stop It)

Varonis found that 98% of employees use unsanctioned apps. That’s not a typo. Ninety-eight percent. If you think your company is the exception, you’re wrong.

Why does this happen? Because your employees are struggling. They’re being asked to do more with less, and they’re exhausted. Then they discover this magical tool that can summarize a 50-page document in 30 seconds or write that email they’ve been dreading. Of course, they’re going to use it.

The problem isn’t that they’re lazy or malicious. The problem is that they have no idea what happens to the data they feed into these systems. Some AI services train their models on your inputs. Some store everything you type. Some have security controls. Most don’t.

Why Banning AI Tools Doesn’t Work

So why not just ban these tools outright? Because bans don’t work. Gartner predicts that by 2027, 75% of employees will acquire or create technology outside IT’s visibility. Bans just push people to hide what they’re doing better.

This happens constantly with the accounting firms and law offices we work with. A partner bans ChatGPT, but an associate uses it on their phone anyway. Now, instead of managing the risk, you’ve just lost visibility into it entirely.

The Real Cost of Shadow AI

The financial impact goes beyond the $200,000 average breach cost. Consider what happens when:

  • Your proprietary client data gets fed into a public AI model
  • Your trade secrets become part of an AI training dataset
  • Your confidential legal strategy gets stored on servers you don’t control
  • Your financial projections end up accessible to your competitors

These aren’t theoretical risks. These are things happening right now to businesses that thought their employees would never do something that careless.

What You Actually Need to Do About Shadow AI

You need an actual policy about AI use. Not a ban. A policy.

This is what works:

Identify which AI tools are safe for your business. Not every AI tool is a security nightmare. Some have proper data handling. Some don’t train on your inputs. Figure out which ones meet your requirements.

Make approved tools easy to access. If your employees need AI to do their jobs effectively, give them a way to use it safely. The property management firms we work with that have implemented approved AI tools see almost zero shadow AI usage.

Train people on what they can and cannot share. Most people don’t realize that pasting client information into ChatGPT might expose it. They’re not trying to cause a breach. They’re trying to work faster. Teach them the difference between safe and unsafe usage.
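As one illustration of “safe versus unsafe usage,” here is a minimal sketch of a redaction pass run before text leaves your control. The patterns below are examples only; real data-loss-prevention tooling is far more thorough.

```python
import re

# Example patterns only; a real DLP pass covers far more than this.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),      # US SSN
    (re.compile(r"\b\d{13,19}\b"), "[REDACTED-CARD]"),             # card-like number
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),  # email address
]

def scrub(text: str) -> str:
    """Replace sensitive-looking substrings before pasting into an AI tool."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Client SSN 123-45-6789, card 4111111111111111, j.doe@client.com"))
# -> Client SSN [REDACTED-SSN], card [REDACTED-CARD], [REDACTED-EMAIL]
```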

Create a culture where people can ask questions. Your employees should feel comfortable asking, “Is this AI tool safe to use?” instead of just using it and hoping for the best.

The Bottom Line on Shadow AI

This isn’t going away. The only question is whether you’re managing it or pretending it doesn’t exist.

The firms sleeping well at night aren’t the ones who banned AI. They’re the ones who acknowledged it exists and created safe pathways for using it.

Because your employees are already using these tools. You just don’t know about it yet.

The Quick and Easy: Shadow AI – unauthorized AI tool usage by employees – has already caused breaches in 20% of organizations, costing an average of $200,000 each. With 98% of employees using unsanctioned apps and 75% projected to acquire technology outside IT visibility by 2027, banning AI tools doesn’t work. Instead, businesses need clear AI usage policies, approved tools that are easy to access, employee training on safe data sharing, and a culture that allows people to ask questions before using new tools. The technology isn’t the risk; using it without oversight or understanding of the consequences is.

Tags: ai, security, Shadow AI

Email Credential Theft is Still Hot

Christopher Woo
Monday, 10 November 2025 / Published in Woo on Tech

You would think that with all the money pouring into technology these days, we would have figured out a way to stem the flood of hacking attempts, but it seems the tech bros are more focused on figuring out how to replace humans with AI than on keeping humans safe. And sadly, email compromises – and even more importantly, business email compromises – are big business for cybercrime, so criminals are pouring just as much money, manpower and AI into stealing their way into your email.

What this means for you

First off, you may be wondering how it is that, with all the existing tools and money aimed at security, we can’t do a better job of filtering out the myriad ways hackers keep inventing to steal our passwords, and why multi-factor authentication doesn’t seem to make any difference in stopping them. Lately, a popular method of getting access to your 2FA-protected accounts is cloning the session cookie that is created when you authenticate with your multifactor. This is accomplished by sending you links from actual legitimate websites, like DocuSign, where an authentication process is expected. Most people, even hardened internet warriors, aren’t trained to spot when an authentication request is “out of context” – in this case, using your Microsoft credentials to log into the DocuSign website. They may also be thinking, “Even if this isn’t legit, I have 2FA, so the password being stolen doesn’t matter.” Normally they would be right, but the hacker is actually counting on that 2FA prompt to print them a fake ID that gets them past a bouncer trained only to check IDs, not whether the person presenting them is legitimate.

That’s an oversimplification, but the point is that the process used to fake you out runs on a legitimate service (and is hence ignored or passed through by the usual malware checks), and even the documents you might actually be granted access to are harmless, because it was all a distraction to mask the real crime: bypassing your multifactor and gaining access to your email account undetected. And from there, the mayhem begins.
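Here is a deliberately stripped-down sketch of why this works. Everything below is illustrative, but the shape is real: the multifactor check happens exactly once at login, and from then on a session token alone is the ID the bouncer looks at.

```python
import secrets

SESSIONS = {}  # token -> user; stands in for a server-side session store

def login(user: str, password_ok: bool, mfa_ok: bool) -> str | None:
    """Multifactor is verified exactly once, here."""
    if password_ok and mfa_ok:
        token = secrets.token_hex(16)
        SESSIONS[token] = user
        return token
    return None

def handle_request(token: str) -> str:
    """Every subsequent request is authenticated by the token alone."""
    user = SESSIONS.get(token)
    return f"Acting as {user}" if user else "401 Unauthorized"

token = login("victim@example.com", password_ok=True, mfa_ok=True)
# Whoever presents this token is "you" -- the victim, or the phisher
# whose proxy captured it during that out-of-context sign-in.
print(handle_request(token))
```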

How do you combat this? Aside from being ultravigilant and deeply cautious to the point of paranoia, this particular type of attack is difficult to defend against, especially for personal email accounts. For a company, there are services that can detect certain types of unauthorized access once they have already occurred, but as many of you probably realize, by then the horse is already out of the barn; this is damage control, not prevention. This type of unauthorized access detection is only one layer of the multilayered approach to security that all companies should have in place to keep their employees and themselves safe.
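For the curious, here is roughly what that detection layer does, reduced to a toy example. The field names and history format are invented; real products weigh far more signals than this.

```python
from datetime import datetime, timezone

# Invented history format: (country, client) pairs seen for each account.
SEEN = {"user@example.com": {("US", "Outlook"), ("US", "iOS Mail")}}

def check_signin(user: str, country: str, client: str) -> str:
    """Flag sign-ins whose context doesn't match the account's history."""
    history = SEEN.get(user, set())
    if (country, client) not in history:
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
        return f"ALERT {stamp}: unfamiliar sign-in for {user} ({country}, {client})"
    return "ok"

print(check_signin("user@example.com", "US", "Outlook"))          # ok
print(check_signin("user@example.com", "NL", "Python-requests"))  # alert
```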

Tags: compromise, cookies, email, hack, multifactor

How to live in a Post-Truth World

Christopher Woo
Tuesday, 07 October 2025 / Published in Woo on Tech
[Image: Misleading signs]

In 2016, Oxford Dictionaries named “post-truth” its Word of the Year. At the time, AI-generated content was crude and easy to spot, and when it was presented as “real,” no one took it seriously. There were plenty of other things to worry about: Brexit, the Panama Papers, the deaths of David Bowie, Prince, Muhammad Ali and John Glenn, numerous European terrorist attacks, creepy-clown sightings, and the election of a US president who was (and still is) enamored with social media, a platform many of us had already noticed was having a significant detrimental effect on society in general. Fast forward nine years: I just saw a very convincing video on the internet of Jake Paul announcing that he was gay and releasing a makeup line. Except the video wasn’t real, but thousands, possibly millions, thought it was.¹

What can we do?

I am asked constantly by my family, friends and clients how we are supposed to trust what we see and hear on the internet. Given just how far we have “advanced” in generating fake content that is essentially indistinguishable from reality, they are understandably concerned, if not outright scared. We are far past the point of mainstream media prioritizing objectivity and truth over profits. It’s clear we have plenty of politicians and leaders for whom truth is an inconvenience rather than an ideal, and the world’s richest men, who are in charge of our technology, seem hellbent on squeezing every last cent out of us at the cost of our security, privacy and integrity. Unfortunately, none of us (as far as I know) has enough clout and money to move this particular needle in any significant way, but we can all do something: continue to value truth and scientific knowledge, and hold others to that same ideal.

There are so many ways to pursue this in your daily life that are beyond my capabilities to share with you, but there is definitely one thing I can call out in this blog: if you are going to consume content from social media (let’s face it, it’s not going anywhere anytime soon), don’t be lazy about it. Don’t just assume that because someone you know on the internet posted something, it is automatically true or to be taken at face value. We are already past the point of being able to say, “Seeing is believing,” without having to second-guess ourselves, and we already know there are plenty of “people” on social media who are there purely to exploit anyone they can. We are tired. We are overworked and under overwhelming stress, and the internet is so conveniently apt at showing us exactly what we think we want to see. If you are going to value truth, you must be mindful that your social media feed is carefully tailored to show both what you want to see and what they want you to see, and that their objective, in the end, is always profit and power, frequently at the expense of truth. Knowing this is indeed half the battle; the other half will be you, holding others accountable to truth as you hold yourself.

As a start for my own quest to seek truth on the internet, I have found a service called Ground News (https://ground.news) that brings you the news along with the reported bias of the news sources. I have found it useful in determining whether the article I am reading might have some bias, and from that, whether what I have read will be helpful in finding out what is actually happening.

Another site that does something similar is AllSides (https://www.allsides.com/unbiased-balanced-news), another news aggregator run by a public benefit corporation, a concept that I wish were applied to many more corporations, especially the ones that seem to have a stranglehold on our daily existence.

Image by Pablo Jimeno from Pixabay

  1. I don’t want to give the “creator” any more internet clicks than they are already getting. If you want to see it, you know how to find it. ↩︎
Tags: bias, fake news, news, post-truth, social media, truth

Scatological Devolution

Christopher Woo
Tuesday, 26 August 2025 / Published in Woo on Tech
[Image: Two ceramic smiling poop emojis on a white background]
[Warning: there is some slightly foul language ahead. If you are easily offended, perhaps some of my other blogs may be of interest.]

I’ve written about this topic before, but it’s nice when major publications back your viewpoint. One of my favorite authors has a new book forthcoming, and as a sign of the times, the title – which might have been scandalous in a previous, perhaps more innocent age – gets straight to the point: “Enshittification: Why Everything Suddenly Got Worse and What To Do About It.” And because everything these days is meta and Mr. Doctorow’s book isn’t even out yet, I read an advance review of it containing praise as well as some criticisms, criticisms I think are valid and troubling to consider when asking the most important question.

What can we do about it?

In case you didn’t read my previous blog about this or don’t remember it (we all have enough to worry about already, so I get it), “enshittification” is the idea that all good online services and websites will eventually be ruined by our society’s relentless pursuit of profit. The advance review, as it appears on the Current Affairs website, does a pretty good job of explaining the topic, and if you don’t intend to purchase the book, I think the article provides enough of an overview for you to spot this trend in the world around you, which may or may not improve how you feel about it. I’m going to read the book for myself before I render my own praise or criticism, but I share the reviewer’s concerns when it comes to answering the question you have all asked: “What can we do about it?” It sounds like Mr. Doctorow is calling for grassroots efforts and government intervention to counteract future enshittifications (the author seems to think it’s already too late for the likes of Amazon, Facebook, Netflix, etc., and I agree), but from where I’m sitting, getting help from the government isn’t on the menu at the moment, and our grassroots are divided as we fight to maintain healthcare, livelihoods and basic human decency. So what is my recommendation if your technology feels “shitty”?

Take matters into your own hands. If you have the option to use something else, do so, and make sure you tell the losing platform why you moved (even if they will probably never read your feedback). If changing the technology isn’t an option, take a moment to clearly identify the crappy part and determine whether it’s something you have control or agency over (maybe a new setting or a change in interface), or whether it’s out of your hands, such as the price going up. If it’s out of your control, focus your energy on working around or through it, or on changing something else so that you can eliminate it altogether. Using technology is unavoidable for most of us, but there is no reason to feel like a hostage to it. The best way to manage this is to change the things you can control and to ask for help or sympathy (or both!) on the things you can’t.

Tags: Doctorow, enshittification

Can you tell the difference?

Christopher Woo
Tuesday, 05 August 2025 / Published in elephant on the internet, Woo on Tech

I’ve been working in tech long enough to remember when “automation” meant macros in Excel and AI was still the stuff of sci-fi. Today, artificial intelligence is everywhere—from customer service chatbots to advanced data analytics, predictive modeling, and content creation. It’s no longer a niche tool; it’s a foundational layer in how businesses operate. And while this explosion of AI capability is exciting, it’s also incredibly risky—especially for those who treat it like a shortcut instead of a tool.

Let me be clear: AI is not magic. It’s not intelligent in the human sense. It’s powerful, but it’s only as good as the data it learns from and the intent behind its use. I’ve watched companies implement AI without understanding how it works, leading to biased outcomes, false insights, or compliance violations. They feed it flawed data, make strategic decisions based on unverified outputs, or worse, let it replace human judgment entirely.

The danger lies not in the technology, but in the overconfidence that often accompanies it.

AI should augment decision-making, not replace it. When misused, it can erode trust, amplify existing inequalities, and expose companies to significant legal and reputational risk. If you’re using generative AI to write content, ask yourself—how do you verify it’s accurate? If you’re using AI to screen job candidates, are you confident it’s not introducing bias?

As a consultant, I encourage clients to treat AI the same way they would a junior employee: train it, supervise it, and never let it act without oversight.

The future of AI is promising, but only if we use it responsibly. Those who blindly chase efficiency without understanding the tool may find themselves solving one problem and creating five more. So take the time to understand what AI is—and more importantly, what it isn’t.

Want help making AI work for your business—safely and strategically? Reach out for a consultation.

Author’s Note: This blog post was written by ChatGPT using the following prompt: “Write a short blog from the perspective of an experienced technology consultant about the rising use of AI and the dangers it poses for those that use the tool incorrectly.” I did not touch up or edit the text provided by that prompt in any way, shape or form other than to copy and paste it into this website. Anyone who’s followed my blog for a while or knows me personally might have smelled something fishy, or maybe not. In reading the above, I can definitely say that I have written plenty of articles just as bland. Interestingly, ChatGPT included the last, italicized bit – it’s clearly been trained on plenty of marketing blogs like this one. I know that many of you actually read my blogs for my personal take on technology. If I were to feed my own AI engine the past 10 years of my articles so that it could get a sense of my writing style and personality, do you think it could produce more blogs that would be indistinguishable from what I wrote with my own two hands and one brain?

Image courtesy of TAW4 at FreeDigitalPhotos.net

Tags: artificial intelligence, ChatGPT

The invisible algorithm bubble

Christopher Woo
Tuesday, 08 July 2025 / Published in Woo on Tech, algorithm, data privacy, elephant on the internet, social media

Most of you have known about this aspect of internet life for a while now: everything we do is tracked, even in “incognito” mode and behind VPNs. And while some of the obvious identifying bits of your transactions may be obscured by privacy tools most don’t even bother to use, everything we do is logged, categorized and analyzed, down to the minute and the individual, and across years and worldwide demographic groups. Any which way the data can be sliced, diced and sorted, it has been and will be for the foreseeable future. Data has been the gold rush of the 21st century for several years now, and you’ve most likely started to sense the bubble of information that seems to follow you everywhere you go.

What on earth are you talking about?

By now, you’ve probably heard the term “algorithm” used in discussions of various things, like search results, page rankings, or advertising. Unless you happen to be immersed in a profession that deals with them all day long, you probably have only a vague sense of the impact algorithms have on your daily life. I could go on and on about how they work, but the easiest way to demonstrate how effective they are is just to show you.

Assuming you have either a TikTok or YouTube account that you have used for at least a few months, try opening one browser tab to the site while you are logged in, and another, incognito tab while you are not logged in. Even minimal use of an account will drastically change what the site presents to you on the front page. Now think about everywhere you log in: Facebook, Spotify, Amazon, Netflix, Gmail, Instagram. All of them keep extremely specific and voluminous data profiles on every aspect of how you use their site, and they constantly feed that data to algorithms that determine what content is presented to you, and how. While this can be pleasing or even comforting at first, it has the knock-on effect of not showing us things we don’t want to see, even when that exposure may be important for us. Humans, in their “default” state, gravitate to what is comfortable and familiar, and the internet keeps reinforcing this in a vicious feedback loop that is turning out to be detrimental to compassion, curiosity and emotional growth.
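If you want the mechanism in miniature, the toy loop below shows it: recommend whatever you’ve engaged with most, and every recommendation generates more of that engagement. This is purely illustrative; real recommender systems are vastly more complex, but the narrowing dynamic is the same.

```python
from collections import Counter

# Toy engagement history: a slight initial preference for one topic.
engagement = Counter({"politics": 3, "cats": 1, "science": 1})

def recommend() -> str:
    """Show the topic with the most engagement -- nothing else gets a chance."""
    return engagement.most_common(1)[0][0]

for _ in range(5):
    topic = recommend()     # the feed surfaces the current favorite...
    engagement[topic] += 1  # ...you watch it, reinforcing the favorite

print(engagement)  # politics runs away; cats and science never resurface
```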

Interestingly enough, most data algorithms also seem to follow the well-known phenomenon called the “observer effect,” where the properties of the observed object change just because it is being observed. You can be certain that the minute you try to poke at the algorithm surrounding you on a particular platform, it will observe you observing it and, depending on that platform’s intent for your interactions, alter itself to make it less obvious that you are being manipulated. Now wrap your head around that, and add the fact that nearly all of our “news” is coming from platforms that actively know you are watching and can adjust what you consume based on agendas that most likely involve monetization rather than just sharing information, and you get a sense of just how far down the rabbit hole we have fallen.

Image courtesy of TAW4 at FreeDigitalPhotos.net

Security is about to get even more complicated

Christopher Woo
Tuesday, 27 May 2025 / Published in Woo on Tech

We’ve discussed in previous blogs how technology seems to be getting worse from just about every angle, whether it’s cost, quality or security. We can attribute a large chunk of this downward trend to the increasing profitability of cybercrime, which is itself a vicious, amplifying spiral of escalation. The more we try to keep ourselves safe, the more complicated it becomes to do so, and most regular folks don’t have the training or endurance to keep up, especially if you are part of the growing elderly generations forced to use technology they barely understand just to stay alive and keep in contact with friends and family. With the recent (in my opinion, ill-advised) downsizing of the Cybersecurity and Infrastructure Security Agency (CISA), much of this country’s organizational strength and operational efficiency in cataloging and combating cybersecurity threats will be abandoned.

What this means for all of us

Regardless of whether you are a big or small organization, CISA’s leadership and work provided foundational guidance on all existing cybersecurity threats while it constantly researched, investigated and published information on new threats as they were discovered. One of the main reasons governments exist is to provide funding, resources and scaled force for tasks that cannot (and should not) be handled by smaller groups or for-profit institutions, such as military defense, mail delivery, and national security. As has been demonstrated time and time again, for-profit companies cannot be trusted to put people before profits, and security oversight is definitely not something you want to enshittify. And yet, that is exactly where we are. In the absence of CISA’s leadership, organizations – whether ad-hoc coalitions of state-level agencies or, most likely, for-profit companies in the security industry – are now scrambling to fill the gigantic, CISA-shaped hole in our nation’s cybersecurity. Let’s be clear: security for small businesses was already well on its way to becoming difficult, expensive and onerous. Eliminating national leadership will most definitely fracture an already complicated security framework, and that burden will weigh most heavily on those who can least afford to shoulder it, a burden formerly carried by people trained, equipped and funded to do so.

Tags: enshittification, government, security

RIP Skype

Christopher Woo
Tuesday, 06 May 2025 / Published in Woo on Tech

Two years ago, in 2023, Microsoft announced that over 36 million people were still using Skype daily to communicate via video and chat. The app was 20 years old at that time, and had been in Microsoft’s hands since 2011, when the company bought it for $8.5 billion to replace its own popular (but less capable) Live Messenger service. On May 5, after 14 years in the trenches, Microsoft shut down the service, giving users 60 days to move their content (contacts and messages) to the free version of Teams or lose the data forever.

What this means for you

If you were a diehard Skype user hoping that Microsoft wasn’t going to make good on its February promise to close Skype permanently on May 5th, you are probably wondering what to do next. Fortunately, it seems that logging into Teams with your Skype credentials will ease the transition by automatically bringing over your chat history and contacts, because, in case you didn’t know, your Skype account was actually a full-blown Microsoft (personal) account all along. Unfortunately for many, Teams is not a feature-for-feature substitute for Skype, the main loss being the ability to make phone calls to landlines and mobiles that don’t have internet access. That well-known “life hack” was assuredly what kept Skype popular in the face of the various other video chat apps that have come to dominate the space, and probably one of the main reasons Microsoft decided to shut Skype down in the end. If only a fraction of the 36 million Skype users were using it to make cheap or free long-distance calls, Microsoft was leaving a large amount of money on the table, even by its standards. Rest in power, Skype. You were a handy bit of software for many people.

Get ready to show your work

Christopher Woo
Tuesday, 15 April 2025 / Published in Woo on Tech
[Image: Make a list, check it twice!]

I’m sure it’s still a thing for students today, but one of the phrases that always caused a groan in any class that involved solving equations was, “Make sure you show your work.” Whether it was pre-Algebra or Advanced Calculus, the only way you could prove that you actually understood the topic well enough to solve the problem was for you to write out each step of the solution. We had graphing calculators when I was going through high school, but even if we were allowed to use them during tests, more often than not there was going to be at least one instance where the calculator was only there to confirm the answer we arrived at after lines and lines of chicken scratch and piles of eraser crumbs.

There’s a point to this nostalgic indulgence

If you are a business owner or part of the executive team, you are likely familiar with the technology security questionnaires that accompany your organization’s insurance renewals. Up until perhaps 2023, checking “yes” boxes or tossing in vague answers was typically enough to get you through the approval or renewal process, and I’m fairly certain the application reviewers were just as cross-eyed as you were when filling them out. I’m (not really) sorry to say this “relaxed” approach to evaluating your security standards is in the rear-view mirror for everyone, regardless of your industry or the size of your organization. Insurance carriers are reading your responses and are not taking “N/A” or “No” as an answer when asking whether you have various security safeguards in place. At best, your insurance agent may encourage you to “reconsider some of your responses”; at worst, it may lead to an outright denial of coverage and a mad scramble to find another carrier for your insurance needs.

The insurance industry is already taking a beating on natural disaster claims (something not likely to abate given the world’s general dismissal of climate change), so it is definitely not going to be generous with the next most popular claim: cyberattacks. Don’t give them any excuse to deny a cyber liability claim because you just checked a box. Show your work by actually implementing the security standards they are asking about, and if you don’t know where to start, get a professional like C2 on the job as soon as possible.
