Get Tech Support Now - (818) 584-6021 - C2 Technology Partners, Inc.

C2 provides technology services and consultation to businesses and individuals.

T (818) 584 6021
Email: [email protected]

C2 Technology Partners, Inc.
26500 Agoura Rd, Ste 102-576, Calabasas, CA 91302


Can you tell the difference?

Christopher Woo
Tuesday, 05 August 2025 / Published in elephant on the internet, Woo on Tech

I’ve been working in tech long enough to remember when “automation” meant macros in Excel and AI was still the stuff of sci-fi. Today, artificial intelligence is everywhere—from customer service chatbots to advanced data analytics, predictive modeling, and content creation. It’s no longer a niche tool; it’s a foundational layer in how businesses operate. And while this explosion of AI capability is exciting, it’s also incredibly risky—especially for those who treat it like a shortcut instead of a tool.

Let me be clear: AI is not magic. It’s not intelligent in the human sense. It’s powerful, but it’s only as good as the data it learns from and the intent behind its use. I’ve watched companies implement AI without understanding how it works, leading to biased outcomes, false insights, or compliance violations. They feed it flawed data, make strategic decisions based on unverified outputs, or worse, let it replace human judgment entirely.

The danger lies not in the technology, but in the overconfidence that often accompanies it.

AI should augment decision-making, not replace it. When misused, it can erode trust, amplify existing inequalities, and expose companies to significant legal and reputational risk. If you’re using generative AI to write content, ask yourself—how do you verify it’s accurate? If you’re using AI to screen job candidates, are you confident it’s not introducing bias?

As a consultant, I encourage clients to treat AI the same way they would a junior employee: train it, supervise it, and never let it act without oversight.

The future of AI is promising, but only if we use it responsibly. Those who blindly chase efficiency without understanding the tool may find themselves solving one problem and creating five more. So take the time to understand what AI is—and more importantly, what it isn’t.

Want help making AI work for your business—safely and strategically? Reach out for a consultation.

Author’s Note: This blog post was written by ChatGPT using the following prompt: “Write a short blog from the perspective of an experienced technology consultant about the rising use of AI and the dangers it poses for those that use the tool incorrectly.” I did not touch up or edit the text produced by that prompt in any way, shape or form, other than to copy and paste it into this website. Anyone who’s followed my blog for a while or knows me personally might have smelled something fishy, or maybe not. In reading the above, I can definitely say that I have written plenty of articles just as bland. Interestingly, ChatGPT included the last, italicised bit – it’s clearly been trained on plenty of marketing blogs like this one. I know that many of you actually read my blogs for my personal take on technology. If I were to feed my own AI engine the past 10 years of my articles so that it could get a sense of my writing style and personality, do you think it could produce more blogs that would be indistinguishable from what I wrote with my own two hands and one brain?

Image courtesy of TAW4 at FreeDigitalPhotos.net

artificial intelligence, chatgpt

The invisible algorithm bubble

Christopher Woo
Tuesday, 08 July 2025 / Published in Woo on Tech, algorithm, data privacy, elephant on the internet, social media

Most of you have known about this aspect of Internet life for a while now: everything we do is tracked, even in “incognito” mode and behind VPNs. And while some of the obvious identifying bits of your transactions may be obscured by privacy tools most people don’t even bother to use, everything we do is logged, categorized and analyzed, down to the minute and the individual, and across years and worldwide demographic groups. Any way the data can be sliced, diced and sorted, it has been and will be for the foreseeable future. Data has been the gold rush of the 21st century for several years now, and you’ve most likely started to sense the bubble of information that seems to follow you everywhere you go.

What on earth are you talking about?

By now, you’ve probably heard the term “algorithm” used to discuss various things, like search results, or page rankings, or advertising. Unless you happen to be immersed in a profession that deals with them all day long, you probably only have a vague sense of the impact algorithms have on your daily life. I could go on and on about how they work, but the easiest way to demonstrate how effective they are is simply to show you.

Assuming you have either a TikTok or YouTube account that you have used for at least a few months, try opening a browser tab to either site while you are logged in, and another incognito tab while you are not logged in. Even minimal use of an account will drastically change what the site presents to you on the front page. Now think about everywhere you log in: Facebook, Spotify, Amazon, Netflix, Gmail, Instagram. All of them keep extremely specific and voluminous data profiles on every aspect of how you use their site, and they constantly feed that data to algorithms that shape what content is presented to you and how. While this can be pleasing or even comforting at first, it also has the knock-on effect of not showing us things we don’t want to see, even when that exposure may be important for us. Humans, in their “default” state, gravitate to what is comfortable and familiar, and the internet keeps reinforcing this in a vicious feedback loop that is proving detrimental to compassion, curiosity and emotional growth.
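That feedback loop can be sketched as a toy simulation (purely illustrative, assuming a made-up engagement-ranked feed; no platform’s actual ranking code looks like this). The feed shows the highest-scoring topics, the user clicks one, and that click raises the topic’s score, so the feed narrows toward whatever was clicked first:

```python
import random

def run_feed(topics, rounds=50, seed=0):
    """Toy engagement-ranked feed: show top 3 topics, reinforce clicks."""
    rng = random.Random(seed)
    scores = {t: 1.0 for t in topics}  # no history yet: everything equal
    for _ in range(rounds):
        # Rank topics by past engagement and show only the top three.
        shown = sorted(scores, key=scores.get, reverse=True)[:3]
        clicked = rng.choice(shown)    # user engages with one shown item
        scores[clicked] += 1.0         # the algorithm reinforces it
    return scores

topics = ["news", "sports", "music", "cooking", "science", "travel"]
final = run_feed(topics)
print(sorted(final.items(), key=lambda kv: -kv[1]))
```

After fifty rounds, all of the engagement is concentrated in the three topics that happened to rank first at the start; the other three are never shown again, so they can never be clicked and never recover. That self-sealing property is the bubble.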

Interestingly enough, most data algorithms also seem to follow a well-known phenomenon, the “observer effect,” in which the properties of an observed system change simply because it is being observed. You can be certain that the minute you try to poke at the algorithm surrounding you on a particular platform, it will observe you observing it and, depending on the platform’s intent for your interactions, may alter itself to make the manipulation less obvious. Now wrap your head around that, add the fact that nearly all of our “news” comes from platforms that actively know you are watching and can adjust what you consume based on agendas that most likely involve monetization rather than just sharing information, and you get a sense of just how far down the rabbit hole we have fallen.

Image courtesy of TAW4 at FreeDigitalPhotos.net

© 2016 All rights reserved.