If there is one thing the Internet excels at, it is putting any information – old and new – literally at your fingertips. Conversely, one of the things it does a terrible job at is qualifying that information, to the point where it becomes increasingly difficult to weed the good from the bad. If you use technology as part of your work, you must fight valiantly to stay internet and tech savvy just to keep yourself safe, and unfortunately for you, technology security is evolving so quickly that even we experts are struggling to keep everyone as savvy as they need to be in 2023. I could bore you to tears with the constant cavalcade of new technology pouring into business these days, but my job is to point out what’s important, and right now, security continues to be priority one.
You should know these new terms. Study like there will be a test on Friday!
Endpoint Detection & Response (EDR) is what the security industry calls the next generation (really, this generation) of the malware protection you might have known as “antivirus” back in the late 2000s and 2010s. Today’s cyberthreats bear very little resemblance to the viruses we feared in previous decades, so EDR platforms are built not only to detect known viruses, but also to monitor suspicious behaviors and information patterns using constantly updated algorithms, spotting malicious activity that may not yet be documented. Where the previous generation of antivirus may have scanned your computer once a day and quarantined the files it could identify, EDR platforms monitor all activity constantly and act immediately, up to locking down the affected PC and sending warning flags to security personnel.
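To make that behavioral angle concrete, here is a toy sketch (in Python, with invented event names and thresholds – real EDR telemetry is vastly richer) of flagging suspicious behavior rather than matching known signatures:

```python
# Toy illustration of the EDR idea: watch a stream of endpoint events
# and flag suspicious *behavior*, with no virus signatures involved.
# Event names and thresholds are invented for illustration only.
from collections import Counter

SUSPICIOUS_PATTERNS = {
    # A burst of rapid file renames is a classic ransomware tell.
    "file_renamed": 100,
    # Even one attempt to kill security tooling is a red flag.
    "security_service_stopped": 1,
}

def evaluate_events(events):
    """Return response actions for a window of endpoint events."""
    counts = Counter(e["type"] for e in events)
    actions = []
    for event_type, threshold in SUSPICIOUS_PATTERNS.items():
        if counts[event_type] >= threshold:
            actions.append(f"isolate_host:{event_type}")
            actions.append("alert_security_team")
    return actions

# A burst of renames from one process looks like ransomware at work:
window = [{"type": "file_renamed"}] * 150
print(evaluate_events(window))
# → ['isolate_host:file_renamed', 'alert_security_team']
```

The point of the sketch: nothing in it knows what the malware is called. A burst of file renames is suspicious on its own, which is what lets an EDR platform react to threats no signature database has seen yet.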
Zero-Trust Networking is a relatively new security concept that upends the traditional assumption that devices on your office network should be allowed access by default because those computers are “inside the firewall.” Zero-trust security states that all devices must constantly prove they are safe and legitimate before they are granted access to any protected information or services. The moment they can’t do so (perhaps because of a malware infection, the installation of unauthorized software, or failed password attempts), zero-trust systems may restrict access to various systems or applications, to the internet, or even to the device itself.
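The concept boils down to a policy check on every single request, with “deny” as the default answer. This is a minimal sketch assuming hypothetical device-posture fields; real zero-trust products evaluate far more signals than these four:

```python
# Minimal sketch of zero-trust authorization: no request is trusted
# because of where it comes from -- every request must re-prove device
# health and identity. All field names here are hypothetical.

def authorize(request):
    """Re-evaluate trust on every request; deny by default."""
    device = request["device"]
    checks = [
        device.get("disk_encrypted", False),      # device posture
        device.get("edr_agent_healthy", False),   # no sign of malware
        not device.get("unauthorized_software"),  # policy compliance
        request.get("mfa_verified", False),       # fresh identity proof
    ]
    return "allow" if all(checks) else "deny"

healthy = {"device": {"disk_encrypted": True, "edr_agent_healthy": True},
           "mfa_verified": True}
infected = {"device": {"disk_encrypted": True, "edr_agent_healthy": False},
            "mfa_verified": True}
print(authorize(healthy), authorize(infected))  # allow deny
```

Notice there is no “is this device on the office LAN?” check anywhere – that is precisely the point of zero trust.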
Security Information and Event Management (SIEM) is a security service that insurance companies are increasingly looking for when underwriting clients. Though the name seems to imply otherwise, this is not about throwing a party for security. Instead, a SIEM is a platform that gathers the enormous amount of data your various technologies and services generate as you and your organization use them, aggregates that data into a massive, searchable database, and then scans it with even more algorithms (and humans) to spot unusual events, security breaches and other items of interest before they have time to turn into front-page news and business-destroying events.
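Here is a toy illustration of that aggregate-then-analyze pattern. The log sources and the “too many failed logins” rule are invented for the example; a real SIEM ingests millions of events from dozens of systems:

```python
# Toy version of what a SIEM does: pull events from many sources into
# one searchable store, then run detection rules over the aggregate.

firewall_logs = [{"source": "firewall", "event": "blocked", "ip": "10.0.0.7"}]
mail_logs     = [{"source": "mail", "event": "login_failed", "user": "alice"}] * 6
vpn_logs      = [{"source": "vpn", "event": "login_ok", "user": "alice"}]

# 1. Aggregate everything into one searchable dataset.
siem = firewall_logs + mail_logs + vpn_logs

# 2. Search across all sources at once...
def search(events, **criteria):
    return [e for e in events if all(e.get(k) == v for k, v in criteria.items())]

# 3. ...and run a detection rule over the whole picture.
def detect_bruteforce(events, threshold=5):
    failures = search(events, event="login_failed")
    users = {e["user"] for e in failures}
    return [u for u in users if len(search(failures, user=u)) >= threshold]

print(detect_bruteforce(siem))  # → ['alice']
```

No single system sees enough to raise an alarm on its own; the value comes from the aggregate view, which is exactly what insurers are asking about.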
Image by Free stock photos from www.rupixen.com from Pixabay
Part of an occasional series of articles that discuss what I call “The Elephant on the Internet.”
One of the things that is becoming readily apparent with the younger generations is a growing disaffection with established religions. According to a 2022 study performed by the Survey Center on American Life, religious affiliation has been steadily declining in America for the past 30 years, which is roughly when access to the Internet became reliably and affordably available to the masses. Obviously, the Internet isn’t the only thing that has risen to prominence since the 1990s, but you’d be hard pressed to name anything else that comes close to matching the cultural importance of organized religion, and clearly, with each successive generation, it’s overshadowing last century’s opiate without breaking a sweat.
Get to the point, Woo!
“Religion is the opium of the people.” – Karl Marx
Unfortunately for this quotation and the idea it represents, its original author is not viewed fondly by Americans, who, perhaps more so than at any point in the previous 60 years, are again struggling through an identity crisis fueled and stoked by religious extremism and class conflict, core elements of our fabled Cold War enemy: Marxism. Before the Internet, TV was the stand-in for Religion, but the concept remains applicable regardless of the actual opiate: people will always seek something to distract them from the struggles of life, various injustices and the seeming indifference of the cosmos toward our personal trials. In case you didn’t notice, Television has essentially been assimilated by the Internet, and our local church is one of many I know that are adopting Internet platforms like streaming and social media in a bid to fill their pews and remain relevant to generations already firmly hooked on the Internet.
Here’s the scary part: unlike Television (and maybe more like Religion, pre-Industrial Revolution), the Internet is not only our opiate from an entertainment/distraction standpoint, it’s also now our daily bread: we have, wittingly or not, tied everything of modern life to the Internet. Some of us have bound our very livelihoods to it and many do not know how to live otherwise. I’m sure the thought of religion suddenly disappearing isn’t as breathtaking as it might have been 100 years ago, or the thought of a world without TV 50 years ago, but can you imagine what would happen if the internet stopped working tomorrow? Every time the internet goes down (which seems to be frequent these days), a small part of me asks, “What if it doesn’t come back?” or worse, “What if it comes back for some and not others?” That latter question is one we may need to answer sooner rather than later. An ever-shrinking number of companies and individuals controls nearly every corner of the Internet while religiously making sure we’re distracted, and I would be hard pressed to say whether any of them operate with any recognizable ethical governance or compassion.
Image courtesy of TAW4 at FreeDigitalPhotos.net
If you catch me at the end of a frustrating day, I can sometimes be overheard swearing quietly under my breath about certain technology platforms, especially inkjet printers. Make no mistake, I was a huge fan when they first appeared on the scene – being able to print your own high-quality photos was a dream come true for amateur photographers and graphic designers, of which I was both when HP released their famous “Deskjet” printer in 1998. Twenty-five years later, HP has managed to twist this innovative hardware platform into yet another moneymaking scam with their inescapable ink subscription platform. At least one judge has heard our suffering, as made evident when he denied the dismissal of a class-action lawsuit accusing HP of falsely advertising all-in-one printers that stop functioning if ink is low or missing, even for functions that don’t require ink (like scanning or faxing).
What this means for you
Let’s be real. The chances of a mega-corporation being brought to heel by a California judge are fairly slim, but the fact that one of them stood up to the world’s largest printer manufacturer means there are still people willing to stand up for consumers, keeping that small spark of hope lit in this cynic’s heart. In case you happen to be one of the 7 people on Earth who haven’t fallen into this trap in the past 10 years or so, most of the major printer manufacturers have turned their inkjet product lines into the razor-and-blades model of the new millennium, wherein printers are sold cheaply (sometimes at a loss) because the ink cartridges they require are the real moneymaker. Up until maybe 3-4 years ago, third-party ink sellers leveled the playing field somewhat by providing less expensive (and usually lower-quality) consumables for those printers, but once the manufacturers realized how much money they were leaving on the table, they closed that loophole by locking down the printers to require “genuine” ink and toner. While an argument can be made that using non-genuine consumables gives the manufacturer reasonable justification for voiding warranties or declining warranty service, it’s not clear what justifies rendering a printer completely nonfunctional because one of its ink colors is low or depleted. Except, of course, the pure-profit motive that seems to drive every consumer technology company these days.
That’s enough ranting for one day. If you need some lightly NSFW humor to lighten the mood (WARNING: Foul language ahead!), have a read of @System32Comics on Instagram (I know, I know, “social media bad,” but “independent web comic artists GOOD!”), including one of my all-time favorites of theirs which perfectly illustrates the dystopian world in which we now live:
Image by pavelkovar from Pixabay
I know some of you are Trekkies, and even if you aren’t a fan, you’ve more than likely heard the phrase, “You will be assimilated. Resistance is futile,” chanted by Star Trek’s hive-mind aliens, the Borg. Though they pale in comparison to some of the franchise’s most iconic nemeses, like Khan and the omnipotent Q, their constant drive to absorb beings and technology to improve the collective is proving hauntingly prescient when compared to certain modern-day companies seemingly bent on assimilating the internet to feed the AI beast.
“I am the beginning, the end, the one who is many. I am the Borg.”
When the Borg first appeared on Star Trek in 1989, our repulsion at their “otherness” came from our culture’s inherent dislike of individuality and freedom being made subservient to a collective will. While AI was not new to science fiction at the time – it had already become infamous decades earlier in the sci-fi classic 2001: A Space Odyssey – it was viewed as something possible only in the distant future. Luckily, we got Y2K instead of HAL when the new millennium rolled around, but now, just 20-ish years later, we are faced with the reality of web-crawling bots hoovering up everything on the internet to fuel “large language model” AI platforms. It’s hard not to draw comparisons to the Borg. Human content creators are already resorting to legal measures against companies that “assimilate” their original work into AI-generated copycat products sold on platforms like Amazon (a company often compared to the Borg), appearing in YouTube videos (another very Borg-like company), or turning up in sound-alike songs on Spotify.
“We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us.” – Star Trek: First Contact (1996)
Image by PIRO from Pixabay
The FBI held a press conference last week to confirm what we figured was already happening the moment open-source AI projects started surfacing: threat actors are using artificial intelligence to write malware, build ransomware websites and put more teeth in their phishing campaigns. And as if we needed more nightmare fuel, the FBI also shared this little nugget: terrorist groups are using AI to research deadlier “projects,” like more potent chemical attacks.
If you can dream it, you can build it.
Unfortunately for us, dreams aren’t limited to those of us who are just trying to make our way through life without hurting anyone while having some fun along the way. Criminals aren’t hampered by ethics or compassion, and neither are AIs, even when their programmers try to build in safeguards. As I’ve always maintained, anything built by humans will be subject to our flaws, and I don’t know that I’m willing to trust that any AI that becomes self-aware will be able to differentiate between good and evil, given the amount of garbage we have piled onto the internet. At this point, unless you happen to be a multi-billionaire with ethics and a hotline to folks in power, the best you can do is let your congress-critter know that we should be pumping the brakes on this runaway AI truck. While there have been some relatively feeble attempts by the established technology titans to put together something akin to a digital watermark to help the rest of the world identify AI-created content, there are probably hundreds of throne-contenders willing to ignore the rules for a chance at the top, humanity be damned, and you can bet that many of them already have their hands in the pockets of any government powerful enough to even try to regulate this technology.
Am I saying it’s time to start looking for bunker-friendly real estate in an under-developed country with robot-unfriendly terrain? Not yet, but could we confidently say we would know when that moment has arrived? Or have we already crossed that threshold? Most of us can only cross our fingers and hope the future is more like Star Trek and nothing like Terminator.
Image Courtesy of Stuart Miles at FreeDigitalPhotos.net
Despite the fact that artificial intelligence seems to be creeping into almost every aspect of our lives, we’re still a ways away from AI being able to understand that what we intend to do with our various bits of technology isn’t always what we end up doing. Perhaps the most familiar example of this is the infamous autocorrect feature on your smartphone. Depending on your degree of finger-fatness, spelling acumen and (let’s be real) patience for texting in general, the autocorrect function of your phone can swing between extremes within a single sentence, oftentimes with hilarious results if you aren’t paying attention before hitting send. Apparently, something similar has been happening on a wide scale for over 10 years now: emails mistakenly sent to addresses in Mali (the African country whose domain is “.ML”) instead of the appropriate military mailbox ending in “.MIL”.
What this means for you
Per a US Department of Defense spokesperson, they are well aware of the problem and have addressed it for military emailers by blocking delivery to the .ML domain. Problem solved, right? Well, at least for the US military, but not for the rest of the world, which is well outside their control and, apparently, their immediate concern. Ever since the days of “Clippy,” software developers have been making various attempts to help us be better at technology. Their hearts are in the right place, but each time, the attempt falls short. Right now, as I type this blog, WordPress is suggesting various words and corrections that variously remind me of my poor typing habits, my sloppy word choices and the overwhelming fact that my grade-school English teachers were better at keeping 30 kids subdued than at impressing upon them the importance of good grammar. In the end, it is helping me write a better (at least grammatically) article, but only because I’m not blindly accepting every suggestion it provides.
The key problem with today’s “active assistant” systems is that they still rely on humans to provide data, and as we all know, humans are fallible and prone to mischief, especially when it comes to AI – a lesson we were already learning back in 2016, before the arrival of concepts like “post-truth” and “alternative facts.” If anything, the data we’ve been amassing in the past 6 years is probably the most unreliable it’s been since the advent of the Internet. And here’s the thing – let’s say you’re a military contractor working through an email service administered by someone other than the Department of Defense. You’re an international company, regularly dealing with people all around the world. A third grader could probably spot the difference between .MIL and .ML, but Outlook is just trying to send emails out because its human pushed “send.” Unless it has been trained to know that its human works with the US military and not the Malian military, that email is going to the wrong mailbox because you missed a keystroke. In this instance, when AI gets good enough to spot the problem and ask, “Hey, did you mean to address this email to the African country of Mali?” it might be a boon instead of a bane.
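That “did you mean Mali?” check is simple enough to sketch. This is purely illustrative – the confusable-domain list below is a made-up example, not any mail client’s actual feature:

```python
# Sketch of a pre-send check: compare each recipient's domain against a
# short list of easily-confused lookalike endings. The pairs listed here
# are illustrative examples only.

CONFUSABLE_TLDS = {
    ".ml": ".mil",   # Mali vs. the US military
    ".cm": ".com",   # Cameroon vs. .com
    ".co": ".com",   # Colombia vs. .com
}

def check_recipient(address):
    """Return a warning string if the address ends in a lookalike TLD."""
    domain = address.rsplit("@", 1)[-1].lower()
    for lookalike, likely in CONFUSABLE_TLDS.items():
        if domain.endswith(lookalike):
            return (f"Warning: '{domain}' ends in '{lookalike}' -- "
                    f"did you mean '{likely}'?")
    return None

print(check_recipient("logistics@army.ml"))   # warns about .ml vs .mil
print(check_recipient("logistics@army.mil"))  # → None (no warning)
```

Even this crude version would have caught a decade’s worth of misdirected .MIL mail; the hard part, as the article notes, is knowing *which* users need *which* warnings.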
Image by Fernando Arcos from Pixabay
I still regularly encounter the perception that Apple computers are inherently more secure than Windows PCs. From a purely statistical standpoint, Apples are hacked less than Windows PCs, but that’s largely because there are far fewer macOS computers in the world than Windows PCs. From a purely mercenary standpoint, hackers go where the money is, so it stands to reason that Apple computers are targeted less, though iPhones still make up a majority of the mobile devices in use in the US. Fanboys on both sides will argue for the superior security architecture of their ride-or-die OS, but the fact remains that all operating systems are written by humans (for now!), and we all know humans make mistakes from time to time. Normally we focus on Windows security because Windows PCs constitute the majority of our clientele, but Apple gets the spotlight this week for a zero-day vulnerability that is being actively exploited, and when Apple attempted to patch the flaw, it broke Safari’s access to certain websites.
What this means for you
Unfortunately for everyone, the flaw definitely needs to be addressed quickly, as security researchers have found websites in the wild built specifically to exploit the weakness. Affected devices may be tricked into what’s known as “arbitrary code execution,” meaning attackers can fool your device (both computers and iOS mobile devices) into running malware, which can then lead to the device being completely compromised. To their credit, Apple acted quickly, issuing security fixes through their Rapid Security Response (RSR) updates, which (if your device is configured to install them) supposedly addressed the vulnerability – but once applied, the RSRs broke Safari’s access to websites like Zoom, Facebook and Instagram. Apple has since pulled back the RSRs, the cure being worse than the disease, and is presumably working on an updated RSR that won’t break the internet. In case you were wondering, Apple has had to patch 10 zero-day vulnerabilities so far in 2023. To be fair, that is far fewer than what has had to be patched on the Windows side – heck, the latest Microsoft update addresses 6 critical flaws this week alone! Both platforms are far from perfect when it comes to security. Don’t let the numbers lull you into a false sense of security – Mac users, just like PC users, should have proper malware protection and backups in place. As they say in the stock market, “Past performance is no guarantee of future results.”
Image by Bruno /Germany from Pixabay
Though it may surprise you to know that Microsoft isn’t the biggest company in the world in 2023 (that honor belongs to Apple this year), you can bet they can field enough lawyers to bury any litigation Joe Everyman could think to bring against them. This hasn’t stopped a New Jersey attorney from suing Microsoft because he can’t access his email, even after days of trying to get help from Microsoft’s eldritch technical-support bureaucracy. I can see some of you breaking out in a cold sweat already, imagining the nightmare your job or life would become if you couldn’t access your email. Well, you don’t have to imagine – just read the complaint if you’d like to have it outlined in bulleted, stomach-churning detail.
What this means for you
The internet has democratized many things, including easy, affordable consumer access to the technology services everyone needs to get even the most banal things done in today’s world. A Microsoft 365 mailbox can be had for as little as $5/month with a credit card and a few minutes of time. Though you may encounter a glitch or two on your way to spending your handful of dollars on a very powerful and reliable email service, any difficulty you encounter in “buying” the service (psst, you’re renting, btw) will pale in comparison to navigating the platform when something goes wrong. Here’s why: you are paying $5/month for a service that is built to scale to the world’s largest organizations, and once you punch past the paper-thin “training wheel” trappings that make the service marketable to New Jersey attorneys (and you and me), you uncover the cosmic horror of Microsoft’s technology leviathan. What sort of Faustian bargain did you get into? A necessary one, but this is why you hire someone like C2 to help you put a leash on the beast you just bought. Make no mistake – Microsoft’s technology is sometimes just as incomprehensible to us as it is to you, but instead of being paralyzed by fear when the beast rears up on its hind legs, we roll up our sleeves and walk straight into its belly: “Hello darkness, my old friend, I’ve come to talk with you again.”
Image by Colin Behrens from Pixabay
A Russian-backed ransomware gang known as “Cl0p” has put about 50 notches in its belt in the past two weeks by exploiting several vulnerabilities in a Managed File Transfer (MFT) platform called MOVEit. Though you may never have heard of MFTs or MOVEit, you are probably very familiar with Dropbox, Google Drive and OneDrive, all of which feature the ability to share files with others (i.e., managed file transfer) as part of their overall service. MOVEit is purchased by organizations that want to set up their own private file-sharing service, and one of its distinctive features is that it is premises-based, not cloud-based. Even now, many organizations believe that “rolling your own” on-premises service is more secure than putting everything in the cloud, but this batch of breaches is proving the exact opposite.
What this means for you
Fifty seems like an impressive body count, and those are only the victims we know about. According to security researchers, Cl0p may have been probing for weaknesses in MOVEit implementations as far back as 2021. The group is following the usual extortion playbook – threatening to release the stolen data unless its demands are met – though in several instances it seems to be walking a careful path to steer clear of extorting entities that might draw literal crosshairs on its back. While Cl0p seemed proud to count the US Department of Energy among its victims, it said in a statement that it would not exploit any data taken from government agencies and that such data would be erased, presumably to keep global politics (and “lettered agency” involvement) from getting in the way of profits.
The key takeaway for us smaller targets is that premises-based systems are no more secure than cloud-based systems. In this particular case, because onsite systems require active monitoring and maintenance by trained professionals to stay secure, that requirement becomes a fundamental weakness if the organization cannot maintain the premises system as well as a cloud-based (and centrally managed) platform would be maintained. Most on-premises platforms are far from the “set and forget” applications of previous decades, and any internet-facing system like an MFT requires constant policing, something most companies are ill-suited to provide or even afford.
Image courtesy of David Castillo Dominici at FreeDigitalPhotos.net
Though it is becoming increasingly difficult, I do try to find positive technology news to share with you. Given all the doom and gloom surrounding artificial intelligence lately (for good reason!), it seems rare to find a silver lining to the black cloud hovering over everything else, but like a lone ray of light piercing the foreboding dark comes news of a critical medical discovery made by a team of university researchers and an AI algorithm.
Technology should supplement, not replace, people
In the abstract of the article published to the Nature Chemical Biology journal, the researchers stated it with deceptively terse simplicity:
Here we screened ~7,500 molecules for those that inhibited the growth of A. baumannii in vitro. We trained a neural network with this growth inhibition dataset and performed in silico predictions for structurally new molecules with activity against A. baumannii. – Abstract of “Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii”
In case you aren’t a microbiologist with institutional access to Nature’s publications, you can read a nice breakdown of the paper from MIT News, but I’ll also give you the shorthand: scientists dumped a mountain of data on a computer program that was told to sift through it and look for anything that might work against a drug-resistant strain of bacteria known as A. baumannii. In two hours of analysis (after a lot of pre-programming and data gathering), it found something that promises to be highly effective against the deadly bacteria.
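For the curious, that train-then-screen workflow can be caricatured in a few lines. Everything below is a deliberately toy stand-in – made-up 4-bit “fingerprints” and a one-neuron perceptron instead of real chemical descriptors and a deep network – but the shape of the process is the same: learn from labeled molecules, then score new ones “in silico”:

```python
# Toy sketch of the paper's workflow: train on molecules labeled
# "inhibits growth" (1) or not (0), then score new candidates.
# Features and labels are invented; real work uses chemical
# fingerprints and deep neural networks.

def train(dataset, epochs=20, lr=0.1):
    """Learn perceptron weights from (features, inhibits_growth) pairs."""
    w = [0.0] * len(dataset[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in dataset:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy screening results: feature 0 is (pretend) the one that matters.
screened = [([1, 0, 1, 0], 1), ([1, 1, 0, 0], 1),
            ([0, 0, 1, 1], 0), ([0, 1, 0, 1], 0)]
model = train(screened)

# "In silico" predictions for structurally new candidates:
candidates = [[1, 1, 1, 0], [0, 0, 0, 1]]
print([predict(model, c) for c in candidates])  # → [1, 0]
```

The real system did this across thousands of dimensions and ~7,500 screened molecules, which is why it could surface a promising compound in two hours – a search no human team could perform by hand.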
The most important distinction between the above and the almost carny-like uses of ChatGPT that have been making the news lately is simply this: where we seem to stumble is when we attempt to substitute technology for humans instead of using it to amplify and augment what makes us human. Among the many protesting the alarming rise of AI are the people corporations seem most eager to replace – the writers, artists and musicians. And not because AI does a better job – quantity and speed do not equal quality – but because at least some part of our society seems willing to accept slightly off-kilter, lifeless and sometimes completely misinformed AI-generated content because it’s cheap or free. Are we guilty of readily grabbing what we can get, even if it’s not nearly as good as the “real thing”? At what point would it be acceptable to get medical counseling from an AI – perhaps when the real thing is only available to those who can afford it?
Image by bamenny from Pixabay