Part of an occasional series of articles that discuss what I call “The Elephant on the Internet.”
One of the things becoming readily apparent in the younger generations is a growing disaffection with established religions. According to a 2022 study by the Survey Center on American Life, religious affiliation has been steadily declining in America for the past 30 years, roughly the same period in which Internet access became reliably and affordably available to the masses. Obviously, the Internet isn't the only thing that has risen to prominence since the 1990s, but you'd be hard pressed to name anything else that has grown to rival the cultural importance of organized religion the way the Internet has, and for each successive generation, it's overshadowing last century's opiate without breaking a sweat.
Get to the point, Woo!
Unfortunately for that famous line about the "opiate of the masses" and the idea it represents, its original author is not viewed fondly by Americans, who, perhaps more than at any point in the previous 60 years, are again struggling through an identity crisis fueled and stoked by religious extremism and class conflict, core elements of our fabled Cold War enemy: Marxism. Before the Internet, TV was the stand-in for religion, but the concept remains applicable regardless of the actual opiate: people will always seek something to distract them from the struggles of life, its various injustices, and the seeming indifference of the cosmos toward our personal trials. In case you didn't notice, television has essentially been assimilated by the Internet, and our local church is one of many I know of that are adopting Internet platforms like streaming and social media in a bid to fill their pews and remain relevant to generations already firmly hooked on the Internet.
Here's the scary part: unlike television (and maybe more like religion, pre-Industrial Revolution), the Internet is not only our opiate from an entertainment and distraction standpoint, it's also now our daily bread: we have, wittingly or not, tied everything in modern life to the Internet. Some of us have bound our very livelihoods to it, and many do not know how to live otherwise. I'm sure the thought of religion suddenly disappearing isn't as breathtaking as it might have been 100 years ago, or the thought of a world without TV 50 years ago, but can you imagine what would happen if the Internet stopped working tomorrow? Every time the internet goes down (which seems to be frequent these days), a small part of me asks, "What if it doesn't come back?" or worse, "What if it comes back for some and not others?" That latter question is one we may need to answer sooner rather than later. An ever-shrinking number of companies and individuals controls nearly every corner of the Internet while religiously making sure we stay distracted, and I would be hard pressed to say whether any of them operate with any recognizable ethical governance or compassion.
Image courtesy of TAW4 at FreeDigitalPhotos.net
The news is aflutter with artificial intelligence bots doing things like writing job descriptions and college essays, passing bar exams, and handling various other tasks that we humans would clearly rather have someone else do, especially if that someone else doesn't need to be paid, or at least not paid a living wage. Both Microsoft and Google have announced their intentions to include AI in their business platforms, and while some of the things people have had AI do are pretty nifty, we also seem to be conveniently forgetting, or at least disregarding, the consequences of letting technology do everything.
“I’ll be back.”
Terminator is probably an extreme example of AI gone horribly awry, but we can already see faint echoes of a future where we become complacent about machines replacing humans across all aspects of our lives. Sure, it's nice that technology can handle the dangerous, dirty, and banal tasks, and that it can augment our capabilities where our physical bodies limit us, as in space exploration, virology, or living with disabilities, but once it starts replacing things we should know how to do ourselves (even if not as well as a machine), we are placing a dangerous amount of trust in something that can, and will, fail. The most common manifestation of this is how most of us handle password management. We rely on technology to remember and automatically enter passwords for everything, including the most critical services such as email, banking apps, and even the password management platform itself, and as a result, we don't remember any of them, or even realize a password is required at all.
As a simple test of how vulnerable you might be to this over-dependency, imagine being sat down in front of a brand-new phone or computer: would you know how to get access to your email, your bank account, or even the place where your passwords are stored? If merely imagining this scenario triggers your fight-or-flight response, you might be relying on technology too blindly. There is a fine line between allowing technology to augment our capabilities as humans and letting it replace basic skills everyone should have in a rapidly evolving world. No AI spam filter in the world will beat well-trained common sense and skepticism. Using technology and our humanity together is the difference between utopia and dystopia.
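To make the dependency concrete, here's a minimal sketch, in Python, of how a typical password manager works under the hood. This is an illustration using the widely available cryptography package, not any particular vendor's design: every stored credential is encrypted with a key derived from a single master password, and nothing else will unlock the vault.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def vault_key(master_password: bytes, salt: bytes) -> bytes:
    # Stretch the master password into an encryption key. Every credential
    # in the vault ultimately depends on this one secret.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))


salt = os.urandom(16)
vault = Fernet(vault_key(b"correct horse battery staple", salt))

token = vault.encrypt(b"my-banking-password")  # what the manager actually stores
print(vault.decrypt(token))                    # only recoverable with the master secret
```

The point of the sketch: the convenience is real, but so is the single point of failure. If you can't produce that master password on a brand-new device, everything behind it is gone.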
Image courtesy of Geerati at FreeDigitalPhotos.net
I'd hazard a guess that this could be stated more broadly, that people worldwide don't understand how their data is being used by companies and governments, but the basis for this generalization is a US study published by the Annenberg School for Communication entitled "Americans Can't Consent to Companies' Use of Their Data." That's a bold statement about a country where a large part of the economy is derived from monetizing digital ones and zeroes, but the subtitle tells the rest of the story: "They Admit They Don't Understand It, Say They're Helpless To Control It, and Believe They're Harmed When Firms Use Their Data – Making What Companies Do Illegitimate."
Doesn’t exactly roll off the tongue
The survey asked 2,000 Americans 17 true-false questions about how companies gather and use data for digital marketing purposes. If participants were graded on the traditional academic scale, most of the class failed, and only one person out of the 2,000 earned an "A" (a quick bit of arithmetic after the excerpt below shows what that took). An example of the type of knowledge tested:
FACT: The federal Health Insurance Portability and Accountability Act (HIPAA) does not stop apps that provide information about health – such as exercise and fertility apps – from selling data collected about the app users to marketers. 82% of Americans don't know this; 45% admit outright that they don't know.
“Americans Can’t Consent to Companies’ Use of Their Data: They Admit They Don’t Understand It, Say They’re Helpless To Control It, and Believe They’re Harmed When Firms Use Their Data – Making What Companies Do Illegitimate.” Turow, Lelkes, Draper, Waldman, 2023.
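To put that lone "A" in perspective, here's the quick arithmetic, assuming the traditional 90/80/70/60 grading scale:

```python
import math

QUESTIONS = 17
cutoffs = {"A": 0.90, "B": 0.80, "C": 0.70, "D": 0.60}

for grade, fraction in cutoffs.items():
    # Minimum number of correct answers needed for each letter grade
    print(f"{grade}: at least {math.ceil(fraction * QUESTIONS)} of {QUESTIONS} correct")

# A: 16, B: 14, C: 12, D: 11. So the one "A" meant 16 or 17 right answers,
# out of 2,000 people surveyed.
```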
You should read this paper (or at least the summary), but I understand if you don't. Even though it reads more easily than your typical academic paper, the topic is uncomfortable for anyone with an inkling of what's at stake, and most of us have already resigned ourselves to it because we feel powerless to do otherwise. And this is their point: this paper wasn't written merely as an academic exercise. The authors are essentially claiming that because very few of us can understand the variety of ways and the extent to which companies collect and use our data, there is no possible way we can give genuine informed consent for them to do so. But unless there are laws that protect us in this regard, American companies can do as they please, and they will, because their responsibility is not to people but to shareholders, and in the current market, minding everyone's privacy is not nearly as profitable as ignoring it.
This report now provides evidence that notice-and-consent may be beyond repair—and could even be harmful to individuals and society. Companies may argue they offer ways for people to stop such tracking. But as we have seen, a great percentage of the US population has no understanding of how the basics of the commercial internet work. Expecting Americans to learn how to continually keep track of how and when to opt out, opt in, and expunge their data is folly.
ibid, Page 18 (emphasis mine)
As is often the case with academic papers, the authors rarely take on the monumental task of attempting to solve the issue, but they at least acknowledge that the only possible way to even begin is for our lawmakers to confront this enormous elephant on the internet.
We hope the findings of this study will further encourage all policymakers to flip the script so that the burden of protection from commercial surveillance is not mostly on us. The social goal must be to move us away from the emptiness of consent.
ibid, Page 19 (emphasis mine)
Perhaps a letter to your elected representatives asking them if they’ve read this article and have any interest in doing something about it?
Image courtesy of TAW4 at FreeDigitalPhotos.net
In the years leading up to the Internet's domination of the world, we used to make fun of organizations and industries that seemed to be dragging their feet on modernizing: the Navy's old DOS-based, air-gapped systems seemed so antiquated (even with the movie WarGames sounding very prescient, if simplistic, alarms), local mom-and-pops were using mechanical registers, and hospitals still ran on clipboards and paper charts. Now that everything has a network connection and is sending and receiving data via the internet, it would seem the Monkey's Paw curled up all fingers except one, and that one is flipping us "the bird." This latest facepalm comes in the form of devices built by, or containing components built by, Siemens that use an operating system known as Nucleus, an OS written for devices in industries that require stringent safety and security controls, such as medicine, automotive, and aviation. Clearly that would mean the OS must be safer than the usual Swiss cheese we see from the likes of Windows, right? Researchers have found 13 vulnerabilities in the network stack of Nucleus, an OS used in an estimated 3 billion devices.
What this means for you
I won't go into the gory details of the vulnerabilities, as that would only be entertaining for security geeks, and I know they aren't reading my blog for that sort of fun. Suffice it to say, as far as the researchers know, these vulnerabilities haven't been exploited in the wild yet, and Siemens has reportedly addressed the holes with updates. So why am I spending precious minutes telling you about something that (a) you have no direct control over and (b) might already be taken care of? Precisely because of those things. It's convenient and comfortable to go about our daily lives while ignoring just how much of our surroundings are managed, monitored, and controlled by devices we have zero understanding of, let alone which master they report to.
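For the geeks who are reading anyway, here's a toy illustration of the general class of bug that plagues network stacks: trusting a length field that an attacker controls. To be clear, this is a hypothetical sketch in Python, not the actual Nucleus flaw, and embedded stacks like it are written in C, where the consequences are far worse.

```python
import struct

def parse_naive(packet: bytes) -> bytes:
    # Toy protocol: 4-byte big-endian length header, then the payload.
    (claimed_len,) = struct.unpack_from(">I", packet, 0)
    buffer = bytearray(claimed_len)        # allocates whatever the header claims!
    buffer[:len(packet) - 4] = packet[4:]  # attacker controls claimed_len, not reality
    return bytes(buffer)

def parse_hardened(packet: bytes, max_len: int = 1500) -> bytes:
    # Never trust the wire: check the claim against reality and a sane cap.
    (claimed_len,) = struct.unpack_from(">I", packet, 0)
    if claimed_len != len(packet) - 4 or claimed_len > max_len:
        raise ValueError("length header does not match payload")
    return packet[4:]

# A single malicious header claiming a ~4 GB payload:
evil = struct.pack(">I", 0xFFFFFFF0) + b"hi"
# parse_naive(evil) would try to allocate ~4 GB of memory (a denial of service);
# parse_hardened(evil) simply raises. In C, the same mistake becomes an
# out-of-bounds read or write instead of "just" a crash.
```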
We can be sure of two things in this crazy timeline: if a device can gather and report data, it will, because data equals profit; and if the device was built, programmed, or configured by a human, you can be certain it is less than perfect. Most of the time, we can deal with imperfection. In fact, we are surrounded by imperfect things that are suitable, usable, and safe, and most of us understand that perfection is an ideal to strive for, not something objectively attainable. Unfortunately for internet security, small imperfections, even rare or obscure ones, can lead to massive problems. At the moment, it feels like security breaches and vulnerabilities are everywhere, when in fact, much like air disasters compared to safe flights, they make up only a tiny fraction of the vast number of digital transactions occurring every single second. And, like plane crashes, though they may be statistically rare (for the moment), they can be catastrophic when they happen. Engineers strive to reduce the chances that a plane will crash or that an operating system will be vulnerable to attack, but in the end, both are subject to human error. No technology is infallible.
It would be paralyzing to try to anticipate everything that could go wrong; that's practically the textbook definition of anxiety. However, I think it's useful to carefully moderate your expectations when it comes to relying on technology to protect you or care for you perfectly. Don't take your technology and security for granted, and you will be less surprised and better prepared for when it shows its human side.
Image by Bruno /Germany from Pixabay
A little while back, I wrote about a very disturbing trend from 2017 in which something was gaming YouTube's content algorithms with what appeared to be AI-generated content and metatags. If the last part of that sentence made little sense, here's the concept put another way: someone was (and probably still is) using computer algorithms to build and publish content based purely on what would get to the top of YouTube's search results. "Great," I can hear you say, "How do I hire these guys?" That's the thing: a lot of the content appeared to be completely artificially generated and automatically published. Basically, someone built a robot, turned it loose on YouTube, and it actually worked.
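To demystify the robot a little, the core trick is embarrassingly simple. Here's a hypothetical sketch (the keyword pools are invented for illustration) of how a content farm can mass-produce titles and tags aimed squarely at a recommendation algorithm rather than at any human audience:

```python
import itertools
import random

# Hypothetical pools of terms that rank well in kids' video searches
CHARACTERS = ["Spiderman", "Elsa", "Peppa Pig", "Superman"]
TRENDING = ["surprise eggs", "learn colors", "finger family", "nursery rhymes"]

def generate_listings(count: int = 5):
    """Mash trending keywords together into titles and tag lists."""
    combos = list(itertools.product(CHARACTERS, TRENDING))
    random.shuffle(combos)
    for character, keyword in combos[:count]:
        title = f"{character} {keyword} | {keyword.title()} with {character} for Kids!"
        tags = [character.lower(), keyword, "kids", "funny", "educational"]
        yield title, tags

for title, tags in generate_listings():
    print(title, tags)
```

Pair a generator like this with templated animation and automated uploads, and you have the "robot" in question: no human ever needs to watch, or care about, what it produces.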
Now it’s happening on Spotify
Chances are you're a Spotify user: according to the company's Q1 2020 report, it has 286 million active users, 130 million of them premium subscribers. One of Spotify's primary draws is creating your own playlists, whether built around a genre, an artist, or a mood. You probably started with a list of artists, songs, and albums that seeded your first playlists, but the other wildly popular feature is the ability to "discover" new music by searching Spotify's vast collection and having it generate playlists for you, as well as seeing what others, especially your friends, are listening to via their shared playlists. As you have probably guessed by now, Spotify drives this discovery process with search and recommendation algorithms, which are, of course, now being gamed just like YouTube's were back in 2017. Any summary I could put together would not do justice to just how strange the mushroom growing in Spotify's garden is, so instead I recommend reading the article if you are at all curious about why Spotify makes certain "odd" choices when recommending music to you. (Note: Medium is a subscription-based website that limits story views.)
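Spotify's actual recommendation system is vastly more sophisticated, but a toy content-based recommender shows why the gaming works at all. If tracks are represented as feature vectors and recommendations flow from similarity, then anything engineered to sit close to popular material gets pulled into playlists. The numbers below are invented purely for illustration:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two track feature vectors (e.g. tempo, energy, valence)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

catalog = {
    "track you love":  np.array([0.72, 0.80, 0.65]),
    "honest newcomer": np.array([0.30, 0.55, 0.40]),
    "algorithm bait":  np.array([0.71, 0.79, 0.66]),  # engineered to hug the hit
}

target = catalog["track you love"]
for name, features in sorted(catalog.items(),
                             key=lambda kv: -cosine(target, kv[1])):
    print(f"{name}: {cosine(target, features):.3f}")
```

The "algorithm bait" track wins the recommendation slot not because anyone loves it, but because its numbers were tuned to land next to something people do love.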
As a wanna-be musician and someone who deeply enjoys music, I'm not sure how I feel about the path music is taking on Spotify. On the one hand, I find it heartening that the platform allows a wider swath of musicians not only to have their music heard by larger audiences but to make some money from it (as long as they know how to leverage the Spotify algorithms). On the other hand, audiences are losing track of the artist, sacrificed to search engine optimization, which keeps the artist from building a following. I'm pretty sure most musicians don't create purely in service of profit, but for the enjoyment of others; being able to make a living is (usually) a happy byproduct of that. But if the only objective is profit, I'd like to believe that particular product won't endure... as long as Spotify doesn't completely commodify musical tastes.
In case you haven't already been scared silly by the concept, "deep fakes" are a relatively new class of videos in which the faces of the subjects, usually in short clips from movies or talk shows featuring easily recognizable actors, are replaced with different faces. While skilled video and special effects editors have been doing this for decades, the effect was usually obvious, and it took an expensive effects studio to produce the result. Now we have YouTubers producing clips like the one below, which is amazing and terrifying at the same time:
What this means for you
The amazing part is easy to see (or not see). At some point in the video, I forget that I'm looking at Bill Hader and can only see Arnold's face, which, coupled with Hader's excellent impression of the Governator, makes it look AND sound like Schwarzenegger is sitting with Conan instead of Hader. The terrifying part? This was done by one person using open-source software, with no special effects studio team required.
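For the curious, the architecture behind the original open-source face-swap tools is surprisingly compact. The sketch below is a bare-bones PyTorch toy, nowhere near production quality, but it shows the core idea: one shared encoder learns a common representation of faces, each identity gets its own decoder, and the swap happens when you route face A through decoder B.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared across both identities: learns a generic 'face code.'"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One per identity: renders a face code back as that person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_hader, decoder_arnold = Decoder(), Decoder()

# Train each decoder to reconstruct its own person's faces, then swap:
hader_frame = torch.rand(1, 3, 64, 64)          # stand-in for a real video frame
swapped = decoder_arnold(encoder(hader_frame))  # Hader's expression, Arnold's face
```

Train that on a few thousand frames of each face, run every frame of a clip through the "wrong" decoder, and you have the video above. That's the whole terrifying trick.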
If that isn't enough to put a chill in your bones, here are a few recent deep fake news stories that should wake you right up:
- The Democratic National Committee produced a deep fake video of its own chair, Tom Perez, for this year's DEF CON (one of the biggest hacker conventions in the world) to highlight the dangers deep fakes pose to the 2020 elections.
- A Chinese app maker just released a free app on the Chinese iOS App Store that can use a single picture to replace actors' faces in a collection of famous movie clips.
- A scammer used deep fake audio software to impersonate the voice of a UK energy firm's CEO, convincingly enough to trick an employee into transferring over $200k to an unauthorized bank account, from which the money was quickly moved and laundered through multiple international accounts.
There’s that elephant again, though at least this time, there are a lot of people talking about it. Technology is again racing ahead of ethics, morality and law, and shows no signs of stopping. Will it take money or elections being stolen before anything is done about it? Have we hit a point where society will always be trailing technology, picking up the broken pieces and taping together integrity as best we can?
Image Courtesy of Stuart Miles at FreeDigitalPhotos.net
One would think nothing could be more awful than the violent mass murder that happened last Friday, until you learn that the shooter live-streamed his monstrous rampage on Facebook. And surely nothing could be more depraved than that, right? But consider this: even after Facebook took the live-stream down, over the course of the next 24 hours, literally tens of thousands of different versions kept appearing and reappearing on various video and social media sites, including YouTube and Twitter, faster than they could be removed. While it's highly likely that many of the repostings were performed by bots designed to leverage popular videos for ad traffic, there are most assuredly humans behind at least some of that activity, demonstrating two very sobering and discouraging trends.
This is the Elephant on the Internet – the one that we can’t keep ignoring
If any good is to come from this horrific event, it's that a burning spotlight is now fixed (for the moment) on social media's utter failure to control the spread of the killer's hateful and atrocious ideology. Despite the platforms' efforts, versions of the video keep reappearing, edited and reformatted to avoid detection by the algorithms being frantically updated to stop its spread. At one point during the first 24 hours after the shooting, at least one version of the video was being uploaded every second, and Facebook removed 1.5 million copies on the Saturday following the event. And here's what is actually even more depressing to consider: a large portion of this activity is happening not because bots are trying to spread hate; the video is being reposted because people are watching it. Let that sink in. Regardless of the posters' intent, the blame falls on our collective shoulders. Why are people watching this? What is wrong with society that this is not immediately repugnant? Will this be the crucible for social media, or will we let it slide yet again? Pandora's box is truly open, but perhaps it has been ever since social media first appeared on the internet, decades ago.
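Why can't the platforms simply block a video they've already identified? Because the naive approach is exact hash matching, and the tiniest edit defeats it. Here's a minimal sketch of the problem, using nothing but Python's standard library:

```python
import hashlib

original = b"...bytes of the original video file..."  # stand-in for real video data
reencoded = original + b"\x00"                        # one trivial byte of difference

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(reencoded).hexdigest())
# The two digests share nothing in common, so an exact-match
# blocklist of known-bad hashes misses the edited copy entirely.
```

Re-encoding, cropping, watermarking, or mirroring a clip changes every byte of the file, which is why platforms are forced toward fuzzier "perceptual" fingerprinting, and why thousands of lightly edited copies kept slipping through while the algorithms were being retrained.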
Image by Thomas Ulrich from Pixabay
Surprisingly, most people don't realize that the popular idiom "the Devil is in the details" is actually derived from the more encouraging phrase "God is in the detail," i.e., pay attention to the small things, because they are important. Both adages are more relevant now than ever, particularly because the average human now agrees, daily, to privacy policies that, if they actually read the fine print, they would probably not agree to at all. Such is the case with the numerous policies you are "accepting" when you install apps on your smartphone. What policy acceptance? The one hidden behind a small pop-up saying your data will be shared with other parties to improve your experience, or some other vaguely worded reminder that you are trading data for the free (or sometimes paid) use of an app.
What this means for you
"Yeah, yeah, I know, they're watching my every move," my clients tell me. "I've got nothing to hide." Or, "It's a small price to pay for this wonderful app/service/game." Except most aren't aware of how much data is being tracked, or what it can be used for beyond advertising. If you'd like a small taste of how this data is assembled and the level of detail it can offer into everyone's daily routines, read this article from the NY Times, "Your Apps Know Where You Were Last Night, and They're Not Keeping It Secret." It's a very easy read and has some nice interactive visual aids to bring the point home. Despite its approachable tone, the article's content should be unsettling for everyone. For example, when asked why a prompt requesting access to precise location data (and permission to share it with 16 companies) was presented merely as a way to "recommend local teams and players that are relevant to you," a spokesperson for the app responded (emphasis mine):
…the language in the prompt was intended only as a “quick introduction to certain key product features” and that the full uses of the data were described in the app’s privacy policy.
Let's be honest here: I'm in this business up to my neck, and even I don't read those privacy policies, but only because I know exactly what I'm trading for the use of a "free" app. You have a much more relatable excuse: "Ain't nobody got time for dat." You are not wrong, but in the pursuit of better deals, faster commutes, cheaper gas, or just weather updates, we have traded away a precious commodity: privacy. And lest you forget, privacy is not about hiding secrets; it's about not wanting to share everything about your life with complete strangers who view you only as a profit center. This is yet another glimpse of the elephant on the internet around which everyone is still carefully tip-toeing. Make sure you are paying attention!
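If you're wondering what "precise coordinate data" shared with a dozen-plus companies actually looks like in flight, here's a hypothetical record (every field below is invented for illustration) of the kind the NY Times piece describes:

```python
import json
import time

# A single, hypothetical location "ping" of the sort sold to data brokers.
ping = {
    "ad_id": "38f2-hypothetical-device-id",  # "anonymous," but stable for months
    "lat": 40.712776,                        # precise enough to pick out a house
    "lon": -74.005974,
    "timestamp": int(time.time()),
    "source_app": "free_weather_app",
}
print(json.dumps(ping))

# One ping is harmless. Thousands per day, parked at one address every night
# and another every workday, adds up to a name, a home, and an employer.
```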
Image courtesy of TAW4 at FreeDigitalPhotos.net
I'd like to say I'm busy watching the mid-term results come in, but actually, I'm too tied up reading all the reports of voting machine failures. Despite plenty of media attention on the matter months ago, it's clear little was done, and the result has been delays, confusion, doubt, and most certainly some disenfranchisement across the process in numerous states:
- Voting Machine Meltdowns Are Normal—That’s the Problem – Wired
- Voting Machine Manual Instructed Election Officials to Use Weak Passwords – Motherboard/Vice
- Voting Machine Hell, 2018: A Running List of Election Glitches, Malfunctions, and Screwups – Gizmodo
- Why voting machines malfunctioned on Election Day – Vox
- Voting machine errors already roil Texas and Georgia races – Politico
- Voting machines can be hacked in two minutes, expert warns – Fox News
We’re talking about it, but it’s still being ignored
Sadly, Election Day in the US once again illustrates my point about technology and humans: we are not perfect, and neither are the machines we build and use. Despite this reality being clearly demonstrated in the headlines above, we have the hubris to believe our technology is somehow immune to our own frailties. In many ways, technology allows us to overcome limitations and achieve spectacular things, but it also amplifies our shortcomings, and, as we've seen numerous times elsewhere, it enables the less virtuous to exploit those shortcomings.
To change things, we need to expect better from our leaders: business, political, and spiritual. They need to understand critical technologies, or admit when they do not and hire experts, so they can help shape and implement policy that advances humanity as a whole and not just financial interests. It's OK to admit to not understanding technology, but if it's an important part of your job or responsibilities, that continued lack of understanding could cause irreparable harm. Change begins with you: putting in the effort to understand a technology also grants the benefit of being able to spot others who do not, an advantage that is handy in both business and politics.