It seems rare to find feel-good news about artificial intelligence lately, so when I spot it, I like to share it, especially when it has to compete for attention with discouraging stories of AI technology enabling the spread of hate and misinformation. After a young woman in Rhode Island lost her voice following surgery to remove a brain tumor, a team of doctors working with ChatGPT-maker OpenAI used a short voice recording to create a specialized app that allows her to speak again with an accurate recreation of her own voice.
Stop there if you want to keep feeling good.
Still reading? OK, I warned you. While I won’t go into details because I don’t want to give them any more publicity, there are plenty of other AI startups that aren’t limiting their platforms to medical applications where all parties provide ongoing consent for the use of their voices. While I’m sure they claim their software can only be used in fully consensual situations, we’ve already seen the widespread use of deepfakes for political propaganda, extortion and reputation destruction.
Given AI’s potential, many companies may have started out with stars in their eyes over all the imagined possibilities the technology could enable, but once buyers with suitcases of money started showing up, ethics seem to have been relegated to a secondary (or lower) concern, if they were ever a constraint in the first place. As always, money and politics complicate matters, and we humans are not known for approaching things cautiously, especially when being first to the prize means establishing dominance. It’s heartening to know that at least some folks are intent on developing genuinely helpful applications of AI technology, even if, in the end, their efforts will most assuredly be monetized. Our hope here, as with 3-D printing, is that the technology becomes so widespread that it levels the playing field for (most) everyone.
Image by bamenny from Pixabay
In 2019 I wrote about the arrival of deepfakes and posited that it might take a stolen election before anyone in this country took them seriously. Welcome to 2024, where someone engineered a robocall in New Hampshire designed to suppress the vote in that state’s January 23rd primary. The call featured what appears to be an AI-generated clone of President Biden’s voice telling recipients that their votes mattered more in November than in that day’s primary. To put a nice ironic cherry on top, the robocallers appear to have spoofed a phone number belonging to a Democratic PAC that supports Biden’s efforts in New Hampshire. Here is the actual release from the NH Department of Justice website announcing the official investigation, in case you are skeptical of the above report’s veracity.
What this means for you
I imagine that regardless of where you sit on the political spectrum, this presents a very scary future in which we cannot trust our eyes or ears or practically anything on the internet, at a time when truth and objective reasoning are crucial. The technology to do the above is readily available and accessible, and it seems a small but influential number of us cannot be trusted to act responsibly with powerful technology. If you are thinking, “well, let them duke it out in their political battles over there, I don’t need to worry about AI fakes affecting me,” let me spin a “fanciful” scenario for you to consider. Let’s say a disgruntled ex-employee looking to strike back at you or your company decides to use this same technology to fake a harassing phone call from someone in company leadership to someone else in your organization. Do I even have to tell you that this service is likely already on offer in questionable corners of the internet? What can you do?
Make your voice heard in the upcoming elections by voting for leaders that represent your values (which are hopefully based on lifting people up instead of pushing them down). How do you know who that might be? Time to step up and ask directly. Don’t rely on third parties to put words in their mouths. It’s time for direct accountability, for you, me and them.
Register to vote. Get out and vote.
Image courtesy of Stuart Miles at FreeDigitalPhotos.net
I know some of you are Trekkies, and even if you aren’t a fan, you’ve more than likely heard the phrase, “You will be assimilated. Resistance is futile,” chanted by Star Trek’s hive-mind aliens, the Borg. Though they pale in comparison to some of the movies and series’ most iconic nemeses like Khan and the omnipotent Q, their constant drive to absorb beings and technology to improve the collective is proving to be hauntingly prescient when compared to certain modern-day companies seemingly bent on assimilating the internet to feed the AI beast.
“I am the beginning, the end, the one who is many. I am the Borg.”
When the Borg first appeared on Star Trek in 1989, the revulsion at their “otherness” stemmed from our culture’s inherent dislike of seeing individuality and freedom made subservient to a collective will. While AI was not new to science fiction at the time – it had already become infamous decades earlier in the sci-fi classic 2001: A Space Odyssey – it was viewed as something perhaps possible in the distant future. Luckily, we got Y2K instead of HAL when the new millennium rolled around, but now, just 20-ish years later, we face the reality of web-crawling bots hoovering up everything on the internet to fuel “large language model” AI platforms. It’s hard not to draw comparisons to the Borg. Human content creators are already resorting to legal measures against various companies for “assimilating” their original work into AI-generated copycat products sold on platforms like Amazon (a company often compared to the Borg), appearing in YouTube videos (another very Borg-like company), or turning up in sound-alike songs on Spotify.
“We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us.”
Star Trek: First Contact (1996)
Image by PIRO from Pixabay
The FBI held a press conference last week to confirm what we figured had been a thing from the moment open-source AI projects started surfacing: threat actors are using artificial intelligence to write malware, build ransomware websites and put more teeth into their phishing campaigns. And as if we needed more nightmare fuel, the FBI also shared this little nugget: terrorist groups are using AI to research deadlier “projects,” such as more potent chemical attacks.
If you can dream it, you can build it.
Unfortunately for us, dreams aren’t limited to those of us who are just trying to make our way through life without hurting anyone while having some fun along the way. Criminals aren’t hampered by ethics or compassion, and neither are AIs, even when their programmers try to build in safeguards. As I’ve always maintained, anything built by humans will be subject to our flaws, and I’m not willing to trust that any AI that becomes self-aware will be able to differentiate between good and evil, given the amount of garbage we have piled onto the internet. At this point, unless you happen to be a multi-billionaire with ethics and a hotline to folks in power, the best you can do is let your congress-critter know that we should be pumping the brakes on this runaway AI truck. While the established technology titans have made some relatively feeble attempts to put together something akin to a digital watermark that would help the rest of the world identify AI-created content, there are probably hundreds of throne-contenders willing to ignore the rules for a chance at the top, humanity be damned, and you can bet many of them already have their hands in the pockets of any government powerful enough to even try regulating this technology.
Am I saying it’s time to start looking for bunker-friendly real estate in an underdeveloped country with robot-unfriendly terrain? Not yet, but would we confidently know when that moment has arrived? Maybe we’ve already crossed that threshold. Most of us can only cross our fingers and hope the future looks more like Star Trek and nothing like Terminator.
Image Courtesy of Stuart Miles at FreeDigitalPhotos.net