It seems rare to find feel-good news about artificial intelligence lately, so when I spot it, I like to share it, especially when it has to compete for attention with discouraging stories of AI enabling the spread of hate and misinformation. After a young woman in Rhode Island lost her voice following brain tumor surgery, a team of doctors and ChatGPT-maker OpenAI used a short voice recording to create a specialized app that lets her speak again with an accurate recreation of her own voice.
Stop there if you want to keep feeling good.
Still reading? OK, I warned you. I won't go into details because I don't want to give them any more publicity, but plenty of other AI startups have no interest in limiting their platforms to medical applications where everyone involved gives ongoing consent for the use of their voice. While I'm sure they claim their software can only be used in fully consensual situations, we've already seen the widespread use of deepfakes for political propaganda, extortion, and reputation destruction.
Given AI's potential, many companies may have started out with stars in their eyes over all the possibilities the technology could enable, but once buyers with suitcases of money started showing up, ethics seems to have been relegated to a secondary (or lower) concern, if it was ever a constraint in the first place. As always, money and politics complicate matters, and we humans are not known for approaching things cautiously, especially when being first to the prize means establishing dominance. It's heartening to know that at least some folks are intent on developing genuinely helpful applications of AI, even if their efforts will most assuredly be monetized in the end. Our hope, as with 3-D printing, is that the technology becomes so widespread that it levels the playing field for (almost) everyone.
Image by bamenny from Pixabay