Despite the fact that artificial intelligence seems to be creeping into almost every aspect of our lives, we're still a ways away from AI being able to understand that what we intend to do with our various bits of technology isn't always what we end up doing. Perhaps the most familiar example of this is the infamous autocorrect feature on your smartphone. Depending on your degree of finger-fatness, spelling acumen and (let's be real) patience for texting in general, the autocorrect function of your phone can swing to the extremes within a single sentence, often with hilarious results if you aren't paying attention before hitting send. Apparently, something similar has been happening on a wide scale for over 10 years now: emails mistakenly sent to addresses in Mali (the African country whose domain is ".ML") instead of the intended military mailboxes that end in ".MIL".
What this means for you
Per a US Department of Defense spokesperson, the department is well aware of the problem and has addressed it for military emailers by blocking delivery to the .ML domain. Problem solved, right? Well, at least for the US military, but not for the rest of the world, which is well outside of their control, and apparently their immediate concern. Ever since the days of "Clippy," software developers have been making various attempts to help us be better at technology. Their hearts are in the right place, but each attempt falls short. Right now, as I type this blog, WordPress is suggesting various words and corrections that variously remind me of my poor typing habits, my sloppy word choices and the overwhelming fact that my grade school English teachers were better at keeping 30 kids subdued than at impressing upon them the importance of good grammar. In the end, it is helping me write a better (at least grammatically) article, but only because I'm not blindly accepting every suggestion it provides.
The key problem with today's "active assistant" systems is that they still rely on humans to provide data, and as we all know, humans are fallible and prone to mischief, especially when it comes to AI. This was back in 2016, mind you, before the arrival of concepts like "post-truth" and "alternative facts," so if anything, the data we've been amassing over the past six years is probably the most unreliable it's been since the advent of the Internet. And here's the thing: let's say you're a military contractor working through an email service administered by someone other than the Department of Defense. You're an international company, regularly dealing with people all around the world. A third grader could probably spot the difference between .MIL and .ML. But if you're Outlook, just trying to send a message because your human pushed "send," then unless you've been trained to know that your human is a contractor working with the US military and not the Malian military, that email is going to land in the wrong mailbox because of a single missed keystroke. The day AI gets good enough to spot the problem and ask, "Hey, did you mean to address this email to the African country of Mali?" it might be a boon instead of a bane.
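You don't even need full-blown AI to catch this particular class of mistake. A mail client could simply compare each outgoing recipient's domain against a list of domains the sender actually trusts and flag near misses. Here's a minimal sketch of that idea; the trusted-domain list, function names, and one-character threshold are my own illustrative assumptions, not anything Outlook or the DoD actually does.

```python
# Sketch: flag outgoing recipients whose domain is a near-miss of a
# trusted domain (e.g. "army.ml" vs "army.mil"). The trusted set and
# the distance threshold below are hypothetical choices.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suspicious_recipients(addresses, trusted_domains, max_distance=1):
    """Return (address, likely_intended_domain) pairs that look like typos."""
    flagged = []
    for addr in addresses:
        domain = addr.rsplit("@", 1)[-1].lower()
        if domain in trusted_domains:
            continue  # exact match with a trusted domain: nothing to flag
        for trusted in trusted_domains:
            if edit_distance(domain, trusted) <= max_distance:
                flagged.append((addr, trusted))
                break
    return flagged

TRUSTED = {"army.mil", "example.com"}
print(suspicious_recipients(["ops@army.ml", "bob@example.com"], TRUSTED))
# → [('ops@army.ml', 'army.mil')]
```

A real client would prompt the user ("did you mean ops@army.mil?") rather than silently rewriting the address, for exactly the reason discussed above: the assistant should surface the likely mistake, not make the decision for you.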
Image by Fernando Arcos from Pixabay