Any day we can take a purveyor of child pornography off the streets is a good day in my book. In this case, we can thank Google for discovering a Texas man sending images of child sex abuse through his Gmail account. As you might have guessed, a search algorithm rather than a human spotted the transgression and sent an alert to the National Center for Missing and Exploited Children, which then tipped off local authorities. According to Google, this is the only criminal activity it actively scans for within Gmail, and the search relies heavily on a large NCMEC-maintained database of digital fingerprints (hashes) of known illegal images, against which attachments are compared.
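To make the mechanism concrete, here is a deliberately simplified sketch of hash-based matching. This is not Google's actual implementation: production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, while the plain cryptographic hash below only matches byte-identical files. The function names and sample data are made up for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying the exact bytes of a file."""
    return hashlib.sha256(data).hexdigest()

def is_known_image(data: bytes, known_hashes: set[str]) -> bool:
    """Flag an attachment if its digest appears in the known-bad set."""
    return fingerprint(data) in known_hashes

# The database stores only fingerprints, never the images themselves.
known = {fingerprint(b"example-flagged-bytes")}
print(is_known_image(b"example-flagged-bytes", known))  # True
print(is_known_image(b"harmless-photo-bytes", known))   # False
```

Note the key point for the ethics debate below: the scanner never "looks at" your photos in any human sense; it checks whether a string of numbers derived from a file appears in a list.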
What this means for you:
In the case of child pornography, I’d say that just about any method used to catch perpetrators is justified, but as many pundits and security analysts point out, this practice teeters precariously on a knife edge of ethics. Telecommunication service providers like Google are required to inform law enforcement of suspected child abuse whenever they become aware of such activity within their systems, but that word “aware” is ill-defined in today’s age of artificial intelligence, big data analysis and search algorithms. Does a search algorithm matching mathematical hashes of images constitute “awareness”? Should this same algorithm be used to look for other serious crimes? What about petty crimes? Does talking about a crime constitute the commission of a crime? And what happens if someone hacks your account and sends out a bunch of disgusting images in an attempt to get you arrested?

All the more reason to keep your passwords strong, unique and very, very safe. Oh, and don’t use email to commit or plan crimes, because even though Google says it is only watching for child pornography, you can bet other agencies are looking at everything. Heck, maybe you should just not commit crimes at all, mmkay?