It would probably be criminal not to share this story with the small handful of you who haven’t already heard it, though you may not be surprised to learn that Facebook is once again creating trust issues with its userbase. This time the dust-up was caused by a research study conducted in 2012, in which Facebook deliberately manipulated the newsfeeds of nearly 700,000 users, filtering out emotionally positive posts for some and negative posts for others, to test whether one emotion or the other could “infect” readers and prompt them to post similarly positive or negative content. The recently published results seem to bear out that theory, but that’s not what has everyone in a dither. The problem is that Facebook never told any of the subjects (that would be you, me, and everyone else on Facebook at the time) that they were part of the experiment.
Most people who use social media apps understand, to a greater or lesser degree, that when an app as “useful” as Facebook is free (and even when it isn’t), you are actually paying for it with your demographic potential (advertising) and with the data your activity generates. The more cynical among us will just nod and mutter a resigned “Told ya so” or “Why am I not surprised?”, but there are still plenty of folks out there who (bless their hearts) believe Facebook has their best interests in mind at all times. The only thing that surprised me about this news was that it took Facebook this long to admit it was manipulating its billion-plus users to make sure people kept using Facebook.
What this means for you:
At first, Facebook seemed surprised that people were upset about this study, and quickly pointed out that everyone had already consented by accepting the Facebook Terms of Use (you read that fine print, right?). Not so fast, says the internet: at the time of the study, Facebook’s TOU did not include the wording that would (however obliquely) cover your consent to participate in said “research”. That language only appeared four months later, almost as if they knew what might be coming. The study’s lead author, a Facebook data scientist, has already publicly apologized for the upset the paper caused, and Cornell University, whose researchers co-authored it, is backing away from any association with the study like it was radioactive waste.
What are the takeaways? Facebook has yet again betrayed user trust. Fool me once, shame on you; fool me twice, shame on me. By my count, we are well past two instances of Facebook not looking out for its users’ best interests. Will people stop using Facebook because of this? A small percentage might, but the majority will shrug it off, just as they have shrugged off past infractions. “Buyer beware” is all I can say in this regard. And in case you are still new to today’s technology landscape, this isn’t restricted to Facebook. Everyone is doing it: Microsoft, Google, Yahoo, and Amazon, all the way down to the thousands of scrappy startups. Data is this century’s gold rush, and as in gold rushes past, people and companies can and will do many, many unscrupulous things in the mad rush for profit.