
I wrote a couple of weeks ago about Facebook's latest debacle, and ended that post by saying I'd delete my data and disable my account - which I did. I was curious whether I'd miss it at all, so I waited before deciding whether to pull the plug for good - good as in "final", good as in "mental health", good as in "no longer part of their experiments".
I've decided to make it permanent, especially after doing some research. So why did I ultimately dump Facebook? I'll let the numerous articles, going all the way back to the very beginning, tell the story for me. I can't in good conscience use a site as careless as Facebook. I'll add to the list whenever I catch FB in the news again... so fairly often, I'm sure.
2015
Allowing a company to siphon the data of 50 million users
In March 2018, it was discovered that a company had siphoned the data of 50 MILLION users through Facebook's API in 2015 - and they did it legitimately. They only screwed up (according to Facebook) when they handed the data to someone else. Facebook knew about it for years but never disclosed the full extent until someone blew the whistle.
Testing whether your location could improve your 'suggested friends' list
In mid-2016, it was discovered that Facebook had tested using users' location data to improve the 'suggested friends' list. The journalist who broke the story stumbled onto it after writing a piece that (incorrectly) accused Facebook of using everyone's location data to improve the lists - Facebook quickly responded by admitting it had been part of a four-week test at the end of 2015.
Still, it's creepy to think that Facebook (really, any app) is able to do this using your mobile device's location - and it's why you should check the list of permissions an app requests very carefully.
2012
Sharing your purchases with friends, without telling you
For a couple of months, Facebook made certain offers to 1.2 million users. If you accepted an offer, your friends were notified (half the time without your knowledge or consent) to see whether the notification influenced them to accept it too.
Adding negative posts to your feed to affect your emotions
In 2012 (reported in June 2014), Facebook ran a large-scale experiment on the emotions of roughly 700,000 users, skewing their feeds more positive or more negative to see what effect it had on their emotional state. You can read more about the findings on PNAS, and about the (pre?)2012 study that probably led to it.
Collecting the messages you DIDN'T post
Facebook ran an experiment for two weeks in July 2012, collecting data on how 5 million users censored themselves before posting a message. They collected nearly everything you typed, even if you never posted it or edited it before posting. In other words, just by typing into their text box you were sharing data with them - even if you never pressed the "post" button.
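To make the mechanics concrete, here's a minimal sketch of how any page can watch a text box as you type - my own illustration, not Facebook's actual code, and the endpoint name is invented:

```typescript
// Hypothetical sketch (not Facebook's code): any page can read a text box
// the moment you type into it. The telemetry endpoint below is invented.
const composer = document.querySelector<HTMLTextAreaElement>("#composer");

if (composer) {
  composer.addEventListener("input", () => {
    // Fires on every keystroke; the "post" button plays no role here.
    navigator.sendBeacon(
      "https://telemetry.example/draft", // invented endpoint
      JSON.stringify({ draftLength: composer.value.length, at: Date.now() })
    );
  });
}
```

Anything you type is visible to the page's scripts immediately; whether you ever hit "post" is beside the point.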
2010
Comparing your public voting record to your actions on Facebook
Facebook ran an experiment on 61 million users, showing an "I Voted" button and comparing the results against actual public voting records to determine how the button's presence might affect your (and your friends') real-world behavior. That's right - they looked up your voting record for the purposes of their "research".
Hiding your shared links from your friends
Facebook ran an experiment on 253 million users between August and October, randomly preventing shared links from showing up in friends' feeds to see how it affected further sharing. Even when you thought you were sharing something with your friends, there's a good chance Facebook was effectively censoring you.
2009
Giving you "recommended" privacy settings that make even more of your data public
In July 2009, Facebook reworked its privacy settings, ostensibly to make them easier to find; in reality, the "recommended" settings it pushed switched things back to public, making it easy for users to accidentally share even more data.
They even removed some useful privacy settings altogether, like the ability to opt out of sharing data through their API. Interestingly, that's exactly the kind of setting that would've prevented the abuse that allowed a company to collect 50 million unsuspecting users' data in 2015.
2008
Promising users that "verified" apps were more secure, but never actually checking them
To increase user trust in apps, Facebook launched a verification program in November 2008. It claimed to be certifying apps for security and awarded "verified" badges to those apps, but the FTC later determined that Facebook was doing no such thing. Who knows how many people used an app that Facebook assured them was safe when in fact it wasn't.
You can read more about what the FTC found in its filings against Facebook, Inc. Page 15 of the complaint states that, "before it awarded the Verified Apps badge, Facebook took no steps to verify either the security of a Verified Application's website or the security the Application provided for the user information it collected, beyond such steps as it may have taken regarding any other Platform Application." In other words, the program didn't do what it promised at all.
2007
Tracking your online behavior around the web and sharing it without your consent
In November 2007, Facebook released a feature called Beacon. Any site could embed a 1x1 tracking pixel in its pages to send data about user activity back to Facebook, which could then tie that activity to actual Facebook accounts (via the Facebook cookies already in your browser).
Imagine: you log into Facebook and "like" a few posts by friends, then leave the site. You spend the rest of the day (or week, or whatever) surfing the web, making purchases... just doing your thing, oblivious to the fact that random sites are sending your actions to Facebook, which then knows everything you've been doing. Creepy enough, but then Facebook shared that activity with the rest of your network too.
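Here's a rough sketch of how that kind of tracking pixel works - my own illustration with an invented tracker domain, not Beacon's actual implementation:

```typescript
// Hypothetical sketch of a 1x1 tracking pixel (the domain and parameters
// are invented; this is not Beacon's actual code).
function reportToTracker(action: string, item: string): void {
  const pixel = new Image(1, 1);
  // Setting src makes the browser request this URL, and the request
  // automatically carries any cookies the browser holds for the tracker's
  // domain - including the session cookie from the social network you're
  // logged into. That's how activity on an unrelated shopping site gets
  // tied back to your real account.
  pixel.src =
    "https://tracker.example/beacon" +
    `?action=${encodeURIComponent(action)}` +
    `&item=${encodeURIComponent(item)}`;
}

// e.g. fired by a shopping site after checkout:
reportToTracker("purchase", "movie tickets");
```

The pixel itself is invisible; all the work happens in the cookie-laden request for the image.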
You were opted in by default, and (shocked face) it was not well received.
Sources
- Facebook's Fascinating (and Disturbing) History of Secret Experiments
- Publications – Facebook Research
- randomly searching online... they made this too easy