In mid-September, The Wall Street Journal published leaked internal documents from Facebook regarding the harmful effects Instagram has on teenage girls. According to the internal report, the app increases the prevalence of body image issues and suicidal thoughts among teenagers. The company even planned to introduce an Instagram for kids to rope in more users, which was recently abandoned in light of the scandal. What a shocker.
Social media platforms have become this dangerous in large part because of the algorithms that determine which content users see. Facebook’s algorithms prioritize content that engages people, and engagement is most often driven by outrage: provocative posts draw users in to comment and interact. The result is the prioritization of dangerous content.
These algorithms radicalize people toward extremism and contribute to the spread of misinformation. They fueled QAnon, the anti-vax movement and the election conspiracies that led to the Jan. 6 insurrection. Facebook has also been used as a tool by authoritarian governments globally: boosting far-right campaigns in the U.S. and Brazil, spurring genocide-like violence in Ethiopia and Myanmar, enabling espionage and surveillance by China and Iran, and serving as a platform for Russian influence operations and recruitment by terrorist groups.
Facebook claims it would be irrational to use this type of algorithm since advertisers avoid association with harmful content. But the proof is in the pudding. Facebook is full of harmful content, and advertisers keep coming.
Facebook owns not only its own platform, but also Instagram and WhatsApp, and the company has a history of using predatory practices to capitalize on the social media market.
Since the leak, former Facebook Product Manager Frances Haugen came out as the whistleblower in a 60 Minutes interview and has since testified to Congress about how the company chose profit over the well-being of its users and deliberately hid the harm caused by its platform.
Facebook, unfortunately, has no incentive to choose its users over profits. The company is protected from repercussions by Section 230 of the Communications Decency Act of 1996, which states that online intermediaries cannot be held responsible for the information posted to their platforms. This means Facebook gets to self-regulate. Clearly, the social network is not doing enough to handle widespread human and drug trafficking on its platform, and it leaves much of the misinformation, cyberbullying and violence accessible.
On Oct. 4, the day after Haugen’s 60 Minutes interview and one day prior to her testimony before Congress, Facebook and all of its subsidiary platforms experienced a major outage for about six hours. There was another blackout a few days later.
At times, this story felt like a well-choreographed soap opera storyline. Haugen timed her interview perfectly to ensure people knew about the documents. Then Facebook went dark, highlighting our society's dependence on its platforms, and its flaws were aired out the next day in front of the government.
Nevertheless, the global outage did show that Facebook functions as a public utility in many countries. WhatsApp, Facebook’s encrypted messaging platform, is needed for communication in Brazil, India and Indonesia and is crucial in war-torn Afghanistan and Syria. Small businesses across the globe rely on Facebook and its associated apps, with many reporting lost revenue due to the outage. For some in developing countries, Facebook actually serves as their only link to the internet.
Facebook’s sites are inarguably crucial across the globe. Based on Haugen’s testimony, the company and its CEO and Chairman Mark Zuckerberg hold the power and authority to perpetuate and abuse the dependency people have on its services.
Controversy is nothing new for Facebook. In 2018, it was discovered that Facebook had exposed the data of millions of users, without their consent, to the British firm Cambridge Analytica, which sold it to right-wing campaigns in the U.S. and U.K. in 2016. This scandal showed that users themselves were Facebook's product.
The company has since tightened data security but continues to collect, use and share people’s data with third parties for targeted advertising. Even though the scandal revealed a massive breach of privacy and many users threatened to quit in response, the company carried on with just a small fine and no decline in its monthly active users.
We have grown up with the rise of social media, and it's a staple in our lives, which makes it hard to step back from. Social media is a crucial way to connect and share information. Even on campus, many student organizations and University departments utilize Instagram to share events and information. Our relationship with social media became even clearer during the pandemic, when online platforms became the only way to connect with one another.
Facebook is a ubiquitous part of our lives despite its many misdeeds. Its policies harm people and nations in tangible ways and shape discourse around the world.
Despite this, there has been little drive to change things in the U.S. because of lobbying and a lack of consensus on a solution. Global legislation has mostly targeted increasing responsibility for self-regulation. The notable exception is the European Union's General Data Protection Regulation, implemented in 2018, which regulates the storage and usage of people’s data in the region.
We need to start disengaging from Facebook’s platforms in a meaningful way if we want to see any real and lasting changes. Otherwise, this cycle of power and influence will continue on, unchecked, for generations.
Social media is a useful tool, connecting people across the globe, but it should not be used at the expense of humanity.
Ruhika Chatterjee is a junior from Princeton, N.J. studying Molecular and Cellular Biology.