For billions of people around the world, social media – websites and apps like Facebook, Instagram, Snapchat, Twitter, and TikTok – is an integral part of everyday life. These networking and sharing sites can be a virtually unlimited resource of information, entertainment, ideas, and human connection that enrich our lives. They can also be very dangerous places, especially for young people.
A 2018 study by the Pew Research Center surveyed 750 teenagers aged 13-17. It found that 97% used social media and 45% said they were online almost constantly. While teens and tweens use social media to connect with friends, learn, explore their interests, and be entertained, they often do so at the expense of sleep, schoolwork, extracurricular activities, and in-person socialization.
And because it’s difficult to constantly monitor a teenager’s computer and smartphone use, parents may not realize their child is viewing inappropriate content or interacting with online predators who may try to arrange an in-person meeting. Around-the-clock connectivity with their peers can also expose teens to relentless bullying, which can lead to self-harm or even suicide.
Some frightening statistics:
- There are up to half a million Internet predators online every day.
- One in 25 victimized children will make physical contact with their predator within a year.
- Only 15% of parents know what their kids are doing online.
- Social media is where more than 80% of child sex crimes begin.
- Increased time on social networking sites is linked to poor mental health, psychological distress, and suicidal thoughts.
- 36% of students, some as young as ten, say they have been cyberbullied.
- 88% of teens on social media say they have witnessed some form of cyberbullying.
The content on these websites and applications is considered “third-party,” meaning that it comes from the site’s users, not the company that created the site. The site merely hosts and allows others to share the content.
Under Section 230 of the Communications Decency Act of 1996 (CDA), social media companies are not liable for harm caused by third-party content. So, for example, if one user bullies another into suicide on Facebook, Facebook won’t be held responsible.
But several new lawsuits, including one being considered by the U.S. Supreme Court, may begin holding social media companies accountable for the harm they knowingly cause their users.
In February 2023, the Supreme Court will hear arguments on the CDA in Gonzalez v. Google, a case that could change the entire landscape of online content.
Cases Could Set New Precedents for Online Content Regulation
The Gonzalez v. Google case, on which the Biden administration filed a brief siding with the plaintiffs in December 2022, asks whether Section 230 protects social media companies when their algorithms recommend third-party content.
American citizen Nohemi Gonzalez was killed in an ISIS terrorist attack on a Paris restaurant in 2015. Her family’s attorneys allege that YouTube, whose parent company is Google, promoted ISIS recruitment videos through its recommendation algorithm, thereby contributing to the attack. U.S. Solicitor General Elizabeth Prelogar even alleged in the brief that Google had violated the Antiterrorism Act of 1990 by helping to promote ISIS.
Lower courts have so far sided with Google, reasoning that Section 230 applies as long as its algorithm treats ISIS videos the same as any other content.
However, a growing bipartisan group of lawmakers believes that Section 230 has allowed social media companies to host false, discriminatory, and violent content without consequences. They argue the platforms should not be protected when their algorithms recommend content to users.
The Supreme Court will also hear another case, Twitter v. Taamneh, which will decide whether Twitter, Google, and Facebook can be sued for aiding and abetting terrorism by allowing the Islamic State to post content. That case was brought by the family of Nawras Alassaf, a victim of a 2017 attack on an Istanbul nightclub.
Can Social Media Sites Be Considered Dangerous Products?
Increasing numbers of plaintiffs in lawsuits against social media companies are also using a different tactic to get around Section 230 – product liability.
Traditional product liability law refers to defective or dangerous products – consumer items with a defect in their design, manufacturing, or marketing that puts users at risk. Anyone involved in creating, manufacturing, or distributing a dangerous product can be sued under product liability.
Lawyers confronting a Section 230 defense are beginning to argue that social media companies are liable for harm not because of the content other users have posted, but because the product itself – the site or app – is harmful. Plaintiffs in these cases say that the social media site’s algorithm is a defective product design.
More than 80 cases advancing this product liability theory against TikTok, Facebook, Instagram, and others have been consolidated into multidistrict litigation (MDL) in California. Plaintiffs allege that the platforms use harmful content to hook users regardless of the consequences for their mental and physical health.
The Supreme Court’s decision to take up Section 230 protections indicates that this debate is far from over. If the justices ultimately decide that recommendation algorithms strip companies of their immunity, many platforms would likely become more restrictive about the content they host. Some fear this would threaten freedom of speech on the Internet, while others believe more regulation will lead to a safer online experience.
Personal injury caused by online content is a relatively new legal concept, and more cases are likely to follow from people experiencing mental, physical, or emotional anguish due to social media.
Herman, Katz, Gisleson & Cain has been fighting for clients affected by defective and dangerous products for decades. For more information about product liability lawsuits or a free case review, fill out our online form or call 844-943-7627.
Jed Cain is a partner with Herman, Herman & Katz, LLC. He has dedicated his career to representing injured folks and their families.