Meta
Understanding the Lawsuits Regarding Social Media and Teen Mental Health

Our Commitment to Supporting Teens and Parents


We know parents worry about the safety of their teens online. At Meta, we’re consistently making changes to provide teens with safe, protected experiences, and bring parents more peace of mind. For over a decade, we’ve built safeguards into our platforms to support parents, and to help teens connect with their friends and explore their interests, in a protected environment. We’ve consulted with parents, policymakers, and safety experts throughout this process to ensure their expertise informs the changes we make.

Last year, we introduced Teen Accounts, which have significantly changed the experience for hundreds of millions of teens across Instagram, Facebook, and Messenger. Teen Accounts have built-in protections that automatically limit who can contact them and the content they see. All teens are automatically enrolled in Teen Accounts, and teens under 16 need a parent’s permission for any of these settings to be less strict. Since making these changes, 97% of teens aged 13-15¹ have stayed in these restrictions on Instagram, which we believe offer the most age-appropriate experience for younger teens.

More recently, we revamped Instagram Teen Accounts to be guided by 13+ movie ratings. This means that, when fully rolled out, teens will see content on Instagram similar to what they’d see in an age-appropriate movie, by default. Teens under 18 can’t opt out of this default setting without a parent’s permission.

Teen Accounts are the latest example of Meta moving in lockstep with parents to put teens’ safety first on our platforms. In a recent survey, conducted by Ipsos and commissioned by Meta, we asked US parents how Teen Accounts would help them and their teens. We heard that, overall, nearly all parents surveyed (94%) say Teen Accounts are helpful for parents (Ipsos & Meta, 2025).

We appreciate that teens may try to get around these protections, so we use AI technology to place those we suspect are teens into these protections, even if they tell us they’re adults.

Our work doesn’t stop here. Technology is evolving rapidly, which means we will need to constantly adapt and strengthen our protections for teens, while listening and responding to concerns parents have. This work is never done, and we’ll continue to work tirelessly to give parents peace of mind that their teens can navigate our platforms safely, with the right guardrails and oversight in place.

Our safety tools, features, and resources

Read more about our actions in the following areas:

Teen Accounts and Automatic Restrictions for Teens
Family Center resources for parents and families
Parental supervision
Age-appropriate experiences
Content and interactions
Time management
Online safety and privacy

Our Position on the Lawsuits

These lawsuits misrepresent our company and the work we do every day to provide young people with safe, valuable experiences online.

Striking a balance between allowing teens to access the benefits of social media and keeping them safe is one of the most critical questions across our industry. We have listened to parents, researched the issues that matter most, and made real changes to protect teens online – like introducing Teen Accounts with built-in protections and providing parents with tools to manage their teens’ experiences.

Despite the snippets of conversations or cherry-picked quotes that plaintiffs’ counsel may use to paint an intentionally misleading picture of the company, we’re proud of the progress we’ve made, we stand by our record of putting teen safety first, and we’ll keep making improvements.

Commonly Asked Questions


What does the academic research say about social media and teen mental health?

We recognize parents’ concerns about how their teens are using social media. Most existing research coincides with what parents likely already know: teen mental health and well-being is the result of many different factors, because all teens are different. Teens’ experience with social media can also depend a lot on how it’s used, what protections are in place, and how much parental involvement there is.

Research also shows that social media offers meaningful benefits to teens, like opportunities to explore, connect with friends and family, and express themselves. The National Academies report devotes an entire chapter to these potential benefits. The report cites social media as a valuable tool to find others who share academic or creative interests, network, and seek help from others when facing challenges in life. By offering a means to connect with people in the same position, social media can reduce stigma and be a venue for sharing coping strategies (Galea et al., 2024, p. 77).

There is also new evidence that rates of teen depression and suicidal thoughts and behaviors in the US have begun to decline, even as social media usage increases or stays the same. According to the US Department of Health and Human Services’ National Survey on Drug Use and Health, the prevalence of major depressive episodes among 12-to-17-year-olds fell from 20.8% in 2021 to 15.4% in 2024. Serious suicidal thoughts in 12-to-17-year-olds fell from nearly 13% in 2021 to 10% in 2024. And the prevalence of suicide attempts by teens also fell slightly, from 3.6% to 2.7%.

How is Meta addressing parents’ concerns about teen well-being online?

To address parents’ concerns and best support teens, we’ve reimagined how teens experience Instagram and our other apps with Teen Accounts. These protected accounts provide teens with built-in restrictions that automatically limit who they’re talking to and the content they see. Teens under 16 can’t loosen these settings without a parent’s permission.

Parents play a central role in helping teens navigate new experiences—whether it is starting a new school, learning to drive, or joining a sports team—and social media is no different. Parents also know their own teens best, which is why we provide tools that let parents shape their teens’ experience, including limiting them to just 15 minutes a day of Instagram, seeing who they’re messaging, and more.

In addition, families can support healthy habits at home by setting expectations, having open conversations, and making use of available tools. For example, the American Academy of Pediatrics (AAP) offers a Family Media Plan, which it encourages families to create to set priorities and boundaries together.

What is impacting teen mental health, if not social media?

There are a number of factors, such as academic pressure (Kim et al., 2023; Gray, 2023; Barbayannis et al., 2022), school safety (Riehm et al., 2021; Levine & McKnight, 2020), in-person bullying (Hansen et al., 2022), the pandemic (Saggioro de Figueiredo et al., 2021; FAIR Health, 2021), the global financial crisis (Reeves et al., 2012), and the opioid epidemic (Winstanley & Stover, 2019).
In a 2023 study in the journal Nature, researchers evaluated the inter-relations of 17 different measures over time to assess social media’s influence on mental health. They found that social media was not very influential; other factors, like “in-person bullying, lack of family support and school work dissatisfaction,” mattered more (Panayiotou et al., 2023, p. 316).

Even if the science doesn’t say social media hurts teen mental health, what’s the harm in banning it for teens or enacting restrictions?

The reality is that teens today use social media to connect with their friends and like-minded communities, to access information and discover interests, and to express themselves. While we recognize parents’ concerns around teens having safe, age-appropriate experiences on social media, we believe that a balance can be struck that allows teens to use social media in a meaningful way, with the right protections and guardrails in place.

The National Academies report devotes an entire chapter to the benefits of social media for teens (Galea et al., 2024, p. 71). It cites research which shows that social media can play a key role in forming a coherent sense of self and identity (Galea et al., 2024, p. 81). Banning social media would deprive young people of a tool that encourages creativity and community engagement, whether that means starting a small business, keeping in touch with friends, or learning a new skill.

Parents also know their teens best, and we believe they should be in the driver’s seat when it comes to their teens’ online experiences. An outright ban would take away parents’ role in making that decision.

Ultimately, if we focus the debate about teen mental health entirely on social media, we miss the opportunity to fix the root causes. As professor and quantitative psychologist Candice Odgers (2024) puts it: “The bold proposal that social media is to blame [for a mental health epidemic] might distract us from effectively responding to the real causes of the current mental-health crisis in young people.”

Has Meta blocked or altered findings about teen safety on its platforms?

That assertion is not only wrong, it is clearly contradicted by recent news reporting on our efforts to conduct precisely this type of research. Since 2022, Meta has approved roughly three dozen studies on social issues related to young people and hundreds more on other youth-related matters.

The truth is that navigating one of today’s most critical topics—how to allow teens to access the benefits of social media while keeping them safe—is a new frontier, and one that necessitates countless discussions and research. That’s why you may see thousands of emails, presentations, and internal research laying out various scenarios and options for tackling teen safety and supporting parents.

How does Meta prevent children under 13 from accessing its platforms and ensure that teens don’t lie about their age to avoid protections?

Instagram is for people 13 and older (or higher in some countries), and we remove accounts that don’t meet this requirement when we identify them. If people presented with our age verification menu are unable to prove they are 13+, we remove their accounts. We also train content reviewers to flag reported accounts that appear to belong to under-13s, even if the accounts are reported for something else.

We appreciate that some teens may try to get around teen protections, so we use AI technology to place those we suspect are teens into those protections, even if they tell us they’re adults. We don’t rely on AI technology alone—we also allow users to report those they think may be underage, use facial age estimation technology from Yoti, and ask for ID if we think someone may be misrepresenting their age.

We’re advocating for federal legislation that requires app stores to get parents’ approval and verify age whenever their teens under 16 download apps, and that helps us place teens in age-appropriate experiences. By verifying a teen’s age on the app store, individual apps would not be required to collect potentially sensitive identifying information. Apps would only need the age from the app store to ensure teens are placed in the right experiences for their age group.

How does Meta prevent Child Exploitation Imagery (CEI) on its platforms?

Child exploitation is a horrific crime that occurs both online and offline – but we don’t want this content on our platforms. To combat it, we use sophisticated technology, hire child safety experts, report content to the National Center for Missing and Exploited Children (NCMEC), and share information and tools with other companies and law enforcement, including state attorneys general, to help root out predators.

Our systems are effective at reducing violating content, and we’ve invested billions in safety, security, and brand suitability solutions. We also support NCMEC by helping prioritize urgent reports—such as through its case management tool—and continue to lead the industry in detecting and reporting child exploitation, as NCMEC has repeatedly recognized.

How does Meta work to prevent the presence of suicide and self-harm content on its platforms?

Suicide and self-harm are devastating issues, and our deepest sympathies are with those who have been affected. We’ve worked for years with leading safety and suicide prevention experts, including our Suicide and Self Harm Advisory Group, to shape policies that reflect the latest research. Our approach aims to balance safety while offering support to those who may be struggling—we remove content that’s graphic or promotes self-harm, while allowing people to share their experiences in ways that can help others feel less alone. We’ve strengthened these policies over time, removing both real and fictional depictions of self-harm and limiting teens’ exposure to this content in Feed, Stories, and recommendations. We direct people searching for related content to trusted help resources, such as the Suicide and Crisis Lifeline and the National Alliance on Mental Illness (NAMI), and we use technology to avoid recommending content that discusses suicide or self-harm, or that trivializes themes around death or depression.

How does Meta respond to allegations that the company prioritizes growth over well-being and safety on its platforms?

This allegation does not reflect reality. Since the earliest days of our apps, we’ve invested billions in safety and security solutions—and specifically, in understanding well-being and safety, continually evolving our policies, features, and approaches to give people—especially teens—a safer experience online.

Frankly, it is not in our business interest to leave people unprotected online. If teens encounter bullying, unwanted advances or upsetting content on Instagram, they’ll leave for a competitor. Parents won’t let their teens use our apps unless they feel it’s safe. We’ve made countless decisions that could hurt engagement and growth, like making all teen accounts private by default, and allowing parents to place time restrictions on their teen’s Instagram usage. Teens might not like these restrictions, but we made these changes anyway because they were the right thing to do.

Time and time again, we have moved in lockstep with parents’ and experts’ feedback to develop tools to keep teens safe online. The tools and features that Meta has developed have been adopted by many of our peers.

How does Meta respond to allegations that its leadership ignored warnings about harmful content or features on Meta’s platforms?

This is factually incorrect, and it oversimplifies the significance of the decisions Meta’s leadership makes every day. Billions of people use Meta’s services, producing hundreds of millions of comments, photos, and interactions every day. Making decisions that impact this ecosystem is incredibly complex, and Meta’s leaders understand that there are important considerations to be weighed, like preserving speech and ensuring a change doesn’t have unintended consequences for safety.

The truth is that, over many years, Meta has conducted extensive research, consulted countless experts, and engaged in rigorous internal debate to make decisions we felt were right at the time, and that balanced these considerations. We constantly evolve our approach as we learn more about how people use social media, and how best to keep them safe. Our commitment is backed by significant investment: we have thousands of people working on safety and security issues globally at Meta, with over $30 billion invested in teams and technology in this area over the last decade.

How does Meta account for employees who may appear to not take teen safety issues seriously?

Meta’s employees work every day to deliver a safe, positive experience for parents and teens. In many cases, plaintiffs have deliberately cherry-picked snippets from the millions of documents Meta has produced as part of this litigation to misrepresent the statements and motivations of Meta’s employees.

Viewed in full, these exchanges generally show employees engaging in open and candid conversations about challenging and often complex issues—an important part of how we identify and address problems.

Recent Public Statements

Statement: Our Statement on the Plaintiff's Case in the Los Angeles JCCP Trial [March 6, 2026]

"Kaley has faced profound challenges, and we continue to recognize all she has endured. The jury’s only task, however, is to decide if those struggles would have existed without Instagram. Not one of her therapists identified social media as the cause. Her records show significant emotional and physical abuse, academic struggles and psychiatric conditions, entirely separate from her social media usage. The witnesses hired by her lawyer admitted that social media has benefitted Kaley, and she used it as an outlet to cope with the difficult circumstances at home. The evidence simply doesn’t support reducing a lifetime of hardship to a single factor, and our case will continue to underscore that reality."

Statement: Our Statement on the Los Angeles JCCP Trial [February 11, 2026]

"The question for the jury in Los Angeles is whether Instagram was a substantial factor in the plaintiff’s mental health struggles. The evidence will show she faced many significant, difficult challenges well before she ever used social media.”

Statement: Our Response to Claims about 500k Daily Inappropriate Interactions with Children [February 9, 2026]

"The number discussed in this 2020 email exchange does not refer to individual victims or incidents of child exploitation. The measurement technology we used at the time used an overly wide and cautious set of criteria, and as a result counted many benign interactions. This number significantly reduced after we refined and improved our measurement technology. Since 2020, we’ve introduced a range of new measures to help reduce potential grooming and inappropriate interactions with children - including preventing adults from starting private chats with teens they’re not connected to, and using improved behavioral signals to identify potentially suspicious actors and preventing them from finding and following teens."

Statement: Our Response to New Mexico AG’s Sensational Lawsuit [January 21, 2026]

"While New Mexico makes sensationalist, irrelevant and distracting arguments, we're focused on demonstrating our longstanding commitment to supporting young people. For over a decade, we’ve listened to parents, worked with experts and law enforcement, and conducted in-depth research to understand the issues that matter most. We use these insights to make meaningful changes—like introducing Teen Accounts with built-in protections and providing parents with tools to manage their teens’ experiences. We’re proud of the progress we’ve made, and we’re always working to do better.” - Meta Spokesperson

Statement: Our Response to Plaintiffs’ Crime-Fraud Argument [January 21, 2026]

“As state and federal courts have now ruled, these were appropriate attorney-client discussions that are protected by attorney-client privilege. Our legal team partners with researchers to help ensure studies comply with the law and that research summaries are clear and accurate. Meta remains committed to transparent, industry-leading research, as demonstrated by the hundreds of youth-related studies we have approved since 2022.”

Statement: “Deactivation” Study [November 25, 2025]

“The plaintiffs’ lawyers have deliberately mischaracterized this study and have misled the public about its purpose and the findings. This research had nothing to do with teens, or Instagram, and it certainly doesn’t show any causal link between social media use and teen mental health. What it found was that people who already believed using Facebook was bad for them thought they felt better when they stopped using it. While these kinds of findings are common in other public deactivation studies, they are not particularly useful, which was the reason it didn't go forward.”

Statement: Strike Policy on Sex Trafficking [November 25, 2025]

"We take a zero tolerance approach to human trafficking and exploitation, and we work to remove accounts immediately if they violate our most severe policies."

Helpful Research

RESEARCH (2026, JAMA Pediatrics): Social Media Use and Well-Being Across Adolescent Development
RESEARCH (2026, Journal of Public Health): How do social media use, gaming frequency, and internalizing symptoms predict each other over time in early-to-middle adolescence?
RESEARCH (2024, National Academies of Sciences, Engineering, and Medicine): Social Media and Adolescent Health
RESEARCH (2019, Nature Human Behaviour, via National Library of Medicine): The association between adolescent well-being and digital technology use
RESEARCH (2023, JAMA Network): Trends and Seasonality of Emergency Department Visits and Hospitalizations for Suicidality Among Children and Adolescents in the US from 2016 to 2021
COMMENTARY (2022, National Library of Medicine): Academic Stress and Mental Well-Being in College Students: Correlations, Affected Groups, and COVID-19
COMMENTARY (2021, JAMA Network): Adolescents’ Concerns About School Violence or Shootings and Association With Depressive, Anxiety, and Panic Symptoms
COMMENTARY (2022, National Bureau of Economic Research): In-Person Schooling and Youth Suicide: Evidence from School Calendars and Pandemic School Closures
RESEARCH (2021, National Library of Medicine): COVID-19 pandemic impact on children and adolescents’ mental health: Biological, environmental, and social factors
RESEARCH (2021, FAIR Health): The Impact of COVID-19 on Pediatric Mental Health
CORRESPONDENCE (2012, The Lancet): Increase in state suicide rates in the USA during economic recession
COMMENTARY (2022, Clinical Therapeutics): The Impact of the Opioid Epidemic on Children and Adolescents
References:
  1. This statistic is specific to teens in the United States.