The Community Standards define what is acceptable to share and prohibit harmful content, including hate speech, bullying and harassment, and non-consensually shared intimate imagery.
When we believe a genuine risk of physical harm or a direct threat to public safety exists, we remove content, disable accounts and work with local emergency services.
Our policies do not allow content that outs an individual as a member of a designated and recognizable at-risk group, or that threatens LGBTQ+ people's safety by revealing their sexual orientation or gender identity against their will or without their permission.
We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are. That is why we don’t allow hateful conduct on Facebook, Instagram, or Threads.
We define hateful conduct as direct attacks against people — rather than concepts or institutions — on the basis of what we call protected characteristics (PCs): race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease. Additionally, we consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants, and asylum seekers from the most severe attacks, though we do allow commentary on and criticism of immigration policies. Similarly, we provide some protections for non-protected characteristics, such as occupation, when they are referenced along with a protected characteristic. Sometimes, based on local nuance, we consider certain words or phrases as frequently used proxies for protected characteristics.
We remove dehumanizing speech, allegations of serious immorality or criminality, and slurs. We also remove harmful stereotypes, which we define as dehumanizing comparisons that have historically been used to attack, intimidate, or exclude specific groups, and that are often linked with offline violence. Finally, we remove serious insults, expressions of contempt or disgust, cursing, and calls for exclusion or segregation when targeting people based on protected characteristics.
We recognize that people sometimes share content that includes slurs or someone else’s speech in order to condemn the speech or report on it. In other cases, speech, including slurs, that might otherwise violate our standards is used self-referentially or in an empowering way. We allow this type of speech where the speaker’s intention is clear. Where intention is unclear, we may remove content.
Bullying and harassment take varying forms, including threatening messages, unwanted malicious contact and the release of personal information. We do not tolerate this behavior.
Meta treats public figures and private individuals differently in order to allow discussion, including critical commentary, of people who are featured in the news or who have large public audiences. For public figures, Meta removes posts that use derogatory terms, call for sexual assault or exploitation, call for mass harassment, or threaten to release private information.
We recognize that bullying and harassment can have a greater emotional and physical impact on minors, which is why our policies provide deeper protections for young people.
The non-consensual sharing of intimate images violates our policies, as do threats to share those images. We remove images shared on Facebook and Instagram in revenge or without permission, as well as photos or videos depicting incidents of sexual violence. We also remove content that threatens or promotes sexual violence or exploitation.
Submit a report if someone shares your intimate images without your consent or threatens to do so. Our teams review reports 24/7 in more than 70 languages and will remove intimate images or videos shared without consent. We will also remove any content that threatens to share intimate images without permission. In most cases, we disable the account that shared, or threatened to share, such content on our technologies.
To stop further attempts at sharing a removed image, we use preventative photo-matching technologies. If someone tries to share the image after it has been reported to us and removed, we will alert them that it violates our policies. We also stop the resharing attempt and may disable the account. We encourage you to report sextortion, which is when people threaten or force someone to share intimate photos or videos. This is against the Community Standards and, in some instances, also against the law.
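To make the photo-matching step concrete, here is a minimal sketch of how hash-based matching of removed images can work. This is not Meta's implementation: it uses a deliberately simple "average hash," and the names (average_hash, RemovedImageIndex), the 64-bit hash size, and the match threshold are all illustrative assumptions. Production systems rely on far more robust perceptual hashes; Meta has open-sourced PDQ for this kind of matching.

```python
# A minimal sketch of hash-based photo matching, for illustration only.
# Real systems use robust perceptual hashes (e.g., PDQ), not average hash.
from PIL import Image  # pip install Pillow

HASH_SIZE = 8          # 8x8 grid -> 64-bit hash
MATCH_THRESHOLD = 10   # max Hamming distance treated as a match (assumed value)

def average_hash(path):
    """Each bit records whether a pixel in an 8x8 grayscale thumbnail
    is brighter than the thumbnail's mean brightness."""
    thumb = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(thumb.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

class RemovedImageIndex:
    """Hashes of already-removed images, consulted on each new upload."""

    def __init__(self):
        self._hashes = set()

    def add_removed(self, path):
        self._hashes.add(average_hash(path))

    def matches_removed(self, path):
        """True if the upload is a near-duplicate of a removed image."""
        h = average_hash(path)
        return any(hamming(h, known) <= MATCH_THRESHOLD for known in self._hashes)

# Usage: call index.add_removed(path) when an image is taken down; on each
# new upload, if index.matches_removed(path) is true, block the reshare,
# warn the uploader, and escalate for possible account action.
```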
If you want to report actions that go against the Community Standards, such as hate speech, bullying and harassment or violence, go to the content you want to report and use the "Find Support" or "Report" link. We have teams of experts who review reports of violating content 24/7 in more than 70 languages, and we use artificial intelligence technology that finds and removes this content before users even see it.
We strive to be open and proactive in safeguarding users' privacy, security and access to information online. We've published twice-yearly transparency reports since 2013. We also release a quarterly Community Standards Enforcement Report, which includes data on actions we've taken against violating content on Facebook, Messenger and Instagram. We believe that increased transparency leads to increased accountability and responsibility.