How we use facial recognition technology on images to keep people safe on our platforms and to confirm identity
We use facial recognition technology to check if images of public figures and high-profile individuals are being misused in scams on our platforms, to confirm the identity of users when they lose access to their accounts, or to detect content which goes against our policies on violating violent events or child sexual exploitation. This article explains how we use this technology and how it affects other people whose images may appear in the content or in photos on the accounts of users who want to confirm their identity.
Why we use facial recognition technology
Scammers often use the images of public figures and high-profile individuals to bait people into engaging with scams. These types of scams, such as when scammers post "celeb-bait" ads or try to deceive users through impersonating Pages or profiles, violate our policies and are bad for our community.
In addition, if we suspect that someone has accessed a user’s account without their permission or if we need a user to confirm that they are the true account owner, we may ask them to confirm their identity using a video selfie. They can then choose to use facial recognition technology to compare their video selfie to their account photos.
Hackers or scammers regularly attempt to find ways to work around our systems, so we are constantly adapting and enhancing the ways that we detect scams and stop them from deceiving people. We have recently introduced facial recognition technology (an automated method of face analysis) to help reduce scams that misuse images and to confirm the identity of account holders. This helps us to detect and stop scams such as fraud or impersonation faster and more accurately. It also helps us get users back into their accounts as quickly and easily as possible. In limited circumstances, we may also use facial recognition technologies to help identify content which goes against our policies on violating violent events or child sexual exploitation, so that we may take appropriate action.
How this works
- When we use facial recognition technology, we use an automated system to create embeddings from public figures’ photos, from a user’s video selfie or from other publicly accessible content. An embedding is a numerical value representing the visual characteristics of a face.
- To detect scams, we compare those embeddings against images in potential scam content, such as images in possible celeb-bait ads, to confirm if the faces belong to a public figure. If the system detects a match, we check whether the image is being misused to scam people and, if so, remove the violating content on Pages and profiles.
- To confirm users’ identities, we compare the embeddings created from a user’s video selfie against the photos uploaded on their Facebook or Instagram profiles, including any previous profile photos on Facebook, to confirm that they belong to the same person. If the system confirms a match, the user’s identity is confirmed and they can regain access to their account.
- To detect content which goes against our policies on violating violent events or child sexual exploitation, we compare the embeddings created from public images against content uploaded on our platforms. If the system detects a match, we may take action against this content in accordance with our Community Standards.
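The comparison step described in the list above can be sketched in code. This is an illustrative sketch only: the embedding dimensions, the similarity metric (cosine similarity) and the match threshold are assumptions for the example, not a description of Meta's actual system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(reference_embedding, candidate_embedding, threshold=0.9):
    """Flag a potential match when similarity exceeds a tuned threshold."""
    return cosine_similarity(reference_embedding, candidate_embedding) >= threshold

# Toy 4-dimensional embeddings; real face embeddings have hundreds of dimensions.
public_figure = [0.12, 0.80, 0.35, 0.45]
ad_face_similar = [0.11, 0.82, 0.33, 0.47]  # near-duplicate of the reference face
ad_face_other = [0.90, 0.10, 0.05, 0.40]    # unrelated face

print(is_match(public_figure, ad_face_similar))  # True: flagged for review
print(is_match(public_figure, ad_face_other))    # False: no match, embedding discarded
```

In this sketch, a "match" only flags content for a further check (as the article describes), rather than triggering removal on its own.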
How does this affect people other than public figures and users confirming their identity?
Sometimes images of other people also appear in scams, in the photos we use to confirm users’ identities, or in the content which we review to confirm if it goes against our policies on violating violent events or child sexual exploitation. In these limited circumstances, we may incidentally and temporarily process these images, but we do not identify the people in them.
This is a necessary part of our process when using facial recognition technology to detect scams, or to confirm users’ identities or to detect content that goes against our policies on violating violent events or child sexual exploitation. For example, when detecting scams we might process an image of someone in a celeb-bait ad alongside the public figure whose image is being misused.
The purpose of using this technology is to try to match the faces in the image to the face of the public figure, or the true account holder, or perpetrators of violating violent events or child sexual exploitation. We don’t focus on identifying the other people in the image.
We don't retain the embeddings related to these other people: they are deleted from our systems after the comparison has taken place. We also use other technical measures to limit the number of images that we use this technology on. We don't use the embeddings for anything other than helping to ensure the effective detection of, and enforcement against, scams and content which goes against our policies on violating violent events or child sexual exploitation.
The Meta Privacy Policy explains your rights and how we use your information. These measures are part of our efforts to promote safety, integrity and security on our platforms. It is in our interest to continue to combat harmful behaviour, including detecting, preventing and addressing spam, suspicious activity, breaches of our terms or policies, harmful or inappropriate content and other bad experiences on our platforms.
If you are a public figure and want to learn more about how we use facial recognition technology to protect your image from misuse, see the Help Center article.
If you are a Facebook or Instagram user and want to learn more about how we use facial recognition technology to help you regain access to your account, see the Help Center article.