Learn more about digitally created or altered content
Some images, videos or audio on Facebook, Instagram or Threads may have been digitally created or altered, including with artificial intelligence (AI). These posts are often funny or harmless, but they can sometimes be misleading.
Digitally created or altered content
For years, people have been able to use computer programs to digitally create or alter content. For example, they could crop photographs, sharpen images, superimpose audio onto a video, or slow videos down or speed them up. But with advances in AI, the speed and ease with which people can digitally create or alter images, video and audio have increased.
How AI is used to make realistic images, video or audio
AI is a computer’s ability to learn and perform tasks in ways similar to how a human would. AI is not a new technology. It’s used in smart speakers that recognize voice commands, navigation apps and even robot vacuums.
Advancements in AI have made it easier to create realistic images, video and audio. Today, you can give certain AI tools a simple instruction, such as “make an image of a dog surfing,” and those tools can produce an image, video or audio clip based on that instruction.
For example, people have used AI tools to make realistic-looking images of Pope Francis wearing a white puffer jacket and of Tom Cruise supposedly standing next to stunt doubles.
How to tell if content you see is AI-generated or edited
You may sometimes have questions about whether a particular post was digitally created or edited, especially if those edits were made using AI. Here are some ways you might be able to tell:
- Consider the source. Check whether the website, person or organization sharing the content is one you are familiar with and trust.
- Note details that look or sound unnatural. AI tools sometimes have difficulty producing details accurately, and those flaws can indicate that content is synthetic. For example, an image may show someone with seven fingers on one hand, or a person’s voice may not match the movement of their lips.
- Confirm with other online sources. Look for an article, website or other resource you trust that discusses what you see or hear in the content. If you can't find a reputable source online, that may be an indication that the claim is not true, or that the image, video or audio was created using AI.
- See if there is a label. Some AI tools that generate content add a label, watermark or other visual marker to the images or other content they produce to indicate the use of AI. For example, Meta includes a visible label on all images created using its own AI tool, Imagine. Note: Content may still have been created with AI even if it doesn’t include a label or visual marker.
Here is another example of a digitally created image that has some key signs of being made with AI. The image supposedly shows an explosion outside the Pentagon building in the United States.
Here are some signs this image was digitally created:
- The white columns on the front of the building are not the same size.
- The grass unnaturally melds into the sidewalk.
- The fence blends into the traffic barriers.
- The lamppost in front of the fence also bends unnaturally.
You could also check reputable news sources to see whether they are reporting that the explosion actually happened.
What Meta is doing about misinformation
You can read more about how Meta addresses misinformation in the Transparency Center. This includes working with independent, third-party fact-checking partners who review and rate false and misleading content on our platforms. Meta is also working with industry experts, peer companies and AI creators to develop common standards for identifying and labeling more AI-generated content across the internet.