Our work on privacy is underpinned by internal governance structures that embed privacy and data-use standards across the company’s operations.
As we continue to integrate privacy across the company, we’ve embedded privacy teams within product groups, providing dedicated expertise that deepens the understanding of privacy considerations in each group. These teams enable front-line ownership of privacy responsibilities across our products. We have a robust, collaborative process for infusing privacy requirements into product roadmaps across Meta: twice a year, product teams devote an average of 6-8 weeks to privacy roadmapping, dedicating a portion of their roadmaps to building new and improved privacy protections.
Led by Michel Protti, Chief Privacy and Compliance Officer, Product, the Product Compliance and Privacy team is made up of dozens of teams, both technical and non-technical, focused on guiding the company’s privacy strategy.
The Product Compliance and Privacy Team is at the center of our company’s efforts to maintain a comprehensive privacy program. Its mission, to instill responsible practices and enable innovation across Meta, guides this work: ensuring that people understand how Meta products and services use their data, and trust us to use it responsibly.
The Product Compliance and Privacy Team is just one organization among many across the company that is responsible for privacy. There are thousands of people in different organizations and roles across Meta, including public policy, legal and product teams, who are working to embed privacy into all facets of our company operations. Getting privacy right is a deeply cross-functional effort, and we believe everyone at Meta is responsible for that effort.
Led by Erin Egan, Vice President and Chief Privacy Officer, Policy, the Privacy and Data Policy team leads our engagement in the global public discussion around privacy, including new regulatory frameworks, and ensures that feedback from governments and experts around the world is considered in our product design and data use practices.
To do so, the Privacy and Data Policy team consults with these groups through a variety of mechanisms. In 2024, this included:
We support and participate in industry conferences and other events that promote privacy and share advances in privacy practices. In 2024, we shared our advances in privacy-enhancing technologies at the USENIX Conference on Privacy Engineering Practice and Respect, and governance and accountability best practices at IAPP conferences with privacy and governance professionals around the world. These are opportunities for us to both share with and learn from experts, ensuring we take their perspectives into account as we build products and policies.
We also partner with various experts via advisory groups, which we expanded in 2024. These groups are created to inform Meta’s decision making on the most novel and complex questions we face as we build innovative technologies. In 2024, we consulted with groups focused on:
The Privacy Legal team is embedded in the design and ongoing execution of our program and advises on legal requirements during the course of our privacy review process.
The Privacy and Product Compliance Committee is an independent committee of our Board of Directors that meets at least quarterly and has responsibility for privacy and product compliance oversight. The Committee is composed of independent directors with a wealth of experience serving in similar oversight roles. At least once per quarter, they receive briefings on, among other things, the global policy landscape, the state of our privacy program, and the status of the independent third-party assessment of our privacy program.
Internal Audit brings independent assurance on the overall health of our privacy program and the supporting control framework.
Part of ensuring that everyone understands their role in protecting privacy at Meta is driving continuous privacy learning and education that spans training and internal privacy awareness campaigns.
A core component of our privacy education approach is delivered through our privacy training. This training covers the foundational elements of privacy and is designed to help everyone at Meta feel empowered to identify privacy risks and make responsible decisions that help mitigate them, so we can all take pride not only in what we build, but in how we build it. One key theme is that Meta personnel are the first line of defense in guarding against and mitigating privacy risks.
Delivered in an eLearning format, both our annual privacy training and our courses for new hires and new contingent workers provide scenario-based examples of privacy considerations aligned with our business operations, and include an assessment to test understanding of the relevant privacy concepts. These trainings are updated and redeployed annually so that they cover current developments alongside core concepts.
Alongside our foundational required privacy training, we also maintain a catalog of all known privacy training deployed across Meta, spanning a range of topics and risks.
Another way we drive company-wide awareness around privacy is through regular communication to employees. In addition to our privacy training courses, we deliver ongoing privacy content through internal communication channels, updates from Meta’s leadership, internal Q&A sessions, and a dedicated Privacy Day.
We leverage tentpole moments like Data Privacy Day to drive cross-company focus on privacy, featuring talks from Meta leaders, and highlighting key privacy concepts and priorities through engaging content and events.
We have a dedicated team whose job is to help ensure the company complies with global privacy and data regulations. To achieve this goal, we've developed a comprehensive, end-to-end process that helps us identify and respond to external regulatory obligations. This process provides the company with a coordinated view into all incoming regulatory obligations and a predictable process to support our responses.
We’ve created our Privacy Risk Management program to identify and assess privacy risks related to how we collect, use, share, and store user data. We leverage this process to identify risk themes, enhance our privacy program, and prepare for future compliance initiatives.
We’ve designed safeguards, including processes and technical controls, to address privacy risks. As a part of this effort, we conduct internal evaluations on both the design and effectiveness of the safeguards for mitigating privacy risk.
We’ve established a centralized Issue Management function to facilitate self-identification and remediation of privacy issues. This process spans the privacy issue management lifecycle, from intake and triage through remediation planning to closure with evidence.
We’ve established a privacy red team whose role is to proactively test our processes and technology to identify potential privacy risks. The Privacy Red Team assumes the role of external or internal parties attempting to circumvent our privacy controls and safeguards, which helps us proactively identify areas where we can improve our control environment.
No matter how robust our mitigations and safeguards, we also need a process to (1) identify when an event potentially undermines the confidentiality, integrity, or availability of data for which Meta is responsible, (2) investigate those situations, and (3) take any needed steps to address gaps we identify.
Our Incident Management program operates globally to oversee the processes by which we identify, assess, mitigate, and remediate privacy incidents. Although the Privacy and Data Practices team leads the incident management process, privacy incidents are everyone’s responsibility at Meta. Teams from across the company, including legal and product teams, play vital roles. We continue to invest time, resources, and energy in building a multi-layered program that is constantly evolving and improving, and we highlight three components of our approach below.
We take a layered approach to protecting people and their information—which includes implementing safeguards designed to catch bugs proactively, before they can become a problem. Given the scale at which we operate, we have invested heavily in building and deploying a wide range of automated tools that are intended to help us identify and remediate potential privacy incidents as early and quickly as possible. These automated systems are designed to detect incidents in real time to facilitate rapid response.
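To make this concrete, here is a minimal sketch of one common detection pattern: compare each system’s data-access rate against its recent baseline and flag sharp deviations for human triage. All names and thresholds below are hypothetical illustrations, not Meta’s actual tooling.

```python
import statistics
from collections import defaultdict, deque

WINDOW = 60        # recent samples kept per service
THRESHOLD = 4.0    # deviation (in standard deviations) considered anomalous
MIN_BASELINE = 10  # samples needed before we start judging

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(service: str, accesses_per_min: float) -> bool:
    """Record a sample and return True if it looks anomalous."""
    samples = history[service]
    anomalous = False
    if len(samples) >= MIN_BASELINE:
        mean = statistics.fmean(samples)
        stdev = statistics.pstdev(samples) or 1.0  # avoid zero-division on flat baselines
        anomalous = abs(accesses_per_min - mean) > THRESHOLD * stdev
    samples.append(accesses_per_min)
    return anomalous

# A steady workload, then a sudden spike worth routing to a human reviewer.
for _ in range(30):
    observe("profile-service", 100.0)
assert observe("profile-service", 101.0) is False
assert observe("profile-service", 5000.0) is True
```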
Of course, no matter how capable our automated systems become, the oversight and diligence of our employees always plays a critical role in helping to proactively identify and remediate incidents. Our engineering teams regularly review our systems to identify and fix incidents before they can impact people.
Since 2011, we have operated a bug bounty program in which external researchers help improve the security and privacy of our products and systems by reporting potential security vulnerabilities to us. The researchers’ findings may lead to further investigation and, if necessary, remediation by Meta. The program helps us scale detection efforts and fix issues faster to better protect our community, and the bounties we pay to qualifying participants encourage more high-quality security research.
Over the past 10 years, more than 50,000 researchers have joined this program, and around 1,500 researchers globally have been awarded bounties.
While we’ve adopted a number of protections to guard against privacy incidents like unauthorized access to data, if an incident does occur, we believe that transparency is an important way to rebuild trust in our products, services, and processes. Accordingly, beyond fixing and learning from our mistakes, our Incident Management program includes steps to notify people where appropriate, such as posting in our Newsroom or on our Privacy Matters blog about issues impacting our community, and working with law enforcement or other officials to address incidents we find.
Third parties are external partners who do business with Meta but aren’t owned or operated by Meta. These third parties typically fall into two major categories: those who provide a service for Meta (like vendors who provide website design support) and those who build their businesses around our platform (like app or API developers). To mitigate privacy risks posed by data and personal information exchanged with third parties, we developed a dedicated third party oversight and management program, which is responsible for overseeing third party risks and implementing appropriate privacy safeguards.
We’ve also created a third-party privacy assessment process for service providers to assess and mitigate privacy risk. Our process requires that these service providers are also bound by contracts containing privacy protections. Their risk profile determines how they are monitored and reassessed and, where appropriate, which enforcement actions we take in response to violations, up to and including termination of the engagement.
As part of Meta’s ongoing commitment to fostering a secure and privacy-centric developer ecosystem, we’ve launched new resources and initiatives designed to support responsible data use and reinforce our platform integrity.
Responsible Platform Initiatives Hub: We’ve launched a new Responsible Platform Initiatives Hub to centralize Meta’s approach to privacy and security with our third party developers, reinforcing our commitment to safe and responsible data practices across our platform. This hub provides resources and comprehensive guidance on our compliance-related programs, responsible development practices, and the tools that help developers align with Meta’s privacy standards. Through the hub, developers can access necessary resources related to our policies and responsible platform practices, supporting a safer and more transparent ecosystem.
Data Access and Renewal: We introduced a new consolidated process to request, manage and renew developer access to data, reinforcing our commitment to simplifying the platform and building new developer tools. Under data access renewal, developers undergo a single annual assessment to confirm continued compliance with Meta’s privacy and data protection standards and Platform Terms. This process requires developers to explicitly reaffirm their adherence to Meta’s data protection standards and Terms, and to outline how and why each type of data is used. By consolidating the developer experience, we aim to streamline how developers meet our requirements and deliver reliable security and privacy to their end users.
Our anti-scraping team is dedicated to detecting, investigating and blocking patterns of behavior associated with unauthorized scraping. Scraping is the automated collection of data from a website or app and can be either authorized or unauthorized. Using automation to access or collect data from Meta’s platforms without our permission is a violation of our terms of service.
We continue to invest in infrastructure and tools that make it harder for scrapers to collect data from our services, and harder to profit from any data they do collect. Examples of these investments include rate limits and data limits. Rate limits cap the number of times anyone can interact with our products in a given amount of time, while data limits keep people from getting more data than they should need to use our products normally.
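As an illustration of the general technique (not Meta’s production implementation), a rate limit is often built as a token bucket: each client earns tokens at a steady rate, each request spends one, and requests beyond the budget are rejected or throttled.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow roughly `rate` requests
    per second, with short bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: reject or throttle this request

# Example: 5 requests/second steady state, bursts of up to 10.
limiter = TokenBucket(rate=5, capacity=10)
print([limiter.allow() for _ in range(12)])  # first 10 True, then False
```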
We moved to internally generated user and content identifiers after observing that unauthorized scraping often involves guessing or purchasing such identifiers. We also use new pseudonymized identifiers that help deter unauthorized data scraping by making it harder for scrapers to guess, connect, and repeatedly access data.
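One standard way to construct such identifiers (sketched below as an illustration, not as Meta’s actual scheme) is to derive a keyed hash of the internal ID per context, so the exposed identifier is stable where it needs to be but cannot be enumerated or correlated across surfaces without the secret key. The key handling and context names here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; in practice this would live in a
# key-management system, never in source code.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymous_id(internal_id: int, context: str) -> str:
    """Derive a stable, non-guessable external identifier for one context.

    The same internal ID maps to different external IDs in different
    contexts, so scraped identifiers can't be enumerated sequentially
    or joined across surfaces."""
    message = f"{context}:{internal_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]

# The same account surfaces under unrelated identifiers:
print(pseudonymous_id(12345, context="photos"))
print(pseudonymous_id(12345, context="comments"))
```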
We’ve blocked billions of suspected unauthorized scraping actions per day across Facebook and Instagram, and we’ve taken a variety of actions against unauthorized scrapers including disabling accounts and requesting that companies hosting scraped data delete it.
The Privacy Review process is a central part of developing new and updated products, services, and practices at Meta. Through this process, we assess how data will be used and protected as a part of new or updated products, services and practices. We review an average of 1,400 products, features and data practices per month across the company before they ship to assess and mitigate privacy risks.
As a part of the process, a cross-functional team of privacy experts evaluates potential privacy risks associated with a project and determines if there are any changes that need to happen before project launch to mitigate those risks. If there is a disagreement on the assessment of applicable risks or the proposed product mitigations, the process requires teams to escalate to product and policy leadership and ultimately the CEO for further evaluation and decision.
The development of our new or modified products, services or practices through the Privacy Review process is guided by our internal privacy expectations, which include:
We’ve also invested in verification reviews and a centralized platform to support operating the Privacy Review process at scale:
We've made significant strides in enhancing our Privacy Review process through technology investment. Specifically, we’ve introduced a product-level decision framework that has improved consistency and standardization across our reviews. This framework has enabled us to automate and streamline the review process by leveraging past reviews on similar projects, and embed privacy controls directly into our engineering tools. These changes have enabled us to review over 1,400 launches monthly while maintaining consistent privacy standards.
We continue to advance and build our privacy-aware infrastructure (PAI): innovative and efficient constructs embedded in Meta infrastructure solutions that enable engineers to more easily address complex privacy requirements as they build products. PAI embeds privacy rules directly into code, helping ensure requirements are automatically respected. For example, we built Policy Zones, which apply across our infrastructure to enforce restrictions on data, such as using it only for allowed purposes, and provide strong guarantees for limiting the purposes of its processing. PAI continues to be crucial in enforcing complex purpose-limitation scenarios while ensuring scalability, reliability, and a streamlined developer experience.
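As a conceptual sketch only (Policy Zones’ real implementation is internal to Meta and far more extensive), purpose limitation can be modeled by tagging data with the purposes it may be processed for and checking every use against those tags, failing closed:

```python
from dataclasses import dataclass

class PurposeViolation(Exception):
    """Raised when data is used for a purpose it isn't annotated for."""

@dataclass(frozen=True)
class TaggedData:
    """A value annotated with the purposes it may be processed for."""
    value: object
    allowed_purposes: frozenset

def use(data: TaggedData, purpose: str) -> object:
    """Gate every access on a declared purpose; deny by default."""
    if purpose not in data.allowed_purposes:
        raise PurposeViolation(f"{purpose!r} is not an allowed purpose")
    return data.value

location = TaggedData("51.5072N,0.1276W", frozenset({"safety_checks"}))
use(location, "safety_checks")      # permitted
try:
    use(location, "ads_targeting")  # blocked: purpose not allowed
except PurposeViolation as err:
    print(err)
```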
We’re proactively reducing the amount of user data that we collect and use by deploying innovative tools and technology across Meta. We continue to invest in privacy-enhancing technologies (PETs), technologies based on advanced cryptographic and statistical techniques that help minimize the data we collect, process and use, and we have been working to open source this work where useful for the broader ecosystem, including PETs for AI through PyTorch. Additionally, our investments in PETs helped enable a new cryptographic security feature on WhatsApp, based on key transparency, that helps verify that your connection and conversation are secure. This feature reduces the possibility of a third party impersonating the person or business a user wants to connect and share encrypted messages with. It does so by checking the validity of public keys in the conversation against a server-side directory that stores public keys alongside user information, and by providing a publicly available, privacy-preserving audit record that lets anyone verify that data has not been deleted or modified in the directory.
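The core idea behind key transparency is sketched below in simplified form (WhatsApp’s Auditable Key Directory is considerably more sophisticated): the directory publishes keys as leaves of a Merkle tree, and a client can verify that the key it received really is the one in the public directory by checking a short inclusion proof against the published root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Check that `leaf` is in the tree committed to by `root`.

    `proof` lists the sibling hash at each level from leaf to root,
    marked 'L' or 'R' for which side the sibling sits on."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# Tiny four-entry directory: each leaf is a (user, public key) record.
leaves = [h(f"user{i}:pubkey{i}".encode()) for i in range(4)]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)  # the value the directory publishes for auditing

# Prove user2's key is in the directory: sibling leaf 3, then subtree l01.
proof = [(leaves[3], "R"), (l01, "L")]
assert verify_inclusion(b"user2:pubkey2", proof, root)
```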
Similarly, we developed a framework for code and asset removal that guides engineers through deprecating a product safely and efficiently. Deprecating products is a complex undertaking involving internal and external dependencies, including dependencies on other Meta products that may not themselves be in scope for removal. To address this, our Systematic Code and Asset Removal Framework (SCARF) includes a workflow management tool that saves engineers time by identifying dependencies as well as the correct order of tasks for cleaning up a product. In addition, SCARF includes subsystems for safely removing dead code as well as unused data types.
SCARF powers thousands of human-led deprecation projects alongside the millions of code and data assets it has cleaned automatically. It is additionally useful for our privacy teams, who use the tool to monitor progress of ongoing product deprecations and ensure that they are completed in a timely manner.
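Dependency-ordered cleanup of this kind boils down to a topological sort over the asset graph. The sketch below uses Python’s standard graphlib with a hypothetical product; SCARF itself is internal to Meta and far more involved.

```python
from graphlib import TopologicalSorter

# Hypothetical deprecation: each asset maps to what must be removed first.
# You can't delete a service while a public API still calls it, and you
# can't drop the data store while the service still reads from it.
remove_after = {
    "backend_service": {"public_api"},       # callers go first
    "user_data_store": {"backend_service"},  # then the service
    "dead_code":       {"backend_service"},
}

order = list(TopologicalSorter(remove_after).static_order())
print(order)
# e.g. ['public_api', 'backend_service', 'user_data_store', 'dead_code']
```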
Since 2016, personal conversations on WhatsApp have been protected by end-to-end encryption, which means no one outside of a user’s chats, not even WhatsApp or Meta, can read or listen to them. In 2023, we began to roll out default end-to-end encryption for all personal one-to-one chats and calls on Messenger, making them even more private and secure. This ensures that no one sees your messages except you and the people you’re chatting with.
This has taken years to deliver because we’ve taken the time to get it right. Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up. Enabling end-to-end encryption on Messenger meant fundamentally rebuilding many aspects of the application protocols to improve privacy, security, and safety while simultaneously maintaining the features that have made Messenger so popular. Our approach was to leverage prior learnings from both WhatsApp and Messenger’s Secret Conversations (Messenger’s early optional end-to-end encryption offering), and then iterate on our most challenging problems like multi-device capability, feature support, message history, and web support. Along the way, we introduced new privacy, safety and control features like app lock and delivery controls that let people choose who can message them, while improving existing safety features like report, block and message requests. On WhatsApp, where all personal chats and calls are end-to-end encrypted by default, we launched a novel encrypted storage system, built using WhatsApp’s Auditable Key Directory, that gives users the ability to securely save and restore their WhatsApp contacts.
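For readers unfamiliar with the model, the sketch below shows the basic shape of end-to-end encryption using the open-source PyNaCl library: keys are generated on each device, and the relaying server only ever sees ciphertext. This is a conceptual illustration only; Messenger’s actual protocol is far more elaborate.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each participant generates a key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Any server relaying `ciphertext` sees only random-looking bytes;
# only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```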
At its core, end-to-end encryption is about protecting people’s communications, so they can feel safe expressing themselves with their friends and loved ones. We worked closely with outside experts, academics, advocates and governments to identify risks and build mitigations to ensure that privacy and safety go hand-in-hand. We commissioned an independent human rights impact assessment and designed our products to prevent harm by design, offered robust user controls and invested in proactively identifying abuse.
Ray-Ban Meta glasses let you snap a photo or record a video from your unique point of view, listen to music or take a call—all without having to pull out your phone. Ray-Ban Meta glasses have been redesigned with a higher quality camera, improved audio and microphone systems, and new features, such as livestreaming and built-in Meta AI, so you don’t have to choose between capturing the moment and experiencing it.
Ray-Ban Meta glasses were built with privacy at their core, and serve as a clear proof point of our commitment to responsible innovation and privacy by design. We’ve incorporated stakeholder feedback, gathered from the moment we launched Ray-Ban Stories, in meaningful and tangible ways.
We launched Instagram Teen Accounts, a new experience for teens, guided by parents. Teen Accounts have built-in protections that limit who can contact teens and the content they see. Teens are automatically placed into Teen Accounts, and teens under 16 need a parent’s permission to make any of the default built-in protections less strict. Teen Accounts will also be coming to other Meta platforms.
Supervision Features for Parents: While parents can’t read their teen’s messages, they can see who their teen has messaged in the past seven days.
We launched new generative AI features, including AI stickers, image editing with AI, our Meta AI assistant available across our apps, and 28 new AI characters. We’ve updated the Privacy Center guide on Generative AI and other transparency resources so people have information on how we build our AI models, how our features work, and what options and data rights they have in their region.
Our “Why am I seeing this?” tool continues to help people understand why they’re seeing the ads they do in Facebook and Instagram feeds. One key update was summarizing information into topics about how activity both on and off our technologies, such as liking a post on a friend’s Facebook page or visiting a sports website, may inform the machine learning models we use to shape and deliver ads. We also introduced new examples and illustrations explaining how our machine learning models connect various topics to show relevant ads. Additionally, we introduced more ways for users to find our ads controls, making Ads Preferences accessible from additional pages in the “Why am I seeing this ad?” tool.
Meta Content Library and Content Library API are research tools that provide qualified individuals with access to publicly-available content from Facebook, Instagram and Threads. Individuals can search, explore and filter the data through either a graphical user interface or a programmatic API.
Accessible data includes posts from Facebook Pages, groups, events, and profiles, posts from Instagram accounts, and posts from Threads profiles. Details about the content, such as the number of reactions, shares, comments, and post view counts, are available as well.
Meta partnered with the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan to share public data from Meta technologies in a responsible, privacy-preserving way. This partnership is enabled through ICPSR’s industry-leading Social Media Archive (SOMAR) initiative. SOMAR independently processes and reviews applications for access to Meta Content Library and Content Library API.
Our work to communicate transparently includes providing external education to improve people’s understanding and awareness of our practices and ensuring information is accessible and easy to find.
To provide greater transparency and control to people, we’ve developed a number of privacy tools for people to understand what they share and how their information is used including:
“Getting privacy right is a continual, collective investment across our company, and is the responsibility of everyone at Meta to advance our mission.” —Michel Protti, Chief Privacy and Compliance Officer, Product
Protecting users’ data and privacy is essential to our business and our vision for the future. To do so, we’re continually refining and improving our privacy program and our products, as we respond to evolving expectations and technological developments—working with policy makers and data protection experts to find solutions to unprecedented challenges—and sharing our progress as we do.