How Meta Is Preparing for the EU’s 2024 Parliament Elections

By Marco Pancini, Head of EU Affairs, Meta

Takeaways

  • As the election approaches, we’ll activate an Elections Operations Center to identify potential threats and put mitigations in place in real time.
  • We have the largest fact-checking network of any platform and are currently expanding it with 3 new partners in Bulgaria, France, and Slovakia. 
  • We have committed to taking a responsible approach to new technologies like GenAI, and signed on to the tech accord to combat the spread of deceptive AI content in elections.

Meta has been preparing for the EU Parliament elections for a long time. Last year, we activated a dedicated team to develop a tailored approach to help preserve the integrity of these elections on our platforms.

While each election is unique, this work drew on key lessons we have learned from more than 200 elections around the world since 2016, as well as the regulatory framework set out under the Digital Services Act and our commitments in the EU Code of Practice on Disinformation. These lessons help us focus our teams, technologies, and investments so they will have the greatest impact.

Since 2016, we’ve invested more than $20 billion in safety and security and quadrupled the size of our global team working in this area to around 40,000 people. This includes 15,000 content reviewers who review content across Facebook, Instagram and Threads in more than 70 languages — including all 24 official EU languages.

Over the last eight years, we’ve rolled out industry-leading transparency tools for ads about social issues, elections or politics, developed comprehensive policies to prevent election interference and voter fraud, and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation. More recently, we have committed to taking a responsible approach to new technologies like GenAI. We’ll be drawing on all of these resources in the run-up to the election.

As the election approaches, we’ll also activate an EU-specific Elections Operations Center, bringing together experts from across the company from our intelligence, data science, engineering, research, operations, content policy and legal teams to identify potential threats and put specific mitigations in place across our apps and technologies in real time.

Here are three key areas our teams will be focusing on:

Combating Misinformation

We remove the most serious kinds of misinformation from Facebook, Instagram and Threads, such as content that could contribute to imminent violence or physical harm, or that is intended to suppress voting.

For content that doesn’t violate these particular policies, we work with independent fact-checking organisations — 26 partners across the EU covering 22 languages — who review and rate content. We are currently expanding the programme in Europe with three new partners in Bulgaria, France, and Slovakia.

When content is debunked by these fact-checkers, we attach warning labels to it and reduce its distribution in Feed so people are less likely to see it. Between July and December 2023, for example, over 68 million pieces of content viewed in the EU on Facebook and Instagram carried fact-checking labels. When a fact-check label is placed on a post, 95% of people don’t click through to view it.

Ahead of the election period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections, because we recognize that speed is especially important during breaking news events. We’ll use keyword detection to group related content in one place, making it easy for fact-checkers to find. Our fact-checking partners are also being onboarded to our new research tool, Meta Content Library, which has a powerful search capability to support them in their work.
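The keyword-detection step described above can be illustrated with a simple matching pass. The keyword list and posts below are hypothetical examples for illustration only; the actual system Meta uses is not public.

```python
# Illustrative sketch: group posts that match election-related keywords
# so reviewers can find them in one place. Keywords and posts are
# hypothetical; this is not Meta's real detection system.
ELECTION_KEYWORDS = {"ballot", "polling station", "eu parliament", "vote"}

def group_election_content(posts):
    """Return the subset of posts whose text matches any election keyword."""
    grouped = []
    for post in posts:
        text = post["text"].lower()
        if any(kw in text for kw in ELECTION_KEYWORDS):
            grouped.append(post)
    return grouped

posts = [
    {"id": 1, "text": "Where is my polling station for the EU Parliament vote?"},
    {"id": 2, "text": "Great pasta recipe for dinner tonight"},
    {"id": 3, "text": "Ballot counting delayed in several regions"},
]
queue = group_election_content(posts)
print([p["id"] for p in queue])  # [1, 3]
```

In practice such a pass would be one coarse first filter; anything it surfaces would still go to human fact-checkers for review and rating.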

We don’t allow ads that contain debunked content. We also don’t allow ads targeting the EU that discourage people from voting in the election, call into question the legitimacy of the election, contain premature claims of election victory, or call into question the legitimacy of the election’s methods and processes, as well as its outcome. Our ads review process has several layers of analysis and detection, both before and after an ad goes live, which you can read more about here.

We are working with the European Fact-Checking Standards Network (EFCSN) on a project to help train fact-checkers across Europe on the best way to evaluate AI-generated and digitally altered media, and on a media literacy campaign to raise public awareness of how to spot that type of content. We will begin accepting EFCSN certification as a prerequisite for consideration in the Meta fact-checking programme in Europe, in recognition of the strong standards it has established for the European fact-checking community. Meta is also supporting the European Disability Forum to run a media literacy campaign ahead of the EU elections focusing on inclusion.

Tackling Influence Operations

We define influence operations as coordinated efforts to manipulate or corrupt public debate for a strategic goal – what some may refer to as disinformation – and which may or may not include misinformation as a tactic. They can vary from covert campaigns that rely on fake identities (what we call coordinated inauthentic behaviour), to overt efforts by state-controlled media entities. 

To counter covert influence operations, we’ve built specialised global teams to stop coordinated inauthentic behaviour, and we have investigated and taken down over 200 of these adversarial networks since 2017, something we publicly share as part of our Quarterly Threat Reports. This is a highly adversarial space: deceptive campaigns we take down continue trying to come back and evade detection by us and other platforms, which is why we continuously take action as we find further violating activity. In preparation, we ran a dedicated session focused on threats specific to the EU Parliament elections.

We also label state-controlled media on Facebook, Instagram and Threads so that people know when content is from a publication that may be under the editorial control of a government. After we applied new and stronger enforcement to Russian state-controlled media, including blocking them in the EU and globally demoting their posts, the most recent research by Graphika shows posting volumes on their pages went down 55% and engagement levels were down 94% compared to pre-war levels, while “more than half of all Russian state media assets had stopped posting altogether.”

Countering the Risks Related to the Abuse of GenAI Technologies

Our Community Standards and Ad Standards apply to all content, including content generated by AI, and we will take action against this type of content when it violates these policies. AI-generated content is also eligible to be reviewed and rated by our independent fact-checking partners. One of the rating options is Altered, which includes “faked, manipulated or transformed audio, video, or photos.” When content is rated as such, we label it and down-rank it in Feed, so fewer people see it. We also don’t allow an ad to run if it’s been debunked.
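A minimal sketch of the enforcement logic described above: a post that receives a debunked rating gets a warning label, reduced distribution, and loses ad eligibility. The rating name “Altered” comes from the post; the other rating names, field names, and down-ranking factor here are hypothetical illustrations.

```python
# Hypothetical sketch of fact-check enforcement, not Meta's actual code.
# A debunked rating triggers: a warning label, reduced Feed distribution,
# and ineligibility to run as an ad.
DEBUNKED_RATINGS = {"False", "Altered", "Partly False"}  # names other than "Altered" are illustrative

def apply_fact_check(post, rating):
    """Attach enforcement metadata to a post based on a fact-checker's rating."""
    if rating in DEBUNKED_RATINGS:
        post["label"] = rating
        post["feed_rank_multiplier"] = 0.1  # hypothetical down-ranking factor
        post["eligible_as_ad"] = False
    return post

post = apply_fact_check({"id": 42}, "Altered")
print(post["eligible_as_ad"])  # False
```

The key design point the post describes is that debunked content is demoted and labelled rather than removed outright, unless it violates a removal policy such as incitement to violence or voter suppression.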

For content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI. We already label photorealistic images created using Meta AI, and we are building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads.

We will also be adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so people have more information and context.

Advertisers who run ads related to social issues, elections or politics with Meta also have to disclose, in certain cases, if they use a photorealistic image or video, or realistic-sounding audio, that has been created or altered digitally, including with AI. That is in addition to our industry-leading ad transparency, which includes a verification process to prove an advertiser is who they say they are and that they live in the EU; a “Paid for by” disclaimer to show who’s behind each ad; and our Ad Library, where everyone can see what ads are running, see information about targeting and find out how much was spent on them. Between July and December 2023, we removed 430,000 ads across the EU for failing to carry a disclaimer.

Since AI-generated content appears across the internet, we’ve also been working with other companies in our industry on common standards and guidelines. We’re a member of the Partnership on AI, for example, and we recently signed on to the tech accord designed to combat the spread of deceptive AI content in the 2024 elections. This work is bigger than any one company and will require a huge effort across industry, government, and civil society.

For more information about how Meta approaches elections, visit our Preparing for Elections page.
