Civic Integrity

We’re focused on serving the public conversation.

We are committed to protecting the integrity of the public conversation.

What we’re doing

Our service shows the world what’s happening, democratizes access to information and — at its best — provides people with insights into a diversity of perspectives on critical issues, all in real time. We’re always enhancing our safety policies, developing tools and resources for finding and stopping abuse, and taking action against activity that violates the Twitter Rules.

Protecting elections, globally

We take our learnings from every recent election and use them to improve our election integrity work worldwide. We’ve been building on our efforts to protect the public conversation and enforce our policies against deliberate attempts to mislead people. Partnerships with industry peers, as well as local, state, and national officials, have been critical to our success.

Fighting against malicious activity

We want to ensure that people's experience on Twitter is safe, secure, and informative. And while our open and real-time environment is a powerful antidote to the intentional spread of false information, we’re also taking proactive steps to stop abuse, spam, and manipulation before they happen.

We’re not in this alone

We work alongside political parties, researchers, experts, and election commissions and regulators around the world, all while investing in our proactive detection and enforcement efforts on the platform. We also stay in touch with national parties and state election officials to be sure they know how to report suspicious activity, abuse, and rule violations to us. Key election stakeholders also have channels to directly escalate any issues or concerns.


US elections

What we saw in 2018

Most Tweeted-about midterm elections

The 2018 US midterm elections were the most Tweeted-about midterm elections in history. Between the first primaries in March and Election Day, more than 99 million Tweets were sent. The overwhelming majority of these Tweets came from people who expressed their views on issues and candidates. Americans also Tweeted to encourage neighbors, friends, family, and complete strangers to register to vote.

Foreign information operations

Compared to 2016, we identified much less platform manipulation originating from bad-faith actors located in countries outside of the US. That said, as part of our ongoing investigations, we found limited operations potentially affiliated with Iran, Venezuela, and Russia. Thanks to the increasingly robust nature of our technology and internal mechanisms for spotting platform manipulation, the majority of these accounts were proactively suspended before Election Day.

As always, attribution is difficult — it takes time and significant resourcing to properly investigate. The datasets we removed are shared in the full retrospective review, and we’ve added them all to our public archive to empower further research by experts in the field. You can read our full 2018 US midterm review report here.

Domestic attempts at voter suppression

To protect this conversation, we took several proactive measures to combat malicious content posted by bad-faith actors: We developed new policies, created a dedicated partner escalation path, and taught our teams how to be most effective in the face of new threats.

In this context, we removed Tweets posted with the intent of deterring eligible voters from voting. This included a variety of problematic content — everything from voter intimidation to the spread of false information about voting or voter registration. The number of violations was relatively small. During the 2018 US midterms, we took enforcement action on nearly 6,000 Tweets we identified as voter suppression attempts; many of these originated in the US.

Ongoing work since the 2018 midterms

We’ve made significant progress since the 2018 US election to address, mitigate, and prevent future attempts to undermine the integrity of online conversation regarding elections and the democratic process. We now prohibit state-controlled media from advertising globally, have made significant progress in our proactive approach to platform manipulation, and have expanded our policies in the face of emerging threats.

Updating and enforcing our election integrity policy

Ahead of the 2018 US midterms, we updated the Twitter Rules around several key issues impacting the integrity of elections across the globe. The rules address: (1) fake accounts engaged in a variety of malicious behaviors, (2) accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules, and (3) the distribution of hacked materials that contain private information or trade secrets, or could put people in harm’s way.

Additionally, Twitter took a number of steps to monitor our service internally, engaging in proactive monitoring, detection, and surfacing of suspicious behavior related to the election. These strategic efforts allowed us to gain visibility into metrics such as Tweet volumes, hashtag tracking, and suspicious behavior at the individual account level. Over the course of the 2018 election, our monitoring resulted in more than 200 account suspensions and the removal of more than 5,500 Tweets in violation of our rules. From 2018 into the 2020 elections, we’re building on these successful efforts and expanding their reach and usefulness in protecting the public conversation on Twitter.

Information operations archive

In October 2018, we launched the first archive of potential foreign state-backed information operations we identified on Twitter. It is our fundamental belief that these accounts should be made public and searchable so the public, governments, the media, and researchers can investigate, learn from these tactics, and build media literacy capacities for the future. It’s now the largest such archive in the industry. Twitter continues to engage in intensive efforts to identify and combat state-sponsored attempts to abuse social media for manipulative and divisive purposes.


Partnerships

Prior to Election Day 2018, we onboarded partner organizations to the Partner Support Portal. Partner reports resulted in the removal of thousands of accounts and Tweets in violation of our rules. We collaborated with a number of nongovernmental organizations to promote voter registration, civic engagement, and media literacy, including Ballotpedia, Democracy Works, DoSomething, HeadCount, National Association of Secretaries of State, National Voter Registration Day, Rock the Vote, and TurboVote Challenge. You can find more here on the partnerships we’ve established globally to protect the conversation around elections and encourage voter participation and engagement.

Voter participation and engagement

Over the course of several elections, we’ve partnered with civil society groups and government entities to encourage voter participation and engagement.

  • We launched election labels to help people easily identify candidates for office.
  • We launched our #BeAVoter campaign to promote increased, informed participation in the 2018 midterm elections and to increase voter registration nationwide.
  • People on Twitter in the US saw an Election Day countdown in their Home timeline with information on how to find their polling place and who is on their ballot, via an initiative of the Voting Information Project (VIP).

What we’re doing to protect the Twitter conversation, encourage voter participation and engagement ahead of 2020

We have a cross-functional team focused on election integrity efforts that aims to foster an environment conducive to healthy, meaningful conversation on Twitter and address threats posed by hostile foreign and domestic actors.

Enforcing our election integrity policy

In early 2019, we strengthened our rules against deliberate attempts to mislead voters to now explicitly prohibit manipulating or interfering in the election process. This includes posting or sharing content that may suppress voter turnout or mislead people about when, where, or how to vote.

  1. We’re bringing back election labels, which we first launched during the 2018 US midterm election. These labels received overwhelmingly positive feedback from voters and candidates, and they played a prominent role in election conversation: In the week before Election Day 2018, people on Twitter saw labeled accounts approximately 100 million times each day, and 13% of US election conversation on Twitter included a Tweet with an election label.
  2. In December 2019, we began identifying candidates who qualify for the primary ballot for US House, US Senate, and gubernatorial races with a verified badge. For both primary candidate verification and election labels, we are partnering with Ballotpedia, as we did in 2018, to utilize their expertise in identifying the official campaign Twitter accounts of candidates. More about these tools here.

Approach to synthetic and manipulated media

In fall 2019, we announced our plan to seek input from around the globe on how we will address synthetic and manipulated media. We’ve defined synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that is intended to mislead people or that changes its original meaning. These are sometimes referred to as deepfakes or shallowfakes.

In early 2020, after gathering more than 6,500 responses from people around the world, we announced the launch of a new rule surrounding synthetic and manipulated media: You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.

We’ll use the following criteria to consider Tweets and media for labeling or removal under this rule:

  • Are the media synthetic or manipulated?
  • Are the media shared in a deceptive manner?
  • Is the content likely to impact public safety or cause serious harm?

If we believe that media shared in a Tweet have been significantly and deceptively altered or fabricated, we will provide additional context on the Tweet. This means we may:

  • Apply a label to the Tweet;
  • Show a warning to people before they Retweet or like the Tweet;
  • Reduce the visibility of the Tweet on Twitter and/or prevent it from being recommended; and/or
  • Provide additional explanations or clarifications, as available, such as a landing page with more context.

In most cases, we will take all of the above actions on Tweets we label.
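Read together, the three questions and the possible actions form a small triage table: removal is reserved for media that are both deceptively shared and likely to cause serious harm, while labeling covers the rest. The sketch below is a hypothetical illustration only; the function names, outcome tiers, and the exact mapping from criteria to outcomes are assumptions, not Twitter's actual enforcement logic.

```python
from dataclasses import dataclass

@dataclass
class MediaAssessment:
    """Answers to the three policy questions for a given Tweet's media."""
    is_synthetic_or_manipulated: bool
    shared_deceptively: bool
    likely_to_cause_serious_harm: bool

def triage(a: MediaAssessment) -> str:
    """Map the three criteria to an illustrative outcome tier."""
    if not a.is_synthetic_or_manipulated:
        return "no_action"            # rule does not apply to unaltered media
    if a.shared_deceptively and a.likely_to_cause_serious_harm:
        return "remove"               # deceptive AND harmful: may be removed
    if a.shared_deceptively or a.likely_to_cause_serious_harm:
        return "label"                # label and add context
    return "label_optional"          # altered but benign: context may still help
```

For example, `triage(MediaAssessment(True, True, True))` falls in the removal tier, while the same media shared without deception would only be considered for a label.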

Voting misinformation reporting flow

Ahead of key moments in the 2020 US elections, we turned on a tool that enables people to report deliberately misleading information about how to participate in an election or other civic event. This reporting flow has been an important aspect of our efforts since early 2019 to protect the health of the conversation for elections around the globe, including India, the UK, and across the EU. The tool will help us identify and remove misinformation that could suppress voter turnout and is one way we're protecting the integrity of the US 2020 election.

Supporting the 2020 US Census

We partnered with the US Census Bureau to inform our work to keep the conversation healthy around this important event. We’ve also hosted a multitude of trainings and educational sessions for nonprofit organizations and Census officials to learn about Twitter's civic integrity policies and content best practices.

We’ve also worked with the US Census Bureau to launch a new tool so when someone searches for certain keywords associated with the Census, a prompt will direct individuals to the official Census site.

This website provides clear information on the 2020 Census, how to participate, and how the Census process will safeguard individual privacy and security.
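The search-prompt behavior described above can be sketched as a simple keyword check: if a query mentions the Census, surface the official site. This is a hypothetical illustration; the real keyword list and matching logic are not public, and everything below except the official site's purpose is an assumption.

```python
from typing import Optional

# Illustrative keyword list; the actual triggers are not public.
CENSUS_KEYWORDS = ("census", "census bureau", "2020 count")
OFFICIAL_CENSUS_URL = "https://2020census.gov"  # the official site referenced above

def census_prompt(query: str) -> Optional[str]:
    """Return the official-site prompt URL for Census-related searches, else None."""
    q = query.lower()
    return OFFICIAL_CENSUS_URL if any(kw in q for kw in CENSUS_KEYWORDS) else None
```

A search like "when is the census due" would trigger the prompt, while an unrelated query would not.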

Additionally, as part of our company-wide efforts to fight misinformation regarding participation in civic events, we’re applying our existing election integrity policy to Census-related content to make certain the Census conversation on Twitter remains healthy.


EU elections

Occurring once every five years, the EU elections are the second largest democratic exercise in the world. Much of the discussion around the elections, including official statements from candidates and campaigns, happens on Twitter. We are committed to ensuring a healthy, open, and meaningful conversation with high-quality information, where inauthentic activity and other politically motivated interference is countered.

Our global, cross-functional teams work proactively to protect the integrity of the election conversation, support partner escalations, and identify potential threats from malicious actors.

To augment our election integrity efforts, Twitter cooperated with campaigns and government agencies across the EU. We also signed up to the European Commission’s Code of Practice on Disinformation, taking steps to ensure the information available to voters was reliable.

Twitter’s chief priority remains serving and protecting the health of the public conversation. We’ll continue working to protect conversations on the service, particularly around election cycles, by investing in technology, developing new policies, and building meaningful partnerships to further our understanding of the political and social context within which Twitter operates.

Below is a summary of Twitter’s major EU election integrity initiatives, as well as key findings for the 2019 election cycle. To read a more comprehensive report, download the PDF (in English only).

Service improvements

By observing and learning from the tactics used by malicious actors during previous elections, we were able to strengthen the product, policy, and operational aspects of Twitter. These improvements enabled us to engage in intensive efforts to identify and combat attempts to abuse or undermine our service. These improvements include:

  • Enacting a new policy for political campaign ads that ensures ads are purchased legally and within the EU, as well as letting users easily find out who paid for the ad
  • Introducing new reporting tools to help users report fake accounts, and significantly improving the process for candidates and campaigns to report the exposure of personal information or hacked materials
  • Introducing a ​new tool that allows citizens to report deliberately misleading ​election-related content to us
  • Increasing access to Twitter data on malicious state-backed foreign actors so that researchers can help improve the public understanding of misinformation and other abuses

Key figures — 2019

  • 21 certified political campaigning accounts ran ads for the EU elections on Twitter, generating 23,253,153 impressions.
  • We received 49,945 user reports through our election-related misinformation reporting feature for the EU.

Policy outreach

Twitter’s Public Policy team increased its engagement with political parties and member state offices across the EU. We arranged Twitter trainings, distributed media literacy resources, and amplified campaigns that encourage voter participation, such as #ThisTimeImVoting.

A key focus was improving our response time to reports from people and organizations within the electoral arena, including election support organizations, EU-based research organizations, universities, and academics who study the spread of misinformation in the media, and key EU and national political parties and institutions.

To achieve a faster response time, we identified partners to enroll in Twitter’s Partner Support Portal, a special tool that allows pre-approved partners to rapidly report suspected violations of the Twitter Rules.

Key figures — 2019

  • 80 election partners across the EU were onboarded to the Partner Support Portal prior to Election Day

Civic participation

Twitter serves as a platform for voters to discuss the civic issues that are important to them, and to share their stories as they participate in the democratic process. Twitter launched two special emoji to drive engagement and unite citizens around common themes and issues, such as reaffirming the commitment to vote. The emoji served as a visual unifier across all official EU languages.

Key figures — 2019

  • 273% increase in Tweet volume compared with the EU elections in 2014
  • Over 6.2 million election-related Tweets discussed key issues such as climate change and Brexit, alongside key candidates in the lead-up to the elections
  • Approximately 2.5 million people watched the live stream of the ​#TellEurope EBU presidential debate from the Parliament’s​ ​@Europarl_EN​ account

Political advertising

Twitter globally prohibits the promotion of political content. We made this decision based on our belief that political message reach should be earned, not bought.

Digital advertising is incredibly effective, and we’re working to address the risks that effectiveness poses when it is used to drive political outcomes.

We define political content as content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome. Ads that contain references to political content, including appeals for votes, solicitations of financial support, and advocacy for or against any of the above-listed types of political content, are prohibited under this policy.

We also do not allow ads of any type by candidates, political parties, or elected or appointed government officials. You can find more on these policies on our political content policy page, as well as our political content FAQs page.


Service integrity

We challenge millions of accounts attempting to spam or otherwise manipulate the Twitter platform every month.

An anti-spam challenge involves tasks like confirming a phone number or solving a reCAPTCHA — such assignments are easy for real people to carry out, but difficult or costly for spam accounts.

In the first half of 2019 we issued, on average, 16 million such challenges per month. While we observed an overall decline in anti-spam challenges issued in 2019 as compared to 2018, we attribute this at least in part to an intense focus on deterring fake account creation at signup rather than responding to an account’s post-signup behavior.
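The signup-focused deterrence described above can be illustrated with a toy risk-scoring sketch: score a few signals at account creation and issue a challenge (phone confirmation or reCAPTCHA) when the score crosses a threshold. Every signal, weight, and threshold here is invented for illustration; Twitter's actual detection systems are not public.

```python
def risk_score(signup: dict) -> float:
    """Sum illustrative risk signals for a new account signup."""
    score = 0.0
    if not signup.get("phone_verified", False):
        score += 0.4  # unverified phone is a weak risk signal
    if signup.get("signups_from_ip_last_hour", 0) > 5:
        score += 0.4  # many signups from one IP suggests bulk creation
    if signup.get("disposable_email", False):
        score += 0.3  # throwaway email domains correlate with spam
    return score

def challenge_required(signup: dict, threshold: float = 0.5) -> bool:
    """Issue a phone-confirmation or reCAPTCHA challenge above the threshold."""
    return risk_score(signup) >= threshold
```

The design intent matches the text: the tasks are cheap for a real person to complete once, but expensive for an operator creating accounts in bulk.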

In 2019, we also published a clarified platform manipulation and spam policy, which takes into account the changing landscape of how spam tactics are being used in a widening range of deceptive and manipulative practices, none of which have any place on Twitter.

Our concurrent focus on countering malicious automation and abuse originating via our API led to the suspension of approximately 144,000 apps during a six-month period spanning Q1-Q3 2019, achieved through a combination of automated and proactive measures.

Furthermore, our enhanced developer onboarding process, launched in July 2018, is designed to keep bad actors from accessing our API platform in the first place; since launch, approximately 800,000 use cases have been reviewed for policy compliance before developers are granted or denied access.

Manipulation, spam, and API abuse tactics employed by bad actors are constantly evolving, and we’re committed to the perpetual improvement of our response and preventive measures by working daily to develop improved approaches in policy, detection, and enforcement.

Please see our Transparency Report to learn more.


Data archive

To empower academic and public understanding of information operations around the world, and to enable independent, third-party scrutiny of these tactics on our platform, we disclose a comprehensive archive of state-backed information operations on Twitter.

We have made available all the accounts and related content associated with potential information operations that we have found on our service since 2016 and continue to release new data as we detect new activities.

To access our datasets, see our Transparency Report.



Partnerships

Elections and civic engagement partnerships are critical for tracking and promoting democratic conversation and engagement through Twitter.

We earmarked $1.9 million (an estimate of what we earned from ads placed by Russia Today and Sputnik, both linked to the Russian government) to fund partnerships that expand academic research into how Twitter is used for civic engagement and elections. This research will explore the use of manipulative techniques and disinformation, with an initial focus on elections, civic engagement, and automation.

We’re proud to be working with a wide range of organizations to help achieve this goal, including:

Annan Commission on Elections and Democracy in the Digital Age


The commission will examine how countries, businesses, and citizens may exploit the possibilities of new technologies to serve our democracies while mitigating the risks.

Atlantic Council’s DFRLab


The Atlantic Council’s Digital Forensic Research Lab is building a global hub of digital forensic analysts tracking events in governance, technology, and security, and where each intersects, as they occur.

First Draft


First Draft is a project of the Shorenstein Center on Media, Politics and Public Policy at Harvard University’s John F. Kennedy School of Government to fight mis- and disinformation through fieldwork, research, and education.

EU DisinfoLab


The EU DisinfoLab is a non-governmental organization based in Brussels with a mission to fight disinformation with innovative methodology and scientific support to the counter-disinformation community.

City University


City University’s Department of Sociology is undertaking research into the sociological aspects of digital media with a substantive interest in the cross-effects between online and offline social networks.



Design 4 Democracy Coalition


The National Democratic Institute (NDI) is leading the initiative to organize the Design 4 Democracy Coalition, which is working to strengthen democracy abroad in the digital age. NDI is supported in this effort by the International Republican Institute (IRI), as well as the organizations of many of the Coalition’s Advisory Board members.

Reporters Committee


The Reporters Committee for Freedom of the Press is a nonprofit association dedicated to assisting journalists since 1970.

Dive deeper

What's happening with @TwitterGov