The biggest election year in history is officially underway. As more than half the world’s population heads to the ballot box, it’s impossible to overstate the scale and gravity of the risks that social media poses for these countries and for the trajectory of the democratic experiment writ large.
Yet in our collective rush to sound the alarm about threats to elections in 2024, we may risk obscuring other threats posed by social media that are equally pressing. While much of the world prepares to vote, many in the other half will still be living in conflict zones and fragile communities that are on the brink of war or—with enormous effort and a bit of luck—on the cusp of peace. These countries may not be electing leaders in 2024 but nevertheless warrant the attention, investment, and consideration of social media companies—indeed, lives depend on it.
In recent months, social media companies have highlighted their efforts to protect electoral integrity and ensure that their platforms are not used to fuel mis- and disinformation. Meta, for example, has built the “largest independent fact-checking network of any platform, with nearly 100 partners around the world to review and rate viral misinformation in more than 60 languages.” It has also developed policies requiring advertisers to disclose their use of artificial intelligence to create or alter political advertisements. In January, TikTok announced that it was building Election Centers that connect people to trustworthy information about voting, in partnership with nonprofit organizations and electoral commissions. Even X, formerly known as Twitter—which abruptly fired its election integrity team in 2023—has stated that any “attempt to undermine the integrity of civic participation undermines our core tenets of freedom of expression.”
These efforts are a testament to the work of both trust and safety professionals and civil society actors to guard against the worst outcomes. Although there is more to be done—particularly in countries outside the global north—these initiatives reflect an implicit understanding by platforms of the stakes of getting this wrong.
But in many contexts, social media risks cannot be confined to an electoral cycle. Rather, risks build cumulatively over the course of months and years, undermining trust in leaders and institutions, discrediting journalists and independent media, and driving wedges deeper into communities in the process. And while tensions often peak during elections, the most crucial time for platforms to act is often long before votes are cast.
It is worth bearing in mind that many of the world’s most acute crises of recent years have not centered on elections. In Myanmar, for example, the devastating persecution of the Rohingya people—backed by a yearslong campaign of online hate speech and disinformation—started well before the election that precipitated a military coup in 2021. Meanwhile, in Ethiopia, disinformation spread through social media has played a far more dangerous role in the country’s recent civil war than in its 2021 elections.
Simply put, for many countries in conflict, elections are not the best marker of social media risk. In other countries, elections may not even happen at all. Oftentimes, democratic outcomes may depend more on the success of cease-fire negotiations or peace talks than a future election.
Consider Yemen, where a nine-year civil war—and one of the world’s worst humanitarian crises—has resulted in hundreds of thousands of civilian deaths. An online “infodemic” rages in the country, and all parties to the conflict have used social media to spread disinformation, effectively “eroding prospects for a durable peaceful settlement,” political scientist Robert Muggah wrote in Foreign Policy in 2022. Not only are elections not on the calendar for Yemen in 2024, but they are now years overdue. Even in 2012, the last time Yemenis went to the polls, with only a single presidential candidate up for consideration, the event was a questionable democratic exercise at best. Today, the primary question is not when Yemen will hold elections but whether the yearslong effort to end the war through negotiations will ever bear fruit. It is only after this happens that Yemenis can begin to even think about elections.
Similarly, in Libya, where elections have not been held since 2014, many see peace talks as a critical step on the way to free and fair elections. Yet, despite the enormous stakes they present, peace negotiations in Libya and elsewhere have often gone unnoticed by social media platforms and been poorly moderated. Indeed, previous attempts to end the conflict in Libya were undermined by networks of inauthentic accounts spreading disinformation about the United Nations-led negotiations. Failures to stem disinformation can not only disrupt peace talks and put the lives of negotiating parties at risk but also fuel a reversion to conflict in societies that have worked for years to tiptoe tentatively toward peace.
Yet amid wide-reaching tech industry layoffs, a retraction in platforms’ human rights investments, and a veritable avalanche of elections in the year ahead, there is a genuine risk that countries on the brink of conflict or peace will be put on the back burner, to devastating effect.
This need not be the case. To better prepare, platforms should take a more holistic view of risk in fragile countries and invest further in understanding how dynamics in the information environment can trigger violence.
For years, civil society organizations have urged platforms to prioritize countries on the verge of conflict by considering atrocity risk indicators, conflict watchlists, press freedom ratings, and potential human rights violations; working with experts specialized in conflict and atrocity prevention; and engaging with local actors who have firsthand knowledge of how digital dynamics can trigger violence. Some platforms have already made strides in this direction, but they must do more to encourage such efforts and support those in their organizations already thinking about these kinds of risks.
Platforms do not have unlimited resources and must make impossible decisions about how to prioritize among an onslaught of potential crises. But, according to the U.N. Guiding Principles on Business and Human Rights, these life-or-death decisions on prioritization must be made according to the scale, scope, and irremediability of potential risks. This requires not isolating elections as their own category of events to be tiered but situating the risks they present within a broader framework for prioritizing a platform’s human rights impacts.
Once this framework is in place, platforms can act earlier to leverage the tools available to them. In the lead-up to elections—or in response to major global crises—social media companies often set up “war rooms” or election operations centers. In physical or virtual spaces, experts on coordinated inauthentic behavior, misinformation, human rights, and content moderation come together for a period of weeks or months to discuss emerging risks, detect spikes in automated accounts, or monitor viral rumors. The levers these war rooms can pull are powerful: Platforms may decide to apply protections to guard against the impersonation and harassment of electoral candidates, deploy resources to detect and disrupt influence operations, and closely monitor viral hate speech and misinformation that could incite violence. Yet many of these tools would arguably have greater impact in helping to avert harm if deployed earlier in an electoral cycle or in anticipation of—rather than in reaction to—major conflict risks.
Defining potential flash points outside of elections is certainly challenging: Conflicts do not follow a linear path, crises can escalate rapidly, and the status of peace talks is often closely guarded. Focusing on elections as a key risk marker is—by comparison—easy, particularly when resources are scarce and trust and safety teams are already stretched thin.
But platforms don’t have to work alone. They could, for example, coordinate with conflict mediation organizations such as the Centre for Humanitarian Dialogue, where we both work, to exchange information on potential flash points for violence.
While platforms have expanded partnerships with civil society organizations, there remain few sustainable, meaningful forms of joint risk exchange at a cadence that could support long-term peacemaking efforts in fragile communities. These exchanges are sorely needed and can help widen the aperture used to define critical events worthy of protection.
Safeguarding elections—and the tense moments that immediately follow—from social media interference is essential. But this goal, in and of itself, is insufficient to protect what is at stake. For those not going to the polls, and particularly for those living in war zones or places undergoing peace negotiations, a failure to proactively consider social media dangers puts lives at risk. By both taking a broader lens to the risks presented by social media and intervening earlier, we stand a better chance of restraining forces that, if unleashed, may contribute to a society’s descent into violence.