
Knocking out disinformation with Chequeado

Words by Matt Stempeck • Dec 12 2022

As part of Matt Stempeck’s research into the ongoing efforts within the Code for All Network to confront disinformation, we are sharing the most exciting takeaways from those exchanges in the Civic Tech & Disinformation Project Series.

These interviews were conducted in mid-2022. Please note that some details may have changed since then as the organization evolves.

Get to know Chequeado!

Olivia Sohr is Directora de Impacto y Nuevas Iniciativas (Director of Impact and New Initiatives) at Chequeado, Latin America’s first fact-checking initiative. Her role covers two important areas: evaluating impact (both daily metrics and in-depth research) and leading new initiatives and experimentation.

The knock on fact-checking

So what do we think about the impact of fact-checking? I ask Olivia about a common criticism of the approach, which is that it tends to focus on the minute, atomic-level misinformation claims while losing sight of the broader disinformation wars designed to shift societal conversations.

Olivia responds that fact-checking has been asked to do more than it was designed to. She points to 2016 as a watershed moment for fact-checking, where misinformation’s effect on voters in major elections sparked panic amongst funders and other leaders. Fact-checking organizations were viewed as a quick solution and received a windfall of attention and funding. But then people voted again in more election cycles, and the results still weren’t what those leaders desired, so they concluded that fact-checking doesn’t work.

Chequeado worked to establish that fact-checking does work for its intended effect, which is correcting people’s misperceptions. It doesn’t, Olivia says, turn voters into rational beings who only vote based on data and fact-checks with entirely empirical views of the world, “because that’s not the way people work and that’s not the way people vote.”

They then set out to identify and communicate the contexts in which fact-checking is most useful, and when it’s not. They’ve found that fact-checking is helpful for correcting misperceptions and erroneous messages people may have received. It falls short of solving other problems in the information ecosystem, Olivia says, which is why Chequeado has expanded its programming beyond fact-checking alone.

Media literacy

For example, Chequeado runs media literacy programs for two target groups: journalists and journalism students, and adolescents. The goal is to develop media literacy and digital literacy so that people can be more prepared to debunk misinformation when they see it or at least identify that it’s suspicious and refrain from sharing it further.


Chequeado’s innovation program includes automation to accelerate the team’s workflow where possible while staying effective. They’re trying to scale up with tools, but are aware that there’s still no tech that can handle the entire debunking process or even determine whether a given piece of information is accurate. New tech does help them quickly identify which information is checkable and high priority.

Evaluating impact on different time scales

Chequeado’s daily impact measurement regimen consists of social media and website metrics, views on their videos, and all the other usual analytics that newsrooms follow.

They also use Impact Tracker, a tool shared with them by a US organization, that helps the team identify the qualitative, meaningful impacts of their work that aren’t visible in engagement data. For example, they track when one of their fact-checks leads to further media coverage or a politician correcting themselves as a result. The system is essentially a reporting form.

Tracking these broader impacts is a team effort – everyone at the organization is responsible for keeping an eye out for the impacts of their work and reporting them, even if it’s not their direct program.

Research partnerships

To lead more rigorous, in-depth research into the impact of their work, Chequeado has a coordinator of academic investigations who facilitates working with academic researchers to explore their efficacy from different perspectives. They’ve worked with researchers from various disciplines, like political science, computer science, and psychology.

This research usually takes one of two approaches. The first is directly evaluating the outputs of Chequeado’s work: do their fact-checks stand up over time? When people get the corrected information, how long does the effect last?

The second approach is to better understand broader trends in how misinformation travels so that Chequeado and others can adapt their strategies. For example, they’re currently researching how misinformation hops from one country to the next in Latin America to try and identify regional patterns. Does misinformation travel through migrant communities, or through specific social media groups? Does it go through organized actors who routinely disinform? This might allow early detection when those actors begin spreading disinformation in other countries.

I ask Olivia whether the slow academic publishing cycles impede the benefits of the research in what’s often seen as an information arms race. “Timeframes in academia are different than the ones we’re used to, especially coming from journalism,” Olivia says. “But they’re still actionable in time.” 

For example, Chequeado commissioned Ernesto Calvo at the University of Maryland and Natalia Aruguete at the University of Quilmes to measure the impact of their work over the course of Argentina’s 2019 election cycle. The organization relinquished control over the research methodology and findings. The researchers’ results were ready 6-8 months later, and still “incredibly valuable.” Chequeado has already applied the findings.

Confirming truth matters

They found, for example, that fact-checking organizations spending time and energy to validate true claims has a meaningful effect on the audience and political actors. This was an open question for groups like Chequeado. “Aren’t politicians and public figures supposed to tell the truth?” Olivia asks. “So why are you congratulating them for saying the truth?” Some fact-checking organizations eschew the “True” label entirely. 

What the researchers found was that people are happy to share the organization’s ‘True’ statements because they affirm the truth, rather than negate myths. (There’s been concern that when you share a ‘False’ fact-check, you are still effectively contributing to disinformation if the broader intent of that campaign was to get people talking about a certain topic or controversial framing.)

The researchers concluded that ‘True’ statements have enormous value in getting people to interact with and share Chequeado’s content, and in building reputational value. This work validated the organization’s efforts to verify and publish true statements. 

Timing of intervention can significantly alter the final trajectory of misinformation

Another finding profiled how the spread of a piece of disinformation accelerates on Twitter and other social media. Chequeado found that when disinformation is quickly going viral, a well-timed intervention can meaningfully flatten the ultimate trajectory of that disinformation’s reach by dissuading people from sharing it. It’s common knowledge that disinformation outpaces corrections online, as measured by sharing metrics. Measuring themselves by that metric, Olivia says, is “probably doing it wrong because our piece will never be as attractive as the disinformation. But it may have the effect that we want, which is that people will not believe the disinformation and will not share it.” It matters less whether Chequeado’s content is shared as widely, as long as it effectively depresses the reach of the disinformation.

Timing the right moment in which to reach the audience around a rapidly accelerating piece of misinformation is quite difficult, but knowing that an intervention can meaningfully limit its harm is a powerful place to start.

For example, during Argentina’s 2019 presidential election, a rumor started following one of the debates that one of the candidates had worn an earpiece to receive answers from his team. (This rumor pops up frequently during major debates around the world, Olivia says.) The rumor started circulating after the debate’s conclusion, and Chequeado was able to respond within an hour with a photo showing neither candidate wearing an earpiece.

Newsroom partnerships

Their election coverage alliance with Reverso, comprising over 100 media outlets, proved quite useful here. As soon as Chequeado published its story, many of the other media partners syndicated it and drastically increased its reach. 

Chequeado has long participated in media alliances like Reverso in order to reach as many people as possible, including the communities that might be most vulnerable to misinformation. As an organization that started with a newsroom mentality, Chequeado finds the interface with other media outlets through partner networks an easy culture fit.

Reverso was operational again for Argentina’s legislative elections in 2021, and will hopefully be a recurring initiative that pools newsrooms’ collective efforts during these critical democratic moments. 



Chequeado considers innovation a key part of their work, affecting both how they work and how they present their work to audiences. They’re always looking to introduce tools to automate and streamline their verification process, for example. And they hold themselves to the novelty aspect of innovation. “We’ve always been very conscious of not rebuilding things that already exist, but instead forming alliances and joining forces with others,” Olivia says.

The Chequea bot, for example, runs an algorithm that parses media, public speeches, and transcripts for fact-checkable statements. It gathers these discrete phrases so that a human can review them and determine which claims are meaningful to evaluate. They’ve done this work in conjunction with Full Fact in the UK and Africa Check in South Africa because the approach can easily be applied in different countries and contexts.
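The claim-detection step described above can be illustrated with a toy sketch. This is not Chequeado’s actual implementation – the Chequea bot relies on trained models – but the basic pipeline of splitting a transcript into sentences and flagging checkable-looking ones for human review can be approximated with simple heuristics. The patterns and function names here are hypothetical:

```python
import re

# Hypothetical surface patterns that often signal a checkable claim:
# quantities, comparatives, and attributed statements.
CHECKABLE_PATTERNS = [
    r"\d+(\.\d+)?\s*(%|percent|million|billion)",       # statistics
    r"\b(increased|decreased|doubled|highest|lowest)\b",  # comparative claims
    r"\b(according to|data show|studies show)\b",         # attributed claims
]

def extract_checkable_claims(transcript: str) -> list[str]:
    """Split a transcript into sentences and keep those that look
    checkable, so a human reviewer can decide which to fact-check."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    return [
        s.strip() for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in CHECKABLE_PATTERNS)
    ]

speech = ("Our economy is the strongest in history. Unemployment decreased "
          "by 12% last year. I love this country. According to the census, "
          "3 million people moved to cities.")
claims = extract_checkable_claims(speech)
```

Opinion and rhetoric (“I love this country”) fall through, while statistical and attributed sentences are queued – the human-in-the-loop review that Chequeado describes then decides which are worth the debunking effort.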

This collaborative approach appeals to funders, who, Olivia’s found, are quite happy to see organizations doing as much as they can with their money.

Some of the fact-checking organizations Chequeado has recruited to its Latam Chequea network.

One of the main ways Chequeado collaborates with others is through its Latam Chequea network. Over thirty organizations from countries across the region, as well as organizations producing Spanish content in Spain, the US, and Portugal, contribute to the network so they can build better things together.

For example, in 2021 the COVID vaccination campaigns kicked off similar disinformation responses throughout the region. Latam Chequea adapted its focus to facilitate intensive collaboration between the partners on the recurring themes. They developed a joint database where everyone would enter their pieces. If a publication had already developed a fact-check in Mexico, Chequeado could just take it and adapt it, adding any local specificity needed.


During the vaccine campaigns, they started working together with other organizations to produce scripts that each group could adapt and record as shareable audio clips. They worked with local partners in Peru, Ecuador, Bolivia, Mexico, and Guatemala. 

WhatsApp voice notes are an incredibly popular way to communicate in Latin America, so they worked with partners to produce voice notes in indigenous languages that could be shared and broadcast on radio in communities that don’t speak Spanish. The audio nature of these recordings also helped address varying Spanish literacy rates. By prioritizing the most important debunks, recording them as audio, and sharing them with their distribution lists, Chequeado found a novel way to push content out to communities, rather than assume people will visit their website.

The beauty of the Latam Chequea network is that different organizations can take on the various aspects of the verification production workflow, and each additional organization expands the group’s collective reach to more communities.

Is there an AI arms race between disinfo producers and fact-checkers?

“Disinformation is going to win it for sure,” Olivia says, “because producing disinformation is so much faster than debunking it. I can say, ‘This candidate is performing Satanic rites and killing little kittens and drinking their blood.’ And it will take days to debunk that, because you’ll have to prove that he’s not. And then ask, ‘What does it mean to perform Satanic rites? Do we have a Satanic rites specialist who can verify what we’re saying?’ It takes a long time to debunk even the stupidest disinformation.”

Advances in generative AI will likely create the same dynamic. It will be faster to author or alter images and videos, and it will still take fact-checkers a relatively long time to disprove them. And thus far, there’s no AI that can meaningfully automate the entire debunking workflow.

Disinformation fighters use algorithms that look at a text’s formatting, emotional salience, or keywords to flag possible disinformation (like ALL CAPS SCREEDS), but these will also collect false positives. Horoscopes are a great example of this bycatch: we can debate whether they’re accurate, but they aren’t the focus.
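The kind of surface heuristics described here – and why they generate bycatch – can be sketched with a hypothetical scorer. The keyword list and weights below are invented for illustration, not drawn from any real system:

```python
def suspicion_score(text: str) -> float:
    """Score 0..1 from crude surface signals; a triage hint, not a verdict."""
    # Share of letters that are upper case (ALL CAPS SCREEDS score high).
    letters = [c for c in text if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    # Emotionally charged stock phrases (hypothetical list for illustration).
    emotive = ("shocking", "wake up", "they don't want you to know")
    lowered = text.lower()
    keyword_hits = sum(phrase in lowered for phrase in emotive)
    # Exclamation marks each add a little weight.
    exclamations = text.count("!")
    # Arbitrary illustrative weights, capped at 1.0.
    return min(1.0, 0.5 * caps_ratio + 0.25 * keyword_hits + 0.1 * exclamations)

calm = "The ministry published its budget report today."
screed = "SHOCKING!!! WAKE UP!!! THEY DON'T WANT YOU TO KNOW THE TRUTH!!!"
```

Note that a breathless but harmless post – an all-caps horoscope full of exclamation marks, say – would score high too, which is exactly the bycatch problem Olivia describes.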

AI tools can also help transcribe a video, or identify discrete checkable phrases and queue them up for a fact-checker to determine which are relevant, but it’s not going to do the whole process automatically, and Olivia’s skeptical it will be able to anytime soon.

Even with existing text posts on social media, the disinformers are often ahead in the arms race.

“They’re always adapting to fool the algorithms, so you’re always a step behind them,” Olivia says. For example, when the big social platforms started labeling social posts about COVID with public health labels, disinformation actors started writing ‘COVID’ as ‘C0V1D’ to fool the algorithms while still reaching humans. Countering these evasion tactics requires constant tweaking of the algorithms, and in many cases people can adapt faster than the platforms.
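The ‘C0V1D’ trick can be partly countered by normalizing look-alike characters before keyword matching. A minimal sketch, assuming a small hand-maintained substitution table – real systems draw on much larger Unicode “confusables” data, and evaders still adapt faster than any static list:

```python
# Map common look-alike characters back to the letters they imitate.
# A hypothetical, hand-maintained table for illustration only.
LOOKALIKES = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                            "5": "s", "@": "a"})

def normalize(text: str) -> str:
    """Lowercase and undo common character substitutions."""
    return text.lower().translate(LOOKALIKES)

def mentions_covid(text: str) -> bool:
    """Check for 'covid' after normalization, catching e.g. 'C0V1D'."""
    return "covid" in normalize(text)
```

This is the step-behind dynamic in miniature: every entry in the table exists because someone already used that substitution, and the next variant won’t be in the table yet.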

Platform affordances

Olivia points out that specific social media platforms have affordances that directly impact the funding of disinformation, and actors adjust their tactics accordingly. For example, YouTube’s partner program pays people for their videos and does provide some degree of quality check before an influencer can join the program. This isn’t to say there isn’t plenty of misinformation on YouTube, but Olivia believes the program does keep disinformers a bit more in check on the platform because they want to be able to monetize their content. Instead, they’ll use YouTube for funding and exposure, and recruit followers over to a Telegram channel where they can unload their crazy viewpoints. 

Contrast this approach with Instagram, where there’s no organic way to make money on the product; influencers turn instead to private sponsorship agreements. There’s less oversight there because influencers don’t need to qualify for a partner program to monetize their content. The platforms’ revenue features affect how disinformation circulates, Olivia says, and how disinformers recruit followers to more marginal or private platforms where they can share the most absurd disinformation that the big platforms wouldn’t allow.

A misinformation management approach

Olivia concurs with Christopher Tuckwood at the Sentinel Project that misinformation is here to stay, and that we should take a mitigation approach instead of a prohibition strategy. “I don’t think we’re going to be able to stop misinformation. And all the efforts to stop disinformation, especially through regulation, tend to come from more authoritarian countries that just want to control freedom of expression and end up giving a solution that’s worse than the problem in many cases,” Olivia says. “Our biggest chance is that people are more prepared to identify what disinformation is, learn not to share things that sound fishy, and be more critical about the things that circulate, rather than think that we can beat it with technology or debunks,” which are by their very nature reactive.

Debunking is still important to show when information is false so that the public doesn’t linger in their misbelief. But the volume of false claims will always exceed our capacity to check them, Olivia says, even if we have really good technological tools. 

Public developing an immune response

The solution must lie in people becoming more critical and discerning about the disinformation they encounter. “This is my optimistic side,” Olivia says, “but I think we’re much better off than we were four or five years ago.” I suggest the idea that popular culture might actually get smarter over time. “The big challenge with that,” Olivia says, “is managing to get there without people just completely disconnecting from the public conversation and saying, ‘Oh, it’s too complicated, anything can be faked,’” and opting out of participation in important public conversations.

“We have the double challenge of actually getting people interested in news and public discussion and the public discourse, while being aware of disinformation and not letting them just tune out the whole thing and say, ‘This is too complicated for me.’ This is an enormous challenge.”

Challenges for the disinfo space

  1. Helping people become more aware of disinformation without them becoming too cynical and checking out of public conversations. “Engaging people and getting them to care about true information is an enormous challenge. We’ve always had it and it’s still there: looking for attractive ways to present information, figure out how to engage people, how to keep them interested in the conversation,” Olivia says.
  2. Consistent funding. As a twelve-year-old organization employing 40 people, Chequeado can be considered one of the more successful and sustainable organizations in the misinformation world. Olivia is clear that they haven’t solved the fundraising challenge – every annual budget is a struggle – but they think deliberately about how to raise it consistently.

    They are keenly aware that funder fashions come and go. Disinformation could easily be one of those subjects that funders eventually move on from, after being fascinated by it for a spell. Chequeado has been able to build a strong base for a sustainable organization over the years by diversifying its funding sources.

    Being headquartered in Argentina has presented its own challenges due to the Argentine peso’s extreme volatility – the actual value of Chequeado’s budget can vary wildly from day to day depending on unpredictable currency fluctuations.

    Another part of the challenge is balancing their priorities with funders’ priorities. Aligning the two is a recurring challenge for many groups: doing the work they believe will be most meaningful, while still funding that work through funders’ interests.
  3. Accessing data. “The lack of platform transparency is an enormous problem for us.” A decent amount of the field’s research has focused on misinformation on Twitter, despite everyone knowing full well that it’s not the most representative or meaningful platform to study. “We’re missing lots of misinformation on other platforms just because it’s incredibly hard to find or to get to.”
  4. Platform action on disinformation and Anglocentrism. The social media giants are “not always the best at being transparent and clear” about their own work to mitigate disinformation on their platforms, Olivia says.

    When they do take measures, they’re far more likely to target English-language discussions than even Spanish misinformation, even though Spanish has nearly a hundred million more speakers than English (to say nothing of indigenous-language conversations).

    The Spanish discourse gets relegated to lower priority, and receives less scrutiny, Olivia says, “because the Spanish-speaking governments are less powerful at holding the platforms accountable, compared to others.”

    “We could work much better if we had more transparency about what they’re doing and more clarity about what impact our work is having on their platforms.”
  5. Evolving disinformation formats. The constant evolution of social media formats further complicates their work. Text is one thing, but images are more difficult to debunk, and videos are more difficult than that. Parsing them for disinformation is time-consuming, particularly if the video is long.

    “We’ve been talking about deepfakes for a while, and luckily they have not multiplied that much because it’s still expensive to do them,” Olivia says. “But it is one of those things that we fear because if they do become cheaper and easier, it could be extremely complicated to evaluate and debunk them.”

    We don’t even have to wait for the democratization of deepfake technology. Someone with a similar-sounding voice can very easily create confusion today. In 2019, Chequeado responded to false audio recordings in which an imposter recorded themselves pretending to be a candidate. The organization had to work with a group of audio engineers from a university to identify whether the recording was really the person or not. “We had to ask the person to send us five audio samples so that they could compare the tone of voice,” Olivia says. Even with this engineering support, they could only say with 85% certainty that the recording wasn’t the person it purported to be. “It’s another example of how easy it is to create disinformation, and the enormous amount of work it takes to debunk it.”
  6. Knowledge exchange. “We’d be really happy to know what others are doing in terms of automating the identification of disinformation,” Olivia says. “We speak with other fact-checkers, but there are probably other actors in other spheres working in this line [of inquiry] that we may not be aware of. We’d be very happy to know a lot more about those projects.”

    Olivia sees a gap between fact-checkers and broader work to automate disinformation investigations, including by groups developing bots. Fact-checkers tend to be very heads-down debunking politically salient disinformation, but in recent years Chequeado has been expanding its collaborations with others to try to see the broader picture of how disinformation moves online.

    For example, Chequeado published an investigative series titled The Disinformers on the regional actors who consistently disinform. The research gave them a much clearer picture of the macro information environment in which they conduct their specific fact checks, and in which their audience resides. Olivia’s keen to learn about similar higher-level analyses of disinformation actors and their tactics, motives, and funding sources, so that Chequeado can keep the broader picture in mind in their fact-checking work.

Olivia is considering how Chequeado might bring together academics, practitioners, and fact-checkers. There’s funding for disinformation research in academia, but too often academics come in, analyze a disinfo database, and reach conclusions that the field already understood. She dreams of a stronger network where the different players in the field could interact more, and more valuable academic research that might result. 

On their side, fact-checkers don’t always have the time or academic literacy to read the publications about disinformation. What if we could do a better job of practically summarizing and communicating researchers’ findings so that they would be readily actionable for practitioners?

I suggest formats like MySociety’s The Impacts of Civic Tech Conference, which convenes civic tech researchers and practitioners to ask rigorous questions about impact, or the MisInfoCon events when they’re hosted at universities as potential places to start exploring this concept.

Chequeado and some other fact-checking organizations are able to field research coordinators and teams, so they could serve as an initial bridge between practitioners and academics interested in the applications of their work. Chequeado’s own research with Calvo and Aruguete suggests that at least some academics can work quickly enough to share results back in time to inform their partner. This model could strengthen the evaluation of misinformation management and bring more certainty to its impact.

Interested in learning more?

Matt Stempeck, the National Democratic Institute, and Code for All worked on a research project to understand how civic tech can help confront disinformation. The project’s goal is to learn from (and share out!) lessons learned from organizations that focus on this area.

Check out the Disinformation Research Project to learn more and help us spread the word!


Matt Stempeck

Disinformation Research Consultant

Matt Stempeck is a freelance technologist based in Lisbon and Berlin, where he researches civic tech and builds engagement products for clients like the Boston Globe. He serves on mySociety's The Impacts of Civic Tech Steering Group, and recently authored People Powered's Guide to Digital Participation Platforms. Matt is a Technologist in Residence at Cornell University, where he's helped launch Cornell Tech's Public Interest Tech program.
