
Project Highlights

Tackling Disinformation in Brazil with InternetLab

Words by Matt Stempeck • Nov 10 2022

As part of Matt Stempeck’s research into the Code for All Network’s ongoing work confronting disinformation, we are sharing the most exciting takeaways from those exchanges in the Disinformation Project Series.

These interviews were conducted in mid-2022. Please note that the information may change as the organization deems appropriate.

Meet InternetLab!

Heloisa Massaro, Director at InternetLab in São Paulo, provides us with a view of digital disinfo in the Brazilian context. Her work underscores an important area of this discussion that we haven’t touched on yet: the interplay between civil society and government. InternetLab’s work is a great example of how civil society organizations can contribute to improving and refining government responses to disinformation.

InternetLab is an independent research center that works at the intersection of academic debate on tech policy, policymakers, and industry. They produce interdisciplinary research and describe themselves as an “academic think tank.”

The global rise in digital disinformation has spurred a wave of regulatory action by national and local governments around the world. But not all legal and regulatory responses are created equal. Some governments’ responses to disinformation are outright dangerous, as they disingenuously use the challenge as cover to target legitimate journalism, free speech, and civil liberties. This is why the European Commission’s 2018 report recommended against regulating disinformation.

Elections are a recurring flashpoint in disinformation work. Bad actors ramp up their efforts ahead of democratic processes, looking to create divisions and distort results. In Brazil, Heloisa says, digital disinformation reached new heights during the 2018 election cycle.

Bolsonaro’s campaign and its supporters regularly denounced reporters, accusing them of spreading disinformation through the media. Disinformation became one of the primary campaign topics, with discussions about it continuing right up to Election Day.

Connectivity affects media habits

The nature of internet connectivity has a direct effect on how information (and disinformation) flows in a community. Brazil benefits from widespread internet penetration, with almost 85% of the population having access. But that access typically consists of low-speed mobile data rather than broadband connections. Mobile providers limit connection speeds, block some applications, and charge for SMS.

This was the environment within which Whatsapp arrived. As a low-to-no-cost alternative to mobile carriers’ high SMS charges, the app quickly took its place at the center of Brazilians’ daily social lives and media diets. InternetLab’s research into the intersection of messaging apps with users’ broader media environments found that 99% of internet users in Brazil use Whatsapp.

While many election observers across civil society and academia were focused on the role of political advertising through traditional channels and on social media platforms, Whatsapp emerged as the new arena, with political disinformation spreading through its group chats ahead of the election.

As we’ve heard from other organizations, like Doublethink Lab, this practice was accelerated by businesses selling bulk-messaging services, scaling operations beyond what any individual with a smartphone could accomplish (in this case, deployed against Brazil’s Workers’ Party). To this day, researchers still aren’t sure what effect this had on the election, but they do know that people received disinformation spam from unknown numbers.

Takeaway: Digital disinfo is another venue for narrative framing battles, but with few to no gatekeepers

The political disinfo ricocheting across Whatsapp resembles analog battles to win the narrative framing game that has long dominated political conversations. Brazil’s growing far-right movement began promoting their own custom narratives, outside the mainstream discourse. They’d been building their narratives around gender, abortion, and national identity in recent years, but messaging apps allowed them to go direct to voters without having to convince publications to carry their theories.

Takeaway: We can’t consider disinfo in a vacuum. The broader media environment is a critical factor

In Brazil (and many other countries), media consolidation is rampant. A handful of powerful, long-established families has long set the agenda for Brazil’s national conversation, concentrating journalism into a limited number of companies. InternetLab has studied the interplay between emerging digital channels and the broader media environment because they affect each other.

In studying and fighting disinformation, we must consider the 360-degree info ecosystem that’s actively contended for. Messaging apps, social platforms, and traditional media channels aren’t isolated places. News consumers might hear something a politician says on TV or radio, then search for it online, where the first page of Google results, or Telegram groups, might play a critical role in how they interpret the narrative. We know that people look for ideas that map to their interests and align with their beliefs and ideologies, and this dance can play out across every available communication channel. 

Disinfo as a political battleground

Heloisa says that since 2018, disinformation has emerged as a new political arena in national politics. Bolsonaro was elected president in 2018, inaugurated in 2019, and in 2020, the COVID pandemic took off. In 2019, some of Bolsonaro’s former supporters became enemies, and for them and other political opponents, disinfo became the arena in which they would battle each other.

The opposition response was a ‘fake news empire’ parliamentary investigation into disinformation’s role in the 2018 election. They also rolled out a ‘Fake News’ bill that was problematic, Heloisa says. The bill’s definition of disinformation was controversial, and it required platforms to remove accounts that spread content it defined as disinfo. It also introduced real risks to freedom of expression and privacy, as platforms were required to maintain messaging data about the forwarding of information. In 2020, the discussion peaked and the bill was almost approved.

Before that happened, civil society organizations and digital rights groups mobilized. They pushed for a better bill. The result was a bill with a more coherent framework for platform regulation, rather than just a ‘fake news bill’. It no longer covers the removal of content, nor does it attempt to define misinformation. It does establish transparency duties for political officials on social media, and it regulates political ads and government-funded advertising.

The role of the judiciary doesn’t come up in civic tech conversations as frequently as the legislative and executive branches do. But in Brazil, the Supreme Federal Court has become a central actor in national political battles, including over disinformation. This is because the court became embroiled in political disputes with Bolsonaro and his supporters, who regularly attacked the court’s standing. Here, too, disinfo emerged as a key tactic, and the attacks got quite personal.

In response, the justices opened an investigation into the attacks on the court. They oversaw the inquiry on issues directly affecting them and their families, with disinfo at the core, Heloisa says. She’s seen civil society rally to understand disinfo in Brazil and push for better responses by the tech platforms. At the same time, national politicians and the judiciary have adopted the lexicon of disinformation. 

Disinfo was still at the core of national political debates when we had our interview, just weeks ahead of Brazil’s October 2022 elections. The justices of the Superior Electoral Court enacted their own program to counter disinformation, working with the tech platforms and other actors. They’re particularly focused on disinfo seeking to destabilize the country’s elections, and for good reason. In a strategy quite similar to Trump’s playbook in the United States, Bolsonaro has sought to discredit the electoral system and spread the message that it’s unreliable and rife with fraud whenever he’s down in the polls.

InternetLab, for its part, specializes in applied research. As a think tank, they develop original research to better understand the dynamics of political communication in Brazil. They try to unpack the phenomenon that we call disinformation and learn which dynamics and strategies are in play. 

For example, in 2020 the organization worked with a consortium of digital rights groups to improve the ‘Fake News’ bill. InternetLab launched a series of policy papers proposing an improved regulatory approach to countering misinformation. They worked to bring values like privacy and data protection into policy conversations that were advocating traceability and government data collection.

InternetLab also studied how the affordances of different social media platforms should shape potential disinformation regulation. There are important differences between major internet platforms like Wikipedia, Facebook, Telegram, and Whatsapp, so the organization argued that regulation should differentiate between them as well. In this way, InternetLab sought to introduce some nuance and a better understanding into the policy debates.

Partner globally, act locally

The group works in tandem with other digital rights groups in strategic partner networks like AL SUR, Aliança por Algoritmos Inclusivos, Coalizão Direitos na Rede, and the Just Net Coalition. These networks have been vital to InternetLab’s work and broader efforts, for example in fighting the first version of the ‘Fake News’ Bill.

How people respond to disinfo

InternetLab’s work isn’t limited to government policy. They also study the public’s interaction with disinfo, particularly when using messaging apps. People clearly use messaging apps to communicate about politics, and InternetLab works to understand their perceptions and habits regarding the information they receive in these channels, with a hybrid qualitative and quantitative approach.

Might a collective immune response emerge?

Even if we never get universally effective media literacy programming, we might begin to see the public develop an immune response to rampant disinfo. Heloisa says that after the 2018 elections, people became more concerned about the information they see on messaging apps. They began to take more care to avoid starting political fights in chats with family and friends, and to be more careful about how they phrase things. This occurred in parallel with a broader shift, in many markets, to more closed messaging groups, with smaller, more intimate digital conversations.

Perhaps we shouldn’t discount the public’s ability to get savvier on these topics. In 2005, Steven Johnson put forth the thesis in Everything Bad Is Good for You: How Today’s Popular Culture Is Actually Making Us Smarter that, contrary to popular assumption, popular culture grows more complex and cognitively demanding over time, and the public levels up its understanding to enjoy it. Perhaps a similar dynamic could take place with digital disinformation: over time, wary of looking gullible in their social circles, some portion of the public will learn to self-regulate their consumption and amplification of obvious disinformation.

Fact-checking

Of course, InternetLab has also found that meticulous fact-checking loses out to tribal identity. They found that many people are well aware of fact-checking practices, and rationally understand that they should consider a piece of media’s content as well as the source promoting it. But the process “falls apart,” Heloisa says, when they’re asked to investigate the source’s credibility. If the source is someone they feel an affinity for, a random YouTube influencer can carry more credibility than a mainstream media outlet they don’t.

In this year’s research, InternetLab has found that people are more skeptical of the information they receive on Whatsapp. But their response is very sensitive to the sender. If they receive disinfo from someone they consider smart, savvy, or otherwise trustworthy, they’ll often believe it. This finding underscores the importance of Nichole Sessego’s argument that identifying locally trusted leaders is key to confronting the spread of disinformation.

Impact measurement

When you work to improve policy from a civil society perspective, wins can be rare, and your role in the victories is hard to determine. It’s a testament to the amount of hard work InternetLab has put in that they can credibly point to where their efforts clearly contributed to improved outcomes. For example, InternetLab is in dialogue with Brazil’s electoral court and participates in its program against electoral disinformation. It was a clear win when the court included InternetLab’s policy suggestions in its updated election law resolutions. InternetLab shares policy materials with officials, and the electoral board has specifically cited the organization’s work in its decisions.

InternetLab also introduces its research findings in a variety of formats to help ensure they land with the target audience. In addition to publishing reports and policy proposals, the organization shares the work at relevant workshops and collaborates with media channels to amplify its findings. In a research partnership with two federal universities to monitor disinfo ecosystems, InternetLab is helping articulate the research and connect the findings to other civil society organizations and decision-makers.

Multi-sector challenges demand multi-sector approaches

A key method for achieving impact across several sectors (public, private, academia, and civil society) is the sector-spanning partnership. This is a strategy we’ve seen several times in the civic tech community’s fight against disinformation (see Code for Africa’s newsroom partnerships, for example).

InternetLab does its part, with many partners across a variety of sectors. Heloisa sees the organization as “occupying the role of diplomatic relationships, talking to everyone and understanding the challenges from the private sector, the public sector, and civil society, and working on creating that dialogue.” She relies on civil society organizations to advocate and campaign for change, as InternetLab’s role is primarily that of providing research and facilitating dialogue. 

Invite everyone (everyone) to the table, and progress might result

Heloisa shared a unique private-sector partnership approach with me that I haven’t come across in my other conversations. Many groups are working to guide the tech platforms to become better stakeholders. InternetLab does too, but they’ve also opened a dialogue on disinfo with marketing agencies and their customers. 

Several civic tech organizations have pointed out the role of economic actors in disinfo. The people seeking profit through social media utilities, the influencer economy, and traditional marketing and advertising campaigns don’t necessarily have nefarious intentions. Yet their work sometimes ends up supporting influencers and others peddling disinfo.

InternetLab sought to develop a conversation with the marketing industry to promote good practices for the sector so that it can have a positive (or at least benign) influence on public debates. This approach has already proven useful, as they’ve been able to engage high-level CEOs and leadership from marketing firms in conversation on these issues. 

InternetLab just launched its guide for the digital marketing sector together with two firms. Broadly speaking, the guidelines recommend that marketers prioritize authenticity and consider, early in the development of a campaign, which societal narratives their advertising will reinforce and support. By following the guidelines, marketing firms can keep themselves and their client brands far away from hate speech, discrimination, and disinformation narratives. The guide also advises on how to redirect advertising dollars away from purveyors of disinformation. For their part, Heloisa says, the firms’ leadership has been responsive and shared perspectives that InternetLab hadn’t previously considered.

Disinformation isn’t simple, so our responses can’t be, either

We sometimes reduce digital disinformation to simplistic terms of good vs. bad, or disinfo producers and the public audience. Heloisa argues that it’s just not that simple. For example, the structure, architecture, and affordances of the tech platforms we use to communicate affect our agency and habits in consuming and creating information. We are active participants, not passive subjects. And disinformation isn’t just manufactured – it’s a complex phenomenon that involves a lot of different factors.

Heloisa says that it’s important for this field of disinformation response to maintain the nuance, and avoid easy answers: “It’s not so simple to say that disinformation flows through Whatsapp and that if you have fact-checks, or if you delete accounts, you solve the problem. Actually, our research says that fact-checking is not exactly the biggest part of the solution. People are building their own [information] ecosystems.”

In Brazil, many of the proposed responses to disinfo align with extremes of either super regulation of the internet, or super freedom of speech. We need to move the conversation beyond ‘the tech platforms are evil’ or ‘the government should regulate everything’, Heloisa says. “We are dealing with a changing society that has changed how they communicate, crossed with a lot of other factors. We need to try to dialogue with everyone.” We mustn’t get stuck with the simplest responses.

Finding funding

InternetLab is primarily funded by philanthropy, but also receives funding from the private sector to a lesser degree. Heloisa views this funding as an opportunity for dialogue with all the relevant stakeholders and says InternetLab has policies in place to maintain its independence.

Resources needed

Like almost every organization we’ve spoken to, InternetLab would always find more funding helpful to the work. Beyond that, Heloisa is keenly interested in learning what her peers in this field are finding out through their work, in particular, how people elsewhere use digital platforms in their media habits. This peer exchange could allow groups to share what they’re working on, what’s working, and, candidly, what’s not working. Plus, sharing data would be super useful.

Free idea: Reach out to everyone who’s influential in driving public conversations, even the profit-driven attention industries

InternetLab’s work to build bridges to the marketing and advertising industries is a great example of an outreach strategy based on reality. Through their creative campaigns and huge promotional budgets, the attention industries have massive sway over our societal conversations. We can exert more influence over their impact by actively engaging their leadership in our work to counter disinformation than we can by keeping them at arm’s length.

Interested in learning more?

Matt Stempeck, the National Democratic Institute, and Code for All worked on a research project to understand how civic tech can help confront disinformation. The project’s goal is to learn lessons from organizations that focus on this area (and share them out!).

Check out the Disinformation Research Project to learn more and help us spread the word!



Matt Stempeck

Disinformation Research Consultant

Matt Stempeck is a freelance technologist based in Lisbon and Berlin, where he researches civic tech and builds engagement products for clients like the Boston Globe. He serves on mySociety's The Impacts of Civic Tech Steering Group, and recently authored People Powered's Guide to Digital Participation Platforms. Matt is a Technologist in Residence at Cornell University, where he's helped launch Cornell Tech's Public Interest Tech program.
