“It was the perfect storm”

Before 700,000 Rohingya fled genocide in Myanmar in 2017, the military had incited millions of users against the group in a hate speech campaign on Facebook. Why did the company not intervene? And could it happen again? Human rights experts Matthew Smith (Fortify Rights) and Alan Davis (Institute for War and Peace Reporting), who both witnessed the events leading up to the genocide, shared their insights with me over the phone.

Published on the NUDGED blog, March 18, 2020

The interviews were edited for length and clarity.

Alan, you led a study on hate speech in Myanmar prior to the genocide, from 2015-2017, for the Institute for War and Peace Reporting. What kind of rhetoric did you observe?

When we started our observation, four years after the end of the civil war, everybody seemed to attack everybody else based on their ethnicity. But the Bamar, who are by far the largest ethnic group [68% of the population], had the loudest voice – and together with the military, the Tatmadaw, and some widely respected Buddhist monks, they started targeting Muslims, especially the Rohingya in Rakhine State.

On Facebook, they were presented as the “other”: They were labeled as “dark-skinned” (“kalar”), called dirty, and compared to dogs, crows and insects. They were never referred to as Rohingya but as “Bengali”, thus reaffirming the false narrative that they are “illegal immigrants” from Bangladesh. Hatred against Burmese Muslims had culminated in racialized attacks several times during the 20th century, but this time it was amplified by social media.

Why were Myanmar citizens so susceptible to falling for this rhetoric?

After half a century of dictatorships, the country had officially transitioned to democracy in 2011, but the military still held power. Whenever a society transitions from closed to open, there is potential for hate speech – I saw that during my time as a journalist in Yugoslavia in the 1990s. In Myanmar, people with no media literacy, little education and little contact with the “other” suddenly gained access to mobile phones and social media – it was the perfect storm.

If this was so obvious for me, a silly ex-journalist and non-expert, it should have been easy to see for the development community and the diplomatic community. But they did not react until it was too late.

Over the course of five years, the Myanmar military orchestrated a large-scale hate speech campaign, according to ex-military officials, researchers, and civilians who spoke to the New York Times. Their strategy was to first attract followers by setting up fan pages devoted to Burmese celebrities, like a beauty queen, and to then use those pages to distribute fake news and hate speech. Troll accounts run by the military helped share the content, shut down critics and fuel arguments between commenters. Did you know this back then?

It felt orchestrated, but we couldn’t be sure at the time of the research. We realized that over time, hate speech and conspiracy theories became more systematic: Muslims in Myanmar were accused of flying ISIS flags over their mosques in Yangon, for example; they indeed had black flags, but those were part of a thousand-year-old local festival. The accounts made use of the ISIS attacks in Europe at that time to fuel mistrust, and it didn’t take long until claims spread that all Muslims were terrorists and should be eradicated. Of course, this is dangerous on two levels: It can lead to genocide, and it can become a self-fulfilling prophecy: If you keep demonizing a people, some of them might eventually say, “okay, you keep calling us terrorists, we’re going to get guns and become terrorists”.

Did anybody in Myanmar try to counter those narratives – journalists or critical users for example?

Not really. Most people were too afraid of becoming targets themselves. When we tried to fund local journalists to investigate the problem of disinformation, they all declined for fear of being seen as the enemy within. While mainstream media in Myanmar is no longer part of the problem, it has yet to become part of the solution. Even our donors were scared…

Who were the donors of your study project?

I cannot tell you; they still want to keep their support quiet for fear of retribution. They asked us to keep our office address secret and to stop our original plans to organize a public education campaign with teach-ins and debates on media literacy alongside the research. To a certain degree, I understand this. But if nobody can take a little risk and ring the alarm bells, this does not help the case.

 

Before the 1994 Rwandan genocide, Hutu hard-liners spread hate propaganda on a radio station funded by the government, Radio Mille Collines. Why was Facebook the stage for propaganda against the Rohingya?

Facebook is very popular in Myanmar [with 15 million users in 2017, up from only 1.2 million in 2014, according to Hootsuite’s We Are Social report] because it allows people who have not talked to each other for nearly 50 years to talk to everybody. And Facebook is the internet in Myanmar. Since the civil war, mobile phones have become very cheap, but they come with a Facebook app installed by default, which in many areas is the only way to access the internet: Since 2013, the “Free Basics” app connects you to low-bandwidth services and Wi-Fi hotspots operated by local shop owners. But it only gives you access to very few sites, all delivered by Facebook.

Facebook has been criticized for violating net neutrality because it brings such a limited version of the internet to remote areas around the world.

For good reasons. Due to these limits, Burmese users could not verify the allegations via external sites. And as we know, Facebook’s algorithms reward whatever triggers an emotional response and perpetuate what users already believe and like, while falling short on reviewing posts. Together with already existing ethnic tensions and a lack of media literacy, this made Facebook a breeding ground for hate speech.

Thank you very much, Alan!

Matt, with your organization Fortify Rights, you warned about mass atrocities against the Rohingya as early as 2013. What did you witness?

Eight years after moving to Southeast Asia, I co-founded Fortify Rights to stop the violence against the Rohingya in Rakhine State that had started in 2012. We were on the front lines while the attacks were unfolding. We spoke with people who had witnessed horrific crimes just hours before, and we saw people with injuries indicative of mass-scale violence. Across the Naf River, we even saw villages going up in smoke.

Over the years it got even more worrisome, and we realized that we were seeing something systematic, something that could result in a massive loss of life. So we alerted not only Myanmar officials but also people in positions of power in foreign governments, within the UN system and elsewhere, trying to prevent what ended up unfolding as a genocide.

Given that the military was using Facebook to turn millions of users against the Rohingya, did you confront the company?

Yes, over the last several years we had a number of meetings with Facebook executives, both from the regional offices and from headquarters.

Many other organizations, scholars and even the US ambassador to Myanmar at the time, Derek Mitchell, also warned Facebook as early as 2013, but the company did not take action until 2018. Did they ignore this?

I cannot get into the heads of the executives at the company. And I cannot fully understand how a company that is so well resourced, earning tens of billions of dollars annually, could make such profound mistakes with regard to human rights. The platform was essentially an arm of this genocide that was unfolding. It is simply baffling to think about why they failed so miserably.

After the genocide, Facebook started employing Burmese speakers to review content, and it removed accounts and pages associated with the military that had a total of 12 million followers. It also released an independent report acknowledging its failures in Myanmar. Was that enough?

I’m impressed that the company now appears to understand that it has serious human rights issues. But it should do more: For example, Facebook has taken information offline that could potentially be useful for ongoing prosecutions. If I were Mark Zuckerberg, I would hire experts in international justice, have them analyze evidence of hate speech that Facebook has taken offline, and provide it to prosecutors. Facebook executives have told us that the company would cooperate with formal mechanisms, but to my knowledge, this has not happened yet. This is astonishing because, in other contexts, the company routinely cooperates with law enforcement – to prosecute people who spread child pornography in the US, for example. I also wonder what Facebook is going to do for the communities that fled and survived the genocide and other mass atrocity crimes.

What is the atmosphere in Myanmar like now? Could this happen again, for example, before the elections this fall?

Most definitely, because there is still quite a lot of problematic content online in Burmese. It is difficult to assess from the outside because Facebook does not make transparent how its platform works in regard to hate speech: Can Facebook’s algorithms automatically detect hate speech in multiple languages? And how do content reviewers interact with those algorithms? Just by way of observation, Facebook is not out of the woods yet.

After this scandal, Facebook will be careful with Myanmar in the future. But there are many other countries in the Global South that have this asymmetrical relationship with the company. For people in those countries, Facebook is really important, but for the company, these are small markets with low ad revenue and less widely spoken languages, which are not profitable. Will Facebook be as diligent in curating comments and news in those countries as in richer ones?

Well, I would implore the company not to let revenue factor into serious decisions about human rights violations. In our conversations with different individuals at Facebook, they were trying to assure us that they’re looking at all these questions regardless of the market in which the problems are occurring. But without knowing what is really going on internally, it is hard to know how they’re making those decisions.

Facebook seems to struggle with the question of whether they, as a US company, can block harmful government accounts from other countries. For example, they still allow General Hemeti from Sudan to promote himself on the platform, although he spreads hatred, because he is a government official. What should they do?

From our perspective, it’s a no-brainer that anybody propagating hate speech should be kicked off the platform, government official or not. We have had some conversations with Facebook about this as well. It’s not as though the company would have to go out and investigate – they simply have to listen to reliable sources: the UN special procedures, the office of the High Commissioner for Human Rights, rights organizations like ours and Human Rights Watch, or community organizations that are often the first to understand and document the truth.

Mark Zuckerberg has set up an independent oversight board that will start deciding this summer whom to block, because he does not feel comfortable making those decisions himself. What should such an institution look like to prevent harm?

I think that’s an interesting idea – like a Supreme Court for Facebook. They consulted with us and with others in Southeast Asia to get feedback on this. It’s unprecedented in the sense that we’re entering into an age when a lot of these companies have more resources and more reach than governments. In some ways, they’re like quasi-states where questions of human rights are becoming more serious. I think the rubber will hit the road when the board makes a decision that Mark Zuckerberg disagrees with.

Thanks for your time, Matt!

Alan Davis is Asia and Eurasia Director of the Institute for War and Peace Reporting (IWPR), which he joined in 1994. Prior to that, he was the media advisor for Eastern Europe and the Former Soviet Union at the UK’s Department for International Development (DFID), and he worked as a journalist in Indochina, East Asia and Yugoslavia.

 

Matthew Smith is a co-founder and the CEO of Fortify Rights, an organization based in Southeast Asia. Matthew previously worked with Human Rights Watch, EarthRights International, and Kerry Kennedy of Robert F. Kennedy Human Rights, and as a community organizer and emergencies social worker in the US. Since 2019, he has been a fellow at the Carr Center for Human Rights Policy at the Kennedy School of Government at Harvard University.

 

Header photo: A Rohingya woman in the Balukhali refugee camp in March 2018, half a year after the violence erupted in Myanmar. (cc) UN Women/Allison Joyce
