
Facebook: Brazil is among the three countries most in need of moderation

RIO DE JANEIRO, BRAZIL – According to the company’s internal files, since late 2019 the platform has maintained a ranking of nations that demand special attention, particularly in sensitive periods such as elections, demonstrations, and social instability. Brazil, the U.S., and India sit at “level 0” (tier 0), the top priority.

These countries receive the most resources and proactive moderation work, including dedicated specialist teams operating 24 hours a day, aided by artificial intelligence. Facebook has set up veritable “war rooms,” officially called “enhanced operations centers,” to constantly monitor the platform in these three countries.

Brazil is one of the three countries most in need of content moderation on Facebook, alongside India and the United States. (Photo: internet reproduction)

According to the investigations:

At “level 1” (tier 1) are Indonesia, Israel, Iran, Italy, and Germany. These countries receive fewer resources, with special attention only at election time.
At “level 2” (tier 2) are 22 other countries, with no “war rooms.”
At “level 3” (tier 3) is the rest of the world where Facebook operates. In this category, intervention is minimal and content evaluation is almost non-existent: an inappropriate post is taken down only if a moderator locates it manually.

Only in the event of a crisis or extraordinary circumstances, such as a coup d’état or human rights violations, can a tier 3 country or region temporarily receive more active moderation efforts.

In other words, only the 30 nations with the most users and traffic on the social network actually have their posts reviewed. According to the internal documents, the ranking, created months before the 2020 U.S. presidential election, was a way to better distribute moderation resources around the world.

CONSEQUENCES OF INEQUALITY

With about 3 billion users, the company chose to prioritize the places where it is most popular rather than divide its resources equally. This inequality of moderation has drawn criticism in recent years.

Countries like Myanmar, Pakistan, and Ethiopia, despite constantly undergoing sectarian, political, and social conflicts, have no content classifiers, which contributes to the spread of violence and hatred on the social network.

In Myanmar the situation is even worse, as Facebook is the main gateway to the Internet due to the constant blackouts and censorship imposed by the government.

One of the greatest obstacles is language: the company does not have enough specialists, such as translators and moderators, who speak the languages of most tier 2 and tier 3 countries. Only with them would it be possible to detect hate speech and fake news, and to train artificial intelligence to do the same.

“We have dedicated teams working to prevent abuse on our platform in countries where there is a high risk of conflict and violence. We also have global teams with native speakers who review content in over 70 languages, along with experts on humanitarian and human rights issues,” a Facebook spokesperson told the Insider website.

“We’ve made progress in addressing major challenges, like the evolution of hate speech terms, and we’ve built new ways to quickly respond to problems when they arise. We know these challenges are real and we’re proud of the work we’ve done so far.”

