Facebook is valiantly trying but failing to moderate hate speech in some languages
Efforts by Facebook to moderate content in one of its fastest-growing markets have evidently not been easy.
As in many other markets, the social media giant is trying to stem content deemed hateful in Myanmar.
The emerging market has experienced a surge of new internet users in recent years, and Facebook has benefited, reaching 10 million Burmese users by 2016.
With these new users come challenges. One hot-button topic, the ongoing Rohingya refugee crisis, has spawned posts insulting the Muslim Rohingya minority.
For its part, Facebook is trying to block derogatory terms that have traditionally been thrown at the group. But like many automated processes dealing with a language's nuances, the system can get things wrong.
In a Medium post, Facebook user Aung Kaung Myat points out that Facebook has, almost comically, blocked posts containing any reference to the banned words, including puns and words that merely sound like them.
In a statement, Facebook said that its teams of moderators regularly engage with and listen to feedback from the community, safety experts, and NGOs in Myanmar.
“Once we’re made aware of errors we quickly act to resolve them,” a spokesperson told Mashable, adding that the company conducts “regular audits and quality assessments” to keep such errors from recurring.
Moderation is an uphill battle
The move to ban slurs in Myanmar is the latest in Facebook's efforts to curb hate speech on its platform. In a series of leaked documents published by the Guardian last week, Facebook identified racial slurs as unacceptable on its platform, except in cases of ironic use.
The social media giant is facing pressure from governments to stamp out hate speech among its nearly two billion users. The company is also attempting to use machine learning and AI to ease the burden on its 4,500 content moderators.