Rights organizations are urging Meta Platforms to seize the opportunity to improve content moderation in Africa after the social media giant’s primary third-party contractor on the continent announced it would stop screening harmful posts for it.
Kenyan outsourcing company Sama announced on January 10 that it would stop providing content moderation services to the owner of Facebook, WhatsApp, and Instagram in March in order to focus on data-labelling work. The announcement came as Sama and Meta face a lawsuit in Kenya alleging that they violated their employees’ labor rights, including by preventing them from unionizing.
A Kenyan court is expected to rule on February 6 on whether the complaint is admissible. A separate complaint against Meta was filed last month, alleging that the corporation enabled the spread of violent posts on Facebook, escalating the civil strife in Ethiopia.
Two Ethiopian researchers and Kenya’s Katiba Institute rights group filed the case, which claims that Facebook’s recommendation systems magnified hostile and violent remarks in Ethiopia, including several that appeared before one of the researchers’ fathers was killed.
The plaintiffs are asking Meta to take immediate steps to demote violent content, increase the number of moderators in Nairobi, and establish restitution funds of around $2 billion for victims of incited violence worldwide. Meta says Facebook and Instagram have strict policies setting out what is and isn’t permitted. The company has previously drawn criticism over the working conditions of its content moderators and over its efforts to curb hate speech and violent content.
In 2021, Rohingya refugees from Myanmar sued Meta for $150 billion, alleging that the company failed to act against anti-Rohingya hate speech that contributed to violence against the minority group.