In Kenya, three complaints against Meta lift the veil on “the dark side of social networks”

Fined at the end of May in Europe over data protection, the Meta group is under attack on another front in Kenya: the working conditions of content moderators, the shadow workers responsible for removing violent and hateful posts from Facebook.

In Kenya, three complaints target Meta and Sama, the Californian company to which the group that owns Facebook, WhatsApp and Instagram outsourced the moderation of its social network content for sub-Saharan Africa between 2019 and 2023. Two were filed by content moderators employed by Sama in Nairobi, whose job was to review Facebook posts and remove those that were violent, incited hatred or spread misinformation.

Contacted by AFP, neither Sama nor Meta wished to comment on the ongoing cases. AFP is a partner of Meta, providing fact-checking services in Asia-Pacific, Europe, the Middle East, Latin America and Africa.

Trauma

A first complaint was filed in May 2022 with the Employment and Labor Relations Tribunal by a South African, Daniel Motaung. He denounces “inhuman” working conditions, misleading hiring methods, irregular and insufficient pay, and the lack of psychological support for the trauma caused by the work. He also claims he was fired after trying to form a union. The case has not yet been decided.

In March, a second complaint was filed by 184 other employees who claim they were wrongfully dismissed by Sama, which had announced it was ceasing its content moderation activity. They are demanding compensation for wages they deem “insufficient” given “the risk to which they were exposed” and the “damage to their mental health”. Pending a judgment on the merits, the dismissals were suspended on June 2 by the Employment Tribunal, which ordered Meta and Sama to “provide appropriate psychological and medical care to complainants”. Meta and Sama have announced their intention to appeal.

Read also: Meta, Facebook’s parent company, accused of “modern slavery” in Kenya

Another complaint, in December 2022, accuses them of inaction in the face of hate speech, which the complainants say culminated in the 2021 murder of a university professor in Ethiopia.

These cases are the most significant on the subject of content moderation since a class action launched in the United States in 2018. In May 2020, Facebook agreed to pay moderators $52 million in compensation for the effects of their work on their mental health. The complaints filed in Nairobi aim to expose a system of subcontracting that, according to its critics, Meta uses to try to evade responsibility.

“Dark rooms”

As with Sama in Nairobi, Meta outsources content moderation on Facebook to companies that operate in more than 20 locations around the world and process more than 2 million items daily, according to data provided by the group to AFP. Its lawyers argued that the group could not be tried in Kenya, where it has no activity of its own and is not a direct employer. But in its June 2 ruling, the court held that Meta was the “owner of the digital work and the digital workspace”.

“These cases lift the veil on the true dark rooms of content moderation,” says Brandie Nonnecke, director of the Center for Law and Technology at the University of California, Berkeley. “The general public does not realize how dangerous and horrible the content can be, and what the human cost of moderation is,” she believes.

Read also: In Kenya, artificial intelligence contractors set up Africa’s first content moderators’ union

For Cori Crider, director of the British organization Foxglove, which is supporting the complaints in Kenya, “the main objective in this case is to change the way the work is done”. The Ethiopia murder case and the working conditions case are “two sides of the same coin”, she argues, because degraded working conditions lead to poor moderation, which can have deadly consequences. Beyond Meta, these cases reveal “the dark side of social media in general”, she points out.

For Brandie Nonnecke, they are “the symptom” of a deeper problem: “The (social media) platforms have built systems that are a powder keg for harmful shared content to go viral. And they are hardly ever held responsible.” According to her, “They need to be forced to design their platforms in a way that does not encourage the posting and sharing of harmful content. This could stop harmful content at the source.”

The World with AFP
