Facebook has agreed to pay $52m (£42m) to content moderators as compensation for mental health issues developed on the job.
The agreement settles a class-action lawsuit brought by the moderators, as first reported by The Verge.
Facebook said it is using both humans and artificial intelligence (AI) to detect posts that violate policies.
The social media giant has increased its use of AI to remove harmful content during the coronavirus lockdown.
In 2018, a group of US moderators hired by third-party companies to review content sued Facebook for failing to create a safe work environment.
The moderators alleged that reviewing violent and graphic images – sometimes of rape and suicide – for the social network had led them to develop post-traumatic stress disorder (PTSD).
The agreement, filed in court in California on Friday, settles that lawsuit. A judge is expected to sign off on the deal later this year.
The agreement covers moderators who worked in California, Arizona, Texas and Florida from 2015 to the present. Each moderator, both former and current, will receive a minimum of $1,000, with additional compensation available to those diagnosed with PTSD or related conditions. Around 11,250 moderators are eligible for compensation.
Facebook also agreed to roll out new tools designed to reduce the impact of viewing the harmful content.
A spokesperson for Facebook said the company was “committed to providing [moderators] additional support through this settlement and in the future”.