A former Facebook moderator sued the site. This illustration picture taken on April 29, 2018, shows the logo of social network Facebook displayed on a screen and reflected on a tablet in Paris. Lionel Bonaventure/AFP/Getty Images

In recent years, Facebook has stepped up measures to protect its users from content that is violent, sexually explicit or otherwise objectionable. One of the contract employees responsible for sifting through that content filed a lawsuit against Facebook on Friday, alleging the company does not adequately protect its moderators, Reuters reported.

The class-action lawsuit was filed by former Facebook moderator Selena Scola in the Superior Court of the State of California. Scola’s suit claimed she and others in her position were forced to see “child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder” as part of their everyday routine. Scola now suffers from post-traumatic stress disorder stemming from her duties at Facebook, according to the suit.

Scola's lawsuit accused Facebook of violating California employment law by ignoring “the workplace safety standards it helped create.” According to the suit, moderators are forced to “work in dangerous conditions that cause debilitating physical and psychological harm.”

“Facebook is ignoring its duty to provide a safe workplace and instead creating a revolving door of contractors who are irreparably traumatized by what they witnessed on the job,” lawyer Korey Nelson said in a statement.

Facebook, for its part, has maintained for months that content reviewers can see mental health professionals on-site whenever they want. In a July blog post, Facebook claimed that content moderators have full health benefits, whether they are full-time employees of the Menlo Park, California-based company or employed by third-party moderation firms.

In total, more than 7,500 employees review content for Facebook. The company has worked to purge objectionable posts from the site, combining the efforts of human reviewers and artificial intelligence. For example, Facebook announced in late 2017 that its AI can detect posts indicating suicidal intent.

Instagram, Facebook’s photo-sharing subsidiary, came under moderation scrutiny of its own last week. The app’s algorithm recommended to users videos featuring child exploitation and genital mutilation, according to an investigation by Business Insider.

Facebook was criticized earlier this year when an investigative report found that third-party moderators were told not to remove hateful content because the pages that posted it were popular. The posts in question clearly violated the site’s guidelines for objectionable content.