Most Of Meta's Anti-Misinformation Team Laid Off Amid Oversight Board Call Out: Report
KEY POINTS
- Retained misinformation team members are being transitioned to other similar units
- The Oversight Board said Meta's algorithms amplified harmful health misinformation
- A recent study suggested Facebook users were more likely to have read fake news during the 2020 election
The majority of the team handling misinformation across Facebook and Instagram was affected by the layoffs Meta announced Wednesday.
The cuts to Meta's anti-misinformation department came as the Oversight Board called out the Mark Zuckerberg-led social media giant over concerns about health-related misinformation spread on its platforms.
Meta spokesperson Dave Arnold told The Verge Thursday that the retained members of the misinformation team, which reportedly had about 50 members before the cuts, are being moved into other trust and safety teams doing similar work.
"We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community," Arnold said in an emailed statement to the outlet.
The misinformation team layoffs came as part of the second round of reductions announced by Meta, which cut around 4,000 employees.
In February, Meta declined to comment on a report by The New York Times, which said that companies' "efforts to combat false and misleading information online" seem to have "waned" even as misinformation "remains as pernicious as ever."
Reports of Meta layoffs affecting the misinformation team came as the Oversight Board responded to the social media giant's request for help with its policies on COVID-19 disinformation. The Board recommended that Meta apply more rigorous scrutiny in handling health-related misinformation.
"The Board strongly recommends that Meta publish information on government requests to remove COVID-19 content, take action to support independent research of its platforms, examine the link between its platforms' architecture and misinformation, and promote understanding around COVID-19 misinformation globally," the Board said in a public advisory response.
While the Board acknowledged Meta's efforts in removing "about 80 distinct COVID-19 misinformation claims," it also noted that Meta "has frustrated the Board's efforts" to reconcile varying views from stakeholders and board members on addressing health-related false information by "insisting" that a localized approach was not feasible.
The Board also called out the social media giant for not consulting public health authorities to re-evaluate the misinformation claims it removed from its platforms. It called on Meta to "reassess" which claims still warranted removal and which should be retained, as the pandemic has "evolved" since the guidelines were first established.
Experts have raised concerns about "the architecture of Meta's platforms" amplifying harmful health-related misinformation, according to the advisory. The Board then recommended that Meta obtain a human rights impact assessment on how its algorithms, News Feed and other features amplify misinformation.
Finally, the Board asked Meta to ensure that its rules on removing COVID-19 misinformation are clear, fair, consistent and transparent.
As long as the World Health Organization continues to declare COVID-19 as an international emergency, Meta should retain its current misinformation policies, the Board said.
Meta sought the Board's advice on its COVID-19 misinformation policies in mid-2022. At the time, the tech giant said the goal was to "keep our users safe while still allowing them to discuss and express themselves on this important topic."
The Board was created to help Facebook answer questions related to freedom of expression on the platform, particularly "what to take down, what to leave up and why," according to its website.
Earlier this month, Washington State University published a study that found Facebook users were more likely to have read fake news about the 2020 election than users of other social media platforms, NBC affiliate KGW8 reported.
The study suggested that exposure to fake news and political persuasion were the two main forces fueling doubt among citizens about the vote-counting process during the 2020 election.
Robert Crossler, WSU associate professor and study co-author, said social media users who depended on Facebook for news "often got disinformation from stories not connected to mainstream news media sources," which then sowed doubt about the election results.
Meanwhile, Facebook asserts that it removes misinformation "where it is likely to directly contribute to the risk of imminent physical harm." Instagram, for its part, maintains that it identifies false or altered content and then makes that content "harder to find" by reducing its visibility in both stories and feeds.