How Robots Can Help: Google Uses Artificial Intelligence To Track Abusive Comments On New York Times, Other Sites
Google Inc. on Thursday announced new artificial intelligence software for weeding out particularly abusive or hateful remarks from comments sections, an attempt to steer those platforms toward more thoughtful debate.
The Mountain View, California-based company launched the program, called Perspective, using an interactive demo allowing viewers to gradually purge three hypothetical comments sections—on climate change, the 2016 presidential election and the U.K.’s separation from the European Union, also known as “Brexit”—of their inflammatory remarks. Move the slider from right to left and phrases like “If they voted for Hilary [sic] they are idiots” are replaced by comments such as “Horrible, but the lesser of two evils won.” Slide the toggle further and what’s left are remarks like “Did you vote for what you truly believe is right and why?” and the sincere if improbable “I honestly support both, as I was a Bernie [Sanders] supporter.”
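Conceptually, the demo’s slider amounts to filtering comments by a model-assigned score. Here is a minimal sketch of that idea in Python; the comments and the numeric scores are illustrative placeholders, not values produced by the demo:

```python
# Illustrative only: each comment is paired with a hypothetical score between
# 0.0 (benign) and 1.0 (highly inflammatory), mimicking the demo's slider.
comments = [
    ("If they voted for Hilary they are idiots", 0.93),
    ("Horrible, but the lesser of two evils won", 0.55),
    ("Did you vote for what you truly believe is right and why?", 0.05),
]

def visible_comments(scored_comments, threshold):
    """Return only the comments whose score falls at or below the slider threshold."""
    return [text for text, score in scored_comments if score <= threshold]

# Sliding the toggle toward "less toxic" is equivalent to lowering the threshold.
print(visible_comments(comments, threshold=0.6))  # drops the most inflammatory remark
print(visible_comments(comments, threshold=0.1))  # leaves only the mildest comment
```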
The software “uses machine learning models to score the perceived impact a comment might have on a conversation,” according to the site, which listed the Economist, the Guardian, the New York Times and Wikipedia as partners. The latter two have reported using software from Jigsaw, the Google incubator that created Perspective, to help moderate their platforms.
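Perspective is exposed to partners as a web API that returns such scores. A minimal sketch of a request in Python, assuming the publicly documented commentanalyzer endpoint and its TOXICITY attribute; the API key and comment text are placeholders:

```python
import json
import urllib.request

# Assumed endpoint for Perspective's comment analyzer; "YOUR_API_KEY" is a placeholder.
API_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def toxicity_score(text):
    """Ask Perspective to score one comment and return its summary TOXICITY value."""
    body = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A score near 1.0 indicates a comment most readers would likely call toxic.
print(toxicity_score("If they voted for Hilary they are idiots"))
```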
As Economist Community Editor Denise Law told Fortune Thursday, the magazine planned to give its human moderators a break by testing Jigsaw’s AI software to keep the “really good” comments from falling into a black hole of toxic ones.
“In [Donald] Trump’s America, there’s this bifurcation,” Law said. “The Economist has long been a place for debate, but the comments are not at the level we’ve hoped them to be.”
Many news outlets have shut down their comments sections in recent years in hopes that conversation will move to social media, prompting some critics to accuse them, along with outlets that employ AI moderation, of censorship.
Mary Hamilton, the Guardian’s executive editor for audience, painted the issue a bit differently.
“When we talk about improving the comments, we aren’t just speaking about censorship or eliminating criticism,” she wrote in a January 2016 column. “What we want instead is to free the voices that struggle to be heard, so that we can listen.”