Alphabet is helping The New York Times improve its comments section
Why it matters to you
Online comment sections are often thought of as a breeding ground for trolls, but Alphabet’s automated moderation techniques could change all that.
The New York Times recently announced plans to retire its public editor position, on the basis that comments submitted by readers served a similar purpose. Now, the publication has detailed how a partnership with Alphabet will help ease the transition to an automated comment moderation system dubbed Moderator.
A team of 14 human moderators was responsible for approving an average of 12,000 reader comments every day up until recently, according to a report from The Verge. For now, Moderator approves only around 20 percent of comments, but it is set to take on a greater role over the coming months.
In February 2017, Google announced a technology called Perspective, developed by Alphabet subsidiary Jigsaw. Perspective uses machine learning to determine which comments are acceptable, and which are “toxic” and don’t contribute anything to the overarching discussion.
Moderator is similar to Perspective in that it analyzes comments by comparing them to examples that human moderators have previously flagged. Each comment receives a percentage score, with a higher number indicating that the comment is more likely to be deemed inappropriate. Low-scoring comments are published to the site automatically, while flagged comments are reviewed by a human moderator.
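The triage logic described above can be sketched in a few lines. This is a simplified illustration, not Jigsaw's actual model: the scoring function here uses naive word overlap with previously flagged examples as a stand-in for machine-learned similarity, and the example comments and the 50-percent threshold are invented for demonstration.

```python
# Hypothetical examples standing in for comments human moderators flagged.
FLAGGED_EXAMPLES = [
    "you are a total idiot",
    "what a worthless moron",
]

def toxicity_score(comment: str, flagged_examples: list[str]) -> float:
    """Return a 0-100 score; higher means more similar to flagged text.

    Uses Jaccard word overlap as a crude stand-in for the
    machine-learning similarity a system like Moderator would compute.
    """
    words = set(comment.lower().split())
    if not words:
        return 0.0
    best = 0.0
    for example in flagged_examples:
        example_words = set(example.lower().split())
        overlap = len(words & example_words) / len(words | example_words)
        best = max(best, overlap)
    return best * 100

def triage(comment: str, threshold: float = 50.0) -> str:
    """Auto-publish low-scoring comments; route the rest to a human."""
    score = toxicity_score(comment, FLAGGED_EXAMPLES)
    return "publish" if score < threshold else "human review"
```

In this sketch, a benign comment scores near zero and is published automatically, while a comment closely matching flagged examples crosses the threshold and is queued for human review.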
The New York Times expects this implementation of automated moderation to allow for more open comment sections on its website, without any detriment to the quality of discussion. From today, all of the publication’s articles that are deemed to be “top stories” will offer a comments section during business hours.
Of course, there are still some big questions to be asked about how effective Moderator will be in practice, especially when it comes to the ever-evolving world of online discourse. As new insults and derisive nicknames rear their heads, particularly those that refer to public figures covered by the site’s reporting, the tool will need to be kept abreast of what’s appropriate and what isn’t — and that process will likely require a human touch.