
June 13, 2017

Alphabet is helping The New York Times improve its comments section

by John_A

Why it matters to you

Online comment sections are often seen as breeding grounds for trolls, but Alphabet’s automated moderation technology could change all that.

The New York Times recently announced plans to retire its public editor position, on the basis that comments submitted by readers served a similar purpose. Now, the publication has detailed how a partnership with Alphabet will help ease the transition to an automated comment moderation system dubbed Moderator.

A team of 14 human moderators was responsible for approving an average of 12,000 reader comments every day until recently, according to a report from The Verge. Moderator currently approves around 20 percent of comments, and it is set to take on a greater role over the coming months.

In February 2017, Google announced a technology called Perspective, developed by Alphabet subsidiary Jigsaw. Perspective uses machine learning to determine which comments are acceptable, and which are “toxic” and don’t contribute anything to the overarching discussion.
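To make the idea concrete, here is a minimal sketch of how a publisher might request a toxicity score from Perspective. It assumes a valid Google Cloud API key and follows the field names in the API's public documentation at the time; treat the exact request shape and the example scores as assumptions for illustration rather than a description of the Times' own integration.

```python
import requests

# Hypothetical key; the real Perspective API requires a Google Cloud API key.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment_text: str) -> float:
    """Ask Perspective for a toxicity probability between 0 and 1."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    body = response.json()
    # summaryScore.value is the model's overall estimate that the comment is toxic.
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("Thanks for the thoughtful reporting."))   # expected: low score
print(toxicity_score("Nobody cares what you think, idiot."))    # expected: high score
```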

Moderator is similar to Perspective in that it analyzes comments by comparing them to examples that have previously been flagged by human moderators. Each comment receives a percentage score, with a higher number indicating that it is more likely to be deemed inappropriate. Comments with low scores will be published to the site automatically, while flagged comments will be reviewed by a human moderator.
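The routing step this describes can be sketched in a few lines. The threshold value and function names below are assumptions for illustration; the Times' actual Moderator pipeline has not been published.

```python
# Hypothetical cutoff: scores below it are trusted enough to publish unreviewed.
AUTO_PUBLISH_THRESHOLD = 0.30

def route_comment(comment_text: str, score: float) -> str:
    """Publish low-scoring comments automatically; flag the rest for review."""
    if score < AUTO_PUBLISH_THRESHOLD:
        return "published"          # goes live without human review
    return "queued_for_review"      # sent to a human moderator

# Example: a comment scoring 0.12 is published, one scoring 0.85 is queued.
print(route_comment("Great reporting, thank you.", 0.12))
print(route_comment("This is garbage and so are you.", 0.85))
```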

The New York Times expects this automated moderation to allow for more open comment sections on its website without hurting the quality of discussion. Starting today, all of the publication’s articles deemed “top stories” will offer a comments section during business hours.

Of course, there are still some big questions to be asked about how effective Moderator will be in practice, especially when it comes to the ever-evolving world of online discourse. As new insults and derisive nicknames rear their heads, particularly those that refer to public figures covered by the site’s reporting, the tool will need to be kept abreast of what’s appropriate and what isn’t, and that process will likely require a human touch.



