
September 1, 2017

AI writes Yelp reviews that pass for the real thing

by John_A

On any given day, hordes of people consult online reviews to help them decide where to eat, what to watch, and what to buy. We trust that these reviews are reliable because they come from everyday folks just like us. But what if the feedback blurbs on sites ranging from Amazon to iTunes could be faked — not just by nefarious humans, but by AI? That's what researchers from the University of Chicago set out to test, with surprising results. Not only did the Yelp restaurant reviews written by their neural network pass for the real thing, but people even found the posts to be useful.

As part of their attack method, the researchers used a deep learning program known as a recurrent neural network (RNN). Trained on large sets of data, this type of AI can produce relatively high-quality short writing samples, the team writes in its paper. The longer the text, the more likely the AI is to slip up. Fortunately for the researchers, short posts were ideal for their Yelp experiment.

They fed the AI a corpus of publicly available Yelp restaurant reviews, which it then used to generate its own fake blurbs. In a second stage, the text was further modified through a customization process to home in on specific details about the target restaurant (for example, the names of dishes). The AI then produced the targeted fake review.
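The paper describes a character-level RNN, but the shape of the two-stage pipeline can be sketched with a much simpler stand-in: sample text one character at a time from a model trained on review text, then swap in restaurant-specific terms. Everything below (the n-gram model, the function names, and the replacement table) is an illustrative assumption, not the researchers' actual code:

```python
import random
from collections import defaultdict

def train_char_model(corpus, order=3):
    """Count next-character frequencies for each length-`order` context."""
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context][corpus[i + order]] += 1
    return model

def generate(model, seed, length=80, order=3, rng=None):
    """Stage one: sample characters one at a time, as the RNN decoder does."""
    rng = rng or random.Random(0)
    text = seed
    for _ in range(length):
        choices = model.get(text[-order:])
        if not choices:
            break  # context never seen in the training data
        chars = list(choices)
        weights = [choices[c] for c in chars]
        text += rng.choices(chars, weights=weights)[0]
    return text

def customize(review, replacements):
    """Stage two: swap generic food words for restaurant-specific dishes."""
    for generic, specific in replacements.items():
        review = review.replace(generic, specific)
    return review
```

Trained on real reviews, `generate` produces text in the same register, and `customize` injects target-specific dish names, mirroring the paper's two stages at toy scale.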

Here’s a typical post by the robot foodie about a buffet place in NYC: “My family and I are huge fans of this place. The staff is super nice and the food is great. The chicken is very good and the garlic sauce is perfect. Ice cream topped with fruit is delicious too. Highly recommended!”

Not too shabby. Here’s another about the same restaurant: “I had the grilled veggie burger with fries!!!! Ohhhh and taste. Omgggg! Very flavorful! It was so delicious that I didn’t spell it!!” Okay, so that’s not perfect, but we all make errors now and again.

As it turns out, these were good enough to evade machine-learning detectors, and even humans couldn't reliably identify them as fake. Furthermore, people ranked them as high on Yelp's "usefulness" scale as real reviews.

These days, sites use both machine learning and human moderators to track down spam and misinformation. This approach has proven successful in catching crowdturfing campaigns — when attackers pay a large network of people to write fake reviews. But, the researchers warn, current defenses could come up short against an AI attack method like theirs. Instead, they claim the best way to fight it is to focus on the information that is lost during the RNN's training process. Because the system prioritizes fluency and believability, other properties of the text (like the distribution of characters) take a hit. According to the team, a computer program could sniff out these flaws, if it knew where to look.
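As a rough illustration of that idea, a detector could compare a review's character distribution against a reference distribution built from known-human text and flag large divergences. The divergence measure and the threshold below are illustrative assumptions, not the detector from the paper:

```python
import math
from collections import Counter

def char_distribution(text):
    """Relative frequency of each character in `text`."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def divergence(p, q, eps=1e-6):
    """Symmetrized KL divergence between two character distributions."""
    score = 0.0
    for c in set(p) | set(q):
        pc, qc = p.get(c, eps), q.get(c, eps)
        score += pc * math.log(pc / qc) + qc * math.log(qc / pc)
    return score

def looks_machine_generated(review, reference_dist, threshold=5.0):
    """Flag reviews whose character statistics stray far from the reference."""
    return divergence(char_distribution(review), reference_dist) > threshold
```

In practice the reference distribution would be estimated from a large corpus of verified human reviews, and the threshold tuned on labeled examples.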

The paper warns that, in the wrong hands, this type of attack could be turned on bigger platforms such as Twitter and other online discussion forums. The researchers conclude that it is therefore critical for security experts to come together and build the tools to stop it.

Via: Business Insider
