
September 25, 2014

Computers are learning to size up neighborhoods using photos

by John_A

MIT's deep learning algorithm checks out a neighborhood

We humans are usually good at making quick judgments about neighborhoods. We can figure out whether we're safe, or whether we're likely to find a certain store nearby. Computers haven't had such an easy time of it, but that's changing now that MIT researchers have created a deep learning algorithm that sizes up neighborhoods roughly as well as humans do. The code correlates what it sees in millions of Google Street View images with crime rates and points of interest; it can tell what a sketchy part of town looks like, or what you're likely to see near a McDonald's (taxis and police vans, apparently).
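The core idea, learning a mapping from image-derived features to a neighborhood statistic like crime rate, can be sketched in miniature. This is not MIT's actual system (which trained a deep network on millions of real images); it is a toy illustration using made-up feature vectors, standing in for what a pretrained vision model might extract, and a simple least-squares fit:

```python
import numpy as np

# Hypothetical feature vectors for street-view images (stand-ins for the
# outputs of a vision model) and a synthetic per-image crime-rate signal.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))      # 200 images, 8 features each
true_weights = rng.normal(size=8)         # the relationship we hope to recover
crime_rate = features @ true_weights + rng.normal(scale=0.1, size=200)

# Fit a linear map from image features to crime rate via least squares.
weights, *_ = np.linalg.lstsq(features, crime_rate, rcond=None)

# Score an unseen image: a higher predicted value means the model judges
# the scene to resemble higher-crime-rate imagery in the training set.
new_image = rng.normal(size=8)
score = float(new_image @ weights)
print(round(score, 3))
```

The real work is in the features: a deep network learns what visual cues (building condition, signage, vehicles) actually correlate with the target statistic, rather than relying on hand-picked inputs as this sketch does.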

Once a computer has trained itself with the algorithm, it's surprisingly effective. While humans are still quicker at finding their way to a given location, machines are better at gauging how close they are based on individual photos. Sadly, you won't see this technology used in the real world any time soon, since it's just a proof of concept at this stage. However, it's already good enough that MIT's team believes it could help navigation apps steer you around crime-ridden areas, or give retailers a sense of where to set up shop. Eventually, you may not have to set foot in an unfamiliar neighborhood before you get a feel for what it has to offer.


Source: MIT News, CSAIL

