7
Nov

Not feeling that spark? Here’s how to test a car battery


When it comes to car problems, nothing is ever convenient. It seems like your vehicle plots the perfect moment to give you grief, and the majority of the time, it involves a dead battery. While it’s true the starter motor, alternator, or spark plugs could be behind your vehicle’s refusal to start, it’s most likely that your battery is zapped. In this article, we’ll cover how to test a car battery, specifically its voltage, and also break down what each reading means.

Testing

Diagnosing a car battery is a breeze, but you will need a piece of equipment called a multimeter. These can be picked up for cheap either at your local auto parts store or online, and will quickly tell you whether or not your battery is out of juice. Though you can find analog multimeters, we’d recommend investing in a digital unit so there’s no misinterpreting the readout.

Finding your vehicle’s battery should be a cinch, but some automakers put them in odd places such as the trunk or under the rear seats. The vast majority, however, can be found under the hood, to the right or left of the engine. You can identify the battery by the positive (red/plus sign) and negative (black/minus sign) terminals that either route to a rectangular housing box or directly to the exposed battery.

Once you’ve located the unit, make sure your vehicle is turned off. If you’re using a digital multimeter, set the dial to DC voltage (if your meter isn’t auto-ranging, choose the 20-volt range). Next, touch your multimeter’s black lead to the negative battery terminal and the red lead to the positive terminal.

At this point, your multimeter will give a voltage readout. Here are some guidelines, courtesy of yourmechanic.com:

12.66+ volts: 100% charged
12.45 volts: 75% charged
12.24 volts: 50% charged
12.06 volts: 25% charged
11.89 volts: 0% charged
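For readings that fall between those benchmarks, you can interpolate. Here’s a small Python sketch (purely illustrative; the thresholds are just the table above with straight-line interpolation between rows):

```python
# Illustrative only: estimate state of charge from resting voltage,
# interpolating linearly between the benchmark readings above.
BENCHMARKS = [  # (volts, percent charged), engine off
    (11.89, 0),
    (12.06, 25),
    (12.24, 50),
    (12.45, 75),
    (12.66, 100),
]

def estimate_charge(volts):
    """Return an approximate charge percentage for a resting 12V battery."""
    if volts <= BENCHMARKS[0][0]:
        return 0
    if volts >= BENCHMARKS[-1][0]:
        return 100
    for (v_lo, p_lo), (v_hi, p_hi) in zip(BENCHMARKS, BENCHMARKS[1:]):
        if v_lo <= volts <= v_hi:
            # Linear interpolation between the two nearest benchmarks.
            return round(p_lo + (p_hi - p_lo) * (volts - v_lo) / (v_hi - v_lo))

print(estimate_charge(12.66))  # 100
print(estimate_charge(12.24))  # 50
```

Real batteries are messier than a lookup table (temperature and surface charge both skew readings), so treat any estimate like this as a rough guide.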

If you’re seeing 12.45 volts or higher, your battery is in good shape and it’s time to check other common culprits. Below that 75% threshold, your battery might still bring the car to life, but not reliably, and it may need recharging or even replacing.

Final thoughts

If you struggle with any part of this process, take a trip to your local auto parts store and ask for help. Most shops are happy to help test, remove, recharge, and replace your car’s battery. You get free help, and they will (hopefully) earn your business in the future.

Editors’ Recommendations

  • Stranded? Learn how to jump-start a car with this quick guide
  • Never face a dead battery again with one of these portable car battery chargers
  • Need more juice? Here are the five phones with the best battery life
  • Plug that phone in while you navigate: Our 11 favorite iPhone car chargers
  • 2018 Zero Motorcycles: release date, prices, specs, and features




7
Nov

Here’s how to rotate your tires, and why it’s important


If you want your tires to last, you’re going to have to rotate them. Knowing how to rotate your tires is as essential to your car’s regular maintenance routine as changing the oil. It’s also a relatively simple process, although it does require some preparation and the right tools. Here’s how it’s done.

Why rotate?

Rotating tires is one of the most straightforward car-maintenance tasks, but it’s also one of the most essential. Tires wear at different rates, and switching their position periodically ensures that they wear more evenly. That means you won’t have to buy new tires prematurely. If tires all wear at the same rate, it also means they respond the same way, ensuring your car’s handling characteristics stay safe and consistent.

Tire rotations should be done once every 6,000 miles or so, and it’s a good idea to check the tread and pressure while you’re at it. You can time tire rotations to correspond with other maintenance like oil changes, and have them done while the car is in the shop. That’s probably the more convenient route, but it’s a snap to rotate your own tires.

Know your tires

The way tires are rotated varies depending on the vehicle, so consult your owner’s manual to find the exact procedure. You may want to mark the tires with chalk to help keep track of them.

Assuming all four wheels are the same size and the tires are non-directional, you’ll want to rotate the tires in a “rearward cross” pattern for rear-wheel drive or four-wheel drive/all-wheel drive cars. In this pattern, the front tires move diagonally, so that the left front tire is mounted in the right rear position, and the right front tire is moved to the left rear position. The rear tires are moved to the front, but stay on the same side.

The pattern for front-wheel drive cars is the opposite. You do a “forward cross” where the rear tires are moved to the front and change sides. Alternatively, you can shuffle the tires in an “X” pattern, where each tire is moved diagonally, regardless of which wheels are driven.
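Those patterns can be summarized in a short sketch. This is just an illustration of the swaps described above (the position codes, LF for left-front and so on, are our own shorthand):

```python
# Illustrative sketch of the three non-directional rotation patterns.
# Positions: LF, RF, LR, RR (left/right, front/rear).
# Each dict maps a tire's current position to its new position.
REARWARD_CROSS = {  # RWD / 4WD / AWD: fronts cross to the rear
    "LF": "RR", "RF": "LR", "LR": "LF", "RR": "RF",
}
FORWARD_CROSS = {  # FWD: rears cross to the front, fronts go straight back
    "LR": "RF", "RR": "LF", "LF": "LR", "RF": "RR",
}
X_PATTERN = {  # every tire moves diagonally
    "LF": "RR", "RR": "LF", "RF": "LR", "LR": "RF",
}

def rotate(tires, pattern):
    """Return the tires re-keyed by their new positions."""
    return {pattern[pos]: tire for pos, tire in tires.items()}

tires = {"LF": "A", "RF": "B", "LR": "C", "RR": "D"}
print(rotate(tires, REARWARD_CROSS))  # A ends up right-rear, C left-front
```

Remember this only applies to same-size, non-directional tires; the exceptions below have their own rules.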

Some cars have directional tires (you can usually tell by the V-shaped tread pattern), and these can only be rotated front-to-back, not side-to-side. If your car’s front wheels and rear wheels are different sizes, you can only rotate side-to-side. If your tires are directional and the wheels are different sizes, the tires have to be removed from the rims and remounted, which is not really something you can do at home unless you have a tire-mounting machine.

Most modern cars have four full-size tires and a compact spare tire, typically called a “donut.” That donut tire should only be driven on in emergencies, but you can use it as a placeholder while rotating your tires. If your car has a full-size spare (meaning it’s identical to the four tires you drive on), you might want to institute a five-tire rotation so that it wears at the same rate as your other tires.

Finally, dually pickup trucks — which have two sets of rear wheels — have their own rotation pattern. Again, this can vary if the front and rear wheels are different sizes, so check your owner’s manual for specific details.

Getting started

First, prepare to jack up your car. Find a flat piece of ground to work on, apply the parking brake, and place chocks in front of the front wheel and behind the rear wheel on the opposite side of the one you’re working on (the wheels on that side will be off the ground).

You’ll need something to hold up the car while you’re moving tires around. Jack stands are the most straightforward solution, but you can also mount the spare tire on any hub that’s currently missing a tire as a placeholder. If your car has a full-size spare, you may want to rotate it anyway.

Seeing stars

Before jacking up the car, you’ll want to loosen the lug nuts without removing them completely. With the wheels still on the ground, the tires’ grip provides resistance, allowing you to break the nuts loose more easily. If you try this while the wheel is in the air, it will be much more difficult, and because the drive wheels are free to spin, you could strain the transmission.

Most modern cars have a five-lug pattern. The correct way to loosen them is in a “star pattern,” meaning you should start at the top and loosen them in a pattern that draws a star. This prevents the rims from getting warped, and is especially crucial for aluminum or magnesium wheels, which are more fragile than steel rims. If your car has a four-lug pattern, just loosen the nut diagonally across from the one you just loosened.

Lug nuts can be difficult to loosen. One trick is to place your foot on the handle of the wrench and kick it once to crack the lug, then loosen it the rest of the way in a normal manner.

With the lugs loosened but still on their threads, place the jack in the correct position under the car. Every car has specific locations where a jack can safely be placed. Consult your owner’s manual to find the jacking points, and do not place the jack in just any old spot. A seemingly solid surface may not be able to support the weight of the car, or may allow the car to slide off the jack.

Remounting tires

Once the tire you want to remove is in the air, you can remove the lug nuts with your fingers. If you’re using the spare tire as a placeholder, get it mounted. Two lug nuts, tightened enough to keep the wheel seated on the hub, should be sufficient, since you won’t be driving on it. Move on to the next wheel and repeat the process.

When you’re putting a wheel back on for real, reinstall all lug nuts and tighten them enough to keep the rim seated on the hub. Then lower the car back onto the ground and tighten them some more. Strictly speaking, you should use a torque wrench to ensure the lugs are tightened to the torque specification listed in the owner’s manual, but doing it by feel with a conventional wrench should work fine. Tighten the lugs until the level of resistance sharply increases. Don’t over-tighten them, as that can damage the rims. Again, this is especially important for aluminum or magnesium rims.

Once the tires are in their new positions, you’re all set. Since you’re already working on your tires, this is a good opportunity to check their pressures and tread depths. Other than that, you should be good to go!

Editors’ Recommendations

  • Want to be your own mechanic? Here’s how to jack up a car (and do it safely)
  • Here’s how to unlock your phone automatically with Android Smart Lock
  • Here’s everything you need to know about the 2018 Chevy Silverado 1500
  • Get started in ‘South Park: The Fractured But Whole’ with our beginner’s guide
  • How to find a lost phone whether it’s Android, iPhone, or any other smartphone




7
Nov

With the Pika app, kids teach an A.I. program how to recognize colors


Why it matters to you

A.I. is normally used to teach a computer program — but what if teaching a computer program could help kids deepen their understanding of something like colors?

Artificial intelligence usually involves teaching computers, but what happens when a child is the one doing all the teaching? Pika is a startup app company using A.I., computer vision, and augmented reality to create a camera-focused app where the kids do the teaching.

Pika is a camera app that takes kids on a color scavenger hunt of sorts. When kids find a specific color, they photograph it; photograph that color three times, and the child has taught the program to recognize that hue. With each color, kids earn a badge. The Pika robot character leads kids on their hunt for specific colors, creating on-screen augmented reality effects when a color is spotted.

The app is based on the idea that when children teach something, they deepen their own learning and build confidence, Pika says. Another computer vision algorithm confirms that what Pika is learning is correct, the company says.
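Pika hasn’t published its algorithm, but the teach-by-example idea can be sketched in a few lines of Python. Everything here, from the averaging to the nearest-prototype check, is our own hypothetical illustration, not Pika’s actual code:

```python
# Hypothetical sketch of teach-by-example color learning: three sample
# photos of a color are averaged into a prototype, and new samples are
# matched to the nearest learned prototype.
def learn_color(samples):
    """Average RGB samples (e.g., three photos) into one prototype color."""
    return tuple(sum(channel) / len(samples) for channel in zip(*samples))

def classify(rgb, prototypes):
    """Return the name of the nearest learned color prototype."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: distance(rgb, prototypes[name]))

prototypes = {
    "blue": learn_color([(10, 20, 200), (30, 40, 220), (20, 30, 240)]),
    "yellow": learn_color([(240, 220, 10), (220, 200, 30), (250, 230, 20)]),
}
print(classify((25, 35, 210), prototypes))  # blue
```

A real app would work on whole images and be far more robust to lighting, but the child-as-teacher loop is the same: examples in, prototype out, then a second check against known answers.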

“Pika encourages your child to explore the world visually in new and interesting ways,” said Bim Malcolmson, a parent and educator who tested an early version. “Through having to teach Pika concepts like blue or yellow, children will start seeing how many shades of colors there are, encouraging them to be particularly observant and interested in their own environments.”

Since Pika is a kids’ iOS app, the company encourages parents to repurpose an old iPhone or iPod Touch, though the device needs iOS 11, which means an iPhone 6 or sixth-generation iPod Touch is required.

Pika was founded in London after Aisha Yusaf started looking for a camera to buy her daughter and found only pink and blue cameras with basic hardware and “uninspiring software.” That started a journey leading to the development of the Pika app as a way to mix photography with learning.

Pika is taking to Kickstarter to raise the funds to finish developing the app. If the project is successful, early backers can get the app for pledges starting at about $13. The campaign needs to reach about $23,000 by November 23 and has so far raised over $4,000. The company says more than 100 kids have already tested the app prototype. If the Kickstarter is successful, Pika plans on getting the app to early backers in April.

Editors’ Recommendations

  • STEMosaur is an educational talking toy dinosaur kids can build and program
  • How do you get kids into coding? Tynker and Parrot let them use it to fly drones
  • Teach the kids how to build a PC with the Kano Computer Kit Complete
  • 20 Android and iOS apps for kids to keep them entertained (and quiet)
  • Prepare your kids — SpongeBob SquarePants is now an Alexa skill




7
Nov

Pigs are flying! Intel and AMD collaborate on all-in-one chip for laptops


Why it matters to you

Intel and AMD have teamed up to create a streamlined solution for high performance and low power draw in ultra-slim form factors.

Look up in the sky, as you soon may start to see pigs flying overhead. Intel said on Monday, November 6 that it teamed up with processor rival AMD to create an all-in-one package for OEMs to use in thin-and-light PCs. This package contains eighth-generation Intel Core H processor cores, custom AMD discrete Radeon graphics cores, dedicated HBM2 graphics memory, and a special “highway” connecting the three components together called an embedded multi-die interconnect bridge, or EMIB.

There are several benefits to using this all-in-one chip. First, the resulting thin-and-light device can perform as well as, if not better than, bulky laptops because of the fast, dedicated connection between Intel’s CPU cores and AMD’s graphics cores. The use of HBM2 graphics memory also frees up board space for OEMs, because HBM2 stacks memory vertically instead of spreading it out horizontally like GDDR5. Intel says this new all-in-one solution provides a unique power-sharing feature, too, to maximize performance without gobbling up the battery like candy.

“We’ve added unique software drivers and interfaces to this semi-custom discrete GPU that coordinate information among all three elements of the platform,” the company said. “Not only does it help manage temperature, [and] power delivery and performance state in real time, it also enables system designers to adjust the ratio of power sharing between the processor and graphics based on workloads and usages, like performance gaming. Balancing power between our high-performing processor and the graphics subsystem is critical to achieve great performance across both processors as systems get thinner.”
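To make the power-sharing idea concrete, here is a toy Python sketch. This is not Intel’s actual driver logic, and the wattage and load figures are invented; it only illustrates the concept of dividing a fixed package budget by relative demand:

```python
# Toy illustration (not Intel's driver): split a fixed package power
# budget between CPU and GPU according to the current workload mix.
def share_power(budget_watts, gpu_load, cpu_load):
    """Weight each side's share of the budget by its relative demand."""
    total = gpu_load + cpu_load
    if total == 0:
        return budget_watts / 2, budget_watts / 2  # idle: split evenly
    gpu_w = budget_watts * gpu_load / total
    return budget_watts - gpu_w, gpu_w  # (cpu_watts, gpu_watts)

# A gaming workload leans heavily on the graphics cores:
cpu_w, gpu_w = share_power(40.0, gpu_load=3, cpu_load=1)
print(cpu_w, gpu_w)  # 10.0 30.0
```

The real mechanism operates on temperature and performance states in real time, as the quote above describes, but the principle is the same: one budget, rebalanced dynamically between the two processors.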

Intel typically doesn’t compete in the mainstream discrete graphics space — that fight is solely between Nvidia (GeForce) and AMD (Radeon). But in the mainstream processor space, Intel and AMD fight hard for your hard-earned dollars. Intel just launched its eighth-generation Core processor family, and AMD recently announced its new Ryzen processors for desktops and laptops. The collaboration is indeed a surprise.

To get high graphics performance in a laptop, you typically need a stand-alone, discrete graphics chip despite the integrated component in Intel’s CPUs. But throwing a discrete graphics chip into the mix typically means a bulkier form factor, a higher power requirement, additional cooling requirements, and a larger price tag.

Understanding this, Intel apparently approached AMD with the idea of providing high-performance graphics in super-thin laptops using a single optimized package. But the duo isn’t stopping at thin-and-light notebooks. OEMs can use this device to create standard notebooks, 2-in-1s, and miniature desktops as well.

At the very heart of Intel’s new all-in-one chip is the EMIB technology. First revealed in 2014, Intel said in April of this year during its Technology and Manufacturing conference that this platform enables Intel to throw in components from any manufacturer, not just AMD. It’s a “mix and match” heterogeneous design, and AMD appears to be the first participant in Intel’s EMIB roadmap. The resulting chip won’t include Intel’s own integrated graphics cores typically found in its mobile and desktop processors.

This new all-in-one chip from Intel and AMD will be made available to OEMs in the first quarter of 2018.

Editors’ Recommendations

  • AMD crams desktop performance into ultra-thin laptops with its new Ryzen APUs
  • AMD vs Intel: Does Threadripper mean it’s time to root for the underdog?
  • 8th Gen Intel Core news: Mobile quad-cores confirmed, desktop rumors stay strong
  • Everything you need to know about Vega: Prices remain high due to limited supply
  • Desktops are dead? Lenovo says no as it shoves new gaming PCs into the spotlight





7
Nov

This cocktail glass lets you customize your drink’s flavor using an app


Why it matters to you

A programmable cocktail glass lets you tune your drink to taste just the way you like it.

Ever sat down to have a drink with friends and found yourself wishing the whole experience was a bit more high tech? Researchers at the National University of Singapore are here to help. Under the leadership of Nimesha Ranasinghe, they developed a programmable cocktail glass called the “Vocktail,” which is capable of tricking your senses into thinking that you’re drinking … well, just about anything you can imagine, really.

The device contains several different elements which, combined, give you the sense of drinking an almost infinite number of flavored beverages. One such element is an LED light that alters the color of your drink. Another is the use of electrodes, placed around the rim, which stimulate the tongue so that it tastes the liquid as salty, sweet or sour. If that sounds a bit familiar, it’s because it’s the basis for a previous project of Ranasinghe’s that we covered: A glass designed to mimic the sour taste of lemonade. What makes the Vocktail project a major step forward, however, is the addition of a smell component, capable of fooling a person’s nose into tasting a far greater variety of subtle flavors. This smell component is the result of three smell chambers and air pumps, which spring into action as a person drinks.

Heck, there’s even an app which lets you customize the experience!

“Imagine next time you order a cocktail you can customize its flavor using a mobile app, or try out entirely new flavors — for example, you order a mojito, but you want to try it with a hint of chocolate or strawberry,” Ranasinghe told Digital Trends. “Even though we call it a Virtual Cocktail, it is not only a virtual cocktail, it is mainly about augmenting the cocktail drinking experiences. So far, from our demonstrations, people love it mainly due to the presence of smell sensations. Also, it seems that having the smell sensation combine the different sensory channels — smell, taste, color, and other factors — to create a seamless flavor experience. Adding smells help the consumers to explore flavors, and experimentally create new cocktails.”

According to Ranasinghe, the team wants to take the experience even further by adding the element of virtual reality dining, so you and your date can sip augmented e-drinks in a wide variety of different virtual settings. You wouldn’t even have to necessarily be in the same physical space for this to work. “This will also enable us to study how people enjoy food and beverages when experiencing different VR scenarios,” Ranasinghe continued.

The Vocktail project was presented at the Association for Computing Machinery Multimedia Conference in October.

Editors’ Recommendations

  • 10 ‘smart’ gadgets that are just plain dumb
  • Your future kitchen has a smart oven, a burger-flipping bot, and 36 bacon programs
  • DribbleUp soccer ball isn’t only smarter than an average ball, it’s cheaper too
  • Wine taste bad? The temperature may be to blame, but this gadget can help
  • Your hotel stay could soon involve a smart room thanks to Hilton





7
Nov

What is an artificial neural network? Here’s everything you need to know


Why it matters to you

Neural networks are ruling the field of artificial intelligence. Let us help you join the conversation.

If you’ve spent any time reading about artificial intelligence, you’ll almost certainly have heard about artificial neural networks. But what exactly is one? Rather than enrolling in a comprehensive computer science course or delving into some of the more in-depth resources that are available online, check out our handy layperson’s guide to get a quick and easy introduction to this amazing form of machine learning.

What is an artificial neural network?

Artificial neural networks are one of the main tools used in machine learning. As the “neural” part of their name suggests, they are brain-inspired systems which are intended to replicate the way that we humans learn. Neural networks consist of input and output layers, as well as (in most cases) a hidden layer consisting of units that transform the input into something that the output layer can use. They are excellent tools for finding patterns which are far too complex or numerous for a human programmer to extract and teach the machine to recognize.
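As a rough sketch of that structure, a network with one hidden layer can be written in a few lines of Python. The weights here are arbitrary, chosen only to show the shape of the computation:

```python
import math

# Minimal sketch of the structure described above: an input layer, one
# hidden layer, and an output layer, with fixed example weights.
def sigmoid(x):
    """Squash any value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass: inputs -> hidden units -> single output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden units, one output (weights chosen arbitrarily).
y = forward([1.0, 0.5],
            hidden_weights=[[0.4, -0.2], [0.3, 0.9]],
            output_weights=[0.7, -0.5])
print(0.0 < y < 1.0)  # True: the sigmoid keeps the output between 0 and 1
```

Training a network means adjusting those weight numbers until the output matches the labels you want, which is where backpropagation comes in.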

While neural networks (early versions of which were known as “perceptrons”) have been around since the 1940s, it is only in the last several decades that they have become a major part of artificial intelligence. This is due to the arrival of a technique called “backpropagation,” which allows networks to adjust their hidden layers of neurons in situations where the outcome doesn’t match what the creator is hoping for — like a network designed to recognize dogs, which misidentifies a cat, for example.

Another important advance has been the arrival of deep learning neural networks, in which different layers of a multilayer network extract different features until it can recognize what it is looking for.

Sounds pretty complex. Can you explain it like I’m five?

For a basic idea of how a deep learning neural network learns, imagine a factory line. After the raw materials (the data set) are input, they are passed down the conveyor belt, with each subsequent stop or layer extracting a different set of high-level features. If the network is intended to recognize an object, the first layer might analyze the brightness of its pixels.

The next layer could then identify any edges in the image, based on lines of similar pixels. After this, another layer may recognize textures and shapes, and so on. By the time the fourth or fifth layer is reached, the deep learning net will have created complex feature detectors. It can figure out that certain image elements (such as a pair of eyes, a nose, and a mouth) are commonly found together.

Once this is done, the researchers who have trained the network can give labels to the output, and then use backpropagation to correct any mistakes which have been made. After a while, the network can carry out its own classification tasks without needing humans to help every time.

Beyond this, there are different types of learning, such as supervised and unsupervised learning, as well as reinforcement learning, in which the network learns for itself by trying to maximize its score — as memorably demonstrated by Google DeepMind’s Atari game-playing bot.

How many types of neural network are there?

There are multiple types of neural network, each of which comes with its own specific use cases and levels of complexity. The most basic type is the feedforward neural network, in which information travels in only one direction, from input to output.

A more widely used type of network is the recurrent neural network, in which data can flow in multiple directions. These neural networks possess greater learning abilities and are widely employed for more complex tasks such as learning handwriting or language recognition.
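A minimal sketch shows the difference (with made-up weights, purely for illustration): a recurrent unit feeds its own previous output back in at each step, so the order of the inputs matters, which a plain feedforward unit cannot capture:

```python
import math

# Sketch of a single recurrent unit: unlike a feedforward unit, it
# carries a hidden state that loops back in at every time step.
def recurrent_unit(sequence, w_in=0.8, w_back=0.5):
    """Process a sequence one step at a time, carrying hidden state."""
    h = 0.0
    for x in sequence:
        h = math.tanh(w_in * x + w_back * h)  # previous output feeds back
    return h

# The same inputs in a different order give a different final state,
# which is what lets recurrent nets model sequences like handwriting.
print(recurrent_unit([1.0, 0.0]) != recurrent_unit([0.0, 1.0]))  # True
```

Real recurrent networks use whole vectors of such units and learned weights, but the feedback loop shown here is the defining feature.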

There are also convolutional neural networks, Boltzmann machine networks, Hopfield networks, and a variety of others. Picking the right network for your task depends on the data you have to train it with, and the specific application you have in mind. In some cases, it may be desirable to use multiple approaches, such as would be the case with a challenging task like voice recognition.

What kind of tasks can a neural network do?

A quick scan of our archives suggests the proper question here should be “what tasks can’t a neural network do?” From making cars drive autonomously, to generating shockingly realistic CGI faces, to machine translation, fraud detection, reading our minds, and recognizing when a cat is in the garden and turning on the sprinklers, neural nets are behind many of the biggest advances in A.I.

Broadly speaking, however, they are designed for spotting patterns in data. Specific tasks could include classification (classifying data sets into predefined classes), clustering (classifying data into different undefined categories), and prediction (using past events to guess future ones, like the stock market or movie box office).

How exactly do they “learn” stuff?

In the same way that we learn from experience in our lives, neural networks require data to learn. In most cases, the more data that can be thrown at a neural network, the more accurate it will become. Think of it like any task you do over and over. Over time, you gradually get more efficient and make fewer mistakes.

When researchers or computer scientists set out to train a neural network, they typically divide their data into three sets. First is a training set, which helps the network establish the various weights between its nodes. After this, they fine-tune it using a validation data set. Finally, they’ll use a test set to see if it can successfully turn the input into the desired output.

Do neural networks have any limitations?

On a technical level, one of the bigger challenges is the amount of time it takes to train networks, which can require a considerable amount of compute power for more complex tasks. The biggest issue, however, is that neural networks are “black boxes,” in which the user feeds in data and receives answers. They can fine-tune the answers, but they don’t have access to the exact decision making process.

This is a problem a number of researchers are actively working on, but it will only become more pressing as artificial neural networks play a bigger and bigger role in our lives.

Editors’ Recommendations

  • How would Mozart play ‘Hotline Bling?’ AI will soon help us find out
  • Huawei Kirin 970: Everything you need to know
  • Want to teach an AI to Zerg Rush? Blizzard and DeepMind have the tool for you
  • A beginner’s guide to A.I. superintelligence and ‘the singularity’
  • A robot named Heliograf got hundreds of stories published last year




7
Nov

What is an artificial neural network? Here’s everything you need to know


Why it matters to you

Neural networks are ruling the field of artificial intelligence. Let us help you join the conversation.

If you’ve spent any time reading about artificial intelligence, you’ll almost certainly have heard about artificial neural networks. But what exactly is one? Rather than enrolling in a comprehensive computer science course or delving into some of the more in-depth resources that are available online, check out our handy layperson’s guide to get a quick and easy introduction to this amazing form of machine learning.

What is an artificial neural network?

Artificial neural networks are one of the main tools used in machine learning. As the “neural” part of their name suggests, they are brain-inspired systems which are intended to replicate the way that we humans learn. Neural networks consist of input and output layers, as well as (in most cases) a hidden layer consisting of units that transform the input into something that the output layer can use. They are excellent tools for finding patterns which are far too complex or numerous for a human programmer to extract and teach the machine to recognize.
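
To make the layer idea concrete, here is a minimal sketch in Python of how a hidden layer transforms an input into something the output layer can use. The weights and inputs here are made-up numbers for illustration, not values from any real trained network:

```python
import math

def sigmoid(x):
    # Squash any number into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit takes a weighted sum of the inputs, adds a bias,
    # and passes the result through an activation function
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A toy network: 2 inputs -> 3 hidden units -> 1 output
inputs = [0.5, -1.2]
hidden = layer(inputs,
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]],
               biases=[0.0, 0.1, -0.1])
output = layer(hidden, weights=[[0.6, -0.5, 0.9]], biases=[0.2])
print(output)  # a single number between 0 and 1
```

In a real network the weights are not hand-picked like this; they are learned from data, which is what the training process described below is for.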

While neural networks (early versions of which were known as "perceptrons") have been around since the 1940s, it is only in the last several decades that they have become a major part of artificial intelligence. This is due to the arrival of a technique called "backpropagation," which allows networks to adjust their hidden layers of neurons in situations where the outcome doesn't match what the creator is hoping for, such as a network designed to recognize dogs that misidentifies a cat.

Another important advance has been the arrival of deep learning neural networks, in which different layers of a multilayer network extract different features until it can recognize what it is looking for.

Sounds pretty complex. Can you explain it like I’m five?

For a basic idea of how a deep learning neural network learns, imagine a factory line. After the raw materials (the data set) are input, they are passed down the conveyor belt, with each subsequent stop or layer extracting a different set of high-level features. If the network is intended to recognize an object, the first layer might analyze the brightness of its pixels.

The next layer could then identify any edges in the image, based on lines of similar pixels. After this, another layer may recognize textures and shapes, and so on. By the time the fourth or fifth layer is reached, the deep learning net will have created complex feature detectors. It can figure out that certain image elements (such as a pair of eyes, a nose, and a mouth) are commonly found together.

Once this is done, the researchers who have trained the network can give labels to the output, and then use backpropagation to correct any mistakes which have been made. After a while, the network can carry out its own classification tasks without needing humans to help every time.
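
A heavily simplified sketch of that correction step, using a single unit rather than a full multilayer network (the data and learning rate below are made up for illustration): the unit makes a guess, measures how wrong it was, and nudges its weights to shrink the error, which is the essence of what backpropagation does across many layers at once.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Toy labeled data: the correct output happens to equal the first input
data = [([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0),
        ([1.0, 1.0], 1.0), ([0.0, 0.0], 0.0)]
weights, bias, rate = [0.0, 0.0], 0.0, 1.0

for _ in range(1000):
    for inputs, target in data:
        prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = prediction - target  # how wrong was the guess?
        # Nudge each weight in the direction that shrinks the error
        for i, x in enumerate(inputs):
            weights[i] -= rate * error * x
        bias -= rate * error

# After training, input [1, 0] should be classified close to 1
print(round(sigmoid(weights[0] + bias)))
```

A real network repeats this same idea for every weight in every layer, using calculus to work out how much each one contributed to the final error.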

Beyond this, there are different types of learning, such as supervised or unsupervised learning or reinforcement learning, in which the network learns for itself by trying to maximize its score — as memorably carried out by Google DeepMind’s Atari game-playing bot.

How many types of neural network are there?

There are multiple types of neural network, each of which comes with its own specific use cases and levels of complexity. The most basic type of neural net is something called a feedforward neural network, in which information travels in only one direction from input to output.

A more widely used type of network is the recurrent neural network, in which data can flow in multiple directions. These neural networks possess greater learning abilities and are widely employed for more complex tasks such as handwriting and language recognition.

There are also convolutional neural networks, Boltzmann machine networks, Hopfield networks, and a variety of others. Picking the right network for your task depends on the data you have to train it with, and the specific application you have in mind. In some cases, it may be desirable to use multiple approaches, such as would be the case with a challenging task like voice recognition.

What kind of tasks can a neural network do?

A quick scan of our archives suggests the proper question here should be "what tasks can't a neural network do?" From making cars drive autonomously on the roads, to generating shockingly realistic CGI faces, to machine translation, to fraud detection, to reading our minds, to recognizing when a cat is in the garden and turning on the sprinklers, neural nets are behind many of the biggest advances in A.I.

Broadly speaking, however, they are designed for spotting patterns in data. Specific tasks could include classification (sorting data into predefined classes), clustering (grouping data into categories that are not defined in advance), and prediction (using past events to guess future ones, like the stock market or movie box office).
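
As a flavor of the classification task, here is a toy nearest-neighbor classifier. To be clear, this is not a neural network, just the simplest possible illustration of "sorting data into predefined classes" (the points and labels are invented for the example):

```python
def classify(point, labeled_examples):
    # Assign the point the label of its closest labeled example
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_examples, key=lambda ex: distance(point, ex[0]))[1]

# Two predefined classes: points near the origin vs. points far from it
examples = [((0.1, 0.2), "small"), ((0.2, 0.1), "small"),
            ((5.0, 5.1), "large"), ((4.9, 5.2), "large")]
print(classify((0.3, 0.3), examples))  # small
print(classify((5.5, 4.8), examples))  # large
```

A neural network doing classification learns a far more flexible boundary between the classes, but the input-to-label shape of the problem is the same.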

How exactly do they “learn” stuff?

In the same way that we learn from experience in our lives, neural networks require data to learn. In most cases, the more data that can be thrown at a neural network, the more accurate it will become. Think of it like any task you do over and over. Over time, you gradually get more efficient and make fewer mistakes.

When researchers or computer scientists set out to train a neural network, they typically divide their data into three sets. First is a training set, which helps the network establish the various weights between its nodes. After this, they fine-tune it using a validation data set. Finally, they’ll use a test set to see if it can successfully turn the input into the desired output.
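="404"
A sketch of that three-way split (the 60/20/20 proportions here are a common convention for illustration, not a fixed rule):

```python
import random

def split_data(examples, seed=42):
    # Shuffle first so each set is a representative sample
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    train = shuffled[: int(n * 0.6)]                    # fit the weights
    validation = shuffled[int(n * 0.6): int(n * 0.8)]   # fine-tune the model
    test = shuffled[int(n * 0.8):]                      # final check on unseen data
    return train, validation, test

train, validation, test = split_data(list(range(100)))
print(len(train), len(validation), len(test))  # 60 20 20
```

Keeping the test set untouched until the very end is the important part: it is the only honest measure of how the network will behave on data it has never seen.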

Do neural networks have any limitations?

On a technical level, one of the bigger challenges is the amount of time it takes to train networks, which can require a considerable amount of compute power for more complex tasks. The biggest issue, however, is that neural networks are "black boxes," in which the user feeds in data and receives answers. They can fine-tune the answers, but they don't have access to the exact decision-making process.

This is a problem a number of researchers are actively working on, but it will only become more pressing as artificial neural networks play a bigger and bigger role in our lives.

Editors’ Recommendations

  • How would Mozart play ‘Hotline Bling?’ AI will soon help us find out
  • Huawei Kirin 970: Everything you need to know
  • Want to teach an AI to Zerg Rush? Blizzard and DeepMind have the tool for you
  • A beginner’s guide to A.I. superintelligence and ‘the singularity’
  • A robot named Heliograf got hundreds of stories published last year




7
Nov

The best place to print photos online, from budget-friendly to gallery quality


Photographs deserve to exist in more than just digital pixels. But sometimes details get lost in translation from digital to print, resulting in prints with weird colors, fuzzy details, or unexpected borders, sometimes compounded by horrible customer service. So what's the best place to print photos online? We've rounded up the options from personal experience, pro photographer recommendations, and web reviews to put together our seven favorite photo printers, from online to in-person.

Best fast online photo printer: Snapfish

If you're looking for a photo product, from calendars to pillowcases, chances are Snapfish has it. Snapfish is a quick, simple consumer option for printing out snapshots at affordable prices. In many categories, Snapfish is more affordable than its main competition, Shutterfly; for example, a 4 x 6 starts at 9 cents instead of 14 cents. While Snapfish won't get you the same quality as the pricier professional printing options, its prints are solid compared to other similarly priced competitors. Plus, partnerships with Walmart, CVS, and Walgreens let you pick up your orders from those retailers.

Along with its web platform, the Snapfish app makes it easy to order from a smartphone, and offers 100 free prints per month for a year. It's only open to U.S. residents and you still have to pay shipping and taxes, but the ease of using the app and the price are tough to beat.

Runner-up: Shutterfly

Shutterfly may be a bit pricier than Snapfish, but it's not to be overlooked. Chances are you can find a coupon for Shutterfly, and it also often offers coupons for 100 free 4 x 6 prints. Like Snapfish, Shutterfly has a mobile app. The perk of the Shutterfly app isn't free prints, though; it's free, unlimited cloud storage for your photos. That's a big plus for casual photographers who need to back up their files, since paid cloud storage options get pretty pricey.

Best fast in-person photo printer: CVS

In-person photo kiosks are quick and convenient, but they are often inconsistent, since a number of different factors affect print quality. One drugstore or superstore may have solid print quality, while the same store by the same name in the next town over may have funky colors. This writer has had some success at one drugstore, only to get 5 x 7s printed on 8 x 10 sheets (and have to dig out the scissors to cut them out manually) at another.

While the quality can vary from location to location, CVS is perhaps the most consistent. Printing at the chain’s Kodak kiosks is quick for consumers who just can’t wait a few days for an online order. Ordering is also fairly simple, though photo printing kiosks aren’t without glitches. Photo quality won’t match up to professional printers, but CVS appears to have the fewest complaints for odd colors and fuzzy images. Expect to pay 29 cents for one print. Note: You can also send prints to CVS from your smartphone via the CVS or Snapfish apps.

Best professional photo lab: Mpix

Mpix offers high-quality photos printed in the U.S. and ready for delivery in 24 hours. The image quality from Mpix is much higher than what you’d get from a drugstore, superstore, or consumer online print shop, and the service checks every photo by hand. Mpix offers three different paper options and prints start at 19 cents.

Runner-up: Nations Photo Lab

Nations Photo Lab may have slightly longer processing times (up to two days for prints), but it has an excellent customer service team – whenever a print wasn’t what this writer expected, the company offered a reprint or even a refund. Nations offers pro-level quality, even though you don’t have to be a professional shooter to order from it, and its online ordering platform is easy to use. Compared to Mpix, its product range is a bit wider, including custom wood or metal USB drives for delivering digital files, but, again, its processing is a bit slower.

Best fine art photo printer: WhiteWall

Pro-level prints not enough? Germany's WhiteWall is a photo lab that produces only gallery-quality products, and it has launched a U.S.-based facility for even more efficient service. While it is more expensive than other options, WhiteWall offers a higher level of quality. Using a range of professional canvases and inks, including Fuji papers, WhiteWall goes beyond basic prints to offer acrylic and metallic prints, frames, and photo books.

Runner-up: Pro DPI

Pro DPI is known for its high-end prints and is often commended for its image sharpness and color accuracy. Pro DPI gives photographers a choice in photo paper, including both Fuji and Kodak options. Prices for a simple 4 x 6 range from 10 to 15 cents.

Want to print photos at home? Check out our guide here.

Gannon Burgett contributed to this article.

Editors’ Recommendations

  • This isn’t the end of printed photos, it’s the golden age
  • You shot the perfect photo. Here’s how to get the perfect print
  • These are our favorite 5 laser printer deals to save you time and money
  • How to use Lightroom: A beginner’s guide to Adobe’s photo editing software
  • Flickr will no longer allow you to print Wall Art or photo books