
6
Jun

Scientists need you to play classic Atari games, teach their AI new tricks


Why it matters to you

Watching skilled humans solve problems helps AI learn faster. And yes, sometimes that involves Ms. Pac-Man.

Learning valuable skills by playing video games sounds suspiciously like the kind of feeble excuse we used as teenagers to explain why we were playing GoldenEye 007 instead of doing our homework. But in the case of a new AI project carried out by computer scientists at RWTH Aachen University in Germany and Microsoft Research, it turns out to be absolutely true.

“What we’ve developed is a way to collect data of humans playing five Atari games, a large dataset of humans playing them, and the insight that — with current algorithms — less data of better players seems to be more useful for learning than more data of worse players,” Lucas Beyer, a researcher on the project, told Digital Trends. “This might sound obvious, but really it’s not: The common theme being ‘the more data the better.’”

There has been interesting work done before involving AI and classic Atari 2600 games. For example, a couple of years ago, an artificial agent created by the Google-owned DeepMind was able to learn to play games like Breakout without a human showing it how to. As Beyer notes, in the case of his and his colleagues’ work, humans are involved — since the bots are watching human players play through the games Q*Bert, Ms. Pac-Man, Space Invaders, Video Pinball, and Montezuma’s Revenge.

This playthrough data was gathered from Redditors, who turned out to be more than happy to revisit some vintage arcade games — all in the interest of improving AI. What is most impressive about the work is that the AI was able to learn new skills, such as problem-solving, by extracting patterns from the human-led playthroughs it analyzed.

“This dataset is an open testbed for developing reinforcement learning algorithms that can get a head start by looking at human demonstration, as opposed to learning everything from scratch,” Beyer continued.

The algorithm was even able to learn to sort good players from bad, without ever being told what a “good” or “bad” player might look like.
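In grossly simplified form, learning from human demonstrations boils down to copying what the demonstrators did — an approach known as behavioral cloning. The toy sketch below is purely illustrative (the states, actions, and scores are invented; real Atari agents learn from raw screen frames), but it also filters demonstrations by score, echoing the team's finding that less data from better players can beat more data from worse ones:

```python
from collections import Counter, defaultdict

# Toy demonstrations: each is (final_score, [(state, action), ...]).
# States are simplified to small ints; a real Atari setup would use
# screen frames and the full joystick action space.
demos = [
    (900, [(0, "RIGHT"), (1, "FIRE"), (2, "LEFT")]),   # skilled player
    (850, [(0, "RIGHT"), (1, "FIRE"), (2, "LEFT")]),   # skilled player
    (120, [(0, "LEFT"),  (1, "NOOP"), (2, "RIGHT")]),  # weak player
]

def clone_policy(demos, min_score=0):
    """Behavioral cloning reduced to a vote: for every state seen in
    demonstrations above a score cutoff, copy the most common action."""
    votes = defaultdict(Counter)
    for score, trajectory in demos:
        if score < min_score:
            continue
        for state, action in trajectory:
            votes[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

# Train only on the better players -- the observation that a smaller
# amount of higher-quality data can be more useful for learning:
policy = clone_policy(demos, min_score=500)
print(policy[0])  # "RIGHT"
```

The score cutoff is the interesting knob: raise it and the policy imitates only strong players; drop it to zero and the weak player's moves get mixed into the vote.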

Next up, the team wants to build on their system — by adding data from more expert Atari players. And did we mention the best part? They want you (yes, you!) to help them.

“It would be cool if you can ask people to play games on our website and add more data as a result,” Yobi Byte, another researcher on the project, told us.

Now how’s that for the homework you always dreamed of?




6
Jun

Apple unveils MacBook Pro upgrade, plus a price cut for entry-level option


Why it matters to you

The MacBook Pro lineup is now faster than before, and the entry-level model is just $1,300 — that’s a $200 price cut.


At WWDC on Monday, Apple announced the latest MacBook Pros will be getting yet another update, this time bringing Intel’s latest seventh-generation “Kaby Lake” processors to the flagship notebook lineup. While it wasn’t exactly the star of the show at WWDC, it comes as a welcome reminder that Apple remains committed to the MacBook Pro as a platform.

Starting today, the entire MacBook Pro line will be receiving a much-needed hardware upgrade, with Intel’s latest-generation Core processors replacing the sixth-gen Core chips that shipped with the new MacBook Pro lineup in late 2016. This will mean better performance, improved power efficiency, and higher turbo boost clock speeds.

The MacBook Pro 13 can now hit 3.5GHz with the top-end Intel Core i7, while the MacBook Pro 15 will hit 3.1GHz with its own seventh-generation Intel Core i7 chip.

The 15-inch model will also feature more powerful discrete graphics options, as well as more video memory.

Thankfully, Apple isn’t charging more for the new chips. The opposite, in fact: The entry-level MacBook Pro 13 (without the Touch Bar) will not only receive a faster processor, but also a cheaper price, starting at $1,300 instead of its debut price of $1,500. That’s definitely a step in the right direction for anyone eyeballing a low-price MacBook Pro.

We’ll have to wait and see just how well the new seventh-gen processors perform up and down the MacBook Pro lineup, but given the strong performance of the current sixth-generation chips, we can expect higher clock speeds and better all-around performance. Our tests of Windows 10 systems found that notebooks with seventh-gen Intel Core hardware were 10 to 15 percent quicker than those with sixth-gen chips.

Even the lowly MacBook is getting an upgrade: the new seventh-generation Intel Core i7 hits 1.3GHz, offers a 50 percent faster SSD, and supports twice the RAM.

Meanwhile, the MacBook Air struggles on. Apple announced the MacBook Air will be receiving a similar hardware upgrade, but didn’t get into the specifics. That likely means it will be similar to its predecessors in most respects, but receive a bump to Intel seventh-gen Core.

The latest MacBook Pros and MacBook Air won’t ship running Apple’s new MacOS High Sierra — a refinement of the last major MacOS update — but it will be available for download today via Apple’s developer program. For everyone else, MacOS High Sierra will arrive this fall.




6
Jun

Walmart’s new VR training program sounds like a bad ‘Black Mirror’ episode


Why it matters to you

Walmart’s VR tech will enable its employees to practice scenarios to provide better customer service.

Virtual reality can transplant users into any manner of amazing, exotic locations — so why would you want to use it to put yourself in the shoes of a retail employee on the most stressful shopping day of the year? If you’re Walmart, the answer is obvious: training.

Thanks to a handy assist from Stivr — a VR company that has previously used its tech to help NFL players train — Walmart has announced that its 200 “Walmart Academy” training centers will be using virtual reality training by the end of the year. And, yes, this includes a VR version of Black Friday.

“For the past several months, we have been testing VR at 31 Walmart Academies, which are the regionalized training facilities Walmart uses to train new and existing employees,” Danny Belch, vice president of strategy, sales & marketing at Stivr, told Digital Trends. “By the end of 2017, VR will be at all 200 of Walmart’s Academies. Over 140,000 Walmart employees will get to experience VR every year as a result of using it in the Academy system, which is one of the largest, if not the largest, VR rollouts in the history of virtual reality. Walmart has displayed incredible innovation in choosing to utilize VR in this way, and it has been really fun working with their team over the last several months.”

But why exactly is a Black Friday simulator necessary? Belch says that the Stivr team recommends VR when a training scenario might be prohibitively dangerous or expensive to carry out in the real world. But despite how much loot Black Friday brings in, he notes that the virtual reality setups the team has developed go way beyond preparing for just one shopping day.

“That is only one small part of what Walmart is doing with training in VR,” he said. “We have created a library of virtual content that addresses lots of different scenarios a Walmart employee may encounter on a daily basis in his or her job. These range from spotting errors in different parts of a store to engaging in different types of customer service modules. The general idea is to give employees more repetitions at the mental decisions they have to make on a daily basis, which in turn will lead to a better experience for all Walmart customers.”




6
Jun

Apple’s redesigned App Store makes it easier than ever to find new apps


Why it matters to you

Your app-browsing experience is about to get a whole lot cleaner thanks to the redesigned App Store.


For the first time since its launch, Apple has completely redesigned the App Store experience on iOS — meaning that not only will it be easier for you to find and download apps, but it should also make for a cleaner experience in general.

Perhaps the most interesting aspect of the new App Store is the addition of several new tabs. Notably, the App Store will now have a “today” tab, which will show the latest apps to hit the Store, and make it easier to discover new content.

Today isn’t the only tab that will be part of the redesigned store. The App Store will also now feature a “Games” tab, which will let you see the latest and greatest games to hit the App Store — and you can even drill down to a particular kind of game.

The pages for individual apps have also been redesigned, making for a much cleaner, nicer-looking App Store. From an app’s page, you’ll be able to quickly and easily see the information you need — along with user reviews — so you can judge whether an app lives up to the hype before downloading it.

The update is a welcome change to the App Store. According to Apple, a massive 180 billion apps have been installed since the launch of the App Store, and Apple has paid out a whopping $70 billion to developers.

The App Store update was announced at Apple’s Worldwide Developer Conference on Monday, but it wasn’t the only big announcement — the company also showed off new versions of iOS, WatchOS, and MacOS. In general, WWDC is one of Apple’s biggest shows of the year — and you can keep up with all of our WWDC coverage here.




6
Jun

It may soon be possible to accurately re-create facial images from memory


Why it matters to you

Research suggests it may one day be possible to accurately generate images of faces, based only on a person’s memory of them.

Researchers at the California Institute of Technology have demonstrated that it is possible to re-create images of human faces based on the monitoring of macaque monkey brain cells. The work shines a light on how exactly faces are processed by the brain.

Using functional magnetic resonance imaging (fMRI), the researchers showed that six areas of the brain are involved in the identification process. The team referred to the neurons in these areas as “face cells.”

In an experiment that involved inserting electrodes into the brains of monkeys to record their physical responses to looking at images, the researchers found that 205 neurons encode different characteristics of a face. When these signals are combined using a machine-learning model, the face the monkey had been looking at can be reconstructed with striking fidelity to the original image.

“The face cells we are studying are at the highest level of the visual system,” Steven Le Chang, a researcher on the project, told Digital Trends. “Normally people think the code for neurons at this level should be rather complicated. However, our result shows that once we find the appropriate coordinates for faces, the code of faces could be understandable. Using this code, we are able to reconstruct the face the monkey saw and predict responses of face cells to an arbitrary face.”
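The idea of a linear, invertible face code can be sketched with made-up numbers: if each recorded cell fires in proportion to a linear projection of a face’s feature vector, then simple least squares recovers the features from the population response alone. Everything below — the dimensions, tuning axes, and responses — is a hypothetical illustration, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup loosely mirroring the study: each "face cell"
# responds as a linear projection of a 50-dimensional face feature
# vector, and we record 205 such cells.
n_cells, n_features = 205, 50
W = rng.standard_normal((n_cells, n_features))  # each cell's tuning axis

true_face = rng.standard_normal(n_features)     # face the monkey "sees"
responses = W @ true_face                       # recorded firing rates

# Because the code is linear and we have more cells than features,
# least squares inverts it: recover the face from the responses alone.
decoded_face, *_ = np.linalg.lstsq(W, responses, rcond=None)

print(np.allclose(decoded_face, true_face))  # True
```

With noise added to the responses, the recovery becomes approximate rather than exact — which is roughly why recording many more cells than feature dimensions helps.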


Doris Tsao

But as interesting as the work is from a biological perspective, is there any possible real-world application for it? Quite possibly, yes, Le Chang explained.

“Potentially, if we could decode faces from neural activity in the human brain [as well as the monkey one], there will be a lot of real world applications,” he said. “In general, that will help human subjects convey concepts which are otherwise difficult. For example, a witness of a crime scene may have a hard time describing the face of the criminal. If we could directly decode the face based on the witness’ memory, we can extract the criminal’s face in a much more objective and quick way. Of course, whether memory activates the same population of cells as seeing the face is still an open question.”

Next up, the researchers wish to extend their study from neutral faces to expressive faces, as well as other types of objects. As noted, they also want to investigate how imagination or memory of faces affects the representation in face patches.







6
Jun

Apple shows off new photography features coming to iOS 11


Why it matters to you

Apple iOS users now shoot over one trillion photos per year, and with iOS 11’s new photo features, that number will only go up.


For iPhone users, photography has long been a central part of the mobile experience. In fact, iOS users now take over one trillion photos per year, according to Apple. So it comes as no surprise that at the 2017 Worldwide Developer Conference (WWDC), Apple announced several new photography-related features coming to iOS 11.

One of the more unique features that’s been in iOS for a couple of generations now is Live Photos. A Live Photo is essentially a short video, capturing more of the moment than a single frame. With iOS 11, Live Photos will become significantly more flexible, potentially changing how iPhone owners use them. Users will be able to select a new keyframe from anywhere within the Live Photo, which could be incredibly helpful for shooting things like sports, pets, or kids. Additionally, iOS 11 users can trim the length of the Live Photo to change the total duration, or save it as an autoplaying video loop or a Boomerang-style “bounce.” It will also be possible to save long exposure photos similar to shooting with a slow shutter speed on a DSLR camera.

Apple also promises more creative control over the look of iPhone photos thanks to new professional-quality filters built into the camera app. Users can select a filter for a variety of effects, from making skin tones more natural to applying classic looks to portraits.


When it comes to viewing your photos and videos, iOS 11 also includes a revamped Memories feature. Currently, the auto-generated slideshows that appear in Memories are formatted to be viewed in landscape orientation only, with any portrait-orientation content displaying in a cropped or downscaled format. With iOS 11, playing back Memories slideshows will automatically adjust to fill the screen regardless of how you hold your device, which should make for a more natural and better-looking experience.

Additionally, developers will soon be able to take advantage of the dual-camera Portrait mode on the iPhone 7 Plus with a new Depth API. Third-party apps will be able to implement the same depth-sensing capabilities that Apple uses in the default iOS camera app to simulate a shallow depth of field.

iOS 11 also introduces some under-the-hood technology updates that will help optimize a device’s storage and data usage. Apple is moving from standard JPEG compression for its still images to a new file format it’s calling HEIF, for High Efficiency Image Format. Apple claims HEIF offers twice the compression effectiveness of JPEG while remaining fully shareable, which saves space on a user’s device and iCloud storage, shortens the time it takes to share an image, and uses less data when doing so.
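As a back-of-the-envelope illustration of what “twice the compression effectiveness” could mean for storage — the per-photo file sizes here are assumptions for the sake of arithmetic, not Apple figures:

```python
# If HEIF matches JPEG quality at roughly half the file size, a photo
# library shrinks accordingly. Sizes are illustrative, not measured.
jpeg_photo_mb = 3.0
heif_photo_mb = jpeg_photo_mb / 2   # Apple's claimed ~2x efficiency
photos = 10_000

jpeg_total_gb = jpeg_photo_mb * photos / 1024
heif_total_gb = heif_photo_mb * photos / 1024

print(f"JPEG: {jpeg_total_gb:.1f} GB, HEIF: {heif_total_gb:.1f} GB")
# → JPEG: 29.3 GB, HEIF: 14.6 GB
```

The same halving applies to upload time and cellular data when sharing, which is where the format matters most on a phone.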

Video files will receive a similar treatment, thanks to the new HEVC codec. This is especially important given the high-resolution 4K videos that iPhones now shoot, and could potentially save a lot of storage space for users who shoot video frequently.

A developer preview of iOS 11 is available today, with the public release planned for sometime in fall 2017.

Updated June 5, 2017 to include additional details from Apple’s iOS 11 preview page.




6
Jun

New iMacs at last! And the $5,000 iMac Pro is the most powerful ever


Why it matters to you

If you’ve been waiting for a real workstation-class MacOS desktop, then Apple’s upcoming iMac Pro should be worth the wait.


Apple held its 2017 Worldwide Developers Conference (WWDC) today, and as usual revealed a slew of new features coming for its various platforms. While WWDC tends to focus on software, Apple’s Mac hardware wasn’t left out of this year’s event.

The iMac is one of Apple’s more important MacOS hardware products, representing its main presence on PC users’ desktops. While the Mac Pro remains in limbo, the iMac got some serious love at today’s event with both refreshed models available today and a sneak peek at an upcoming iMac Pro.

iMac Pro

The iMac Pro is perhaps the most interesting machine introduced today. It represents Apple’s effort to give power users a much more capable machine that can compete with the most advanced workstations on the market. Apple’s top-end desktop, the Mac Pro, might be awaiting its own redesign, but the iMac Pro offers a great alternative for anyone looking for a high-end MacOS desktop today.

Apple will be stuffing the iMac Pro with everything on the typical power user’s checklist, including:

  • Up to 18-core Intel Xeon processor
  • Up to 128GB ECC RAM
  • Up to 4TB SSD storage
  • AMD Radeon Vega GPUs providing up to 11 Teraflops of single precision and up to 22 Teraflops of half precision
  • Four Thunderbolt 3 ports that can drive up to 44 million pixels of display and two external drive enclosures
  • 10Gb Ethernet, Apple’s first machine with that level of network performance

In addition, the iMac Pro will use a new thermal design built around a dual centrifugal fan system to keep all that power in check. The overall design of the iMac Pro will be similar to today’s iMac, but with a new Space Grey color scheme. Apple didn’t spend too much time covering all of the new features and capabilities coming to the iMac Pro, but it did flash a quick slide that provided some additional details.


Apple is being aggressive in terms of pricing the iMac Pro, as well. While the typical PC workstation, according to Apple, runs $7,000 or more, the entry-level iMac Pro configuration will be priced at $4,999. The basic iMac Pro will include the following components:

  • Retina 5K display
  • 8-core Intel Xeon processor
  • Radeon Vega graphics
  • 32GB ECC RAM
  • 1TB SSD
  • Thunderbolt 3
  • 10Gb Ethernet

Apple will be shipping the iMac Pro in December 2017.

iMac

The standard iMac line also received a significant update, with Apple focusing on refining the existing physical design with updated components. Everything that Apple announced today is available at the Apple Store for immediate purchase, meaning that anyone in the market for a new MacOS desktop has some nice new options to consider.

Some of the improvements to the iMac include:

  • Significantly improved display brightness for both the 4K and 5K displays, up 43 percent to 500 nits
  • Improved color support, with 10-bit dithering, up to one billion colors, and improved color gamut support
  • Intel seventh-generation Core processors
  • Up to 32GB RAM on the 21.5-inch model, and up to 64GB on the 27-inch model
  • Apple’s Fusion Drive is now standard on the 27-inch and on the high-end 21.5-inch models
  • SSD storage options have been increased to 2TB
  • Two Thunderbolt 3 ports are now available, supporting both an external display and drive enclosure for the first time
  • The entry-level 21.5-inch model now comes standard with Intel Iris Plus Graphics 640 GPU with 64MB graphics cache
  • The 4K 21.5-inch iMac has new AMD Radeon Pro 555 and 560 options with up to 4GB VRAM
  • The 5K 27-inch iMac has added AMD Radeon Pro 570, 575, and 580 options with up to 8GB VRAM
  • The iMac now supports virtual reality (VR) systems

iMac pricing starts at $1,099 for the entry-level 21.5-inch iMac, $1,239 for the 4K 21.5-inch iMac, and $1,799 for the 5K 27-inch iMac. Apple announced that the iMacs will be available starting today, though at the time of writing the Apple Store was still down.




6
Jun

Apple plans to make iOS the largest augmented reality platform around, overnight



Pokemon Go was undeniably one of 2016’s biggest gaming success stories, offering a wholly new mobile experience that hinged on location-based gameplay and augmented reality technology. Today at WWDC 2017, Apple unveiled ARKit, a new platform that will allow developers to integrate computer vision into their projects with greater ease than ever before.

Apple’s senior vice president of software engineering, Craig Federighi, debuted ARKit with a live demonstration on stage. He used an iPhone to run a test application that could place virtual objects on a surface in front of him at the touch of a button.

The coffee cup that Federighi set on the table remained in place as he moved around with his iPhone in hand, without the kind of stuttering or unwanted rotation that can break immersion in this kind of experience. He went on to add a lamp to the scene, which cast a realistic shadow from the coffee cup to demonstrate the way virtual objects can interact.

ARKit will provide fast, stable motion tracking that developers can build into their apps. It’s capable of seeking out planes like tables and floors, it can estimate how much ambient light is in the vicinity, and it’s capable of balancing the scale of various objects to ensure the scene is congruent.

The utility will apparently work alongside third-party frameworks like Unity and Unreal as well as Apple’s own SceneKit to give developers more freedom in their creative process.

Apple is bullish that ARKit will become the largest AR platform in the world overnight, thanks to the hundreds of millions of iPads and iPhones already out in the wild that are capable of running content created with the tools. The company has been working with some big-name partners; IKEA and Lego have both experimented with the platform.

Meanwhile, Niantic has used ARKit to make Pokemon Go even more immersive. The way that Pokemon are rendered in real-world environments has been improved, integrating them into their environment with a greater sense of place and space.

Wingnut AR, a sister studio of Peter Jackson’s movie production company, then demonstrated its usage of the technology on stage. A standard tabletop was transformed into a sci-fi battle, with explosions and artillery covering the surface and spilling out into the ‘skies’ above.

ARKit is set to launch as part of iOS 11 later this year.



