
Archive for

4
May

‘Record Player’ app searches Spotify when you snap a pic of an album


Say you’re at a record store and you come across an album that you’d like to listen to before you buy it. Now there’s an app that can help you out with that. It’s sort of like Shazam, but for album covers, Pitchfork reports, and once you snap a picture of the album, it will find it for you on Spotify. The image you use — whether uploaded from your camera roll or taken in the moment — is sent to the Google Vision API, which will provide a guess as to what the image is. The app then uses that guess to search Spotify and surface the first result for you. The app was built on Glitch, Fog Creek Software’s collaborative app-building platform that launched last month.

Newest experiment made with @glitch, @Spotify and @googlecloud, a record player with computer vision: https://t.co/fhQsEruCoF pic.twitter.com/z54s9GIbYc

— Patrick Weaver (@patrickweave_r) May 2, 2018

You might prefer to just type your album search into Spotify itself. But since Record Player, as the app’s called, lets you upload any photo, you can have a little fun with it — like seeing what album matches your selfie. This morning, I uploaded an overly excited selfie, which led me to Third Eye Blind’s Losing A Whole Year, while a picture of my cat brought me to Boogie T and Whiskers’ “Bad Motha.” You can try it out for yourself here.
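If you’re curious how a pipeline like that could be wired together, here’s a minimal sketch in Python using the public Google Cloud Vision and Spotify Web APIs. It’s our own illustration, not the app’s actual code (which was built on Glitch), and the API key, token, and helper names below are placeholders; the real app may use different Vision features.

```python
# Minimal sketch of the Record Player flow described above: send a photo to the
# Google Cloud Vision API, take its best guess at what the image shows, then use
# that guess as a Spotify album search. Keys/tokens are placeholders.
import base64
import requests

VISION_KEY = "YOUR_GOOGLE_API_KEY"          # placeholder
SPOTIFY_TOKEN = "YOUR_SPOTIFY_OAUTH_TOKEN"  # placeholder

def guess_image_label(image_path):
    """Ask the Vision API for label annotations and return the top description."""
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("utf-8")
    body = {
        "requests": [{
            "image": {"content": content},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 1}],
        }]
    }
    resp = requests.post(
        f"https://vision.googleapis.com/v1/images:annotate?key={VISION_KEY}",
        json=body,
    ).json()
    return resp["responses"][0]["labelAnnotations"][0]["description"]

def first_spotify_album(query):
    """Search Spotify for albums and return the first hit's name and URL."""
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": query, "type": "album", "limit": 1},
        headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"},
    ).json()
    album = resp["albums"]["items"][0]
    return album["name"], album["external_urls"]["spotify"]

if __name__ == "__main__":
    guess = guess_image_label("album_cover.jpg")
    print(first_spotify_album(guess))
```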

Via: Pitchfork

Source: Record Player

4
May

YouTube gets 1.8 billion logged-in viewers monthly


On stage today at Radio City Music Hall, YouTube CEO Susan Wojcicki made a surprising revelation: the service gets 1.8 billion logged-in viewers every month. And that doesn’t include people who aren’t logged in — which means the actual number of people watching YouTube is definitely much higher. Last June, the service had 1.5 billion logged-in watchers. On TVs alone, people are now watching 150 million hours of YouTube every day. The latest figures are yet another sign that YouTube’s reach is staggering, something that Wojcicki wanted to make crystal clear for the audience of advertisers and potential partners at its annual BrandCast event.

“It’s incredibly important to me and everyone at YouTube that we grow responsibly,” the CEO said, addressing concerns about the company’s ability to properly monitor offensive content. “There is not a playbook for how open platforms operate at our scale… The way I think about it, it’s critical we’re on the right side of history.”

Wojcicki said YouTube has built “infrastructure and tools” to properly manage its platform over the past year. Additionally, it has made guidelines and policies more restrictive, committed more people to monitoring content, and is relying on the “latest machine learning” technology to help. While that’s all well and good, she didn’t reveal any new strategies for better managing YouTube’s platform.

Naturally, YouTube also took the opportunity to reveal its next batch of originals. Will Smith’s Grand Canyon bungee jump was the biggest announcement. In addition to its high school basketball series with LeBron James, Best Shot, there’s also If I Could Tell You Just One Thing, which features Priyanka Chopra traveling the world and talking with inspirational people. Jack Whitehall will also work out with professional soccer players in Training Days. As for existing shows, YouTube is renewing three: Kevin Hart’s What the Fit, The Super Slow Show and the Untitled Demi Lovato Project.

Source: YouTube

4
May

Will Smith’s Grand Canyon bungee jump airs on YouTube September 25th


If you haven’t noticed, Will Smith has found a new surge of creative energy with his YouTube channel. In March, he agreed to bungee jump over the Grand Canyon from a helicopter on YouTube after being challenged by the folks at Yes Theory. Now, we know that stunt is going to take place on his 50th birthday, September 25th. YouTube made the announcement at its BrandCast event in New York City tonight, where it’s trying to woo advertisers and partners. Smith, naturally, will be donating proceeds from the jump to charity.

4
May

Microsoft preps next Windows update, as latest build breaks Chrome browser


Microsoft has reportedly begun work in earnest on the next planned update for Windows 10, internally code-named Redstone 5. However, early adopters of the latest April update for Windows 10 will hope that it devotes some time to bug fixes, too, as a problem with the Chrome browser emerged recently, leaving affected users facing crashes in their favorite browser after installing the OS update.

The April update for Windows 10, previously known as Redstone 4, brought with it a number of neat features. Alongside Timeline and Focus Assist, though, it also appears to have introduced a few new bugs. One of those has been particularly problematic for a number of Google Chrome users, who have found their browser unresponsive or prone to crashing after updating, TechRadar reports.

The bug doesn’t seem to be one that affects all Chrome users after the update, but instances are continuing to crop up as more people download the update manually, or are put through the installation process by Windows 10’s automatic update procedure. The bug apparently causes Chrome to crash or in some instances freeze, locking the entire system until it is rebooted.

Microsoft appears to be aware of the issue, with the official support Twitter account promising to have the development team look into it. In the case of other bugs, like a compatibility issue with discrete GPUs in select Alienware laptops, Microsoft managed to get ahead of the problem by blocking the update while it fixes the flaw, which purportedly causes a black-screen issue, according to TechRadar. A fix is said to be around three weeks away.

While it works on that and the Chrome issue, Microsoft is also looking to the future. Head of the Windows Insider program, Dona Sarkar, said on Twitter that we can expect Redstone 5 builds to begin releasing to those on the fast and slow rings in the near future. Previously, only those who had opted to be part of Microsoft’s “Skip Ahead” branch of the Insider Program had had access to such builds. With the release of the April update, that is set to change.

Redstone 5 is expected to be released before the end of 2018.

In the meantime, Windows and Chrome fans have come up with some homebrew workarounds for the Chrome crash problem. One Redditor reported that installing the 17134.5 update via the cumulative KB4135051 update, followed by running a specific command in the Windows 10 Command Prompt, fixed it.

While we can’t vouch for the veracity of such a claim, if you’re faced with crashes it may be your best bet to give it a try until Microsoft or Google releases an update that fixes the problem properly. Alternatively, use a browser like Firefox in the meantime.

Editors’ Recommendations

  • Hackers can bypass the Windows 10 S lockdown due to security flaw
  • Microsoft is working to cut the file size of the basic Windows 10 install
  • Apple releases an iOS update to fix infamous Telegu text bug
  • A recent leak says Windows 10’s next update has been pushed back to May 8
  • Microsoft Windows Defender extension offers Chrome users extra protection


4
May

QooCam twists to swap between 4K 360 and 3D 180 with Lytro-like refocusing


Swapping lenses gives cameras versatility — but the QooCam aims for versatility not through entirely new lenses, but with a quick twist of the camera body. Announced on May 2, the QooCam can shoot 360 footage in 4K at 60 fps, but flip the top end and the compact camera is ready to shoot 180-degree footage with a depth map for 3D, or even for refocusing the shot later.

Designed by Kandao, the same China-based company whose 360 camera was applauded by Facebook last year as part of the Surround 360 project, the QooCam combines both 360 and 3D 180 footage in the same camera body. In the first orientation, the camera’s three lenses are vertical for capturing 360 in 4K detail, but without 3D.


Flip the lens section of the camera to the side and the two lenses on the front can work together to gather depth data. In this horizontal position, the camera shoots 180, a format supported by YouTube. The offset lenses can shoot in 3D and still at 4K (though at 30 fps and not 60 fps) for headset viewing. Based on the software introduced in the company’s high-end Obsidian camera, the depth mapping also allows the shots to be refocused later using a related app. The effect is similar to the refocusing ability of the Lytro but uses 3D mapping rather than light field technology.
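To give a feel for how that depth-based refocusing could work in principle, here’s a toy illustration in Python. This is not Kandao’s actual algorithm; the idea is simply that pixels whose depth is far from the chosen focal plane get blurred, while those near it stay sharp.

```python
# Toy illustration of depth-map refocusing (not Kandao's algorithm): pixels whose
# depth is far from the chosen focal depth are blended with a blurred copy of the
# image, mimicking how a shot with a depth map can be re-focused after capture.
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focus_depth, max_blur=5.0):
    """Blend a blurred copy of `image` with the sharp original based on |depth - focus_depth|."""
    blur_amount = np.clip(np.abs(depth - focus_depth) * max_blur, 0, max_blur)
    blurred = gaussian_filter(image, sigma=max_blur)
    weight = blur_amount / max_blur            # 0 = keep sharp, 1 = fully blurred
    return (1 - weight) * image + weight * blurred

# 64x64 synthetic grayscale image and a left-to-right depth ramp (0 = near, 1 = far)
rng = np.random.default_rng(0)
image = rng.random((64, 64))
depth = np.tile(np.linspace(0, 1, 64), (64, 1))
near_focused = refocus(image, depth, focus_depth=0.0)   # far side of the frame goes soft
print(near_focused.shape)
```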

The QooCam is designed using three 216-degree, f/2.2 lenses. Sony sensors are built in, allowing for a still-photo resolution of 4,320 pixels in width. Wi-Fi and Bluetooth are also built in, with images stored on a MicroSD card.

QooCam uses depth mapping for 3D and refocusing.

Kandao built real-time stitching software into the camera, allowing both the 3D and 360 content to be stitched in-camera. Using a mix of a built-in sensor and software, the camera also incorporates image stabilization.

The camera’s app, which will be available for both iOS and Android, allows users to connect to the camera and share fully stitched photos and videos. The company is also building editing options into the app, including the option to track a subject around the 360 view and to build panning effects into the 360 footage. Livestreaming, time-lapses, and Little Planet mode are also included.

With a thin body more closely resembling the Ricoh Theta than rectangular bodies like the GoPro Fusion and Garmin Virb 360, the camera weighs only six ounces. The body is made with aluminum alloy to mix durability with that lightweight profile. The built-in battery is rated for up to three hours of shooting. Kandao says the camera is compatible with action camera mounts to allow the QooCam to be added to a drone, helmet, tripod, and a handful of other accessories.

Kandao is taking to Kickstarter to launch the QooCam and the campaign reached full funding after only three hours. If the remainder of the testing and production is successful, the QooCam will ship in August. Early backers could get the QooCam for pledges starting at $299, a $100 discount from the expected retail price.

Editors’ Recommendations

  • Photo FOMO: Get a (lens) grip, Sony a7 III firmware fix, Rylo gains 180 mode
  • Facebook steps into 3D memories and photos without a specialized camera
  • Ricoh Theta V review
  • MADV Madventure 360 review
  • Photo FOMO: Nikon’s 3D printer, Canon’s mirrorless push, Pixel 2’s secrets


4
May

AMD says the patches for its recent Ryzen flaws are almost ready



Whether or not you believe CTS Labs’ intentions were honest when it revealed a number of bugs in AMD’s Ryzen and Epyc chips earlier this year, the bugs are certainly real. Real enough, in fact, that AMD has been working to get them fixed, and the fixes are nearly here. Reportedly, the required firmware updates and patches are with distribution partners and will be released to the general public in the near future.

2017 saw the revelation of two major CPU bugs: Meltdown and Spectre. While both Intel and AMD were affected (by Spectre in AMD’s case), a whole lot more flaws were soon disclosed in AMD’s Ryzen and Epyc CPUs. There has been some speculation as to whether the revelation of those bugs was designed to manipulate AMD’s stock, but that isn’t particularly pertinent to the affected AMD users around the world.

Fortunately, AMD responded swiftly. Following some investigation into the matter by Tom’s Hardware, AMD claims bug fixes are just around the corner. While CTS Labs has asserted that some of the “Ryzenfall” bugs, like “Chimera,” would take months or even hardware revisions to fix, AMD is far more confident.

“Within approximately 30 days of being notified by CTS Labs, AMD released patches to our ecosystem partners mitigating all of the CTS identified vulnerabilities on our EPYC platform as well as patches mitigating Chimera across all AMD platforms,” AMD said in a statement released to Tom’s Hardware.

Those patches are said to be undergoing the final testing phase at the various AMD “ecosystem partners” and will soon be released to the public. AMD went on to suggest that all other Ryzenfall bugs had fixes that were in the works and would be released shortly to board partners for validation.

Although this doesn’t give us a specific deadline for when third-party manufacturers will release their versions of the updates, nor tell us which ones have received the firmware tweaks from AMD, it is encouraging. Combined with news from AMD’s chief technical officer in March that bug fixes for AMD hardware would not impact performance, the Ryzenfall bugs may not end up being as dramatic as initially thought. The fallout from Spectre and Meltdown, by contrast, is likely to be felt far further into the future.

Editors’ Recommendations

  • AMD is working on fixes for the reported Ryzenfall, MasterKey vulnerabilities
  • AMD has a fix for Spectre variant II, but will motherboard makers support it?
  • Nowhere is safe now that AMD has suffered its own Meltdown
  • Intel starts rolling out new Spectre firmware fixes, Skylake goes first
  • Intel’s 9th-generation ‘Ice Lake’ CPUs will have fixes for Meltdown, Spectre


4
May

Machine learning? Neural networks? Here’s your guide to the many flavors of A.I.


A.I. is everywhere at the moment, and it’s responsible for everything from the virtual assistants on our smartphones to the self-driving cars soon to be filling our roads to the cutting-edge image recognition systems reported on by yours truly.

Unless you’ve been living under a rock for the past decade, there’s a good chance you’ve heard of it before — and probably even used it. Right now, artificial intelligence is to Silicon Valley what One Direction is to 13-year-old girls: an omnipresent source of obsession to throw all your cash at, while daydreaming about getting married whenever Harry Styles is finally ready to settle down. (Okay, so we’re still working on the analogy!)

But what exactly is A.I.? And can terms like “machine learning,” “artificial neural networks,” “artificial intelligence” and “Zayn Malik” (we’re still working on that analogy…) be used interchangeably?

To help you make sense of some of the buzzwords and jargon you’ll hear when people talk about A.I., we put together this simple guide to the different flavors of artificial intelligence — if only so that you don’t make any faux pas when the machines finally take over.

Artificial intelligence

We won’t delve too deeply into the history of A.I. here, but the important thing to note is that artificial intelligence is the tree on which all the following terms are branches. For example, reinforcement learning is a type of machine learning, which is a subfield of artificial intelligence. However, artificial intelligence isn’t (necessarily) reinforcement learning. Got it?

So far, no-one has built a general intelligence.

There’s no official consensus on what A.I. means (some people suggest it’s simply cool things computers can’t do yet), but most would agree that it’s about making computers perform actions which would be considered intelligent were they to be carried out by a person.

The term was first coined in 1956, at a summer workshop at Dartmouth College in New Hampshire. The big distinction in A.I. today is between domain-specific Narrow A.I. and Artificial General Intelligence. So far, no-one has built a general intelligence. Once they do, all bets are off…

Symbolic A.I.

You don’t hear so much about Symbolic A.I. today. Also referred to as Good Old Fashioned A.I., Symbolic A.I. is built around logical steps which can be given to a computer in a top-down manner. It entails providing lots and lots of rules to a computer (or a robot) on how it should deal with a specific scenario.
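To make the “lots and lots of rules” idea concrete, here’s a toy rule system in Python. It’s purely our own illustration — no real Symbolic A.I. project is this small — but it shows the basic shape: every scrap of “intelligence” is a rule someone typed in by hand.

```python
# A toy Symbolic A.I.: every bit of "intelligence" is a hand-written if-then rule.
# The program knows nothing beyond the rules its programmer supplied.
RULES = [
    (lambda facts: "meows" in facts, "it is a cat"),
    (lambda facts: "barks" in facts, "it is a dog"),
    (lambda facts: "has feathers" in facts and "flies" in facts, "it is a bird"),
]

def classify(facts):
    """Apply each if-then rule in order and return the first conclusion that fires."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule covers this case"  # the classic failure mode outside the lab

print(classify({"barks", "has a tail"}))   # -> it is a dog
print(classify({"purrs"}))                 # -> no rule covers this case
```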


This led to a lot of early breakthroughs, but it turned out that these systems worked very well in labs, where every variable could be perfectly controlled, and often far less well in the messiness of everyday life. As one writer quipped about Symbolic A.I., early A.I. systems were a little bit like the god of the Old Testament — with plenty of rules, but no mercy.

Today, researchers like Selmer Bringsjord are fighting to bring back a focus on logic-based Symbolic A.I., built around the superiority of logical systems which can be understood by their creators.

Machine Learning

If you hear about a big A.I. breakthrough these days, chances are that unless a big noise is made to suggest otherwise, you’re hearing about machine learning. As its name implies, machine learning is about making machines that, well, learn.

Like A.I. as a whole, machine learning has multiple subcategories, but what they all have in common is the statistics-driven ability to take data and apply algorithms to it in order to gain knowledge.
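As a bite-sized illustration of that data-plus-algorithm idea (ours, not drawn from any particular product), here are a few lines of Python that learn a hidden straight-line rule from noisy examples instead of being told the rule up front.

```python
# Machine learning in miniature: instead of hand-writing rules, we hand the
# computer data and let an algorithm (here, least-squares regression) find the
# pattern. Illustrative only; real systems use far richer models and data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # hidden rule: y is roughly 3x + 2, plus noise

# Fit y = w*x + b by least squares.
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned w={w:.2f}, b={b:.2f}")           # close to the hidden 3 and 2
print("prediction for x=5:", w * 5 + b)
```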

There are a plethora of different branches of machine learning, but the one you’ll probably hear the most about is…

Neural Networks

If you’ve spent any time in our Cool Tech section, you’ve probably heard about artificial neural networks. As brain-inspired systems designed to replicate the way that humans learn, neural networks adjust their own internal connections to find the link between input and output — or cause and effect — in situations where this relationship is complex or unclear.

Artificial neural networks have benefited from the arrival of deep learning.

The concept of artificial neural networks actually dates back to the 1940s, but it was really only in the past few decades that they started to truly live up to their potential, aided by the arrival of algorithms like “backpropagation,” which allow neural networks to adjust their hidden layers of neurons in situations where the outcome doesn’t match what the creator is hoping for. (For instance, a network designed to recognize dogs, which misidentifies a cat.)

This decade, artificial neural networks have benefited from the arrival of deep learning, in which different layers of the network extract different features until it can recognize what it is looking for.

Within the neural network heading, there are different models of potential network — with feedforward and convolutional networks likely to be the ones you should mention if you get stuck next to a Google engineer at a dinner party.
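To see backpropagation adjusting those hidden layers in miniature, here’s a toy network, written from scratch purely for illustration, that learns the XOR function. The layer sizes, learning rate, and iteration count are arbitrary choices of ours.

```python
# A tiny neural network trained by backpropagation to learn XOR.
# When the output doesn't match the target, the error is propagated backwards
# and the hidden-layer weights are nudged, which is the mechanism described above.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)     # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)     # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # backward pass (gradients of the squared error)
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # gradient-descent update
    W2 -= 0.5 * H.T @ dY;  b2 -= 0.5 * dY.sum(axis=0)
    W1 -= 0.5 * X.T @ dH;  b1 -= 0.5 * dH.sum(axis=0)

print(Y.round(2).ravel())   # typically converges to approximately [0, 1, 1, 0]
```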

Reinforcement Learning

Reinforcement learning is another flavor of machine learning. It’s heavily inspired by behaviorist psychology, and is based around the idea that a software agent can learn to take actions in an environment in order to maximize a reward.

As an example, back in 2015 Google’s DeepMind released a paper showing how it had trained an A.I. to play classic video games, with no instruction other than the on-screen score and the approximately 30,000 pixels that made up each frame. Told to maximize its score, reinforcement learning meant that the software agent gradually learned to play the game through trial and error.

Unlike an expert system, reinforcement learning doesn’t need a human expert to tell it how to maximize a score. Instead, it figures it out over time. In some cases, the rules it is learning may be fixed (as with playing a classic Atari game.) In others, it keeps adapting as time goes by.
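Here’s what that trial-and-error loop looks like in its simplest tabular form. This is a toy Q-learning agent on a five-cell corridor of our own devising, not DeepMind’s deep network, but the core idea is the same: the only feedback is a reward signal.

```python
# Tabular Q-learning on a 5-cell corridor: the agent only ever sees a reward
# signal (+1 for reaching the rightmost cell), and learns by trial and error
# which action (0 = left, 1 = right) to take in each state.
import random

N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.randrange(2)             # explore (or break ties randomly)
        else:
            action = Q[state].index(max(Q[state]))   # exploit what we've learned so far
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([q.index(max(q)) for q in Q[:GOAL]])   # learned policy: all 1s ("go right")
```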

Evolutionary Algorithms

Known as generic population-based metaheuristic optimization algorithms if you’ve not been formally introduced yet, evolutionary algorithms are another type of machine learning, designed to mimic the concept of natural selection inside a computer.

The process begins with a programmer inputting the goals he or she is trying to achieve with the algorithm. For example, NASA has used evolutionary algorithms to design satellite components. In that case, the goal may be to come up with a solution capable of fitting in a 10cm x 10cm box, capable of radiating a spherical or hemispherical pattern, and able to operate at a certain Wi-Fi band.

The algorithm then comes up with multiple generations of iterative designs, testing each one against the stated goals. When one eventually ticks all the right boxes, it ceases. In addition to helping NASA design satellites, evolutionary algorithms are a favorite of creatives using artificial intelligence for their work: such as the designers of this nifty furniture.
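As a toy illustration of that generate, test, and mutate loop (not NASA’s antenna-design code), here’s a tiny evolutionary algorithm that “evolves” a bit string until it hits a simple goal.

```python
# A toy evolutionary algorithm: a population of candidate "designs" (bit strings)
# is scored against a goal, the fittest are kept, and mutated copies form the
# next generation, looping until a candidate ticks all the boxes.
import random

TARGET_LENGTH = 20
fitness = lambda bits: sum(bits)              # goal: all twenty bits set to 1

def mutate(bits, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LENGTH)] for _ in range(30)]
generation = 0
while max(fitness(p) for p in population) < TARGET_LENGTH:
    # selection: keep the top 10 candidates, breed mutated copies of them
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]
    generation += 1

print(f"goal reached after {generation} generations")
```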

Editors’ Recommendations

  • Nest Hello review
  • Hold my beer, I’ve got a new UTV!
  • Robot chefs are the focus of new Sony and Carnegie Mellon research
  • A.I. helps scientists inch closer to the ‘holy grail’ of computer graphics
  • Google makes it even easier to get into A.I. with Raspberry Pi bundles


4
May

Facebook wants to make your virtual self appear as real as possible in VR



Facebook wants to transport your physical self into the virtual world by allowing you to create photorealistic avatars for virtual reality headsets. The technology was unveiled by Facebook chief technology officer Mike Schroepfer during the second day of the company’s developer-centric F8 conference in Silicon Valley, California.

The company has made progress over the years to evolve its avatar creation technology to make digital representations more lifelike and emotive. Facebook originally represented avatars as a simple blue face, but the technology eventually allowed users to personalize their virtual selves with more details and lifelike features. The result is still a cartoon-like representation of an individual, and it is not dissimilar to Snap’s personalized Bitmoji or Samsung’s AR emoji. Creating a photorealistic avatar is a logical next step for Facebook’s VR journey, as the company hopes that advancing technologies will help blur the borders between the real and virtual worlds.

Facebook didn’t detail much about its work with photorealistic avatars. We learned from demos that the company is using photographs and motion-capture technology to map various points on a user’s face. By carefully mapping a user’s face and facial characteristics, Facebook can synchronize real-life facial movements and expressions with the photorealistic avatar in VR.

In late 2016, Facebook began experimenting with bringing more emotions into its VR avatar experience, according to Techristic. The company used body tracking to let users show that they were raising their fists in the air, making an angry expression, or shrugging their shoulders. Photorealistic avatars with face tracking could allow Facebook to inject even more emotions into our digital personas in the virtual world. During his demo, Schroepfer showed that when an Oculus employee with a Rift headset spoke, his avatar’s mouth moved in unison with his speech.

Still, we don’t know when photorealistic avatars will arrive for consumers. New hardware that allows Oculus to track eye movements may also be required, according to Upload VR, which could let our avatars blink when we do. Similarly, MindMaze has been working on its sensor mask, which is designed to sit between your face and a VR headset, like the Oculus Rift, to track your facial movements and display them in real time in the virtual world.

Photorealistic avatars could help to further define and transform the virtual and augmented reality experiences that we have today, allowing users from remote locations to experience games and movies together as if they’re in the same room. If you look over and see a photorealistic image of your friend enjoying the same experience as you in the VR space, it would make virtual experiences feel more convincing.

Editors’ Recommendations

  • More than 1,000 experiences are available for the Oculus Go VR headset
  • HTC Vive review
  • $200 Oculus Go VR headset hits Amazon
  • 8 Amazing accessories that could make virtual reality even more immersive
  • Qualcomm’s Snapdragon 845 VR reference headset puts body tracking in mobile VR


4
May

Oculus’ new prototype VR headset has something the HTC Vive Pro doesn’t


The Vive Pro certainly impressed us when we got our hands on HTC’s new top-tier VR headset, but there was one thing that a lot of people wished it had expanded upon: the field of view. Fortunately, Oculus seems to have read everyone’s minds and has been working away at that for some time now. At Facebook’s F8 event this week, Oculus showed off its new prototype, termed the “Half-Dome” headset, which takes the VR view from 110 degrees to 140.

On the existing Oculus Rift and HTC Vive, the virtual world is displayed in front of your very eyes in gorgeous detail. But you don’t really want to look to the extreme left or right, as you’ll be staring at plastic and foam. The Vive Pro didn’t do anything to improve that. While HTC did make the virtual world more detailed with higher-resolution displays, Oculus may be the first of the two companies to develop a headset with an expanded field of view.

Where the Vive Pro comes with the same lenses as the original Vive, Oculus’ prototype adds new, larger lenses to the design. That’s what enables the wider field of view, which stretches into the wearer’s peripheral vision. In our experience, this wider field of view has a bigger effect on how immersive a VR world can feel. Nothing’s worse than the goggle-like confines of a headset surrounding the user’s view. The Half-Dome’s field of view isn’t as wide as that of Pimax’s crazy 200-degree VR headset, but it’s a good start.

Better yet though, those new lenses are also mechanically controlled varifocals. Think a fancy version of your grandparents’ glasses. Much like those lenses help them see near and far, Oculus’ new design would allow for various levels of focus throughout the visual plane. If you’re looking at an object up close, the lenses would refocus there, and similarly for objects in the distance.

Oculus suggested it would use software and hand-tracking to facilitate this, though it seems likely that some measure of eye-tracking would also be involved.

In theory, such technology could also enable performance-saving measures such as foveated rendering, which renders only the section of the screen a user is looking at in the highest detail, leaving peripheral vision to be rendered to a lesser standard. That may be why Oculus opted for mechanically manipulated lenses, rather than software-driven field of view effects. Where any forced field-of-view rendering would require additional GPU power, mechanically altered lenses would have no such impact.
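As a rough illustration of the foveated-rendering idea (ours, not Oculus’ implementation), the sketch below assigns each screen tile a render-quality scale that falls off with distance from the gaze point; the tile grid, falloff rate, and quality floor are all made-up parameters.

```python
# Illustration of the foveated-rendering idea described above (not Oculus code):
# screen tiles get a render-quality scale that falls off with distance from the
# user's gaze point, so only the foveal region is rendered at full detail.
import numpy as np

def quality_map(width_tiles, height_tiles, gaze_x, gaze_y, fovea_radius=2.0):
    """Return a grid of render scales in [0.25, 1.0], highest around the gaze point."""
    ys, xs = np.mgrid[0:height_tiles, 0:width_tiles]
    dist = np.hypot(xs - gaze_x, ys - gaze_y)
    # full quality inside the fovea, smoothly dropping to quarter quality outside it
    return np.clip(1.0 - 0.15 * np.maximum(dist - fovea_radius, 0.0), 0.25, 1.0)

# gaze at tile (9, 4) on a 16x9 tile grid
print(quality_map(16, 9, gaze_x=9, gaze_y=4).round(2))
```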

Although the Half-Dome headset is very much a prototype and no real indication of what any future-generation Oculus headset will look like, it is a welcome sight from the company that kickstarted the modern VR revolution.

Now all we need is for HTC and Oculus to steal from each other so that we get a VR headset with a wider field of view and a higher-resolution display. And with a wireless module! It’s not too much to ask, is it?

Editors’ Recommendations

  • Facebook teases new Oculus prototype with mechanical ‘varifocal’ lenses
  • Leap Motion’s prototype augmented reality headset includes hand tracking
  • Baidu-owned ‘Netflix of China’ jumps into VR with a 4K headset with 8K support
  • Bose’s new prototype AR glasses focus on what you hear, not what you see
  • Google’s upcoming OLED display for VR headsets may pack a 3182p resolution