High-tech search for Malaysia Airlines passenger plane ends in disappointment
The mystery of Malaysia Airlines flight MH370 endures.
The latest privately funded search for the missing aircraft came to an end this week after more than four months scouring an area of interest in the southern Indian Ocean.
MH370 disappeared with 239 passengers and crew during a flight from Kuala Lumpur to Beijing on March 8, 2014. The cause of the Boeing 777’s disappearance is still unknown, and while parts of the aircraft have washed up on several shorelines in the intervening years, the main section of the aircraft remains missing despite search efforts.
Ocean Infinity search
Keen for answers to the mystery, and to recover the bodies of the passengers and crew, the Malaysian government struck a deal with U.S. seabed exploration firm Ocean Infinity at the start of 2018 to embark on a new exploration effort using its powerful search technology.
The onset of winter weather has brought the search to an end, with the company saying on Tuesday that it had failed to find any sign of the aircraft.
During the course of the operation, Ocean Infinity searched more than 43,000 square miles (112,000 square km) of ocean floor using high-definition (HD) cameras.
“Part of our motivation for renewing the search was to try to provide some answers to those affected,” Ocean Infinity CEO Oliver Plunkett said in a statement. “It is therefore with a heavy heart that we end our current search without having achieved that aim.”
The CEO added that there “has not been a subsea search on this scale carried out as efficiently or as effectively ever before.”
Seabed search
Operating from its main multi-purpose ship, Ocean Infinity used a number of autonomous vehicles for its underwater searches, including six machines capable of operating at a depth of 6,000 meters while collecting HD imagery from even further down.
Six unmanned surface vehicles worked with the submersibles to help the team maintain precise positioning and constant communication during the meticulous seabed search.
This was Ocean Infinity’s first attempt to find a missing plane, though its experience in using its deep-sea technology for operations such as seabed mapping and imaging, marine geological surveys, and environmental monitoring provided it with valuable knowledge for its most ambitious project to date.
Previous efforts to track down the plane included a multinational search carried out by Malaysia, China, and Australia, but it was called off at the start of 2017 after failing to make any significant finds.
Such was Ocean Infinity’s confidence, it agreed to conduct its mission on a “no find, no fee” basis. It was set to receive as much as $70 million if it found MH370, but the Texas-based firm will now have to cover all of its costs.
But Plunkett hasn’t entirely given up on the idea of one day resuming the hunt for the missing plane, saying, “We sincerely hope that we will be able to again offer our services in the search for MH370 in [the] future.”
Relatives of those on board the fateful flight are continuing to press the Malaysian government to resume the search when better weather returns later in the year, though at the current time there are no plans to do so.
Amazon delivery drone may use lights and music when it shows up at your home
If you didn’t already know it, Amazon really wants to deliver stuff to your door using unmanned aerial vehicles (UAVs), commonly known as drones.
Amazon boss Jeff Bezos unveiled a prototype of the Prime Air delivery drone in 2013, and several redesigns later, the company is surely coming close to a platform that it hopes will transform its delivery operation. Of course, it first has to convince the Federal Aviation Administration that such an aerial-based delivery system is safe, so a full-fledged, autonomous delivery service could still be a ways off.
But that hasn’t stopped Amazon investing huge amounts of money in developing and testing a system that it believes will one day drive its drone-based delivery operation.
Part of that work includes filing numerous patents that may or may not one day become part of the final platform, though each one offers insight into some of the team’s thought processes as it tackles various challenges.
The latest patent, granted by the United States Patent and Trademark Office this week and spotted by GeekWire, explores the various methods by which Amazon’s drone could signal its arrival at a delivery address, and how it might interact with the customer as they step outside their home.
For some people, parts of the patent are likely to conjure up images from Close Encounters of the Third Kind, when the alien spaceship lands on Earth, lights flashing as it plays a catchy ditty. Though Amazon’s drone won’t be quite as big.
Alarm and confusion?
Noting how a customer “may be alarmed or confused” when a noisy drone approaches their property, Amazon’s filing goes on to list a number of ways that its flying machine could signal its intentions or interact with them.
It describes the kind of procedures you might expect, such as sending texts to the customer to update them on the drone’s whereabouts, with its real-time location shown on a map.
But it also talks about how lights and speakers could activate as the drone approaches the customer’s property. It might announce its arrival “by emitting a warning sound, a pleasant tune, or other audio,” the patent says.
The drone may even incorporate a projector capable of displaying a landing or drop zone, or even a message that it projects onto a wall or the ground. That message might tell the customer to clear away an object — anything from a piece of furniture to the pet dog — for a safe delivery, or the projector could beam light directly onto the obstacle, signaling to the customer to take action.
Of course, for the full Close Encounters effect, which would have a good chance of scaring the bejeezus out of unwitting neighbors with little knowledge of Amazon’s drone operations, the deliveries and subsequent light show would have to take place at night, though the company’s patent says the projectors could still function effectively in the evening or in shaded areas.
Other Amazon patents for its planned airborne delivery service include ideas for using the tops of street lights, cell towers, and church steeples as drone recharging stations to enable long-distance deliveries; beehive-like drone towers located in or close to urban areas for deliveries to city addresses; and even giant floating warehouses that the drones buzz to and from during delivery runs. No, we can’t imagine that last one happening anytime soon, either.
Editors’ Recommendations
- Amazon drone deliveries may involve lots of shouting and frantic arm-waving
- Switzerland’s new air traffic control system to put drones, planes in same skies
- Amazon Home Assistants is a new service that offers house cleaning
- Halo Drone Pro review
- Intel drones offer high-tech help to restore the Great Wall of China
Camera+ 2 for iOS Brings New Interface, Photo Library Integration, Raw and Depth Editing, and More
Camera+ 2 was released for iPhone and iPad today, a complete rewrite of the popular photography app of the same name that appeared almost eight years ago and sold over 14 million copies in that time. The successor app features a completely redesigned interface for accessing manual controls, raw shooting and editing, depth capture, and more.
As a universal app, Camera+ 2 promises a consistent experience across iPhone and iPad, with multitasking support for the latter baked in. Unlike its predecessor, the app also comes with all features, one-touch filters, and tools included as-is – no in-app purchases required.
In shooting modes, the manual onscreen wheels and controls include traditional settings like shutter speed, ISO, White Balance, and Macro, with wide-angle and telephoto options available on dual-lens devices. These functions can also be hidden during casual shooting.
With depth capture enabled in Camera+ 2, the depth information is saved alongside the image, and the adjustments in The Lab section of the editor can be selectively applied to distant or close subjects. A collection of filters is also available, with options to adjust their strength and layer them to customize the aesthetic.

A new Smile mode enables Camera+ 2 to detect smiles and shoot automatically, while a Stabilizer mode shoots only when the iPhone is steady enough to produce a sharp picture. The Slow Shutter mode meanwhile brings the ability to take long exposures, even in daylight, with additional Burst and Timer modes also included.
Elsewhere, in a much-requested change, Camera+ now has full Photo Library integration with editing support, with the added ability to switch between the Photo Library and the Lightbox with the tap of a button. Drag and drop gestures on the iPad are supported for copying or sharing photos, while Files and iTunes integration are also available for transferring pictures to a computer or other apps.
Camera+ 2 is available to download on the App Store for $2.99 and requires iOS 11 or later.
Household robots could learn skills in a virtual world where chores never end
Before robots can really help out around the home, they may have to train in a virtual world. That’s the aim of a new research project in which a Sims-like system called “VirtualHome” helps artificial intelligence (A.I.) characters perform everyday activities, one step at a time.
For us, VirtualHome looks like a Lynchian nightmare, where chores never end. For robots, it’s something of a training ground.
Humans have a talent for inference, and we take that for granted. If you were told to vacuum the rug, you’d presumably have no problem completing the task without having to break it down into each of its individual steps — like walk to the closet, open the closet, grab the vacuum, move the vacuum, plug it in, etc. Machines, on the other hand, need to process each one of these subtasks to get the job done.
The goal of VirtualHome is to help robots learn tasks by first experiencing them in a virtual system. In the current system, an avatar can perform 1,000 separate actions in eight different settings, including a living room, kitchen, and home office. The project is led by researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Toronto, McGill University, and the University of Ljubljana in Slovenia.
“We were trying to find a way to model complex activities to better understand the steps needed to do them, so that we could better identify them in video and potentially teach robots to perform them,” Xavier Puig, a CSAIL doctoral student who led the research, told Digital Trends. “There are very few datasets that have videos of people or agents doing step-by-step household activities.”
Puig and his colleagues created programs that laid out each step of a specific task in skeletal detail. For example, the task of watching TV involves five subtasks: Walk to the TV, turn on the TV, walk to the sofa, sit on the sofa, and watch the TV.
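The watch-TV example above is essentially an ordered list of (action, object) steps that an agent replays one at a time. A minimal sketch of how such a task program might be represented and executed follows; all names here are illustrative and are not the actual VirtualHome API.

```python
# A VirtualHome-style task "program": an ordered list of (action, target)
# subtasks that an agent executes one step at a time.
# Names are illustrative only, not the real VirtualHome format.

watch_tv = [
    ("walk", "TV"),
    ("switch_on", "TV"),
    ("walk", "sofa"),
    ("sit", "sofa"),
    ("watch", "TV"),
]

def run_program(program):
    """Execute each subtask in order, returning a log of what was done."""
    log = []
    for action, target in program:
        log.append(f"[{action}] <{target}>")
    return log

print(run_program(watch_tv))  # first entry: '[walk] <TV>'
```

The point of the skeletal format is that a machine cannot infer the hidden subtasks a human takes for granted, so every step must be spelled out explicitly before the avatar can act it out.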
The researchers gave these program instructions to the VirtualHome system, which had the character act out these tasks. Videos of the agent performing these chores would be used to further train robots, by giving a visual example of what these actions look like.
In a recent paper, the researchers demonstrated that, by reviewing instructions or a video demo, their virtual agents could reconstruct the steps and perform the task. They hope that such a system could help train household robots and build up a database of tasks that can be easily communicated between humans and machines, through natural language processing.
The findings will be presented this month at the Computer Vision and Pattern Recognition conference in Salt Lake City, Utah.
Editors’ Recommendations
- Inside the ambitious plan to decode and digitize the Vatican Secret Archives
- Machine learning? Neural networks? Here’s your guide to the many flavors of A.I.
- Forget cloning dogs, A.I. is the real way to let your pooch live forever
- MIT’s new A.I. could help map the roads Google hasn’t gotten to yet
- The best iRobot Roomba deals to make cleaning your home a breeze
Android P on the Xiaomi Mi Mix 2S: Everything you need to know
It’s what Xiaomi fans have wanted for years: pure Android on a high-end Xiaomi phone.

With Android P, Google has opened up the beta build to third-party phones for the first time, with seven manufacturers included in the program. Xiaomi is one of those brands, and the Android P beta build is available for its latest flagship, the Mi Mix 2S. The Mix 2S is a refreshed variant of last year’s Mi Mix 2 with a Snapdragon 845 chipset and wireless charging.
Android P on the Mi Mix 2S is highly interesting for a variety of reasons. The beta build gets rid of MIUI in favor of pure Android, and Xiaomi’s involvement in the beta program suggests the manufacturer is finally getting serious about updates. This is what it’s like to use Android P on the Mi Mix 2S.
MIUI makes way for pure Android…

The Mi A1 showcased what it would be like to run pure Android on a Xiaomi phone, and the Android P beta on the Mi Mix 2S follows in the same vein. The current beta build doesn’t have any MIUI elements; in their place, you get the Pixel Launcher.
Pure Android on a high-end Xiaomi phone is incredible.
Functionally, it is the exact same interface as the Android P build on the Pixels, and that’s great.
Pure Android on a high-end Xiaomi device is fantastic, and although this is an early beta build, I haven’t encountered many bugs. There are issues with Bluetooth streaming and occasional glitches, but for the most part, the Android P beta has been enjoyable.
Xiaomi has started offering gestures with MIUI 9.5, but those have also made way for the new gesture-driven navigation that’s coming to Android P.
But before you get too excited, know that the pure Android build is going to make way for Xiaomi’s custom ROM in the coming months.
… But that will change in the coming months

While it’s great that the current Android P beta build doesn’t have any MIUI customizations, that isn’t set to last. Xiaomi has confirmed that it will add more and more features from MIUI in the coming beta builds, and the stable Android P release will see MIUI in all its glory, just like on any other Xiaomi phone.
Stable Android P will see the reintroduction of MIUI.
That was always going to be the case, as Xiaomi sees a lot of value in the host of features it offers with MIUI. There is a case to be made here, as the Android P beta gets rid of Xiaomi’s camera interface and instead offers the Snapdragon Camera, which is sub-par, to say the least.
MIUI itself is set for an overhaul with the upcoming MIUI 10 release, so it’ll be interesting to see what direction Xiaomi takes with its custom ROM. MIUI has slowly been on a path that sees the global ROM diverging from the Chinese build, and MIUI 10 is likely going to build on that as Xiaomi starts selling its phones in Western markets.
Android P beta isn’t coming to other Xiaomi phones

If you’re looking to try out the Android P beta on a Xiaomi phone, your only recourse is the Mi Mix 2S. It doesn’t look like the beta program will make its way to other phones from Xiaomi, but with the Mi Mix 2S officially going on sale in European markets, it should be easy to get a hold of the device outside of China.
With the Android P beta program, Google wanted to expand availability to a wider audience, and that’s where brands like Xiaomi come in. It’s an elegant way for consumers to test new features and provide feedback, and for developers to build apps for the upcoming version of Android.
It’s a shame that Xiaomi isn’t gearing up to offer an Android One version of the Mi Mix 2S anytime soon, but the Android P beta at least gives us a glimpse of what that would look like.
Android P
- Android P: Everything you need to know
- Android P Beta hands-on: The best and worst features
- All the big Android announcements from Google I/O 2018
- Will my phone get Android P?
- How to manually update your Pixel to Android P
Lenovo Yoga 730 13 vs. HP Spectre x360 13
Mark Coppock/Digital Trends
The convertible 2-in-1 market has come into its own over the last several years, with a host of options ranging from well under $1,000 up to over $2,000. Intel’s 8th-generation Core processors have also made their way into more notebooks, and they’re becoming a powerful and efficient CPU staple that essentially guarantees a great mobile productivity experience.
In the midst of this continued improvement, Lenovo recently refreshed its solid midrange convertible 2-in-1 with the Yoga 730 13-inch. It goes up against some stiff competition, and so we pitted it against the best anywhere near its price range, HP’s Spectre x360 13-inch convertible. Does the Yoga 730 offer enough improvement to keep up?
Design
Mark Coppock/Digital Trends
The Yoga 730 maintains the Yoga 720’s conservative, businesslike demeanor, coming in a silver-gray chassis that doesn’t stand out in a crowded field. It looks fine, and if you’re willing to let it fade into the background, then you’ll be pleased with its aesthetic. You’ll also enjoy its solid build quality, functional convertible hinge, and thin (0.62 inches) and light (2.47 pounds) chassis. It won’t win any beauty contests, but it won’t let you down when you want to get to work.
The Spectre x360, on the other hand, wants to stand out. HP offers three different color schemes, including a bold Dark Ash Silver and lovely Rose Gold, to go with the more traditional Natural Silver. The aesthetic is angular and modern, with chrome accents that shine things up without making them ostentatious. And the Spectre x360 is just as solidly built, thinner at 0.54 inches, slightly heavier at 2.78 pounds, and also offers a robust convertible hinge.
When it comes to input options, we also preferred the Spectre x360’s keyboard for its greater travel and lighter, springier mechanism. The Yoga 730’s keyboard left us a bit cold, and it wasn’t nearly as comfortable for longer typing sessions. The touchpads are a different story: HP’s continued use of Synaptics drivers rather than Microsoft’s Precision Touchpad protocol gives the Yoga 730 the advantage. Both notebooks offer similar active pens with 4,096 levels of pressure sensitivity and tilt support, so that’s a draw.
There’s nothing wrong with the Yoga 730’s design, but the Spectre x360 simply cuts a more striking figure.
Performance
Mark Coppock/Digital Trends
Both the Yoga 730 and the Spectre x360 are equipped with Intel’s quad-core 8th-generation CPUs. As such, they’re similar performers all around, whether you’re running synthetic benchmarks or churning through video encoding projects. Simply put, you can’t go wrong with either when it comes to getting your productivity tasks done efficiently and without delay. Even the PCIe solid-state drives (SSDs) are equally fast, meaning boot times, loading apps, and working with large data sets will be fast on either machine.
There is some divergence when it comes to their displays, however. Both utilize 13.3-inch IPS panels and offer 1080p options, but the Spectre x360’s display is brighter, has better contrast, and enjoys a slightly wider color gamut with better accuracy. The Yoga 730’s display would have been excellent a couple of years ago, but the market has passed it by.
Perhaps more important, HP also offers a 4K UHD (3,840 x 2,160) resolution display that offers significantly sharper text and high-quality video, along with a privacy screen that blocks things out from the left and the right for working in public on sensitive data. Both displays add a bit of a premium in terms of pricing, and the 4K display will have a real impact on battery life, but options are a good thing in our opinion.
Portability
Mark Coppock/Digital Trends
We’ve already covered the physical dimensions of the Yoga 730 and Spectre x360, noting that the former is slightly thicker but also lighter than the latter. Both are highly portable 2-in-1s, though, that are equally easy to toss into a bag and carry around. In addition, both can be comfortably used as tablets with their active pens thanks to their svelte designs.
However, the Spectre x360 has the upper hand when it comes to battery life. Even when equipped with a faster Core i7-8550U processor compared to the Core i5-8250U in our Yoga 730 review unit, the HP lasted significantly longer on a single charge — anywhere from an hour longer on our most demanding battery test to a full four hours longer on our video looping benchmark. Simply put, the Spectre x360 is going to give you a much better chance of making it through a full workday without needing to tap into A/C power.
We like highly portable notebooks, and both qualify nicely, but we like longer battery life just as much — and the Spectre x360 simply offers more.
Conclusion
Mark Coppock/Digital Trends
You can save some money by opting for the Lenovo Yoga 730, spending as little as $850 for a Core i5-8250U, 8GB of RAM, and a 256GB PCIe SSD. The same Spectre x360 configuration will cost you $1,120 — although that includes the pen, which will cost you an extra $70 from Lenovo. You can spend a lot more on the Spectre x360 as well, given the 4K display option and larger 2TB SSD.
But the HP is worth the money, in our opinion. It looks better, it has a better keyboard, and it lasts significantly longer on a charge. You get what you pay for, in other words, and you get a lot more with the Spectre x360.
Editors’ Recommendations
- Lenovo Yoga 730 13-inch review
- HP Spectre x360 13 (Late 2017) Review
- Dell XPS 13 (2017) Review
- HP Spectre 13 (2017) review
- Asus ZenBook 13 vs. Dell XPS 13
Code in watchOS 4.3.1 Hints at Upcoming Apple Watch Face Inspired by Rainbow Flag
Apple at WWDC is expected to introduce a new pride watch face, reports 9to5Mac. The upcoming watch face is “inspired by the rainbow flag” and is likely designed to match the limited edition Rainbow Pride Woven Nylon Apple Watch band that Apple released at WWDC last year.
Code for the new watch face was found in iOS 11.4 and watchOS 4.3.1, and it appears that the new watch face will animate with moving threads of color that shift whenever the display is tapped.
The watch face will become available on Monday, June 4 at 12:00 p.m. Pacific Time, which suggests it will be made available to all Apple Watch users already running watchOS 4.3.1. On Twitter, a user was able to change the date on his Apple Watch and the new face showed up, confirming it will be released to everyone just after the keynote event wraps up.
It is not clear if Apple has any other pride-related announcements in store to accompany the new watch face, such as a new pride band. It is also not known if Apple plans to release additional watch faces with watchOS 5, but last year’s watchOS 4 release did include new faces.
Apple always celebrates San Francisco Pride and participates in the parade, with the event set to take place on Saturday, June 23 and Sunday, June 24 this year, so it makes sense for the company to release a new rainbow Apple Watch face ahead of that date.
Note: Due to the political nature of the discussion regarding this topic, the discussion thread is located in our Politics, Religion, Social Issues forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.
Gene Munster Shares WWDC Predictions: Beats Product With Siri Integration, Improvements to AI and AR
Ahead of next week’s Worldwide Developers Conference, Loup Ventures analyst Gene Munster today shared his predictions for the features and services that Apple will unveil during the event.
Munster expects Apple to debut new Siri, AR, AI, and Digital Health functionality, including a Beats-branded accessory (presumably a speaker) that includes Siri integration, much like the HomePod. Some of Munster’s predictions have been previously covered in rumors shared by Bloomberg’s Mark Gurman, but Siri integration in a lower-cost Beats product is a new prediction.
We expect Monday’s keynote to be highlighted by extending the reach of Siri (most likely adding new domains, opening HomePod to more capabilities, and integrating Spotlight), along with additional AI tools (new Core ML extensions).
We also anticipate new features around digital health (privacy and device management) and ARKit (development tools).
Expect Siri integration with Beats.
Collectively, these announcements advance the ease of use and intelligence of Apple’s mobile and desktop experiences.
According to Munster, Apple may be planning to introduce a $250 Beats-branded product that will offer Siri integration similar to the HomePod, allowing Apple to “advance its digital assistant ambitions” with a more affordable option. Apple currently sells a Beats Pill+ speaker for $179.95, and the device has not been updated in some time.
Apple is going to announce a new “Decade Collection” at WWDC according to a Best Buy leak, but that collection is limited to existing headphones in new colorways and does not appear to include new products or a new speaker. It’s possible that Apple does have a new Beats product ready to unveil, and a recent somewhat sketchy rumor did suggest that Apple’s “low-priced” HomePod would be under the Beats by Dre branding.
That rumor would make some sense if Apple is indeed planning on introducing a Beats-branded speaker product that includes Siri integration. Siri competitor Alexa is available as an option on many speakers outside of Amazon’s own, and Apple could be planning to follow in Amazon’s footsteps. Obviously, though, it’s not clear if the rumor and/or Munster’s prediction are accurate.
Munster has several other predictions for features and services coming at WWDC. Specifically, he expects Apple to introduce new Siri domains, with support for “things like navigation and email” and integrated Spotlight Search to better improve Siri’s performance compared to Alexa and Cortana, the AI assistants used by Amazon and Microsoft, respectively.
Munster also believes Apple will introduce new domains for CoreML, the machine learning SDK that Apple introduced with the launch of iOS 11. Munster doesn’t offer details on what the new domains might be, but at the current time, CoreML offers features for developers like real-time image recognition, search ranking, text prediction, handwriting recognition, face detection, music tagging, text summarization, and more.
Previous rumors have said Apple plans to introduce support for multiplayer augmented reality games, and Munster believes Apple will also introduce “subtle new developer tools” to improve AR development and lead to more compelling AR apps.
Similarly, rumors have indicated Apple is working on a Digital Health tool that will let parents better monitor the amount of time children are spending on iOS devices. Munster says Apple could also include additional features that notify users when data is being shared with developers and new device management features aimed at curbing “screen time and digital anxiety.”
Munster’s full range of predictions for WWDC can be read on Loup Ventures, and our iOS 12 roundup contains all of the other iOS-related rumors that we’ve heard thus far.
New Zealand Commerce Commission Warns Apple About Misleading Consumers
The New Zealand Commerce Commission today sent a warning to Apple over concerns that the company misled customers about their replacement rights under the Consumer Guarantees Act and Fair Trading Act, reports the New Zealand Herald.
According to the commission, Apple may have violated New Zealand consumer law by telling customers its products have a two-year warranty and also by referring customers who purchase non-Apple branded products from Apple to the manufacturer for warranty issues.
From an eight-page statement released by the Commerce Commission:
“We consider that Apple is likely to be misleading consumers by trying to exclude its liability for non-Apple branded products. If this behaviour is continuing, we recommend you take immediate action to address our concerns and seek legal advice about complying with the Fair Trading Act.”
The New Zealand Herald says the Commerce Commission began an investigation into Apple’s practices in April 2016 after receiving complaints from consumers who sought repairs from Apple but were told that their products were covered by consumer law for just two years.
Under the Consumer Guarantees Act, there is no set two-year period after which coverage expires; instead, the act outlines a set of requirements for consumer devices regarding build quality (products must be free from defects).
According to Commissioner Anna Rawlings, businesses should not base warranty decisions in New Zealand “solely on how long a consumer has owned a product.” Instead, the “reasonable lifespan” depends “very much on what that product is” and each fault must be assessed “on its own merits.”
During the investigation, the commission also said that Apple is “likely to have misled” consumers by excluding liability for non-Apple products. Apple is responsible, says the commission, for “compliance with consumer guarantees applying to all products it sells, even if it is not the manufacturer.”
There were also some issues discovered around the availability of spare parts and repairs after one New Zealand customer was told he could have a maximum of four replacements for a faulty product.
The commission says Apple made voluntary changes to address some of the concerns that were raised, including making it clear to Apple employees in New Zealand that consumer law rights are not bound by a set time period. The commission believes Apple will consider and fix the other issues that were raised during the investigation.
HoloLens can be used to guide the blind through buildings faster
Researchers at the California Institute of Technology developed an application for Microsoft’s HoloLens that can steer visually impaired individuals through a complex building. Rather than delivering raw images to the brain, as seen in recent prosthetic attempts, this “non-invasive” method relies on 360-degree sound and real-time room and object mapping to navigate wearers through an unfamiliar multi-story building on their first attempt.
Typically, HoloLens renders interactive virtual objects in your full view of the real world. For example, engineers can construct a 3D model of a building in physical space and examine each side by simply walking around the virtual structure. You can also use the HoloLens to shop for furniture online by placing a 3D model of the desired chair or table in your living room to see how it blends in with your current décor before making a purchase.
The drawback to HoloLens, for now at least, is that all virtual objects reside only in the wearer’s view; these “holograms” can’t be seen by anyone else unless they have a device capable of sharing the same experience. In this case, the wearer can’t see anything, so the researchers fell back on the headset’s real-time room and object-mapping capabilities.
“Our design principle is to give sounds to all relevant objects in the environment,” the paper states. “Each object in the scene can talk to the user with a voice that comes from the object’s location. The voice’s pitch increases as the object gets closer. The user actively selects which objects speak through several modes of control.”
These modes consist of scan, spotlight, and target. After selecting scan mode using a clicker, each object will call out its name in sequence from left to right via spatial audio, meaning the wearer can get a sense of their real-world placement based on the distance and location of their voice. Spotlight mode forces the object directly in front to speak, and target mode will force an object to repeatedly call out its name. Meanwhile, obstacles and walls will hiss if the wearer moves in too close.
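The two core behaviors described above — pitch rising as an object gets closer, and scan mode calling out objects left to right — can be sketched in a few lines. This is purely illustrative pseudologic under assumed numbers; the pitch curve, ranges, and function names are not from the Caltech implementation.

```python
# Illustrative sketch of the "object voices" idea: each labeled object
# speaks from its location, and the voice's pitch rises as it gets closer.
# All constants and names here are assumptions, not the actual system.

BASE_PITCH_HZ = 220.0   # pitch of a distant object's voice
MAX_PITCH_HZ = 880.0    # pitch when the object is nearly touching

def voice_pitch(distance_m, max_range_m=10.0):
    """Map distance to pitch: closer objects speak at a higher pitch."""
    d = min(max(distance_m, 0.0), max_range_m)
    t = 1.0 - d / max_range_m          # 1.0 at the object, 0.0 at max range
    return BASE_PITCH_HZ + t * (MAX_PITCH_HZ - BASE_PITCH_HZ)

def scan(objects):
    """Scan mode: call out each object's name in left-to-right order,
    where each object is (name, bearing_degrees, distance_m)."""
    return [name for name, _bearing, _dist in
            sorted(objects, key=lambda o: o[1])]

scene = [("chair", 30.0, 2.0), ("door", -45.0, 5.0), ("table", 10.0, 1.0)]
print(scan(scene))        # ['door', 'table', 'chair']
print(voice_pitch(0.0))   # 880.0 — closest, highest pitch
print(voice_pitch(10.0))  # 220.0 — farthest, lowest pitch
```

In the real system the voices are rendered as spatial audio from each object’s mapped location, so the wearer hears direction as well as pitch; the sketch only captures the distance-to-pitch mapping and the left-to-right calling order.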
In one test, researchers created a virtual chair and directed HoloLens wearers to approach the object using target mode. Most relied on a two-phase method: localize the voice by turning in place, then quickly head to the correct destination. After that, researchers put a physical chair in the same location and asked the individuals to find it using their typical walking aid. The process took eight times longer and covered 13 times more distance without the help of HoloLens.
HoloLens can be used for long-range guided navigation, too. Researchers created a virtual guide that followed a pre-computed path and called out “follow me” to the wearer. It continuously monitored the wearer’s progress and remained a few feet ahead. If the wearer strayed off course, the virtual guide would stop and wait for them to catch up. The test included crossing a building’s main lobby, climbing two flights of stairs, walking around a few corners, and stopping in an office.
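The guide behavior described above — stay a fixed lead distance ahead, and stop if the wearer falls too far behind — reduces to a simple control loop. The sketch below is a one-dimensional abstraction along the path, with made-up distances and names; it is not the researchers’ code.

```python
# Sketch of the "virtual guide" loop: the guide advances along a
# precomputed path, stays a fixed lead distance ahead of the wearer,
# and waits if the wearer falls too far behind. Positions are distances
# (in meters) along the path; all constants are illustrative.

LEAD_M = 1.5      # guide tries to stay about this far ahead
MAX_GAP_M = 3.0   # beyond this gap, the guide stops and waits

def guide_step(guide_pos, wearer_pos, path_end, speed=0.5):
    """Advance the guide one tick along the path unless the wearer lags."""
    if guide_pos - wearer_pos > MAX_GAP_M:
        return guide_pos                          # wait for the wearer
    target = min(wearer_pos + LEAD_M, path_end)   # hold the lead distance
    return min(guide_pos + speed, target)

# Guide waits when the wearer is too far behind:
print(guide_step(5.0, 1.0, 10.0))   # 5.0
# Otherwise it advances toward a point LEAD_M ahead of the wearer:
print(guide_step(1.0, 0.5, 10.0))   # 1.5
```

Per tick the real system would also re-localize the wearer via the headset’s tracking and re-emit the “follow me” voice from the guide’s current position; the sketch only shows the lead-and-wait logic.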
Editors’ Recommendations
- Escape reality with the best augmented reality apps for Android and iOS
- Google Maps is open to mobile AR game developers using Unity
- Magic Leap finally unveils ‘goggles’ with wireless processing, tracking
- HoloLens virtual touchscreen is the futuristic tech we’ve been waiting for
- From drones to bionic arms, here are 8 examples of amazing mind-reading tech