
5 Oct

Google’s Pixel Buds translation will change the world


Google’s Pixel 2 event in San Francisco on Wednesday had a lot of stuff to show off, and most of it was more of the same: the next iteration of the flagship smartphone, new Home speakers and various ways of entwining them more deeply into your smart home, a new laptop that’s basically a Yoga running ChromeOS, and a body camera that I’m sure we’ve seen somewhere before. Yawn. We saw stuff like this last time and are sure to see more of it again at next year’s event.

But tucked into the tail end of the presentation, Google quietly revealed that it had changed the world with a pair of wireless headphones. Not to be outdone by Apple’s AirPods and their wirelessly charging Tic Tac storage case, Google packed its headphones with the power to translate between 40 languages, literally in real time. The company has finally done what science fiction and countless Kickstarters have been promising us, but failing to deliver on, for years. This technology could fundamentally change how we communicate across the global community.

The Google Pixel Buds are wireless headphones designed for use with the company’s new Pixel 2 handset. Once you’ve paired the buds to the handset, you can simply tap the right earpiece and issue a command to Google Assistant on the Pixel 2. You can have it play music, give you directions, place a phone call and whatnot, you know, all the standards.

But if you tell it to “Help me speak Japanese” and then start speaking in English, the phone’s speakers will output your translated words as you speak them. The other party’s reply (presumably in Japanese, because otherwise what exactly are you playing at?) will then play into your ear through the Pixel Buds. As Google’s onstage demonstration illustrated, there appeared to be virtually zero lag during the translation, though we’ll have to see how well that performance holds up in the real world with wonky Wi-Fi connections, background noise and crosstalk.
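The heavy lifting happens on the phone: speech comes in, gets recognized, translated, and spoken back out. Here’s a toy sketch of that loop, with stub functions standing in for Google’s speech and translation services (every name here is hypothetical, and the “machine translation” is just a phrasebook lookup):

```python
# Toy sketch of the translation loop. The recognizer, translator, and
# synthesizer are stand-in stubs; the real pipeline runs on Google's
# cloud services, not on-device.

PHRASEBOOK = {  # stub "translation model": English -> romanized Japanese
    "hello": "konnichiwa",
    "thank you": "arigatou",
    "where is the station": "eki wa doko desu ka",
}

def recognize(audio_chunk):
    """Pretend speech recognition: audio already arrives as text."""
    return audio_chunk.lower().strip()

def translate(text, phrasebook=PHRASEBOOK):
    """Pretend machine translation via phrasebook lookup."""
    return phrasebook.get(text, f"[no translation for: {text}]")

def speak(text):
    """Pretend synthesis: route translated text to the phone speaker."""
    return f"(phone speaker) {text}"

def translation_loop(incoming_chunks):
    """Stream each chunk through recognize -> translate -> speak."""
    return [speak(translate(recognize(chunk))) for chunk in incoming_chunks]

print(translation_loop(["Hello", "Thank you"]))
```

The real trick, of course, is doing each of those stages fast enough on streaming audio that the listener perceives no lag.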

This is a momentous breakthrough, to say the least. Just 20 years ago, if you wanted to have a passage of text translated using the internet rather than tracking down someone who actually spoke the language, you likely did it through AltaVista’s Babel Fish. Launched in 1997, it supported a dozen languages but often returned translations that were barely more intelligible than the text you put in. Over the next couple of decades, translation technology steadily improved but could never compete with natural language speakers for accuracy or speed.

In the last couple of years, we’ve seen some of the biggest names in technology jump into the translation space. In 2015, Skype debuted its Live Translation feature, which works with four languages for spoken audio and 50 languages over IM. However, the translations weren’t really in real time; there was a lag between when the original message was sent and when the translated version arrived.

Earlier this year, Microsoft debuted its PowerPoint “Presentation Translator” add-in. Using an iOS or Android app, Presentation Translator can convert your voiceover into Spanish or Chinese in real time. It will not, however, make your PowerPoint presentation any less of an ordeal to sit through, so keep those slides to a minimum.

Both of those programs are impressive in their own rights; however, they’re a far cry from the hardware that Google has developed. Cramming all of the bits and pieces necessary to facilitate real-time language translation into a device small enough to fit into your ear — especially without the need for external computing power — is no easy feat. That’s not to say that people haven’t tried (looking at you, Bragi Dash Pros).

The Pilot – Image: Waverly Labs

Take last year’s Indiegogo darling, the Pilot from Waverly Labs. Reportedly leveraging “speech recognition, machine translation and the advances of wearable technology,” these paired devices would be split between the people conversing, one inserted into each person’s ear. When one person speaks, the other earpiece automatically translates those words. Or at least that’s how it’s supposed to work. The crowdfunding campaign closed last year and deliveries have yet to begin, though the company states that it will begin shipping units in Fall 2017.

But there’s no need to wait for it now. Google didn’t just beat Waverly Labs to the punch; it knocked them down with 25 additional languages (40 to the Pilot’s 15) and then stole their lunch money with a $160 price tag — $140 less than what Waverly wants for the Pilot.

But this isn’t just about an industry titan curbstomping its startup competition; this technological advancement can, and likely will, have far-reaching implications for the global community. It’s as close as we can get to a Douglas Adams-esque Babel fish without having to genetically engineer one ourselves. With these devices in circulation, the barriers of communication simply fall away.

You’ll be able to walk up to nearly anybody in another country and hold a fluid, natural conversation without the need for pantomime and large hand gestures, or worry of offending with a mispronunciation. International commerce and communication could become as mundane as making a local phone call. The frictions of international diplomacy could be smoothed as well, ensuring not only that a diplomat’s words are faithfully translated but that a copy of the conversation is recorded as well.

Granted, this isn’t some magic bullet that will single-handedly bring about world peace and harmony among all peoples. You’ll still have plenty of nonverbal and culturally insensitive means of putting your foot in your mouth, but until we make like the Empire and develop Galactic Standard, Google’s Pixel Buds are our new best bet for understanding one another.

Follow all the latest news from Google’s Pixel 2 event here!

5 Oct

The Pixel 2 proves headphone jacks are truly doomed


As usual, Apple started a trend. Last year, it dropped the standard 3.5 millimeter headphone jack from the iPhone. The industry was quick to respond. Motorola, even before the iPhone 7 was announced, also removed the port from the Moto Z (though curiously, it remained on the cheaper Z Play). HTC followed suit with the U Ultra this year, as did the geek-friendly Essential phone. Now that Google’s Pixel 2 is confirmed to be headphone jack-less, it seems as if the port’s survival, at least in the mobile world, is a lost cause.

The truly sad thing? A year after this trend began, we still don’t have a good explanation of why we’re better off without headphone jacks. Removing the port opens up a bit of precious internal space, which allowed Apple to stuff in a bigger 3D Touch module in the iPhone 7 and 7 Plus. But did that actually help make 3D Touch more useful? And what have other phone makers gained, exactly, by jumping on this bandwagon? The additional room isn’t enough to significantly improve battery life, and aside from the Moto Z, it hasn’t led to an influx of ultra-thin designs either.

With the Pixel 2 and its larger companion, in particular, we’ve gained very little by losing the headphone jack. Sure, they’re much more water and dust resistant than the last models. But the Pixel 2’s IP67 certification is something several Android phones have offered for years — and they didn’t need to lose the port to achieve it. Typically when we move away from legacy hardware, we’re headed to something better. But in the case of the 3.5mm headphone port, the tech world seems to have forgotten that. Apple’s joking explanation — “courage” — isn’t enough.


I’m not blind to the benefits of wireless. My trusty BeatsX earbuds are the first pair I’ve used that sound almost as good as great corded headphones. And I truly appreciate being able to use them on the subway without getting tangled up in cables. But here’s the thing: You don’t need to remove the 3.5mm port to enjoy the benefits of Bluetooth headphones. In fact, I’m running my BeatsX on an iPhone 6S — the last iPhone to include the 3.5mm jack. I just like having the flexibility to freely connect my phone to auxiliary cables in cars and to corded headphones without carrying around any dongles. It’s 2017; that doesn’t seem like too much to ask.

And not to be too cynical, but it’s hard not to view the move away from headphone jacks as a way for companies to push their own expensive wireless headphones. It’s no coincidence that Apple’s $150 AirPods debuted alongside the iPhone 7 and 7 Plus (as did the BeatsX). Today, Google also showed off its own offering, the aptly named Pixel Buds. It’s almost as if tech companies realized consumers would shell out a bit extra for wireless headphones rather than live the dongle life.


Chris Velazco/Engadget

As someone who’s chosen this hill to die on, the future looks bleak. Some manufacturers, like Samsung and LG, stuck with the 3.5mm port on their latest devices. Indeed, the LG V30 appears to be the ideal new phone for audio fanatics, thanks to its powerful HiFi DAC. A headphone jack could just end up being a niche feature that some manufacturers use to entice geeks. But that doesn’t help iPhone users who want to upgrade this year, or Android fans who want the purest experience possible with Google’s Pixel phones.

It was easy for me to skip the iPhone 7 last year, as it was only a minor improvement over the 6S. But with the new design of the iPhone X, as well as its improved cameras, it’ll be hard for me to stay away. And even if I were to make the leap to Android, I’m just as tempted by the Pixel 2 as I am by the Galaxy S8. As much as I’d like to stick with the headphone jack, it’s only a matter of time until I’m tempted away. I just wish we had a good reason for moving away from the most widely supported port ever. No dongle will stop me from being resentful over that.


5 Oct

Porsche Mission E caught testing against Teslas


By Joel Stocksdale

It’s been about two years since Porsche revealed its slinky Mission E concept, which promised Tesla-matching range and performance with Porsche’s driving dynamic expertise. Now we finally get a look at one on the road. It looks like the Mission E is far along in development, and it seems that Porsche is very serious about taking on Tesla, since the car was being driven alongside both a Model S and a Model X.

Focusing on the Porsche itself, though, the design does appear to have been toned down significantly. The ultra-low hood and extra-tall fenders of the concept have been raised and lowered respectively for a much less dramatic nose. The bulging fenders have also been constrained a bit.

But the overall look still draws from the concept. The car is still very low in profile, with a low belt line. The rear fenders still look wide in trademark Porsche fashion, and they’re highlighted by the wide corporate taillight band on the back. Other details from the concept include the side vents on the fenders, the big diffuser at the tail, and the vents in the front bumper that descend from the headlights. Those headlights also look to be similar slim units to the concept’s. One other fun detail is the faux exhaust tips on the back, there to throw off spy photographers and passers-by.

Porsche has previously said that the Mission E would reach production by 2020, and according to our friends at Engadget, it should go on sale in 2019. Based on how complete the cars in these photos appear to be, we think the company has a good chance of hitting that target. When the concept was shown, Porsche promised 590 horsepower and, on the European test cycle, a range of over 310 miles. Also interesting was the concept’s claimed 800-volt electrical system that could be charged to 80 percent capacity in 15 minutes. Time will tell whether that system comes to fruition, but Porsche has at least tested some portion of the system on its Le Mans-winning 919 Hybrid race cars. Porsche also expects to sell the car for $80,000 to $90,000. All these features taken together would definitely make for a compelling Tesla alternative.
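Some back-of-the-envelope math shows how aggressive that charging claim is. The pack capacity below is an assumption (Porsche hadn’t announced one); roughly 90 kWh would put it in Model S territory:

```python
# Back-of-the-envelope: what charging power does "80 percent in 15
# minutes" imply? PACK_KWH is an illustrative assumption, not an
# announced Porsche spec.

PACK_KWH = 90.0          # assumed battery capacity
CHARGE_FRACTION = 0.80   # 80 percent of the pack
MINUTES = 15.0
VOLTS = 800.0            # Porsche's claimed system voltage

energy_kwh = PACK_KWH * CHARGE_FRACTION   # energy delivered: 72 kWh
hours = MINUTES / 60.0
power_kw = energy_kwh / hours             # average charging power
amps = power_kw * 1000.0 / VOLTS          # average current at 800 V

print(f"{power_kw:.0f} kW average, ~{amps:.0f} A at {VOLTS:.0f} V")
```

Under those assumptions the charger would have to sustain nearly 290 kW on average, roughly double what Tesla’s Superchargers delivered at the time, which is exactly why Porsche needs the 800-volt architecture: doubling the voltage halves the current for the same power, keeping cables and connectors manageable.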

5 Oct

Renderings Imagine What an ‘iPhone X Plus’ Might Look Like


As Apple prepares to launch its first OLED iPhone with an edge-to-edge display, facial recognition, upgraded cameras, and other features, iDrop News has created renderings imagining what the future of the iPhone X might look like.

The renderings pair the existing 5.8-inch iPhone X with a larger model that has a 6.4-inch screen, based on the hypothesis that Apple is planning to release an all OLED iPhone lineup with devices that are similar in design to the iPhone X.

Design-wise, the “iPhone X Plus” model in the rendering is identical to the iPhone X, with just a larger display to distinguish the two devices. It has the same notch-shaped top element to house the TrueDepth camera and sensors. Should Apple plan to introduce a larger version of the iPhone X in 2018 or beyond, it’s not clear what it would be named, but “iPhone X Plus” likely isn’t an option.


Though the iPhone X won’t be available for purchase for another three weeks, we’ve already been hearing rumors about Apple’s plans for 2018 and beyond.


Early information suggests Apple is aiming to introduce at least two OLED iPhones in 2018, with displays that measure in at 5.85 inches and 6.46 inches, similar to the renderings above. Apple is said to be working with Samsung Display and other suppliers to source OLED displays for the two devices.

Separate rumors have confirmed that Apple is aiming for an all OLED lineup for 2018 or 2019, with specific timing dependent on whether Apple can secure enough OLED production capacity from its various partners.

Apple was not able to introduce an all OLED lineup in 2017, instead pairing the $999 OLED iPhone X with the standard LCD iPhone 8 and 8 Plus, both of which have lower price tags.


5 Oct

Apple Stops Signing iOS 10.3.3 and iOS 11.0, Downgrading No Longer Possible


Following the release of iOS 11.0.1 and iOS 11.0.2 on September 26 and October 3, respectively, Apple has stopped signing both iOS 10.3.3 and iOS 11.0, the previous versions of iOS that were available to consumers.

iPhone, iPad, and iPod touch owners who have upgraded to iOS 11.0.1 or iOS 11.0.2 will no longer be able to downgrade to iOS 10.3.3 or iOS 11.0.

Apple routinely stops signing older versions of software updates after new releases come out in order to encourage customers to keep their operating systems up to date.

iOS 11.0.1 and iOS 11.0.2 are now the only versions of iOS 11 that can be installed on iOS devices by the general public, but developers can download iOS 11.1, a future update that is being beta tested and will be released in the near future.

5 Oct

The Ohio State University Working With Apple on Digital Learning Initiative


The Ohio State University today announced that it has worked with Apple to create a comprehensive, university-wide digital learning experience that includes an iOS design laboratory and opportunities for students to learn coding skills.

Called the Digital Flagship University, the initiative will include an effort to integrate learning technology into the entire university experience. Along with the aforementioned iOS design lab, which will be available to faculty, staff, students, and members of the broader community, the university will aim to help students “enhance their career-readiness in the app economy.”

Apple CEO Tim Cook commented on the partnership, and said it will give students access to Apple’s new coding curriculum.

“At Apple, we believe technology has the power to transform the classroom and empower students to learn in new and exciting ways.

“This unique program will give students access to the incredible learning tools on iPad, as well as Apple’s new coding curriculum that teaches critical skills for jobs in some of the country’s fastest-growing sectors,” said Cook. “I’m thrilled the broader central Ohio community will also have access to coding opportunities through Ohio State’s new iOS Design Lab.”

Ohio State University’s Digital Flagship University will launch in the 2017-2018 academic year, with the design lab set to open in a temporary space in 2018 before moving to a more permanent location in 2019.

Starting in 2018, first-year students at the Columbus and regional campuses will be given an iPad Pro with Apple Pencil and Smart Keyboard, as well as apps, all funded through the university’s administrative efficiency program. Swift coding sessions will begin during the spring semester of 2018.

The iOS design lab will provide technological training and certification to students and community members who are interested in developing apps in Swift.

The Ohio State University also plans to integrate Apple technology into other areas of the university, introducing a chemistry course where students can complete assignments online with iTunes, debuting iPads for journalism and biology students, and more.


5 Oct

Snag a refurbished HP laptop EliteBook Folio 9470M for just $210 on Newegg


Get the perfect balance of portability and power with this Refurbished HP EliteBook Folio 9470M, which is currently over 90 percent off but only for a limited time. The laptop is HP’s first enterprise Ultrabook with docking capability, making it great for business elites who want to stay productive anywhere they go.

The enterprise laptop packs a third-generation Intel Core processor, giving you ultimate control, security, and remote manageability. At just 0.75 inches thick, it’s the thinnest EliteBook to date, yet it gives you a 14-inch (355.6mm) diagonal display, providing all the mobility you need. It has VGA, DisplayPort, Ethernet, and three USB 3.0 ports so you can be more efficient wherever you take it.

We got our hands on an HP EliteBook Folio 9470M and found its slimness and number of ports to be huge pluses, along with the user-removable battery that doesn’t jut out and the pointstick and touchpad with discrete mouse buttons. We also confirmed that the laptop remains fairly quiet when not connected to the AC adapter, and even when it’s plugged in with the fan running constantly, noise levels aren’t that noticeable, even in quiet environments. We concluded that if ports and a removable battery are high on your list, you should definitely consider this HP model.

In testing, we found that under normal usage you can probably get about seven and a half hours or more of battery life, and if that’s not enough, you can carry another battery and easily swap it out yourself. It has a generous touchpad and two sets of mouse buttons. Because you get distinct left and right mouse buttons, the design helps prevent issues with finger misplacement and accidental multitouch gesture activation. Additionally, if you’d rather use the pointstick for navigation, you’ll find it’s precise and easy to use.

You can keep the computer and your documents fully protected with a full set of EliteBook business software and antivirus protection, which comes preloaded, including HP Client Security and HP BIOS Vault. This model comes with the higher-end Windows 10 Pro 64-bit operating system and a 90-day limited parts-and-labor warranty.

The HP EliteBook Folio 9470M normally retails for $2,680 but for the next few days, you can score this refurbished model for $210 on Newegg, saving you $2,470 (92 percent).

Newegg

Looking for more great deals on tech and electronics? Check out our deals page to score some extra savings on our favorite gadgets.

We strive to help our readers find the best deals on quality products and services, and choose what we cover carefully and independently. If you find a better price for a product listed here, or want to suggest one of your own, email us at dealsteam@digitaltrends.com. Digital Trends may earn commission on products purchased through our links, which supports the work we do for our readers.



5 Oct

Researchers are measuring ocean health with drones, A.I., and whale snot


Why it matters to you

Gathering health data about ocean and Arctic creatures should help scientists better understand the impact of climate change.

How do you determine the health of the oceans and the Arctic? With some drones, artificial intelligence, and a bit of whale snot. On World Animal Day, Wednesday, October 4, Intel shared how its technology is being used as part of two successful wildlife research projects involving camera drones and artificial intelligence. The company recently partnered with a wildlife photographer, a conservationist, and two non-profit organizations to study the health of polar bears and whales, both animals that offer clues to the health of their ecosystems and the impact of climate change.

In the first study, wildlife photographer Ole Jorgen Liodden used Intel’s Falcon 8 drone system and a thermal camera to study the habits of a group of polar bears. With the drone’s aerial views, Liodden was able to monitor behavior like feeding, breeding, and migration. The data, Intel says, will help scientists understand how the animals are impacted by climate change, which offers a glimpse of the health of the Arctic ecosystem as a whole.

Traditionally, researchers would have studied the bears’ movements using helicopters, a method that’s both expensive and invasive to the bears, or boats, a scenario that’s dangerous for researchers because of the harsh conditions. With the drone, the polar bears did not appear to be affected by its noise or appearance, even when the UAV was flown between 50 and 100 meters away.

“Polar bears are a symbol of the Arctic. They are strong, intelligent animals,” Liodden said. “If they become extinct, there will be challenges with our entire ecosystem. Drone technology can hopefully help us get ahead of these challenges to better understand our world and preserve the earth’s environment.”

The second research project is offering a better understanding of the health of the oceans by looking at, yes, whale snot. For the accurately dubbed Project SnotBot, Intel partnered with Parley for the Oceans and Ocean Alliance to use artificial intelligence to monitor a whale’s health in real time. SnotBots first started studying the whales earlier this year.

After researchers collect the spout water from several different species of whales, including blue whales, right whales, gray whales, humpback whales and orcas, Intel’s AI algorithms analyze the sample for several different elements. The whale snot contains a wealth of information, including stress and pregnancy hormones, viruses, bacteria, toxins, and DNA. The machine learning technology, Intel says, allows researchers to access the data in real time in order to make more timely decisions.




5 Oct

Windows Mixed Reality hands-on review


Tech pundits like yours truly have been opining about VR “going mainstream” for years. Atari founder Nolan Bushnell thinks we’re going to live in The Matrix within our lifetimes. Elon Musk isn’t convinced we’re not already in it. And our editor in chief, Jeremy Kaplan, just wants to go back to Mars. Yet for all the breathless praise and predictions, VR very much remains a novelty for well-heeled software engineers and obsessive gamers.

Microsoft wants to break VR out of the basement and into the living room with Windows Mixed Reality, an ambitious gambit to build VR into the operating system you’re already using. Microsoft calls it “Mixed Reality” because it will accommodate both VR headsets and, eventually, AR headsets like the HoloLens.

A free Windows 10 update will deliver everyone the software they need on October 17. To dive in, though, you’ll need one of five Windows Mixed Reality headsets, the latest of which Microsoft just announced on October 3 at an event in San Francisco.

What’s this new environment like? We had a chance to explore the final version before it reaches consumers in two weeks. It’s the most intuitive, entertaining, and polished VR implementation yet.

Welcome to the Cliff House

Every interface is a metaphor. Command lines often mimic the typewriter, desktops look like their namesake, and virtual reality interfaces often look like a comfortable, expensive home. Microsoft’s Windows Mixed Reality, which places you in a clifftop mansion, follows similar interfaces from Oculus and HTC.

Nick Mokey/Digital Trends


The real-estate aspirations of every Seattle-dwelling Microsoft designer bleed through the so-called Cliff House, a sprawling, minimalist estate with an open floor plan, exposed rock surfaces, a tree-dotted perimeter, and a killer view of Mt. Rainier. You can scope it out from the patio, which joins a deck, theater and living room as the four main spaces you can hop between.


The home is too sprawling to navigate by walking in real space, so you’ll need to use one of the two Vive-like controllers to get around. As Jeremy Kaplan described in his earlier hands-on demo, you “hop” by pushing an analog joystick forward in the direction you want to go, casting a glowing circle on the floor. When you click the trigger, you’ll almost instantly warp there. A number of games already use this trick to avoid the nausea induced by sliding through VR spaces, and Microsoft has wisely decided against reinventing the wheel.

You can turn by looking around or, if you’d like to pull an about-face, swivel by pointing the joystick sideways. It moves in notchy increments to, again, prevent you from spewing your guts over the Cliff House’s pristine marble floors.
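The hop-and-snap-turn scheme can be sketched in a few lines. The hop distance and notch size below are illustrative guesses, not Microsoft’s actual settings:

```python
import math

# Minimal sketch of VR "hop" teleportation and notchy snap turning.
# Angles are in degrees; positions are (x, y) floor coordinates.
# Parameter values are illustrative, not Microsoft's real tuning.

SNAP_DEGREES = 30      # size of one notch when swiveling
HOP_DISTANCE = 3.0     # how far one hop warps you, in meters

def hop(position, facing_deg):
    """Warp the player HOP_DISTANCE meters in the facing direction,
    instantly, instead of sliding there (which induces nausea)."""
    x, y = position
    rad = math.radians(facing_deg)
    return (x + HOP_DISTANCE * math.cos(rad),
            y + HOP_DISTANCE * math.sin(rad))

def snap_turn(facing_deg, joystick_x):
    """Swivel in fixed increments rather than smooth rotation.
    Pushing the stick right turns clockwise, left turns counterclockwise."""
    if joystick_x > 0.5:
        return (facing_deg - SNAP_DEGREES) % 360
    if joystick_x < -0.5:
        return (facing_deg + SNAP_DEGREES) % 360
    return facing_deg
```

The key design idea in both functions is the same: discrete jumps, whether in position or in heading, keep the vestibular system from registering motion the body isn’t actually making.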

All this might sound complicated. Thankfully, Cortana explains the controls in a quick tutorial. The control scheme felt familiar to me in minutes, and by the end of my demo, I was jackrabbiting through the house like it was second nature.

It’s still a desktop

The Cliff House, while visually impressive, is mostly a tool for navigating to the good stuff – apps. There’s no shortage of them. Microsoft claims 20,000 Windows 10 apps will work in virtual reality.

Nick Mokey/Digital Trends

Traditional, two-dimensional apps appear like giant projections on the walls, which you can leap in front of and interact with. Each pistol-shaped controller projects a beam you can swing around like a mouse cursor. Just click the trigger to make a selection, or hold it down to manipulate objects. You can slide an app around on the wall, for instance, or drag the edges to make it bigger or smaller. All the familiar desktop paradigms still work.
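Under the hood, that beam is a simple ray-versus-plane intersection: wherever the controller’s ray meets the wall is where the cursor lands. A minimal sketch, assuming a flat wall at a fixed depth (all names and geometry here are illustrative):

```python
# Sketch of the "laser pointer" cursor: intersect the controller's
# ray with a flat wall to find where the beam lands. Vectors are
# (x, y, z); the wall is the plane z = wall_z. Illustrative only.

def beam_hit(origin, direction, wall_z):
    """Return the (x, y) point where the ray meets the wall plane,
    or None if the controller points away from the wall."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz <= 0:          # pointing away from (or parallel to) the wall
        return None
    t = (wall_z - oz) / dz          # how far along the ray the wall sits
    return (ox + t * dx, oy + t * dy)
```

Click events then map to mouse-button presses at that projected point, which is why all the familiar desktop interactions keep working.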

There are some rough edges, however – literally.


VR web browsing isn’t the best. Current VR headsets can’t display smooth, small text easily, so everything must be made jumbo-sized, like your grandpa’s calculator. You can view webpages, but the browser loads the mobile version of sites like Digital Trends and magnifies them until they look like posters. Even at that resolution, text has a jaggy, uneven look that gets worse toward the edge of your vision.

Entering text, like web addresses, means pulling down a keyboard the size of a (virtual) piano and punching in every letter with your laser cursor. It’s not as daunting as it sounds, but you wouldn’t want to punch in anything longer than a few words. Microsoft’s VR guru Alex Kipman claims he regularly works in VR and simply dictates to Cortana or touch types with a keyboard, but you’ll need a private office, or a lot of patience, to make it work.

Another dimension

Of course, sticking to 2D apps is like installing Windows and then only using the command line. You’re here to dive into 3D, right? Microsoft doesn’t disappoint, with an array of 3D experiences from games to 360-degree videos.

I indulged in a “HoloTour” of Machu Picchu that situated me in a glass-floored hot air balloon drifting above the ancient Incan city. This wasn’t just 360-degree video; Microsoft used stereoscopic cameras that give world depth. There are moments that’ll have you believe you’re sucking thin mountain air at 8,000 feet. Don’t cancel your tickets to Peru just yet, but this might be the Space Cadet 3D Pinball of Windows MR — a basic, yet impressive forebear of things to come.

Nick Mokey/Digital Trends

Microsoft has other tricks, too. A Hologram app lets you populate your virtual Cliff House with everything from lamps to rambunctious chameleons on bicycles, which proceed to pedal around like wind-up toys after you set them loose.

Where’s MS Paint? Missing, currently — but everything on Steam will soon work for Windows MR, so Tilt Brush isn’t out of reach.

That means games, too. Microsoft showed off Superhot, an existing VR hit, alongside Halo: Recruit, an upcoming VR-only entry in the Halo franchise. Swallow your high expectations now, because the demo I saw was merely a carnival-style shooting gallery plastered over with Halo graphics and narration. We’re told the full game will go beyond this training exercise, but if initial impressions are any indication, it’s not going to impress fans waiting for Halo 6.


We were more impressed by two new, unique games: Luna and Sky Worlds. The first is a trippy, calming puzzle game so gorgeous you almost don’t want to solve the simple puzzles, which ask you to move stars into constellation-like formations. Sky Worlds, meanwhile, mimics classic Warcraft with a 3D game board of medieval warriors clashing in front of you. You can spin it around like a lazy Susan to get a better look, and pluck cards from your left hand to lay down on the board, where they come to life and go into battle against the marauding horde.

If all this sounds like just a little bit too much work, you can also retreat to the home theater room, where a screen that simulates the size of a 300-inch TV awaits. Like the browser, it won’t be as good as your home TV, but if you don a headset on a flight – and a laptop can definitely power these things – I guarantee you’ll prefer it to watching on a seat-back screen.

A glimpse at the future

We’ve long had 3D demos, games, and apps, but Windows Mixed Reality is the first attempt at a VR environment you can actually live in. When you’re not busy blasting aliens, or exploring ancient ruins, or solving 3D puzzles, you can browse the internet, watch movies, and mess around in a space that puts – I imagine – even Bill Gates’ house to shame. You could conceivably spend a whole day in here. Instead of an isolated VR experience, Windows Mixed Reality is a virtual reality world.

I say virtual reality, not mixed reality, because, frankly, that’s PR spin right now. As we said the last time we saw Windows Mixed Reality, there’s nothing mixed about it. For now, we’re looking at Windows Virtual Reality. Microsoft has clearly settled on “mixed reality” as a conveniently ambiguous term that encompasses what HoloLens does and what VR headsets do. Someday, perhaps, the hardware for both will look similar – but not yet.

Windows Mixed Reality is the first version of Windows you can live in. Whether you’ll want to live with it is still another matter, and one that may take several months to know – but first impressions, at least, are promising.




5 Oct

The new Sonos One smart speaker supports Alexa and Google Assistant voice control


Why it matters to you

Sonos has one of the best multiroom streaming music systems on earth, and mixing in smart home capabilities will allow it to move well beyond sound.

Sonos today announced the all-new Sonos One smart speaker, the company’s first speaker with voice control. At launch, the Sonos One smart speaker will integrate with Amazon’s digital assistant, Alexa, but the speaker is designed to support multiple voice-control platforms — in 2018, the Sonos One will be updated to support Google Assistant as well.

Not to be left out, existing Sonos speaker owners will be able to control their systems via Alexa through any of Amazon’s Echo devices, thanks to a software update issued later today. AirPlay 2 support is coming for iOS device owners, and for those who have longed to control their Sonos speakers from individual music apps like they do with Spotify, Sonos revealed that in-app control is coming to Pandora and Tidal later this year, with iHeartRadio soon to follow.

Sonos One

The new Sonos One speaker looks just like the company’s existing Play:1 speaker, but a closer look reveals a key difference: a six-microphone array installed at the top of the speaker. Under the hood is additional hardware that allows the speaker to accept and process voice commands. Everything else that makes a Sonos speaker sound like a Sonos speaker is still baked right in.

What’s notable is that, in demonstrations, it wasn’t necessary for users to address the speaker by saying Alexa’s name to wake it up. Instead, the speaker was already awaiting voice commands, and executed very specific playlist playback and speaker assignment tasks with very simple language. Clearly, Sonos has developed a customized way of interacting with Alexa from the ground up.

You can pre-order a Sonos One smart speaker here.

Now you can be the control freak

Previously, Sonos operated a closed, very tightly controlled system. It wanted its hardware and software to work seamlessly together and always work perfectly for its customers. The company feels it has achieved that experience, but now Sonos has to look beyond its fortress walls and start letting others in.

During its announcement in New York today, Sonos revealed it has opened up its platform and started working with developers from over 100 different partners. While that might sound like inside baseball, it is the very same practice that has led Amazon’s Alexa to its rampant popularity today — Alexa is virtually everywhere now, and Sonos likely wants to enjoy the same sort of omnipresence.

By allowing other tech companies to integrate their software and hardware with Sonos products, Sonos itself will play a bigger role in smart home systems. Perhaps instead of a traditional doorbell, you’ll hear a clip of your favorite song coming from a Sonos speaker anytime someone presses the button at the door, for instance.

More Apple integration

Sonos knows which side its bread is buttered on and hasn’t forgotten the partner that brought it to where it is today. As announced today, AirPlay 2 is coming to Sonos speakers next year. This will allow Sonos speaker control from iOS devices and through any Siri interface. Potentially, this could mean Sonos speakers like the Sonos PlayBar and PlayBase could work wirelessly with the Apple TV 4K.

The Sonos One smart speaker is only the latest way Sonos is trying to make your listening experience smarter. The company recently unveiled its Wrensilva Sonos Edition Record Console, a turntable setup connected to two of Sonos’ Play:5 speakers. The console was designed after the sleek style of Sonos’ NYC flagship store and looks like a piece of furniture from The Jetsons. You can toggle between past and present thanks to the ability to switch between streaming music and the built-in turntable by simply turning the knob on the console.