Spotify Testing Voice Control Features in iOS App
A small group of users has begun noticing a new voice control feature appearing within the Spotify app for iOS, and The Verge this week got a chance to see how the music streaming service’s new voice commands work.
As expected, the voice control allows users to call up their favorite artists, songs, albums, and playlists without having to navigate around the app with taps. Voice control is initiated by first navigating to the magnifying glass icon at the center of the app’s bottom tab row.
Image via The Verge
In this area users can tap a microphone icon inside a white bubble, and Spotify will begin listening for their voice (once access to the iPhone’s microphone is allowed). Right now the commands are only available in English; once a command is given, Spotify begins playing the requested content within the app.
The Verge was mostly impressed after spending time asking Spotify to play various songs, comparing it favorably to Siri on HomePod: “It all happened as quickly as Siri does the same thing on a HomePod.” It should still be noted that Spotify’s solution, as of now, isn’t a fully conversational AI assistant, but simply voice controls.
I spent the past hour spitting queries at the microphone, with mostly accurate results. I queued up the Gold School and Top Hits Today playlists, artist radio stations for Radiohead and Wilco, and the magnificent strains of “Despacito.” It all happened as quickly as Siri does the same thing on a HomePod.
And I did encounter some errors. I created a playlist for songs I found on Spotify that I call “Spotifinds,” and when I searched for it the very confused app asked me if maybe I was searching for “Spotify memes.” (I am now!)
The voice commands are said to be limited to music only inside Spotify’s catalog, and queries like “Who are the Beatles?” were met with the app playing a Beatles playlist, “without telling you anything about the band.”
Spotify’s voice control test follows rumors that the company is planning to launch its first hardware product, expected to be a smart speaker of some kind that would compete with Sonos, Amazon’s Echo, and Apple’s HomePod. On the HomePod in particular, Spotify users face a lesser experience due to Apple’s decision to allow native streaming only for Apple Music.
If these new tests roll out to a wider audience, it could be an indication of the technology users can expect to see in a smart speaker built by Spotify. For now, The Verge noted that “the early version works well enough to make it a core part of my music listening.”
‘Shadow of the Tomb Raider’ arrives on September 14th, 2018
Square Enix has revealed that Lara Croft’s next chapter will be Shadow of the Tomb Raider, the third and “climactic finale” in her origin trilogy. A new video teaser (below) has Lara fighting soldiers in the jungle, and finishes with a dramatic shot of an eclipse over some kind of ziggurat or pyramid. More importantly, the end of the video shows that the game will be fully revealed on April 27th, 2018, and released on Xbox One, PlayStation 4, and PC on September 14th, 2018.
Experience Lara Croft’s defining moment as she becomes the Tomb Raider. Shadow of the Tomb Raider will be revealed April 27th. Available on Xbox One, PlayStation 4, and PC on September 14th, 2018. pic.twitter.com/jujMf47kJH
— Tomb Raider (@tombraider) March 15, 2018
There’s no news on which developer is building the game, but Kotaku reported earlier that it would be Eidos Montreal, the studio that created Deus Ex: Mankind Divided, rather than Crystal Dynamics, the California company behind the last two Tomb Raider games.
Via: Xbox News
Source: Tomb Raider
Wikipedia had no idea it would become a YouTube fact checker
YouTube CEO Susan Wojcicki said during a SXSW talk this week that the company would be making a more concerted effort to stem the spread of misinformation on its site. Specifically, YouTube plans to start adding “information cues,” including text boxes that link to third-party sources like Wikipedia, to videos covering hoaxes and conspiracy theories. But in a statement, the Wikimedia Foundation has now said that neither it nor Wikipedia was told about YouTube’s announcement ahead of time. “In this case, neither Wikipedia nor the Wikimedia Foundation are part of a formal partnership with YouTube,” the foundation said. “We were not given advance notice of this announcement.”
The @Wikimedia Foundation statement about the recent @YouTube announcement pic.twitter.com/PFDDNtNNjn
— Wikimedia (@Wikimedia) March 14, 2018
While there are plenty of conspiracy theories floating around YouTube, the platform took notice recently as some of its users began spreading false information about a Marjory Stoneman Douglas High School student. A video claiming he was a paid crisis actor rather than a high school student who had just survived a school shooting became the number one trending video on the site. YouTube took it down as well as some others claiming the same thing and the company said those types of videos qualified as harassment and were therefore in violation of its policies.
However, other conspiracy theories that don’t violate policies — like those claiming the moon landing was faked — can stay up. YouTube’s new information cues are geared toward pushing back against the spread of misinformation regarding events, like the moon landing, that are widely accepted to be true. YouTube said that the feature would be rolling out in the coming months, but there’s no word yet on whether Wikipedia’s lack of advance notice will have any effect on that timeline.
Source: Wikimedia Foundation
What to look for if you’re buying a TV for gaming
Most TV makers (and buyers’ guides, for that matter) assume you’re buying a set for the sake of enjoying movies or shows, and that’s understandable. But what if you’re more interested in playing Monster Hunter World than watching Murder on the Orient Express? Your criteria can sometimes be very different; the TV that works well for Netflix might be miserable for gaming. You don’t have to buy a specialized set, though. There are many TVs that fit the bill for console gaming, and it’s just a matter of shifting your expectations. Here’s what you’ll want to look for.
Low lag matters the most

Put image quality on the back burner. First and foremost, you should focus on buying a TV with low input latency — that is, one that minimizes the delay between output from your console and action taking place onscreen. High input lag won’t matter much for a puzzle or strategy title, but it can sour a fighting game or a first-person shooter, where a fraction of a second can make all the difference. That’s particularly true now that many sets support 4K resolution and high dynamic range (HDR), both of which can affect your performance in ways that you might not notice if you’re only watching Netflix.
You’re typically looking for a TV with input lag under 30ms at your desired resolution and color range while using a set’s game mode, which disables some image processing in the name of performance. That’s easy to achieve if you intend to play at 1080p or with HDR turned off, but beware: Some sets have unusually high lag when you invoke 4K, HDR or both, making them less than ideal if you own a 4K HDR-ready console like the PlayStation 4 Pro or Xbox One X. Most LG, Samsung and TCL sets performed well across the board as of this writing, but performance was decidedly mixed for brands like Sony and Vizio — some are fine, but others bog down the moment you start playing in HDR.
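To put that 30ms threshold in perspective, here is a quick back-of-the-envelope conversion (an illustration, not from any manufacturer’s spec sheet) showing how many rendered frames a given amount of input lag swallows at a typical refresh rate:

```python
# Back-of-the-envelope: how many rendered frames a given input lag
# represents at a given refresh rate. Illustrative only.

def lag_in_frames(lag_ms: float, fps: float = 60.0) -> float:
    """Number of frames of delay that lag_ms adds at the given fps."""
    frame_time_ms = 1000.0 / fps
    return lag_ms / frame_time_ms

# A 30 ms lag is under two frames at 60 fps; 100 ms is a sluggish six.
print(round(lag_in_frames(30.0), 2))   # 1.8
print(round(lag_in_frames(100.0), 2))  # 6.0
```

In a fighting game running at 60 frames per second, that sub-two-frame delay is roughly the margin the genre’s players care about.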
Don’t expect manufacturers to publish these figures, however. Remember how we said that TV makers emphasize video, not games? Instead, you’ll want to turn to a third-party site like RTings or DisplayLag. They conduct thorough lag tests and frequently offer side-by-side comparisons that help you gauge the performance relative to competing models.
OLED versus LCD? It depends.

Now that OLED TVs are relatively affordable, they’re tempting options if you can afford to splurge on a higher-end TV with gaming in mind. But should you? That depends on how and what you play.
OLED looks nicer as a general rule. It doesn’t need back or edge lighting like an LCD, so you’ll see true blacks instead of dark grays. Accordingly, it’s your display tech of choice if you play in a dimly lit basement or thrive on creepy horror titles — you do want that monster to surprise you when it jumps out of the shadows, after all. Also, the hardware-induced motion blur you sometimes see with LCDs is virtually nonexistent with OLED. While that’s not always a good thing (it can exaggerate the stuttering in low-frame-rate content), it’s great for preserving details in action-packed games. OLED can also be better for local multiplayer experiences like fighters and party games, since there’s virtually no color shifting or reduced brightness at wide viewing angles, as you sometimes see with LCDs.
Don’t run out to buy an OLED screen just yet, though. The technology can be prone to burn-in, which is when the display retains image elements if they stay onscreen for too long. While fears of burn-in are somewhat exaggerated with modern OLEDs (they’re more resistant to burn-in and frequently include preventive measures like pixel shifting), you do have to watch out for it in a way you don’t with LCDs. Do you regularly play strategy games where the onscreen graphics rarely change, or leave your games idling for long periods? You might want to skip OLED for now. It’s better for action titles, not to mention gamers who rarely leave a TV unattended.
There are a few other areas where LCDs can claim an edge, and we don’t just mean the historically lower prices. The TVs with the lowest lag and highest brightness still tend to use LCD panels. While OLEDs are quickly catching up (LG’s 2017 OLEDs had much faster response times than their 2016 ancestors, for example), you’ll likely want an LCD if you either insist on the lowest lag possible or play in a very sunny room. And then there’s the simple matter of size. OLED sets still tend to be large, living-room-oriented models, while there are plenty of small LCDs well-suited to gaming in your bedroom or dorm.
Consider future-proofing

It’s generally wise to future-proof your TV regardless of how you use it, but that’s particularly true with gaming. The console market frequently pushes the limits of TV tech, and it’s becoming difficult to predict. Who could have anticipated the PS4 Pro or Xbox One X in 2013? You’re likely committing to ownership for several years or more, and you don’t want to buy a TV that’s obsolete soon after you take it out of the box.
Ironically, 4K and HDR support are the easy parts. The odds are that any new gaming-ready TV you buy will support at least 4K, and likely HDR as well. TCL in particular has developed a reputation for making affordable game-ready 4K TVs. Don’t worry about looking for 4K if you’re opting for a smaller set, however. Most TVs under 40 inches don’t support it, and you likely wouldn’t notice the higher resolution at that size.
Rather, you’ll want to think about features that require a deeper dive into the spec sheets. While you don’t need to worry about refresh rates beyond 60Hz (your games aren’t likely to need anything higher), you may want broader HDR support if you can get it. The PS4 Pro and Xbox One X both rely on the HDR10 standard for their enhanced visuals, but Dolby Vision support (present on sets from brands like LG, TCL and Vizio) may be helpful if future consoles or firmware updates take advantage of it.
And don’t forget connectivity. You’ll want as many HDMI 2.0 ports as you can get (some vendors may only include one), and preferably more ports than you need right away. Multi-console households are increasingly commonplace — you shouldn’t need to swap cables or buy an HDMI switch just because there’s a new must-have system on the block. You may also want to consider a TV with Bluetooth audio, for that matter. While you might be happy to listen to speakers right now, wireless headphones could come in handy if you ever want to play while someone is sleeping. And don’t forget to consider other factors. You’re ideally buying a TV that can adapt to your life, and that means thinking about where, when and how you might play in the years ahead.
Google to test video ads on the Play store
Imagine that you’re browsing the Google Play store, looking for your next favorite app. You’re searching through top ten lists and recommended apps. And then all of a sudden, you’re greeted with a blaring, earsplitting video ad for an app. That could become a reality, according to Google. Today, the company announced that it is testing video ads for apps in its Play store. Judging from the provided images, it appears that these videos will not autoplay — but think of the horror if they did.
While Apple does allow for promoted listings in the App Store, it doesn’t currently have video ads (though it does have videos that autoplay without sound while browsing). While video ads may catch more people’s attention than other types of promotion, they’re also pretty hated. But as apps and companies are competing for eyeballs, it’s not surprising that Google would beta test a feature like this.
Google is also experimenting with playable and multi-option video ads. These will allow users to play an ad as if it were part of a game. Playing these ads can earn users special incentives, such as extra lives or in-game currency. And users are in control of when they play these ads, so the ads aren’t forced on them and don’t interrupt gameplay. It also allows developers to more fully monetize the games they create.
Clearly Google is (unsurprisingly) thinking of new and innovative ways to deliver ads. There’s a delicate balance between engaging advertising and intrusive advertising. It remains to be seen how these experiments go, but let’s hope that video ads don’t find a permanent home on the Google Play Store.
Microsoft forms a new gaming cloud division
With little fanfare, Microsoft has announced that it’s launching a new gaming cloud division, a move that would set the company up to enter the world of game streaming. As The Verge reports, it’s something Microsoft has been building toward for a while with the acquisition of small companies like PlayFab, which is focused on game development in the cloud. The company also has the basic structure of what a cloud streaming offering could look like with Xbox Game Pass, its subscription service that gives players access to a large library of games for $10 a month.
While early attempts at game streaming have crashed and burned, like OnLive, it’s still an intriguing market. Sony’s PlayStation Now service is still alive and kicking on the PS4 and PC, and NVIDIA is doubling down on game streaming with GeForce Now. The latter is still just a beta test though — even the graphics card giant hasn’t figured out how to charge for high-end game streaming. Microsoft isn’t saying much about what the new gaming cloud division is working on yet, but its leader, Kareem Choudhry, hints that they’re exploring ways to bring content to gamers on any device.
Source: The Verge
Samsung’s Galaxy S10 Rumored to Feature 3D Facial Recognition Like Face ID on iPhone X
Israeli startup Mantis Vision is reportedly working with camera module firm Namuga to develop 3D sensing camera solutions for Samsung’s tentatively named Galaxy S10, according to Korean news outlet The Bell.
The technology would pave the way for Samsung to implement a 3D facial recognition system on the Galaxy S10, similar to Face ID on the iPhone X. The new Galaxy S9 and Galaxy S9 Plus, which officially launch tomorrow, still rely on a less secure 2D facial recognition system paired with an iris scanner.
Last year, videos surfaced that showed the same 2D solution on the Galaxy S8 could be unlocked by waving a photo of the registered user’s face in front of the camera. Samsung even confirmed that its facial recognition solution cannot be used to authenticate access to Samsung Pay or its Secure Folder feature.
By comparison, Face ID uses a structured-light technique that projects a pattern of 30,000 laser dots onto a user’s face and measures the distortion to generate an accurate 3D image for authentication. Face ID has been duped with sophisticated masks, but not with a simple photo of a person.
KGI Securities analyst Ming-Chi Kuo recently opined that it would take Android smartphone makers up to two and a half years to catch up with Face ID. Apple released the iPhone X last November, while the Galaxy S10 will likely be released around March or April of 2019, a roughly one-and-a-half year span.
It’s a given that Samsung will catch up with Face ID at some point, but it remains to be seen if its 3D facial recognition system can match the iPhone X’s user experience. Around this time next year, we should find out.
Apple Pay Promo Kicks Off Spring Break With Free Song Credits From TouchTunes
The latest Apple Pay promotion has launched today, and this time Apple is preparing users for Spring Break. When using Apple Pay in the TouchTunes jukebox iOS app [Direct Link], users can get three free song credits. The offer is valid through 11:59 p.m. PT on March 27, 2018 in the United States and Canada only.
TouchTunes operates a network of digital jukeboxes in 65,000 bars, restaurants, and other social venues across North America, which users of the app can control from their iPhone. Users amass TouchTunes “credits” and spend them when choosing which song they want to play next, creating a community-built playlist of songs with anyone else using TouchTunes at the same location.
Now, with the new promotion, users can purchase in-app credits using Apple Pay, and with the three free credits they should be able to play one song at a bar for free (TouchTunes notes that the number of credits required per song may vary). Other springtime apps promoted by Apple include clothing brands lululemon, J.Crew, and Zara. Apple also encourages users to “get back to the beach quicker” when using Apple Pay in apps for Reef, Ray-Ban, and Abercrombie & Fitch.
The TouchTunes promo follows two weeks after Apple celebrated the Oscars with a discount on two or more movie tickets through Fandango.
A self-driving car in every driveway? Solid-state lidar is the key
Ever noticed how self-driving cars end up wearing some weird hats?
The earliest self-driving military trucks looked like they had spinning coffee cans up top. Carnegie Mellon’s iconic self-driving Hummer was topped by a giant ping-pong ball. Waymo’s smiley little prototype wears a siren-shaped dome that makes it look like the world’s most adorable police car.
Inside all three are about a dozen lasers, shooting through telescope-grade optics, slinging around hundreds of times per minute, to generate 300,000 data points per second. It’s called lidar, and without it, these cars would all be blind. It’s also one of the biggest reasons you don’t have a self-driving car in your driveway right now. At around $75,000, a single lidar can easily cost more than the car it rides on. And that’s just one ingredient in the self-driving soup.
But a new technology is popping up everywhere this year: solid-state lidar. With no moving parts, it promises to give self-driving cars sharper, better vision, at a fraction the cost of old-school, electromechanical systems. Solid-state lidar will pave the way for the first self-driving cars you can actually afford. Here’s how it works — and what’s just around the corner.
How lidar works
The term “lidar” comes from mashing together “light” and “radar,” which also makes a handy way of understanding it because … well, it’s radar, but with light.
A refresher from high-school physics: Radar bounces a pulse of radio waves off an object, like a plane, to determine how far away it is, based on how long it takes for the pulse to bounce back. Lidar uses a pulse of light from a laser to do the same thing.
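The time-of-flight arithmetic shared by radar and lidar is simple enough to sketch in a few lines (an illustrative example only, not any vendor’s implementation):

```python
# Time-of-flight ranging, the principle shared by radar and lidar.
# Illustrative sketch only, not any vendor's implementation.

SPEED_OF_LIGHT_M_S = 299_792_458.0  # metres per second

def distance_m(round_trip_s: float) -> float:
    """Distance to the target: the pulse travels out and back,
    so halve the total flight time before multiplying by c."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(round(distance_m(200e-9), 1))  # 30.0
```

The tight timescales are the point: at automotive ranges the round trip takes mere nanoseconds, which is why lidar needs fast, precise detectors.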
Take enough of those lasers, spin them in a circle, and you end up with a three-dimensional “point cloud” of the world around you. You’ve probably seen these rainbow-colored dots depicting cityscapes, mountains, and even Thom Yorke’s singing, disembodied head in Radiohead’s House of Cards music video. That 360-degree 3D map is like a Rosetta Stone to a self-driving car, allowing it to decipher the world around it.
“You need a combination of cameras, radar, and lidar in order to create a self-driving system,” explains Jada Tapley, VP of Advanced Engineering at Aptiv. She would know. Aptiv built the autonomous Lyft cars that ferried attendees around Las Vegas for CES 2018. In the worst gridlock the city sees all year. And monsoon-like conditions. With zero accidents.
Those cars had nine lidar units, ten radar units, and four cameras. The combination of all three allows a car to drive itself, but lidar performs the crucial function engineers call localization. “It’s important for the vehicle to be able to identify with a very high degree of accuracy where it is on the map,” Tapley explains. “We use our lidar to do that.”
While GPS can narrow down your location to a circle about 16 feet in diameter, lidar can do it within a circle four inches in diameter. That’s better than a lot of drivers can manage. Tapley remembers one group of wide-eyed journalists wincing as Aptiv’s autonomous car breezed past a parked bus in Las Vegas. They didn’t need to — because the car knew there was plenty of room. “As humans we get intimidated, especially by big, big vehicles like buses or semis. So we tend to kind of edge away from them,” she explains. “But an autonomous vehicle doesn’t need to do that.”
Autonomous car levels explained
International engineering organizations have settled on six levels of automation to talk about the evolution we’ll see between dumb cars and complete autonomy.
Level 0: No autonomy
This is the car you already probably own. Stop texting! You need to do everything.
Level 1: Hands on
Your car will help you in some scenarios, like adaptive cruise control slowing you down on the highway when the car ahead of you does.
Level 2: Hands off
Your car can drive just like you do — under just the right circumstances, like Tesla Autopilot on a divided, marked highway.
Level 3: Eyes off
Go ahead and send that text; this car won’t crash if it doesn’t have your attention. But you’ll still need to grab the wheel if things get complicated, like with Audi Traffic Jam Pilot.
Level 4: Mind off
Go to sleep; your car is under control. But you still need to sit behind a wheel juuust in case.
Level 5: Total autonomy
Your car has no steering wheel, because it can drive better than you can in all scenarios. Go sit in the back, feeble human.
While cameras can identify objects, and radar can tell how far away they are, lidar can achieve both with a degree of precision neither can touch. “Imagine that there’s an 18-wheeler tire tread in the middle of the road,” Tapley says. “Radar will not detect that. Lidar will.”
That’s why a Tesla Model S, which has both cameras and radar, but no lidar, must have a driver prepared to take the wheel at any time. It’s considered a level 2 autonomous vehicle. Almost all car autonomy experts — with the glaring exception of Elon Musk — believe lidar is necessary to achieve true “sleep behind the wheel” level 4 autonomy.
And that’s a tremendous problem if you or I ever hope to own one. The silver Velodyne HDL-64E you see atop many test cars costs $75,000. Even the company’s “budget” Puck model runs $8,000. And this is not a part you want to skimp on. Imagine your car windows going black at 80 mph, and you have a pretty good idea of how losing lidar would look to the computer in a self-driving car.
Like all technology, lidar has become cheaper over time, but the precision required and massive spinning parts in electromechanical lidar mean it can’t become cheaper, smaller, and better every year the same way the processor in your phone or computer does.
But what if … you could make lidar from only silicon? Take away all the moving pieces, and the future starts to look a lot brighter.
Welcome to the solid state
Solid-state electronics, which by definition have no moving pieces, have changed the way we do everything from keeping track of time to listening to music. Remember how portable CD players used to skip? That’s what happens when you rely on a laser to read microscopic grooves in a spinning disc. But you can put your smartphone in a paint shaker and still listen to Kanye, because the music is stored on solid-state memory chips that don’t mind getting shaken up. Lidar is heading in the same direction.
Like portable CD players, spinning electromechanical lidar is not ideal. “Number one, they’re big,” says Tapley. “Number two, they’re expensive. Solid-state lidar allows us to get smaller, package better in the vehicles, and reduce costs.”
How do you move light around without moving a lens or a mirror? How does lidar get to solid state? Engineers have devised some downright genius ways.
The first is called flash lidar. “Flash is basically where you have a light source and that light source floods the entire field of view one time using a pulse,” Tapley explains. “A time-of-flight imager receives that light and is able to paint the image of what it sees.” Think of it as a camera that sees distance instead of color.
But that simplicity comes with some snags. To see very far, you need a powerful burst of light, which makes it more expensive. And the light can’t be so powerful that it damages human retinas, which limits range. One workaround is to blast light at a specific, invisible wavelength that doesn’t affect human eyes. Perfect! Until you bump into yet another catch: Inexpensive silicon imagers won’t “read” blasts of light in the eye-safe spectrum. You need expensive gallium-arsenide imagers, which can boost the cost of these systems as high as $200,000.
“You have to have an extremely powerful light source, or an extremely sensitive receiver, and if you don’t have those things then you have this limited range,” Tapley says. It might be perfect for government planes conducting detailed aerial surveys, but flash lidar probably isn’t fit for your Corolla.
Set phasers to scan
Fortunately, there’s another way. Louay Eldada has been cracking away at the problem since he got his PhD in optoelectronics in the early ’90s, and today he runs Quanergy, one of the preeminent players in solid-state lidar. Eldada and his team derived a different approach by looking at how radar works. It is, after all, a close cousin of lidar. As it turns out, radar used to spin just like lidar, until scientists developed a brilliant workaround known as the phased array.
A phased array can broadcast radio waves in any direction — without spinning in circles — by using a microscopic array of individual antennas synced up in a specific way. By controlling the timing — or phase — between each antenna broadcasting its signal, engineers can “steer” one cohesive signal in a specific direction.
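The phase relationship described above follows the textbook uniform-linear-array formula: each element is delayed by 2π · d · sin(θ) / λ relative to its neighbor to point the beam θ degrees off boresight. A minimal sketch of the general technique (the element count, spacing, and wavelength below are made-up values, not Quanergy’s design):

```python
import math

# Textbook uniform-linear-array phase steering: delay each element by
# 2*pi*d*sin(theta)/lambda relative to its neighbour so the combined
# wavefront points theta degrees off boresight. General illustration
# only; these parameters are hypothetical, not any real product's.

def element_phases(n: int, spacing_m: float,
                   wavelength_m: float, steer_deg: float) -> list:
    """Per-element phase (radians) for a uniform linear array."""
    delta = (2.0 * math.pi * spacing_m
             * math.sin(math.radians(steer_deg)) / wavelength_m)
    return [i * delta for i in range(n)]

# Half-wavelength spacing, steered 30 degrees off axis:
# successive elements sit a quarter-cycle (pi/2) apart.
phases = element_phases(4, spacing_m=0.5e-6, wavelength_m=1.0e-6,
                        steer_deg=30.0)
```

Changing the steering angle is just a recomputation of the phase offsets, which is why a phased array can redirect its beam with no moving parts.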
Phased arrays have been in use in radar since the 1950s. But Eldada and his team figured out how to use the same technique with light. “We have a large number, typically a million, optical antenna elements,” Eldada explains. “Based on their phased relationship amongst each other, they form a radiation pattern, or spot, that has a certain size and is pointed in a certain direction.”
By intelligently timing the precise flash of a million individual emitters, Quanergy can “steer” light using only silicon. “The interference effect determines in which direction the light goes, not a moving mirror or lens,” Eldada explains.
That means the nest of optics and motors inside a $75,000 lidar bucket disappears, and you’re left with only chips. Right now, Quanergy uses several chips and sells the package for $900, but future versions will become a single chip. “At that point, our sales price will become under $100,” Eldada predicts.
Solid state isn’t just cheaper, it’s better. “Being able to effectively change the shape of the lens to any shape you want allows you to zoom in and zoom out,” Eldada explains. “So imagine you’re looking at an object in your lane, and you want to define in high resolution what it is. You reduce the spot size and determine it’s a deer, it’s a tire, it’s a mattress that fell off a truck. At the same time, you can hop between doing that and looking at the big scene.” This “hopping” could happen multiple times per second without a driver even knowing, as an algorithm calls the shots and determines what deserves a closer look.
Solid-state devices also last longer. Electromechanical lidar can run for between 1,000 and 2,000 hours before failure. With the average American spending 293 hours in a car per year, most of us would end up replacing our lidar before our tires. Quanergy claims its solid-state lidar will run for 100,000 hours — more than most cars will ever drive.
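Those lifetime figures are easy to sanity-check against the 293-hours-per-year average quoted above (a rough illustration; real-world wear depends on duty cycle, not just drive time):

```python
# Sanity-check the lidar lifetime figures: rated hours divided by the
# 293 hours per year the average American spends in a car.

AVG_CAR_HOURS_PER_YEAR = 293.0

def years_to_failure(rated_hours: float) -> float:
    """Years of average driving before the rated lifetime is exhausted."""
    return rated_hours / AVG_CAR_HOURS_PER_YEAR

print(round(years_to_failure(1_000.0), 1))  # 3.4  (low-end electromechanical)
print(round(years_to_failure(2_000.0), 1))  # 6.8  (high-end electromechanical)
print(round(years_to_failure(100_000.0)))   # 341  (claimed solid-state lifetime)
```

Roughly three to seven years for a spinning unit versus centuries for solid state, which is why the lifetime claim matters as much as the price.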
Mirror mirror, on the wall
Flash and optical phased arrays are really the only true solid-state lidar. But there’s a third new way to do lidar, the red-headed stepchild known as microelectromechanical mirrors — or MEMS mirrors.
As the “mechanical” in “microelectromechanical” suggests, there are moving parts, so MEMS mirrors aren’t truly solid-state. But they’re also so tiny that the technology still represents an improvement over large-scale electromechanical lidar.
“The architecture is very simple,” Tapley explains. “You have one laser, one mirror.” The laser fires into the very tiny mirror, which spins like a top, providing the rotation that conventional lidar gets from spinning an entire bucket around.
It’s simple enough, until you want to move the laser up and down in addition to spinning in circles. Then you need to “cascade” it off another mirror, which spins on another axis. Or you can shoot multiple lasers at one mirror. Either way, the cost and complexity begin to build.
“Making sure that everything is aligned perfectly creates challenges,” Tapley explains. “If you’ve got this laser in a mirror that’s rotating on both axes, it can sometimes be susceptible to shock and vibrations.” You know, like the type you might find in a car, bouncing down the road at 70 mph.
Eldada points to other issues. “Micro MEMS mirrors drift out of alignment. They don’t maintain calibration. When there are big changes in temperature, they need to be recalibrated over the lifetime.”
“If the mirrors get stuck, you have an eye safety issue,” he points out. And sunlight can wreak its own havoc. “You have big issues when you’re facing the sun,” Eldada says. “The sunlight is going to hit it, the light is going to get reflected inside the lidar, and saturate the detectors, and drown out the signal.”
With so many differences between all three types of next-gen lidar, Aptiv is hedging its bets by working with – and investing in – all of them. “Each have different tradeoffs relative to field of view, range, and resolution,” Tapley explains. “Depending on where that lidar is positioned on the vehicle, that will dictate which one of those needs to be the most important.”
Side-facing lidar, for instance, might not need the range that front-facing lidar does. By mixing and matching between the variety, Aptiv hopes to harness the best of all worlds.
So where’s my self-driving car?
In 1999, Jaguar introduced the first radar-based cruise control in the XK, a coupe that sold for about $100,000 in today’s dollars. At the time, the sensors were so expensive that as Tapley tells it, “People joked around that you got a free Jag with every radar purchase.”
Today, you can get the same feature in an $18,000 Corolla. “We’re kind of on that same learning curve with lidar,” she says. “Until solid state becomes mature and enters mass production, these vehicles are going to be pretty cost prohibitive for an average consumer to own.”
Quanergy’s $900 solid-state lidar sensor is helping make that happen. The upcoming Fisker EMotion will be the first vehicle to hit the streets with those sensors inside — five of them — when it arrives in 2019. No bigger than the battery pack for a cordless drill, they’re buried in vents, hidden behind chrome grilles, and totally invisible unless you’re looking for them. A long way from the spinning buckets of yesterday.
Eldada believes we’ll see level 4 autonomous cars from a notoriously “aggressive” American manufacturer as early as 2020. “2021, 2022, you will see several more. 2023 is the big year. Most automakers will have self-driving cars.”
While the Fisker will be priced at $130,000, it might end up looking a lot like the Jaguar XK of 1999: An expensive harbinger of technology to come. Ultimately, solid-state lidar means that self-driving cars won’t just be robochauffeurs for the wealthy. “It means that everyone can have a self-driving car,” Eldada says. “It’s not only for the Mercedes S-Class and BMW 7 Series. This means that people driving Toyota Corollas will also have self-driving cars.”
And as fundamental as that shift may sound, cars may be just the beginning for solid-state lidar. “You will see it in devices, you will see it in wearables, in the helmets of firefighters and soldiers. The applications are almost limitless.”
Honor View 10 vs. Samsung Galaxy S9: What do you get for the extra $200?

Most buyers will be happy with either of these phones — but that doesn’t mean they’re evenly matched.
Okay, this is a weird comparison, I know, but hear me out. The Honor View 10 stands toe-to-toe with the OnePlus 5T as one of the best “budget flagships” you can buy, with great dual cameras, sturdy build quality, and a compellingly low price of $499.
On the other hand, the Galaxy S9 is the new hotness, packing every feature under the sun into a package so beautiful and well-machined that you almost don’t want to put it down. But with a starting price of $719.99, just how much more are you really getting for the $200+ premium over the View 10?
What the Honor View 10 does better

It’s pretty obvious that the cheaper phone of the two will come with some compromises, but the View 10 still offers a ton of bang for your buck. You get the modern design we’ve come to expect of a 2018 flagship, complete with slim bezels and an 18:9 aspect ratio. That modernization doesn’t mean Honor has completely thrown out old trends, though — it still has a headphone jack!
You don’t have to spend over $700 to get a premium smartphone.
Despite a relatively low price, you still get the very best processor Huawei has to offer — the Kirin 970, complete with the same AI enhancements found on the more expensive Mate 10 Pro. In addition, the View 10 is backed by an impressive 6GB of RAM, and you probably won’t run out of space, given the 128GB of internal storage and microSD slot for expandability.
Per usual, the biggest differentiator between phones is the software, and the View 10’s software definitely isn’t for everyone. It’s still Android Oreo, which is great news, but the EMUI overlay is a far cry from stock Android, with plenty of OEM customizations throughout the interface. The same can be said of The Artist Formerly Known As TouchWiz residing on the S9, but EMUI just feels … less useful, with more redundant apps that offer little extra functionality.
One area where the View 10 easily bests the Galaxy S9 is battery life, where its massive 3750mAh battery — combined with power-efficient software — far outlasts the S9’s measly 3000mAh cell (and even outmatches the larger 3500mAh battery in the S9+).
What the Galaxy S9 does better

Wireless charging, water resistance, stereo speakers, dual apertures, and iris scanning — all things the Honor View 10 simply doesn’t have, and the list doesn’t end there. The Galaxy S9 tops just about every other phone on the market when it comes to feature lists, and this matchup is no exception.
It’s hard to match Samsung’s level of build quality and feature set.
The Galaxy S9 may be more iterative than some would have liked, riding on the same general design as the Galaxy S8 before it, but that just means that Samsung has had an entire year to refine its already-great hardware. The curved glass and metal pairing is as gorgeous as ever, and the fingerprint sensor is finally in a sensible place underneath the camera module.
Speaking of cameras, the Galaxy S9’s camera absolutely clobbers the View 10, even with just one lens. That’s not to say that the View 10’s dual camera module is bad — it’s pretty great, actually — but the S9 just takes stunningly good photos, and its dual aperture system is unparalleled. It also benefits from OIS for added stability, further improving its video performance and night shots (two areas where the View 10 falls short).
Which one is right for you?
If you have the extra money, the Galaxy S9 is still the better buy. Its software is better thought out, it takes better photos, and it boasts a significantly longer list of hardware features. What’s more, you can finance it through your carrier — and speaking of carriers, it’s your only choice of the two if you’re on a CDMA network like Sprint or Verizon.
Still, if you don’t mind more reserved hardware, the Honor View 10 has a lot to offer for significantly less money. You’ll still get great specs with a top-tier processor, excellent dual cameras, and better battery life than Samsung can match. The design isn’t quite as futuristic, but it’s still well-built, and for the money it’s hard to complain.
Are you going all out with the Galaxy S9, or have the View 10’s lower price and competitive specs won you over? Let us know in the comments below!