Grab this Google Pixel battery case for just $37 right now!
Looking for a case that can both protect and charge your Google Pixel? If so, check out this new option from BEAOK, which builds a 4000mAh battery into a fully protective case for just $37.49 with coupon code 55BQC5WR. Battery cases aren’t usually the prettiest, and this one is no exception. It’s on the thicker side, so you’ll definitely feel it on your phone, but if you need the extra battery life you can probably look past that. The case is easy to install and remove, so you can always put it on only when you know a long day is coming up or you’ll be away from a charger for a long stretch.

Normally this case would set you back $49.99, but coupon code 55BQC5WR saves you $12.50 on the purchase. If you have lots of long days and would rather have extra battery power built right into your case, this is a great option to consider.
See at Amazon
A vast majority of Android users still don’t have the latest emoji
Google’s inclusive emojis mean nothing if only a small percentage of its users feel represented.

Software updates have always been a major headache for Android users. Frankly, it’s a wonder so many of us have stuck around this long, given that fragmentation remains a major issue on our beloved mobile platform. The lack of consistent software updates means too many users go without features that their Android brethren use on the regular, like the latest emoji.
You might be wondering: why does it matter if I have the latest emoji? Well, think of it this way: just because you don’t eat croissants doesn’t mean there aren’t others out there who want their croissant-loving ways recognized as the norm. As Emojipedia rightly points out, even though Google was the first to introduce the more diverse emoji of Unicode 9, including several representing working women, it has no bragging rights, because only 4 percent of Android users can actually use the new cast of characters. (The data is based on Emojipedia’s internal findings.)

Apple, on the other hand, is doing a better job of making its users feel included, for the simple reason that it controls software updates and has pushed the new emoji out to a whopping 84 percent of its users. That’s a major chunk of people who have access to emoji that represent them! As Emojipedia pleads:
A phone that can’t see the 12+ months of new emojis is crippled as a communication device.
Sure, you could use a third-party keyboard app or a messaging platform like WhatsApp to get the new emoji on your yet-to-be-updated Android device, but that doesn’t help everyone else. Some 96 percent of Android users still can’t see the new emoji offered on the platform, and that’s a big chunk of people who aren’t seeing themselves represented.
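There’s a developer-level workaround, too, at least for avoiding broken “tofu” boxes: an app can ask the system whether it can render a given emoji before showing one. Here’s a minimal Kotlin sketch of that check (the helper name is mine, and Paint.hasGlyph requires API level 23 or higher):

```kotlin
import android.graphics.Paint
import android.os.Build

// Ask the system font whether it can actually render a given emoji.
// Paint.hasGlyph() exists on API 23+; on older devices a common fallback
// is comparing the string's measured width against a known missing glyph.
fun deviceSupports(emoji: String): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && Paint().hasGlyph(emoji)

// Usage: U+1F937 ("person shrugging") shipped with Unicode 9.
// val label = if (deviceSupports("\uD83E\uDD37")) "\uD83E\uDD37" else ":shrug:"
```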
I’ve been racking my brain over how to fix this problem, but I have no answer for you at the moment. We’re still fighting for timely software updates on the Android platform. Emojipedia suggests putting your money where your mouth is, but that’s not going to happen here at Android Central, as we’re all planning to keep wielding Android devices. The best we can do right now is keep harping on Google about fragmentation, because those who are left behind aren’t just missing out on new software features and security updates — they’re also missing out on feeling represented by their mobile platform.
An AI camera failed to capture the magic of CES
Relonch wanted me to fall in love with photography again this CES. But its camera is so radically different from everything I’ve used before that I struggled to put my faith in its promise.
The company is based in Palo Alto, California, and its pitch is simple, if very Silicon Valley: a camera as a service. You hand in your old shooter (yes, really) and in return you get the 291, a unique leather-bound, DSLR-shaped camera. It has an APS-C sensor, a fixed 45mm-equivalent lens, an electronic viewfinder, a shutter key and, importantly, a 4G radio inside.
The 291 uses that radio to send raw files to Relonch’s servers. Once they’re there, an AI scans through your shots and picks the best ones. To do this, it identifies the individual elements in the photos using computer vision, and judges your composition. It’ll then process the raws, individually lighting and coloring elements before applying its own crop and sending them back as JPEGs.
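Relonch hasn’t said how any of that works under the hood, but the shape of the culling step is easy to imagine. Below is a toy Kotlin sketch that ranks a batch of frames by one crude quality proxy (sharpness, measured as the variance of a Laplacian filter) and keeps the top few; the real service presumably layers learned composition and subject models on top of something like this.

```kotlin
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO
import kotlin.math.pow

// Toy stand-in for the "pick the best ones" step: score each frame by
// sharpness (variance of a 4-neighbor Laplacian) and keep the top N.
fun sharpness(img: BufferedImage): Double {
    val w = img.width
    val h = img.height
    // Grayscale luminance for each pixel.
    val gray = Array(h) { y ->
        DoubleArray(w) { x ->
            val rgb = img.getRGB(x, y)
            0.299 * ((rgb shr 16) and 0xFF) +
            0.587 * ((rgb shr 8) and 0xFF) +
            0.114 * (rgb and 0xFF)
        }
    }
    // Variance of the Laplacian response: higher means sharper edges.
    var sum = 0.0
    var sumSq = 0.0
    var n = 0
    for (y in 1 until h - 1) {
        for (x in 1 until w - 1) {
            val lap = gray[y][x - 1] + gray[y][x + 1] +
                      gray[y - 1][x] + gray[y + 1][x] - 4 * gray[y][x]
            sum += lap
            sumSq += lap * lap
            n++
        }
    }
    val mean = sum / n
    return sumSq / n - mean.pow(2)
}

// Keep the 'best' photos of a batch, by this crude proxy.
fun cull(files: List<File>, keep: Int): List<File> =
    files.map { it to sharpness(ImageIO.read(it)) }
        .sortedByDescending { it.second }
        .take(keep)
        .map { it.first }
```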
You receive a batch of photos each morning, which is key to Relonch’s business model. The idea is you choose the photos you love as part of your morning ritual, which reminds you to take your camera out again and keep snapping.
The 291 itself is free. The photos are sent to you as small, watermarked files, and you have the option to keep them, which grants access to the full-size file (as large as 20 megapixels, depending on how the AI has decided to crop it). Each photo you keep costs $1, and you start your account with the market value of the camera you handed in as credit. Oh, and if you decide you want a photo at a later date, you can always go back and buy it. Likewise, if you don’t like the 291, you can hand it back in exchange for your old camera.
That financial proposition is what intrigued me most. Over the past five years I’ve spent $3,000 or so on various cameras and lenses. I’ve probably, outside of work, processed and kept maybe 300 photos. (Of course, there are another 30,000 or so that are gathering digital dust on various SD cards and hard drives.)
I picked up my 291 the day before CES was set to begin, briefly meeting with Relonch co-founder Yuriy Motin, who explained the process. Afterward, I headed out for a meal with a few colleagues at a pretty bad Las Vegas eatery, and snapped away.
Using the 291 might feel natural to those used to a point-and-shoot or smartphone camera, but for someone accustomed to aperture rings and settings dials, it was disorienting. There’s no way to zoom or manually focus, and no way to check the ISO or aperture. This is intentional, of course, as Relonch believes its AI can balance the photos and doesn’t want people distracted by such things. Coupled with the lack of a screen, though, it introduces an uncertainty to proceedings.
The following morning, I received 15 photos via the Relonch site and went through them one by one, deciding (in Relonch’s parlance) which were “Remarkable” and “Not Remarkable.” I ended up with eight, at a value of $8. Were I not using the 291 for work, only a couple of them would probably have made the grade, but I was conscious of my need to have a decent bank of photos by the end of the week.
The photos were generally okay. I was impressed with the way the AI processed a chance meeting of Engadget and Verge editors, which was taken in darkness but is exposed very well. Yes, it’s very noisy, and it’s not going to win any awards, but it was a photo I wanted (I’ve worked for both sites), and I’m glad it panned out that way. Others taken at and around the restaurant were generally good, although I didn’t appreciate the weird square crop on the photo of Nate (Ingraham) and Jess (Conditt).

My first day with the Relonch 291 (the crops were picked by the AI).
My plan was to continue to shoot for a couple more days and document the results. Like all best-laid plans, it quickly went awry, but that wasn’t entirely Relonch’s fault.
I spent most of day two in our trailer, frantically writing up an article with a tight deadline. As such, I’d taken only a couple of photos by 3PM. As I jumped on a bus to Faraday Future’s FF91 launch event, that thought was weighing heavily on my mind. So much so that I ended up snapping a shot of the Vegas sunset through the bus’s dirty, UV-filtered window, which is not something I’d usually do. (I am very much about portraits rather than landscapes.)
By the time I reached the venue (a tent across town from our CES trailer), the sun had set and I was in panic mode. I quickly grabbed an exterior shot and headed inside to find interesting subjects, all while still attempting to do my regular job.

The AI processing isn’t very consistent.
Ironically, my job that night was to support our auto expert Roberto Baldwin by taking photos of Faraday’s new car, but I couldn’t use the 291 for that because I had to publish the images the same day. Not for the last time this week, I had to bring a second camera along. Once I was done shooting for work, I took a few photos of some prototypes, a couple of shots of my colleague Robbie, and then some of the maelstrom that followed the event.
The next morning, I received almost nothing but disappointment: There were precious few photos worth keeping from the event. Some of that can be blamed on lighting conditions, or a last-second shake of the hand, but I’d captured some nice images at the event for Engadget, and was surprised at the lack of quality photos from the Relonch.
A lot of the images were just… unnatural looking. Like someone had used an aggressive Instagram filter, or a cheap “HDR” application. I chose to keep two photos of Robbie (above), just to highlight the difference between the AI’s processing choices. Other images were better, and the sunset shot actually came out okay, if a little too saturated for my tastes.

On day two, I found the AI processing over-aggressive.
On my final day with the 291, CES struck again. I attended a briefing with Razer for the Ariana projector and Valerie laptop, which I didn’t want to take the camera to. The meeting was under a timed non-disclosure agreement, and, although I’m probably over-cautious, I was worried about randomly sending photos of secret prototypes across the internet. After the briefing, I sat inside the trailer processing the images I took for work and writing up other articles.
Then, 3PM came again. I’d taken no photos with the 291, and needed to rush across town to an ASUS media suite, where I’d be taking photos of the company’s new Zenfone AR. Again, I brought my personal camera as I’d need the images quickly, but once I was done with work, I hung around the periphery for 10 minutes, waiting for the video shoot to wrap. I used the opportunity to take a few snaps with the 291, before traveling to an NVIDIA keynote that involved writing and taking photos of NVIDIA things, and then to a live show where I met with people and also didn’t take any photos. I woke the next day to four thoroughly unremarkable shots in my Relonch photo box.
I’m not a professional photographer, but I do take a lot of photographs, and I’m confident enough that it’s the camera, not me, at fault here. At the ASUS Zenfone AR shoot, lighting conditions were fantastic. On my personal camera, I was typically shooting at 1/160th of a second with ISO set to 640 and f/2.0 aperture. Why, or how, had the 291 failed under these conditions? I’m not entirely sure.
The photos I received from Relonch were all either fuzzy or blurred. They weren’t massively noisy, but they also weren’t anywhere near sharp. Relonch photos don’t have any metadata, so I can’t tell what the settings were, but for an almost stationary subject to be blurred, it must have been shooting at a very slow shutter speed, which makes absolutely no sense given the lighting in the space.
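Some back-of-the-envelope exposure math shows why. My personal camera’s settings pin down roughly how bright the room was, and from that you can solve for the shutter speed any other aperture/ISO combination would need. The 291’s lens speed and ISO behavior aren’t published, so the alternative settings below are pure assumptions, but the point stands: a sharp shot in that room only needed something around 1/160s.

```kotlin
import kotlin.math.ln
import kotlin.math.pow

fun log2(x: Double) = ln(x) / ln(2.0)

// Scene brightness, expressed as an ISO-100 exposure value, implied by a
// correct exposure at (f-number n, shutter t in seconds, iso).
fun ev100(n: Double, t: Double, iso: Double) =
    log2(n * n / t) - log2(iso / 100.0)

// Shutter time (seconds) needed to match scene brightness ev at (n, iso).
fun shutterFor(ev: Double, n: Double, iso: Double) =
    n * n / 2.0.pow(ev + log2(iso / 100.0))

fun main() {
    // My personal camera at the ASUS suite: f/2.0, 1/160s, ISO 640.
    val scene = ev100(2.0, 1.0 / 160.0, 640.0)  // ~6.6 EV100
    println(shutterFor(scene, 2.0, 640.0))       // ~1/160s: easily sharp
    // Hypothetical conservative choices the 291 might have made instead:
    println(shutterFor(scene, 2.8, 200.0))       // ~1/25s: risky handheld
    println(shutterFor(scene, 5.6, 100.0))       // ~1/3s: guaranteed blur
}
```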

Day three was marred by confusingly poor photos.
Relonch’s business plan makes a lot of sense to me. I love how passionate the company’s founders are about photography. I love the concept of an AI delivering me perfectly edited photographs every morning. I love the pitch that my life might be as photogenic as anyone else’s, if only I took my camera out to capture it. But I don’t love the 291.
The 291 feels nice. It looks striking. But inside its leather-bound case lies a very average camera. The hardware package feels equivalent to an entry-level DSLR, but without Nikon’s or Canon’s optical know-how and handling, or the ability to switch lenses. To make matters worse, its software is clearly imperfect. The camera doesn’t appear to be making the right decisions with regard to shutter speed, focus and aperture, and the helpful AI auto-processing is inconsistent and sometimes too aggressive.
Part of Relonch’s pitch is that you always have your camera with you, capturing everyday moments. When I collected the 291, Relonch’s Motin pointed to White House photographer Pete Souza, noting that the best shots of Barack Obama are the candid ones taken between big events. My CES was full of such moments. I met with colleagues and old friends, people I see at most once a year. I made new friends and had memorable conversations. I watched two colleagues embody Elsa, if only for five minutes. Between the stress and exhaustion, I laughed, a lot. But after just a couple of days, I didn’t trust the 291 to capture those moments.

“A camera as a service” remains a truly interesting proposition, though. It was probably unreasonable to hope that Relonch had nailed it on its first attempt, just as it was unreasonable of me to pick my busiest week of the year to test the 291. Assuming it has the funding to continue pursuing its dream, Relonch still has my attention. Its AI will only get better, as, I’m sure, will the 291. Early testers are apparently very happy with the camera, and Relonch says its algorithms will improve by learning from everyone who uses its cameras. The 291 is in a limited launch, and Relonch is currently accepting only photographers who meet its criteria.
I will definitely try one of the company’s cameras again, this time carrying it around my home city without the pressure of a trade show, and I’m truly excited to see whether the company can make this business model work. I’d also consider just using the AI processing, if the company is able to perfect it and offer it as a standalone service. (That’s not in the company’s immediate plans.) So many of the photos I take are left untouched, taking up needless space on my laptop, purely because I don’t have the time or inclination to process the raw files. Relonch could really be onto something here, but it has to improve its software to have a hope of succeeding.
Click here to catch up on the latest news from CES 2017.
Gear up for tonight’s Nintendo Switch live stream
Nintendo revealed the Switch, its latest console, back in October — but the company left out plenty of key details about the half-portable, half-living room system. So, that’s what tonight is all about. Nintendo will host a live stream at 11PM ET on Thursday, January 12th, intended to outline more of the features, hardware specs and games coming to the Switch when it lands in March.
Watch the live stream right here with us tonight and keep the Engadget home page open for all of the news as Nintendo announces it. Until then, here are a few things to expect out of tonight’s Switch event:
First up, the price. Nintendo hasn’t indicated how much the Switch will cost, but Japanese news organization Nikkei predicts it will hit shelves at less than $250, at least in Japan. Nintendo is traditionally on the lower end of the spectrum when it comes to new console prices. The Wii U, for instance, cost $300 in the US at launch — compare that with the Xbox One, which debuted at $500, or the PlayStation 4, which premiered at $400.
Next, the games. Nintendo has kept a tight lid on the software lineup it has planned for the Switch’s release, though we do know a few titles that are definitely coming to the new console. The Switch will get The Legend of Zelda: Breath of the Wild at launch, plus Shovel Knight, Yooka-Laylee, Stardew Valley, Rime, Project Sonic 2017 and possibly even a full Pokémon game — that would be a first for a living room console.
Nintendo showed off a handful of titles in the initial Switch sizzle reel, including an updated Splatoon, NBA 2K, a 3D Mario game, new Mario Kart and Skyrim. However, the company has yet to officially announce these titles for the Switch. Tonight might just be the time to do so.
Plus, there’s the looming possibility that the Switch will support vintage GameCube games via the Virtual Console system, something that fans have been clamoring for over the past few video game generations.
And then there’s the hardware. Nintendo hasn’t detailed all of the features of the console, which pulls apart to become a mobile system complete with portable controllers. They might even be motion controllers — perhaps Nintendo will set the record straight in tonight’s live stream.
Overall, it remains unclear how Nintendo plans to position the Switch, whether as a living room console that can go mobile, or as a mobile console that can be used in the living room. Or, maybe as neither. Nintendo generally zigs while the rest of the video game industry zags — with mixed results. The Wii was a sleeper hit that changed the way the industry viewed “casual” and motion-sensing gaming, while its successor, the Wii U, was a flop.
Nintendo is also diving into the world of smartphone gaming, after years of denying any interest in that space. It’s a period of change for Nintendo — the Switch is its first big console release after the death of beloved CEO Satoru Iwata.
See what the company has in store for longtime fans and fiends alike tonight at 11PM ET, right here.
Amazon bringing 100,000 full-time jobs to the US by 2018
Amazon is about to go on a huge hiring spree, adding over 100,000 “full time, full-benefit” US jobs. They’ll be available to people with “all types of experience, education and skill levels,” the company wrote, ranging from engineers and software developers to entry-level fulfillment center positions. “Innovation is one of our guiding principles at Amazon, and it’s created hundreds of thousands of American jobs,” CEO Jeff Bezos said in a statement.
The company boasted that employees get “highly-competitive” salaries, along with health and disability insurance, retirement savings plans and company stock. It also offers 20 weeks of paid parenting leave, which can be shared with a spouse employed elsewhere. The company recently revealed that it pays male and female employees equally, and its diversity hiring is above average for the tech industry (which is to say, still not great).
Working at Amazon, both in executive and blue-collar positions, is famously difficult. Amazon has, in the past, also used a lot of contract labor to avoid paying benefits and other perks. Over the past few years, however, and with the latest hiring spree, the company seems to be shifting its workforce to more permanent positions — even if employees work fewer hours.
We plan to add another 100,000 new Amazonians across the company over the next 18 months as we open new fulfillment centers, and continue to invent in areas like cloud technology, machine learning, and advanced logistics.
States that will benefit from the hiring spree include Washington, Texas, California and Kentucky, thanks to expansions, renovations and new construction of fulfillment centers. Amazon also said it would hire an additional 25,000 veterans and military spouses over the next five years, keeping an earlier promise. Along with fulfillment, Bezos said the company will expand in “cloud technology, machine learning and advanced logistics.”
President-elect Donald Trump has threatened US companies that move jobs abroad with sanctions, but Amazon’s hiring spree likely doesn’t have anything to do with the new administration. Trump famously sniped at Amazon during the election, saying “Amazon is getting away with murder, tax-wise.” Bezos replied in kind, saying Trump’s lack of transparency (he still hasn’t released his own tax returns) “erodes our democracy around the edges.”
After a meeting with Trump and other tech leaders in December, including Tim Cook and Elon Musk, Bezos softened his tone, though. He said Trump’s promised focus on innovation “would create a huge number of jobs across the whole country, in all sectors, not just tech.”
Source: Amazon
Would you pay to get a new song texted to you every day?
On the surface, every streaming music service offers the same thing: access to tens of millions of songs. That’s more music than any human being can ever consume, so the big battle now centers on helping users find songs and albums they care about. Apple Music bet big on human-curated playlists, Spotify has Discover Weekly and Release Radar, and Google Play Music has stations for every mood and activity, to name just a few examples. These options help, but getting through your recommendations and actually finding things you love can still take some work.
A small startup called Noon Pacific has been helping to cut through the clutter for several years now. Every Monday, the company publishes a curated 10-song playlist in its apps and on its website for visitors to stream, ad-free. If you’d prefer an even quicker way to find new music, the company recently launched its first subscription product: Noon Pacific Daily. After signing up, you’ll get a text message every day at noon Pacific (naturally) with a link to a new song.
For someone like me who loves finding new music but doesn’t have tons of time to dig through playlists, Noon Pacific Daily seemed like a painless way to find new songs. I’ve spent the last few weeks giving it a shot, trying to decide if it’s worth $3 a month. (Note that you can currently sign up for $2 a month as part of a special introductory offer.)
Daily is dead-simple: My phone has lit up with a new text right around noon Pacific every day with the name of an artist and song title, along with a quick sentence description of the track. Then there’s a link to the song itself; tapping it brings up a page where you can stream the song or open it in Spotify, Apple Music, YouTube or Soundcloud.
If you’re a fan of one particular music service, you can set it to automatically open the song in that app every day, which I’ve done. I’ve taken to immediately adding each song to a playlist so I can go back and re-listen later, or have songs saved in case I don’t have time to listen right away. But the text message thread works as a pretty effective playlist on its own.

As for how the songs are chosen, a Noon Pacific spokesperson told me they’re pulled from top music blogs, artist submissions, promo lists and manual Soundcloud digging. The results have been a fairly eclectic mix of straight indie alternative, electronic, hip-hop, dance and pop, with most artists pretty far from mainstream radio. Unsurprisingly, the selection isn’t wildly different from what you’ll find in the free weekly playlists Noon Pacific posts every Monday.
I didn’t like every single song that I heard using Noon Pacific Daily, but I didn’t expect to; the company is serving up the same song every day to all subscribers, so it’s not going to be tailored to your particular listening habits. The good news is you can listen to dozens of the free playlists the company has been posting in the Noon Pacific app every week to get a sense of whether or not your tastes align.
In this case, it all worked for me; I added more than half of the songs sent to me over the last few weeks to my ongoing new musical discoveries playlist. I’m finding songs that I like and may not have discovered otherwise — and more importantly, I’m finding them with minimal effort. It’s easy to carve out five minutes every day to listen to a new song.
Of course, I still need to do the hard work of digging in and listening to more tracks from the artists I’m being exposed to, but that’s not a knock on Noon Pacific Daily. The service has found me good music to try out. Even if I only play these songs once and never again, it’s still worth the cost. If you want to try Noon Pacific Daily yourself, the company just added a seven-day free trial as well as a 30-day money-back guarantee, so there’s little downside to giving it a shot.
Obama expands the NSA’s ability to share data with other agencies
The National Security Agency is now able to share raw surveillance data with all 16 of the United States government’s intelligence groups, including the Central Intelligence Agency, Federal Bureau of Investigation, Department of Homeland Security and Drug Enforcement Administration. These agencies are able to submit requests for raw data pertaining to specific cases, and the NSA will approve or deny each request based on its legitimacy and whether granting access would put large amounts of private citizens’ information at risk.
Previously, the NSA would filter information for specific requests, eliminating the identities of innocent people and erasing irrelevant personal data. That’s not the case any longer.
The rule changes open up the NSA’s trove of raw data to other intelligence agencies, making it easier for authorities to notice trends or spot troublesome communications. However, activist groups including the American Civil Liberties Union argue that relaxing the rules around sharing raw data threatens the privacy of innocent US citizens, according to The New York Times.
These changes have been a long time coming.
The NSA has a sweeping surveillance system that collects satellite transmissions, phone calls and emails that pass through networks abroad, and other bulk communications data. The program is largely unregulated by wiretapping statutes, instead adhering to regulations laid down in the aftermath of the World Trade Center terrorist attacks on September 11th, 2001.
In 2002, the Foreign Intelligence Surveillance Court secretly permitted the NSA and other agencies to share raw, domestically gathered intelligence. In 2008, the FISA Amendments Act legalized warrantless domestic surveillance when the target was a foreigner abroad, and the Foreign Intelligence Surveillance Court also approved the sharing of raw email data uncovered in these programs.
That same year, President George W. Bush modified Executive Order 12333 — which regulates surveillance systems not covered in wiretapping laws — to allow the NSA to share raw surveillance data. However, first the director of national intelligence, the attorney general and the defense secretary had to agree on procedures.
This brings us to today.
Attorney General Loretta Lynch signed the new rules on January 3rd, after Director of National Intelligence James Clapper approved them in December. President Barack Obama’s administration passed the changes in its final days in the White House — President-elect Donald Trump takes office on January 20th.
The FBI and other agencies use the systems laid out by the FISA Amendments Act and Executive Order 12333 in different ways, as noted by The New York Times. The warrantless surveillance program enabled by FISA allows FBI agents to search that program’s database when investigating ordinary criminal cases. Meanwhile, the 12333 database is limited to agents and national security analysts working only on foreign intelligence or counterintelligence operations. Either way, if an agent sees evidence of an American committing a crime, that information is forwarded to the Department of Justice.
Surveillance isn’t the only sector where the US government is attempting to keep up in an increasingly connected world. For example, in December the Department of Justice received expanded powers to search multiple computers, phones and other devices on a single warrant.
Source: The New York Times
Netflix’s ‘iBoy’ trailer introduces smartphone superpowers
Netflix has unveiled a new original film with a pretty ludicrous tech angle. iBoy (yep) stars Maisie Williams and is set to arrive on the streaming service January 27th. The plot, unfortunately, reads like a “Toast parody of a Black Mirror episode,” as my colleague Aaron put it: normal teenager Tom is beaten by thugs, leaving parts of a smartphone embedded in his brain. That somehow gives him “strange” superpowers, which he uses to save his best friend Lucy (Williams) and take revenge on the gang.
The trailer (below) is as bad as that sounds, but with teenage protagonists and phone-related superpowers, it may be what Netflix executives think the youth market wants. In any event, with reduced movie options from other studios, the streaming company is trying to crank out more and more original content, and not all of it is going to be Beasts of No Nation. Hopefully, the final film will be better than the trailer looks.
Source: Netflix (YouTube)
Fox and Intel will offer a player’s perspective during the Super Bowl
Over the last year or so, Fox Sports has been keen on bringing the latest tech to its live broadcasts. When the network hosts Super Bowl LI in a few weeks, it plans to offer viewers a player’s perspective without requiring the participants to wear cameras. Using Intel’s 360 Replay technology, which has already been employed in MLB and NBA broadcasts, Fox will “allow a moment to be recreated in 3D space” to show fans exactly what a player saw during a play. The network is calling it “Be the Player.”
This likely means that if a quarterback throws an interception during the game, Fox will be able to show you exactly what he saw rather than a bird’s-eye view of the situation. The Intel 360 Replay system uses an array of cameras situated around the stadium to create the on-field perspective with the help of “a huge bank of Intel computing power,” according to Fox Sports SVP Michael Davies.
After employing drones and VR for live broadcasts, Fox teamed up with GoPro to offer a referee’s perspective during the Big Ten championship game. The network says the “Be the Player” perspectives will not only enhance viewing for fans but also give announcers a better look at exactly what happened. The visuals should let us see whether a player’s view was blocked and what alternate options existed at field level. And you don’t have to wait until February 5th to see the system in action: there’s a preview clip below.

Source: Fox Sports
HTC will intro half as many smartphones this year
HTC may have taken a bolder approach in the smartphone world with its new U Ultra and U Play, but it’s decided to play it safe with its roadmap for the rest of the year. After today’s launch event in Taipei, I caught up with Chialin Chang, HTC’s president of smartphone and connected devices business, who confirmed that HTC will release only six to seven smartphones this year. While that’s a drastic cut from last year’s eleven to twelve models, he claims it has so far allowed the company to focus on its smartphones’ core features, in a bid to put up a better fight against other brands.
In the case of the two newest phones, Chang sees machine learning as their main selling point. The exec described the so-called Sense Companion virtual assistant as a combination of Google’s Awareness API, device information and third-party data. Over time, the device learns your commuting pattern, dining preferences and app usage habits, in order to offer useful tips at the right time. For instance, it will be able to tell whether you walk, drive or take public transportation to work, and will eventually offer relevant departure times; it will even throw in weather alerts before you leave home or work.
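HTC isn’t sharing Sense Companion’s internals, but the Awareness API side of that mix is public. Here’s a rough Kotlin sketch of the kind of signal involved; it assumes the play-services-awareness library and the relevant activity-recognition permission, and it’s illustrative rather than HTC’s actual code.

```kotlin
import android.content.Context
import com.google.android.gms.awareness.Awareness
import com.google.android.gms.location.DetectedActivity

// Snapshot "what is the user doing right now" via the Awareness API and
// branch on it, roughly the raw signal a commute-aware assistant needs.
fun suggestCommuteTip(context: Context) {
    Awareness.getSnapshotClient(context).detectedActivity
        .addOnSuccessListener { response ->
            val activity = response.activityRecognitionResult.mostProbableActivity
            when (activity.type) {
                DetectedActivity.WALKING,
                DetectedActivity.ON_FOOT -> println("User is on foot")
                DetectedActivity.IN_VEHICLE -> println("User is driving; suggest a departure time")
                else -> println("No commute hint (type=${activity.type})")
            }
        }
}
```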

While some of these assistant features are already available on Android in some shape or form, HTC’s approach goes deeper than, say, Google Now. Besides, U Ultra users will get these notifications on the always-on second display just above the main screen. Better yet, your assistant profile data is stored in the cloud, meaning that when you switch to another HTC phone in the future, you don’t have to retrain Sense Companion; simply log into your HTC account and your new device will be just as smart as your old one.
Chang added that throughout 2017, HTC will be adding more Sense Companion features via updates, which should see the addition of more third-party services involving the likes of restaurants, malls and cab-hailing apps, in the hopes of making it more of an integral part of our lives. Or as the exec put it, “we’re not trying to emphasize A.I.; we’re trying to emphasize companionship.”
Admittedly, we won’t know how well Sense Companion works until we’ve spent some quality time with the new phones, but the U Ultra alone — set to ship in the US in March — has a few more tricks up its sleeve. Its bundled USonic USB-C earphones can quickly scan your ear canals using a sonic pulse, and the phone then automatically adjusts its audio output’s frequency response to compensate for weaker hearing at certain frequencies. This is more convenient than the manual tuning on the HTC 10 and the HTC 10 Evo (aka Bolt).
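HTC hasn’t detailed the USonic algorithm either, but the underlying idea is straightforward to sketch: measure how far each frequency band falls below a flat target, then boost only the deficits, capped so the EQ can’t add extreme gain. In this toy Kotlin version, the band layout and the 9dB cap are my assumptions, not HTC’s.

```kotlin
// Toy hearing/ear-canal compensation: measuredDb holds each band's
// response relative to a flat target (negative = weak); boost only the
// deficits, clamped to a maximum gain to avoid distortion.
fun compensationGains(measuredDb: DoubleArray, maxBoostDb: Double = 9.0) =
    DoubleArray(measuredDb.size) { i ->
        minOf(maxOf(0.0, -measuredDb[i]), maxBoostDb)
    }

fun main() {
    // Hypothetical measurements for bands at 250Hz, 500Hz, 1k, 2k, 4k, 8kHz.
    val measured = doubleArrayOf(-2.0, 0.0, -5.0, -1.0, -12.0, -4.0)
    println(compensationGains(measured).toList())  // [2.0, 0.0, 5.0, 1.0, 9.0, 4.0]
}
```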

The U Ultra also packs four always-on microphones for biometric voice recognition plus high-res 3D audio capture — something Chang believes is the future for mobile VR. Photography-wise, the U Ultra has the same awesome 12-megapixel UltraPixel main camera as the HTC 10, as well as a 16-megapixel front-facing camera that offers an UltraPixel mode for boosted sensitivity in dark environments. Not to mention that the device comes in a refreshingly gorgeous “liquid surface” design. As Chang put it: “We hope consumers will really see value in this smartphone.”
For the remainder of the year, Chang told us to also expect a few Desire devices for the “fun and affordable” markets. When asked whether there will be an “HTC 11,” the exec simply said it won’t be named as such this year, so there’s a good chance that HTC is still committed to a true flagship device for 2017 — one which will hopefully take advantage of Qualcomm’s upcoming Snapdragon 835 chipset.