Everything is smart: AI is officially the new cool thing

This weapon will be powerful, versatile and indestructible. It will feel no pain. No pity. No remorse. No fear. It will have only one purpose: to return to the present and prevent the future. This weapon will be called the Terminator.
I love Artificial Intelligence, machine learning, and everything that makes computers smarter. Let me qualify that and say I love the idea of it all. It’s something Sci-Fi writers have been talking about for over a half-century, and they’ve created some pretty compelling fantasies about a future where machines are alive and do everything. I geek out on that. I also think the industrial applications of machine learning and AI that already exist are pretty great, and I love to pore through diagrams and read about their capabilities. Actually using it in everyday life isn’t nearly as polished — it can get downright awkward at times. But what I think or what you think doesn’t matter. AI is the new cool tech that we’re all supposed to want.
On our phones, what started out as a simple set of voice commands has blossomed into a thing with personality that can do a little bit of thinking (magic?) and get it right most of the time. Developers can create skills and actions for their service or product, and companies will create more physical things that can work with these assistants. Little by little, it looks like Alexa and Siri and Google Assistant (and others yet to come) will let us live the dream and be served by robots. And we can hope they don’t get too smart and kill us all.
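Those developer "skills and actions" are, at their simplest, small web services: the assistant sends a structured request, and the skill returns text for the robot voice to speak. Here's a rough sketch of that loop — the intent name and speech text are invented for illustration, though the JSON envelope follows the Alexa Skills Kit request/response format:

```python
# Minimal sketch of an Alexa custom-skill handler (AWS Lambda style).
# The "TurnOnLightsIntent" name and all speech text are hypothetical;
# only the request/response JSON shape matches the Alexa Skills Kit.

def build_response(speech_text, end_session=True):
    """Wrap plain-text speech in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def handler(event, context=None):
    """Dispatch on the incoming request type from the assistant."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        # User opened the skill without asking for anything specific yet.
        return build_response("Welcome. Ask me to turn on the lights.",
                              end_session=False)
    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "TurnOnLightsIntent":  # hypothetical intent name
            return build_response("Okay, turning on the lights.")
    return build_response("Sorry, I didn't catch that.")
```

That's the entire contract: parse a request, return some speech. The "thinking" all happens server-side, which is exactly why every gadget maker can bolt an assistant on so easily.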

The folks making the things we love to buy see an opening here. And they’re taking full advantage of it. Alexa is everywhere from your headset to your refrigerator to the Huawei Mate 9. Google Assistant is now on your phone, in your living room, and on your TV — and soon to be plugged in like an air freshener in your bathroom. The ASUS Zenfone AR has been able to package AR, VR, and Facebook into a normal-sized phone. Eventually, anyway. It’s becoming lucrative to build smarter devices.
Amazon and Google may be how we interface with all these smart things right now, but the two big names in mobile will stake their claim with their own take on a phone that’s more than just a phone. Rumors and news stories have spelled out that the next Galaxy S line of phones is going to have a Samsung assistant that’s not tied to Amazon Alexa or Google Assistant. Probably a really good one, too. They’re using technology they purchased that was polished enough to be presented to Apple. That, along with the things they have been working on in-house, should help make sure that what we see as their first-generation product won’t feel like a first-generation product.
Samsung can make anything. That means they can make it all work with each other through their AI platform.
Look for the Samsung Galaxy ecosystem to branch into appliances and home control because Samsung knows a lot about both. Samsung has the ability to integrate the smart voice you interact with on your next phone into your life in ways Google and Amazon can only dream about.
Apple is being Apple. Watching. Saying nothing. And furiously working behind closed doors to give Siri a major overhaul. They’re far enough into the project that they didn’t need to buy a great start-up and a working technical model that was offered to them. When you have billions lying around, you buy the things you need when you can. When they do release it, New Siri will be smarter, funnier, better-looking and more fun to use than anything else out there. Siri and HomeKit can make for a nice ecosystem, too.
By then we’ll have seen something from everyone else. LG, HTC, Motorola and the rest will follow suit because the market demands it (a.k.a. Samsung has it, and Samsung is the market when you say Android). Whether you wanted it or not, your next expensive phone will probably have a smart assistant that learns from you and helps you do everything from making a dinner reservation to controlling your lights, temperature, garage door and window blinds. But will we actually use it?
That’s the big question. There’s money in it — both real clinky, shiny gold-coin money and paying-with-your-data money — so it had to come to our phones eventually. But how we benefit is the better question. Being able to tell a little robot voice to do things is fun. It’s also expensive compared to doing things the old “dumb” way. But that’s just the novelty side of all this right now. How these smart servers in the sky can integrate into our lives when we need them to is the big picture.
Smart computers will need a lot of time to get everything wrong while they’re learning.
Your life’s agenda in the hands of a hunk of wires will be filled with problems and hiccups for the foreseeable future while the various factions duke it out and developers figure out how it can work so they can make it work. Missed appointments or getting the wrong pizza are things early adopters will face if it ever goes that far. The tech isn’t ready to meet these sorts of expectations just yet, even if some of us want it to be. And even if it gets there, some of us just won’t want to use it for one reason or another.
I don’t know if the tech is ready to actually be useful, or if we’re ready for actually useful “smart” machines. But I do know that watching it all unfold is going to be interesting.
The CES comedown and the coming onslaught

That’s a wrap on CES 2017!
I didn’t make the trip to Las Vegas this year, but overseeing CES coverage from the same seat where I regularly do my job gave me a top-down perspective on how alternately tenuous and vacuous the whole thing seems.
Some years, a particular trend stood out — 3D, or curved, or the reclaimed vestiges of old operating systems — but in 2017, what emerged was a pervasive vagueness.
From the outside, Alexa seems to have stolen the show: without an official presence from Amazon, the nascent AI platform was everywhere, buoyed by the fact that the Seattle giant has made it incredibly easy — Netflix easy — to add near-cognition to a refrigerator or slow-moving buddy robot.
Alexa’s ubiquity also speaks to the fact that, as with the early days of iOS and Android, gadgets and their creators have a very difficult time living in a vacuum, and the advantage of a box that advertises Alexa is just as important as having Alexa in the product itself.
From an Android Central perspective, we saw the announcement of two fantastic-looking and potentially disruptive Chromebooks, at least one of which will be making it into my office in the next few weeks. I’m intrigued — fascinated, even — with the prospect of using Android apps on Chrome OS, especially on hardware like the Samsung Chromebook Pro that was specifically designed for such purposes.
And then there was the surprise expansion of Google Assistant into none other than the updated NVIDIA Shield, which marks the first time Google’s own AI has extended beyond the company’s own hardware. Google has a vested interest in catching up to Alexa as quickly as possible, as all it has to do is mirror Amazon’s strategy: because there is no UI to speak of outside the generalized voice interface people are increasingly growing accustomed to, Google can and should give Assistant away to as many people as possible through myriad hardware and software partners since it alone controls the cloud backend.
Finally, the Huawei Mate 9 made its U.S. debut, and having used it for nearly two months I have to say I’m excited to see what the Chinese giant can achieve. Without carrier support the Mate 9 won’t sell in quantity, but this is the company’s first true salvo into the world’s most important handset market.

Other wins:
- The TCL-built BlackBerry ‘Mercury’ looks intriguing, and I have to admit to a fair amount of nostalgia-driven interest, but the prospect of reverting to typing on a physical keyboard for an extended period of time interests me about as much as returning to ink and paper for note-taking. That is, there is an inherent fascination in finding that analog person I left behind half a decade ago, but it’s not so strong as to upend my workflow, which is, even on a touchscreen keyboard, far more efficient.
- Google Assistant aside, I’m really excited to give the new NVIDIA Shield a try, even if I don’t have a 4K TV. More and more, though, I’m finding reasons to upgrade.
- The ASUS ZenFone 3 Zoom is a phone I could get behind if I didn’t know exactly what kind of software disappointment I was in for. But I do, so I’m preemptively walking away.
- The Snapdragon 835 is, geek speak aside, one of the most important announcements of the year. Moving to a 10nm process alone is cause for celebration, especially as Intel continues to falter in the x86 department.
- On the Windows side, Razer’s Project Ariana is one of the most tenacious and unique enterprises I’ve seen in years, and I really hope the company makes it happen.
Thanks for following along on our CES 2017 adventure. The year is new, but I think it was a great start, and I can’t wait to share what we have in store in the coming months!
-Daniel
The Apple iPhone is 10 years old: Look how much the iPhone has changed
Today, 9 January 2017, marks the 10th anniversary of the Apple iPhone’s announcement in San Francisco, when Steve Jobs presented the new smartphone to a packed audience of Apple guests, staff, and journalists, including Pocket-lint.
Although the device had been rumoured for a number of years, the keynote, which was the catalyst for Apple’s huge success ever since, saw CEO Steve Jobs reveal how he believed the company would change the phone industry forever with its take on the mobile phone.
“This is the day I’ve been waiting for the last 2 years”, he said as he announced the new phone at the keynote speech in San Francisco at Macworld 2007, before making the first call on the phone to Jony Ive.
Over the next 90 minutes, Jobs detailed how Apple wanted to re-invent the phone, showcasing a number of new features. He even got a standing ovation from the audience at the end of the keynote.
At the time, companies like Motorola, Nokia, Microsoft, and BlackBerry laughed off Apple’s event and plans to change the phone industry.
In an unusual move from Apple, customers would have to wait almost 6 months for the Apple iPhone to go on sale in the US and a further 11 months before it went on sale in the UK.
It’s hard to imagine now, but the first iteration of the iPhone didn’t have a number of features we take for granted today including “copy and paste”, 3G, or even apps as we know and enjoy today. You could also only sync it via iTunes on the desktop.
Since 2007 Apple has adapted and changed the design of the iPhone a number of times, ditching the metal design for a plastic one for the iPhone 3G and 3GS before moving to glass for the iPhone 4 and 4S models. It was back to metal with the iPhone 5, and with the exception of the iPhone 5c, the company has stuck with metal ever since.
The iPhone hasn’t escaped criticism over the years though. There was “bendgate”, “antennagate”, and even claims from some users that their beards got trapped in the casing.
Click through the gallery and take a look, and be sure to let us know what your favourite Apple iPhone handset was and where you want Apple to go next with the iPhone 8 in the comments below.
The Engadget Podcast Ep 23: Leaving Las Vegas
Editor in chief Michael Gorman, executive editor Christopher Trout and managing editor Dana Wollman join host Terrence O’Brien to give you one last update from the ground in Las Vegas. They talk about the history of sex at CES, its quiet reemergence and all the most absurd gadgets from the show floor. Plus they settle once and for all who is the Flame Wars champion, and who will have something to prove in 2017.
Relevant links:
- Penthouse’s CEO thinks VR porn should be a carnival of sex
- Porn is back at CES, but good luck finding it
- Ron Jeremy predicts porn’s … present?
- Wanna develop an app for your sex toy?
- Sex at CES: An uncomfortable coupling
- Simplehuman made a trashcan you can open with your voice
- Oh hey, they have smart hairbrushes now
- Willow’s smart breast pumps slide into moms’ bras
You can check out every episode on The Engadget Podcast page in audio, video and text form for the hearing impaired.
Watch on YouTube
Watch on Facebook
Subscribe on Google Play Music
Subscribe on iTunes
Subscribe on Stitcher
Subscribe on Pocket Casts
Click here to catch up on the latest news from CES 2017.
The Octopus watch might make a responsible person out of your kid
I’m no parent, but I was sort of a lazy jerk as a child. While I eventually got my act together, I have to wonder if having something like the Octopus watch as a wee lad might have helped. Unlike other smartwatches for youngsters, which usually focus on keeping them connected or entertained, the Octopus was instead designed to build and reinforce good habits on a regular schedule.
Let’s say your tyke doesn’t brush his or her teeth every night before bed. You could use the Octopus companion app to add a toothbrushing task, prompting an icon to appear on the kid’s wearable at the same time every day. That iconography is crucial, by the way: JOY, the startup behind the watch, designed it so kids who can’t read yet can still glance at and understand the reminders their parents have set for them. When the task is done, the child taps the watch’s single button to mark it complete and gets a little celebratory animation. You know that strange sort of joy that comes with ticking a to-do off a list? That’s sort of what the team is trying to evoke here.
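The scheduling model here is simple enough to sketch. This is purely illustrative — JOY hasn't published how its app or firmware actually work, so every name and task below is invented:

```python
from datetime import time

# Toy model of the reminder scheme described above: each habit gets an
# icon (so pre-readers can understand it) and a daily trigger time.
# Task names, icons, and times are all hypothetical.
TASKS = [
    {"icon": "toothbrush", "at": time(19, 30)},  # brush teeth before bed
    {"icon": "backpack",   "at": time(7, 45)},   # pack the school bag
]

def icons_due(now, tasks=TASKS):
    """Return the icons whose daily trigger matches the current hour and minute."""
    return [t["icon"] for t in tasks
            if t["at"].hour == now.hour and t["at"].minute == now.minute]
```

At 7:45 the watch face would show the backpack, at 19:30 the toothbrush, and nothing in between — which is essentially all a pre-reader needs from a to-do list.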
Even better, the watch is meant to grow with these kids too. Four-year-olds would probably just appreciate the watch’s colorful screen and bright bodies — not to mention the octopus-shaped nightlight/charger combo — but an 8-year-old might enjoy the level of self-sufficiency it offers. They’ve got their daily schedules, just like us adults, and accessing them through some cool hardware, the way their parents do, could foster a greater sense of independence. After first struggling to manage a bunch of tiny cousins running around, my go-to strategy became treating them like tiny adults — so far it’s gone well because I’m recognizing their agency, and the watch operates on a similar concept.

The Octopus is also just a fun-looking piece of kit. I couldn’t actually fit the diminutive band around my wrist, but it’s more than fine for kids under 9 or so. As you’d expect from a wearable that stands a better-than-average chance of being lost or destroyed, the JOY team didn’t go crazy with the components. The screen is colorful but low-res, and the single button was a little tough to press on some of the team’s show floor demo models. On the flip side, though, it looks sort of like an Apple Watch… if Apple farmed the production process out to Fisher-Price. The startup completed a Kickstarter campaign last year, but with any luck, the watches will be hitting doorsteps soon — we’re told shipping is set to begin this March.
Click here to catch up on the latest news from CES 2017.
Uber starts offering ridesharing data to cities
Cities and taxi companies frequently portray Uber as a threat, but the ridesharing outfit is now promising to give something back. It’s launching a website, Uber Movement, that will offer some transportation data that cities can use to improve their transit systems. It’ll reveal the historical time it takes to travel between neighborhoods, helping local governments that want to understand how major events and road closures affect traffic.
The info stems from Boston, Manila, Sydney and Washington DC, but data from dozens of additional cities should be available soon.
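To make the idea concrete, here's a sketch of what working with zone-to-zone travel-time data could look like. The record layout and the numbers are invented for illustration — Uber Movement's actual schema may well differ:

```python
import statistics

# Hypothetical records in a Movement-like shape: observed travel times
# (in seconds) between named city zones. All values are made up.
records = [
    {"origin": "Downtown", "dest": "Airport",  "travel_time_s": 1260},
    {"origin": "Downtown", "dest": "Airport",  "travel_time_s": 1410},
    {"origin": "Airport",  "dest": "Downtown", "travel_time_s": 1320},
]

def mean_travel_time(records, origin, dest):
    """Average observed travel time for one origin-destination pair,
    or None if no observations exist for that pair."""
    times = [r["travel_time_s"] for r in records
             if r["origin"] == origin and r["dest"] == dest]
    return statistics.mean(times) if times else None
```

A city planner could run this kind of aggregation before and after a road closure or a major event to quantify the impact — which is exactly the use case Uber is pitching.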
It’s a sharp contrast with Uber’s usual hesitance to share information. When it does, it’s for publicity moves like its ill-fated ‘God View’ app. As the Washington Post points out, though, this isn’t necessarily a selfless gesture. To some extent, it’s about winning over cities that frequently try to ban or regulate Uber beyond what it’s willing to accept: don’t hold us back, we’re performing a valuable public service. The company is still embroiled in a fight with New York City over the urban hub’s demand for drop-off times and other details that could reveal excessive work hours, and this might draw attention away from the problem.
Still, Movement could be important. It’s rare that cities can get such a large, ready-made data set — let’s just hope that it’s the start of greater openness rather than a one-off move.
Source: Washington Post
BMW shows off what else you could be doing in a self-driving car
The world is overflowing with autonomous cars. That’s great, because it breeds competition that will benefit everyone. The typical self-driving demo, while generally impressive, usually requires reporters to sit in the passenger seat while a safety engineer is behind the wheel. BMW, on the other hand, put me behind the wheel and let its prototype 5 Series tell me all about Las Vegas while barreling down the freeway.
The BMW Personal Co-Pilot prototype 5 Series is a small taste of the automaker’s long-term plans for the interior of its autonomous vehicles. Earlier in the week the company showed off its i Inside Future sculpture/concept that demonstrated how BMW drivers would spend time in their cars in the autonomous age. All I can say is it’s a glorious future full of leisure time and giant screens playing our favorite movies.
But before we get to start chilling in our cars reading Breakfast of Champions or watching Bob’s Burgers, I was able to get behind the wheel of the prototype 5 Series at CES and take it for a spin through Las Vegas. Once I navigated the city streets in manual mode and merged onto the freeway, I pressed the blue button that set the car to “autonomous mode” and took my hands off the wheel.

While the car mostly drove itself (it tracked a 7 Series during the drive, and the company called it an “approximation of autonomous driving”), I filled my newfound “free time” trying out BMW’s Connected Experience infotainment options. The company had Las Vegas landmarks geotagged along the route, and I used the car’s gesture features to find out more when one popped up on the screen. I could read the entry about Caesars Palace, learn about the metal animal sculptures along the freeway or even have the car read the info to me.
The car was also outfitted with Microsoft’s digital assistant, Cortana. When I asked about nearby restaurants, the voice from the Halo video game found a gap in my busy CES schedule and recommended places to eat with availability for two at 8pm. I picked a restaurant and it went ahead and made a reservation for me. If I did this on my own, I’d have to load my calendar to see when I was available, then launch Yelp or OpenTable to find a restaurant — something you should never do while driving, of course (please don’t try).

Like autonomous driving, these types of features are still years away. But it’s good to see that when we are finally able to safely ignore the road, we’ll not only be able to take care of things like email and restaurant reservations, but we’ll also learn just a little bit more about the monuments and points of interest along the road.
Feminist Frequency shares the promising gaming trends of 2016
Since 2009, Feminist Frequency has been tracking harmful representations of women in gaming. During this year’s CES, Feminist Frequency’s Executive Director Anita Sarkeesian and Managing Editor Carolyn Petit joined us on the Engadget stage to discuss something a little different: some 2016 gaming trends that showed the industry is moving beyond typical stereotypes and tropes and starting to think differently.
For starters, Petit and Sarkeesian shared thoughts about more empathetic and complex approaches to relationships with non-player characters like The Last Guardian’s Trico, confrontations of real-world structural racism in Watch Dogs 2 and the tricky interactions in a risqué indie game called One Night Stand. Although there are some promising trends here, and 2016 was “a huge improvement” when it came to representations of men of color in gaming, Sarkeesian and Petit say there’s still a ways to go before female protagonists start seeing equal playing time.
Click here to catch up on the latest news from CES 2017.
Waymo built a full sensor suite for its self-driving minivans
Last month, Google’s newly renamed self-driving division Waymo unveiled its newest test model, the Chrysler Pacifica. Today, during the North American International Auto Show’s Automobili-D conference, CEO John Krafcik revealed that the company built a full sensor suite expressly for its autonomous minivans. Not only did Waymo’s extensive R&D lead it to create entirely new LiDAR sensors, it also cut the expense of individual sensors, which will likely drive down the cost of sensor setups across the autonomous driving industry.
This year’s CES had plenty of self-driving concepts from traditional automakers, but it also saw non-automotive tech companies start inserting themselves into the autonomous vehicle game, like Nvidia lending its self-driving computing setups to both Audi and Mercedes. By focusing on in-house development instead of borrowed tech, though, Waymo can more tightly integrate its sensor hardware, software and image recognition. The company also claims its new sensors achieve better resolution and performance than the hardware it was previously using.
There are other benefits to dedicated internal tinkering, not least of which is the team’s experience looking beyond current sensor setups. While most autonomous vehicles rely on a lone roof-mounted medium-range LiDAR, Waymo developed new long- and short-range sensing units for its Pacifica minivans to create a more complete awareness window around the vehicles. And while initial R&D is always pricey, the company’s research efforts have slashed the cost of a single high-performance LiDAR by 90 percent from the $75,000 price tag it carried when Waymo first started its self-driving development.
The first Pacificas equipped with the new in-house sensor suite will hit California and Arizona roads later this month, extending Waymo’s lead in self-driving distance driven. The company already has 2.5 million miles under its belt, and will hit 3 million in eight months, Krafcik said.
Source: TechCrunch
Dell S2718D Ultrathin Monitor Release Date, Price and Specs – CNET

The Dell 27 Ultrathin Monitor (S2718D).
Dan Ackerman/CNET
Why do desktop computer monitors have to be thick and ugly, while razor-thin TVs garner awards at CES every year?
The Dell S2718D is the answer. This is a monitor that’s probably thinner than your phone: Engadget reports that it’s just 6.9mm thick, as thin as an iPhone 6. (The iPhone 7 and Galaxy S7 are a bit thicker.)
Not bad for a 27-inch, 2,560×1,440-pixel resolution screen with HDR support, no?

Dan Ackerman/CNET
More from CES 2017
- CES is finally open: Here’s what you missed
- Check out the smart home products at CES 2017 (so far)
- Razer’s new gaming laptop has three (!) screens
- Neon Museum is saving Las Vegas’ most beautiful tech
Did I mention it’s not just a monitor? The S2718D also doubles as a single-cable dock and charging station for a USB-C-equipped laptop. If you’ve got one of the do-it-all ports, you can plug a single USB-C cable into your laptop and the monitor to get a screen, 45 watts of power, two full-size USB ports for your peripherals, and 3.5mm audio for speakers or headphones.
Mind you, it’s not necessarily a monitor for creative professionals who need picture perfection — Dell says it only covers 99 percent of the sRGB color gamut, with no mention of Adobe RGB or more demanding standards. The panel only refreshes at 60Hz, so it may not be the best pick for gamers. And while that 45W of power should be enough to charge a MacBook, it’s not enough for the new MacBook Pro line (which requires 61W and up).

Dan Ackerman/CNET
Dell says it’ll go on sale in the US on March 23 for $700 (roughly £570 or AU$960).
Also see: Dell’s monster 8K-resolution monitor, also spotted at CES this year.



