Gboard now supports Chinese, Korean, and 20 other new languages
74% of the world’s population is now supported by Gboard.
Gboard has been my Android keyboard of choice for well over a year now, and a lot more people will now be able to take advantage of its many features thanks to new language support.

Version 7.0 of Gboard is rolling out to the Play Store now for all users, and with it comes official support for Korean, simplified and traditional Chinese, and 20 other languages – including Adlam, Manx, and Maori.
These new additions mean Gboard now covers over 300 different languages, reaching 74% of the world’s entire population.
Along with the new languages, v7.0 for Gboard also brings auto-complete suggestions for email addresses, a universal search that lets you browse through emojis, stickers, GIFs, and more simultaneously, and the ability to have multiple keyboards selected at once.
Download: Gboard (free)
Google Lens is now available for all Android users through Google Photos
Select Android phones will also be able to access Lens via the Assistant.
When the Pixel 2 launched last October, one of its exclusive software features was Google Lens. Today, Google Lens is escaping the clutches of the Pixel brand and expanding to all Android devices through the Google Photos app.

To access Google Lens, simply open Google Photos, select the picture you want to use, and then tap the Lens icon that’s in between the trash and edit options. Once you do this, Google Lens will scan your picture and show any information that’s relevant to it.
Google Lens can currently identify buildings/landmarks, company logos, cat/dog breeds, text, paintings, movies, etc.
In addition to this, Google also announced at MWC 2018 that certain phones from Samsung, Huawei, Motorola, LG, Nokia, and Sony would be able to access Lens via the Google Assistant. There’s still no ETA as to when that’ll happen, but in the meantime, Google Photos has your back to get your Lens fix.
Download: Google Photos (free)
Google Clips review: A thing that exists
This weird square might be the coolest thing Google has done in a long time.

“Okay, but what is it?” my friend asked for the third time during my description of the small teal square I was now actively fidgeting with. This was also the third time during this party I’d run into the same problem. Someone would catch me setting Google Clips up somewhere in the room, shoot me a quizzical look, and, when I didn’t offer an immediate answer, ask what I was up to. My friends are all used to me bringing some new gadget to a party to play with, whether it’s a 1W blue laser that looks like a lightsaber or a VR rig to show people what it’s like to shoot zombies drunk. Yet this time, with nothing but this small camera from Google in my hand, I found myself unable to offer a simple answer that would satisfy the person asking.
Long answers for what Google Clips is I can handle. Clips is a camera from Google with AI baked in to take pictures and videos automatically. You set it somewhere in the middle of something interesting, and when you come back for it later, it will have captured memories you otherwise would have missed. It’s not a constant recording; you only get the bits Google’s AI thinks were important. It’s a camera you have basically no control over, so you can enjoy the event you are supposed to be enjoying instead of walking around with your phone in front of your face the whole time.
This explanation prompted an important follow-up, one that takes even more time to answer. “Does it work?”
See at Best Buy

The Automatic Camera
Google Clips Hardware
There’s really not much to Google Clips. It’s a little square that fits in the palm of your hand, with a camera lens on one side and little else. There’s a USB-C port for charging, a single physical button under the lens, and three LEDs under the white plastic to let you know when the camera is on and doing something. To turn it on, you twist the lens and wait for the lights to pulse. Once that happens, you put the camera somewhere and leave it. That really is it; your job as the human is complete. The rest is up to the AI. Google is the photographer here.

Google’s software seems to work in a couple of different ways. If the camera detects a ton of motion, it will save a clip of whatever just happened. If multiple faces are detected, it will save a clip of whatever just happened. Basically, the camera is always recording but only saves the stuff it thinks you will find interesting. All of this is done locally, with no connection to your phone or your data required. As “smart” things with cameras on them go, it’s remarkably privacy-focused. You are the only one with access to the photos and video captured by Clips. The AI heavy lifting all happens locally, and you can extract photos and video without ever being connected to the internet. The recorded content isn’t stored to your phone or in Google Photos unless you explicitly give permission.
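The capture logic described above can be sketched as a rolling buffer with a save trigger. To be clear, this is not Google’s actual algorithm; the frame fields (`motion_score`, `face_count`), the clip length, and the thresholds are all invented here purely to illustrate the “always recording, only saves the interesting bits” idea.

```python
from collections import deque

# Invented parameters for illustration only.
CLIP_SECONDS = 7          # assumed length of a saved clip
FPS = 30                  # assumed capture frame rate
MOTION_THRESHOLD = 0.4    # assumed motion-score cutoff

def should_save(frames):
    """Decide whether the rolling buffer of frames is worth keeping."""
    motion = sum(f["motion_score"] for f in frames) / len(frames)
    faces = max(f["face_count"] for f in frames)
    # Save on either sustained motion or multiple detected faces.
    return motion > MOTION_THRESHOLD or faces >= 2

buffer = deque(maxlen=CLIP_SECONDS * FPS)  # rolling window of recent frames
saved_clips = []

def on_frame(frame):
    """Called for every frame: the camera records continuously, but a
    clip is only persisted when the heuristic fires on a full window."""
    buffer.append(frame)
    if len(buffer) == buffer.maxlen and should_save(buffer):
        saved_clips.append(list(buffer))
        buffer.clear()
```

Everything here runs on-device in the real product, which is what makes the privacy story work: the decision of what to keep never has to leave the camera.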
All of that having been said, Google offers a way to “train” the camera to give you more of the things you might be interested in. If you sync your People and Pets collection from Google Photos, Clips will have a library of faces it knows are important to you. When one of those faces is detected, it will record even when it previously might not have. You can also manipulate Clips by using the physical button on the front of the camera. Use it to take a photo of someone, and Clips will identify that person as a priority for future recordings.
The camera itself is interesting. It’s a 12MP sensor with an ƒ/2.4 aperture and a 130-degree Field of View (FoV) lens, which means everything it captures is wide. This presents an interesting challenge for getting photos and video you actually want to see. If you place the camera on a surface somewhere at the edge of a room, it will capture the whole room with no problem but everyone will appear far away. If you place the camera too close to the action, it risks getting knocked over or not being in the right place if the subject moves. The solution, for the most part, is to use the Live View mode in the Google Clips app so you can temporarily see what the camera sees to ensure the best placement. The inherent problem with that solution, however, is you are now using your phone to control a camera designed to encourage you to put your phone down.
With its 16GB of onboard storage and the promise of three straight hours of “smart capture,” Clips is designed to live long enough to capture memories from your average kids’ party. The onboard storage holds up to 1,400 files with no problem, so you can capture and recharge without worrying about sorting through those memories until later.

Maybe a little too simple, but good at what it does
Google Clips Software
When the party has ended and you’re ready to wind down, you can open the Clips app on your phone and see what it captured. The app syncs to the camera even if it hasn’t been used in hours, as long as it is nearby. Once connected, you have two options for browsing. There’s the unfiltered list of 21-second video clips the camera caught, and there’s the AI-enhanced edits which only focus on the things it found the most interesting. As you scroll through either list, the top file will auto-play so you get a quick look at what was happening.
The results are something you and your friends and family are guaranteed to love.
From here, you have a couple of options. You can save the file straight to your phone, where it will appear as a Pixel-style Motion Photo and be backed up to Google Photos. You can pick a single frame out of the video to save as just a photo, where it can be edited like any other photo. Or, my personal favorite, you can edit the file right on the camera. The edit tool in the Clips app lets you crop and trim the video length as you see fit, then save to your phone as a photo, video, or GIF. Saving as a GIF works exactly like the Motion Stills app from Google, which is both familiar and a little confusing.
With its 12MP camera, Google Clips exported in a wild variety of ways I had little control over:
- GIF – 0.3MP (640×480)
- Motion Photo – 6MP
- Video – 2.3MP
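The megapixel figures above are just width times height divided by one million. A quick check of that arithmetic (the 4000×3000 figure below is an assumed 4:3 layout for a 12MP sensor, not a documented Clips output size):

```python
def megapixels(width, height):
    """Convert pixel dimensions to megapixels."""
    return width * height / 1_000_000

# The GIF dimensions from the list above: 640x480 is about 0.31MP,
# which rounds to the 0.3MP figure quoted.
gif_mp = megapixels(640, 480)

# An assumed 4:3 arrangement of a 12MP sensor for comparison.
sensor_mp = megapixels(4000, 3000)
```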
For comparison, my Pixel 2 will capture 8MP motion stills with the front-facing camera without issue. The app has some basic controls for image quality if you want to record something nicer, and those settings were already cranked all the way up to High for these clips. At lower settings, the image and video sizes get considerably smaller, which naturally also means quality takes a further hit. That’s an issue, because image quality is already questionable in a lot of situations for this camera.

The ƒ/2.4 aperture means it struggles a bit in low light or variable light, but Google’s AI does quite a bit to clean up noise before presenting the image to you. This isn’t the kind of thing I would rely on outdoors at night, but in a dimly-lit room, or a room where all of the light is shining right at the camera, I found the captured photos and videos were okay but clearly nowhere near as good as the top cameras available on phones today.
What this app does, it does well. There’s really not a whole lot you’re supposed to do if the whole point is to set up a camera and become part of the experience. Personally, I’d prefer the ability to edit directly in Google Photos. Right now you have to jump from the Clips app to Photos and back if you want to make edits to something you’ve saved. If I’m already syncing my data from Photos to this camera, it’d be nice to have an “edit in Photos” option that takes the image I’m playing with straight to the app to be edited.

Is this supposed to be so much work?
Google Clips Experience
I have four kids running around my house and I love going to small gatherings with my friends. On a high level, Clips seems like it was built for me. To be able to get out from behind the camera and participate without losing the opportunity to capture what could be a precious memory is right up my alley. Google Clips can deliver that experience, and when it does the end results are fantastic. But it’s not quite as automatic or seamless as it probably could be.
In the week that I used Clips, I found myself constantly hunting for the best place to put it so it could record things. Parties often happen in multiple rooms, so I would have to move the camera to wherever the people were to get the shots I wanted. Each time this happened, I went from being an active participant in the party to a passive observer and event documentarian. Several of the things Clips recorded were of me trying to position Clips, and on more than one occasion my big dumb face happened to be blocking part of a really cool thing that was happening behind me.
But like I said, when it works the results are something you and your friends and family are guaranteed to love.

I have a collection of memories from Clips that I either couldn’t have or wouldn’t have captured, and that’s cool. Over time I got used to the “frame” for Google Clips and didn’t have to rely on the live view in the app as much, but there was never a point where I found myself truly setting the camera up and forgetting about it. It’s the kind of thing that makes me wonder if I’m a fan of Google Clips because I like the underlying idea, or if I’m a fan because it’s actually delivering on its promise and making me more present and focused on the moment in front of me.
Either way, this thing I have trouble describing is an incredible exploration of what we think about when taking photos. I enjoy the way Clips has challenged the way I think about what I capture and how, and find myself eager to explore this camera a lot more.
Should you buy it? It Depends
At $250, Google Clips is expensive. This is something you buy if you want to try a new way to take photos, not something you buy if you want the best possible photo or if you enjoy the act of taking photos. Google is the photographer with this product, and if that’s an idea that excites you, I’d recommend picking one up.
See at Best Buy
Deezer gains support for Amazon Alexa voice controls
Available now across 66 countries.
As if you didn’t have enough music streaming options to choose from on your Alexa-powered speaker, yet another one is joining the mix as Alexa finally offers full support for Deezer.

In addition to basic voice controls for searching through songs and artists and for controlling the volume and playback of tunes, you can also say “Alexa, play Flow” to stream your Flow playlist of songs that Deezer creates based on what you listen to.
You’ll be able to use Deezer on all of Amazon’s own Echo devices, and if you own something like a Sonos One or another speaker that uses Alexa but isn’t made by Amazon, you’re also fully covered.
Deezer will be available to use across the 66 countries where Alexa is supported, and it’s rolling out to users now.
Audi gives Airbus’ flying taxi concept a stylish makeover
Did you think Airbus’ Pop.Up flying taxi concept was a little drab? So did Audi. It teamed up with Airbus and Italdesign to unveil Pop.Up Next, a reworked version of the two-seat autonomous vehicle concept. The new version is more stylish than the mostly functional original, and borrows more than a few cues from Audi’s current design language. However, it should also be more practical — it’s supposed to be “significantly” lighter than the original, which is rather important for a hybrid passenger drone.
The core concept remains the same. Pop.Up Next revolves around a passenger pod that attaches to a skateboard-like platform that drives around town, but hooks up to a drone for times when flying would be more convenient. As a passenger, you’d stare at a 49-inch touchscreen that uses face recognition, eye tracking and voice recognition for interaction.
It’s still not certain if or when the concept will see production. There are any number of hurdles beyond the technology itself, such as legal frameworks, infrastructure (you’d want safe places for the ground-to-air transition) and, of course, business models. Right now, Pop.Up Next is more about showing what transportation could look like in a fully autonomous future.

Click here to catch up on the latest news from the 2018 Geneva Motor Show.
Via: Autoblog
Source: Audi
Gboard for Android now supports Chinese and Korean
Google announced today that Gboard for Android is getting a handful of new languages including Korean and both traditional and simplified Chinese. The company said that those have been the most requested languages for Android — they’re already on Gboard for iOS — and they join 20 others that are rolling out to Gboard for Android now.
Gboard launched in 2016 and it now supports over 300 language varieties. You can check out a full list here. Google said that while a few of the newly added languages are some of the most widely-spoken, it’s also working on including others like Manx, Maori and the Fulani alphabet Adlam that are not as well known.
The new languages are rolling out worldwide and should be available within the next few days.
Source: Google
‘Luke Cage’ season two premieres June 22nd
Jessica Jones isn’t the only streaming Marvel show getting a long-overdue follow-up. Netflix has posted a teaser clip confirming that Luke Cage’s second season will premiere on June 22nd. The video itself doesn’t reveal much about the story, but it’s evident from the description that the stakes have changed. Cage is now a hero, but that makes him a bigger target — and a “formidable new foe” will make him deal with the “fine line” between a champion and the villains he’s supposed to be fighting.
Marvel’s Netflix programming has produced mixed results, but Luke Cage has generally earned a warm reception for both the strength of key actors (most notably Mike Colter in the titular role) and a storyline that’s as much about Harlem and its challenges as it is the usual superhero fight scenes. The question is whether or not Netflix can maintain those strengths while expanding beyond familiar territory.
Source: Netflix (YouTube)
Amazon expands Whole Foods delivery to Atlanta and San Francisco
Last month, Amazon launched its Whole Foods delivery service, allowing Prime members in some Austin, Cincinnati, Dallas and Virginia Beach neighborhoods to order Whole Foods groceries through Prime Now and have them delivered within two hours. Today, Amazon announced that the service has expanded to select areas of Atlanta and San Francisco. Customers in those regions can order produce, bakery items, dairy, meat, seafood, flowers and everyday staples through Prime Now. San Francisco customers can also order alcohol through the service.
Whole Foods delivery is available from 8AM to 10PM and doesn’t cost any extra as long as orders are at least $35. However, if you’d like your order within one hour, you can pay an additional $8 for that quicker turnaround. Fast delivery was the next logical step in the Whole Foods-Amazon integration and Amazon says the service will reach more cities throughout the year.
Source: Amazon
Alexa can now stream music from Deezer
Deezer has been rapidly expanding its footprint, from the devices it’s available on to adding voice control. Now it’s adding Alexa to the mix. Deezer voice control is available not only on Amazon’s own Echo devices, but also on third-party devices that support Alexa, such as the Sonos One and Ultimate Ears BLAST. The service will be accessible to users in 66 countries.
Premium+ customers in the UK can use the command “Alexa, play Flow” to play a never ending stream of individually customized music. All Deezer users can take advantage of voice commands to search for and play albums, artists and songs. Deezer on Alexa also supports play, pause, song skipping and volume control.
In order to stay competitive, Deezer needs to be available on as many devices as possible, and it’s certainly doing that. In the last year, the service has come to the Roku, Google Home and more.
Mercedes’ futuristic headlights are no longer just a concept
Mercedes has been testing smarter headlights, and now it appears they’re no longer just a concept. Daimler announced today that these futuristic headlamps will be available in top-of-the-line Mercedes-Maybach S-Class vehicles.
These smart headlights are equipped with the equivalent of a digital projector, which can control how and where the light is thrown. The sensors on the car can determine when and how bright the headlamps should illuminate, preventing drivers from blinding pedestrians or people in other vehicles. The headlamps can also project information onto the road. You can read more about our experience with these smart headlights, which have a resolution of over a million pixels each, here.
It should be noted that this is an incredibly small production line, but purchasers of vehicles equipped with these headlamps can expect to take delivery of their cars during the first half of this year. It’s going to be a while before this tech makes it down to regular vehicles, but it’s nice to see it in production regardless.
Click here to catch up on the latest news from the 2018 Geneva Motor Show.
Via: Gizmodo
Source: Daimler



