It’s a scenario ripped straight out of the movies: Thieves raided data centers peppered across Iceland and stole around 600 computers used to mine digital currency. Local media have dubbed it the “Big Bitcoin Heist,” and police are baffled, calling the series of thefts the biggest they have ever seen. In all, the stolen hardware is valued at around $2 million, or just over $3,300 per machine.
But don’t let the report fool you. The heist wasn’t one massive invasion of Iceland’s data centers on a single night. Three burglaries took place in December, while a fourth followed in January. The police waited until after the fourth heist and the arrest of 11 individuals before going public. Only two remain in custody.
“This is a grand theft on a scale unseen before,” Olafur Helgi Kjartansson, the police commissioner on the southwestern Reykjanes Peninsula, told the Associated Press. “Everything points to this being a highly organized crime.”
So why Iceland? According to reports, its geothermal and hydroelectric power plants provide the cheap, renewable energy needed to power farms of PCs mining digital currency. The “mining” aspect simply means these PCs help maintain the underlying digital currency network, whether it’s Bitcoin or Ethereum, and receive digital coins in return. Right now, a single Bitcoin is worth $11,561 in North America.
And that is the fuel behind the theft. Digital currency isn’t controlled by any one government, and it is difficult to trace. With 600 computers, the thieves could generate millions in cash while staying hidden. But there’s a drawback: power. PCs need a lot of electricity to mine digital coins, so local police across Iceland are watching for unusually large spikes in power consumption in hopes of catching the thieves red-handed.
But the thieves are unlikely to run all 600 PCs in one centralized location, so the net will need to be wider than simply watching for heavy power use. Iceland’s law enforcement is now calling on storage unit providers, electricians, and internet service providers to report large pockets of PCs drawing unusual amounts of power and bandwidth.
With Bitcoin, miners can’t just dig up a single coin and exchange it for cash. Instead, miners work to solve a Bitcoin block, which currently generates a reward of a little more than 12 digital coins. Smaller miners typically pool their PCs together and split the reward in proportion to the computing power each contributes. The catch is that each successive block requires more computing power to mine at the speed of the previous one, pushing miners to add more hardware, or, as seen in Iceland, to steal farms of dedicated PCs.
Cryptocurrency mining of today really isn’t meant for one specific PC to process. Because the digital currency relies on cryptography, the PC needs to perform an enormous number of hash operations each second. The more powerful the hardware, the more operations it can perform. For a single PC to mine a single block in a single month, it would need to sustain around 3 quadrillion hash operations per second. That is why multiple networked PCs with dedicated mining hardware are a must.
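To make that scale concrete, here is a toy proof-of-work loop in Python. The mechanism mirrors what Bitcoin miners do (hash repeatedly until a digest falls below a target), but the block header string and the 16-bit difficulty are made up for illustration; the real network’s target is astronomically harder.

```python
import hashlib

def mine(block_header: str, difficulty_bits: int) -> int:
    """Try nonces until the double-SHA-256 digest falls below the target.

    This is the same puzzle Bitcoin miners race to solve; the real
    network's target is astronomically harder than this toy setting.
    """
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        payload = f"{block_header}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A 16-bit toy difficulty takes about 65,000 attempts on average; Bitcoin's
# difficulty is why farms of dedicated hardware are required instead.
print(mine("toy-block-header", 16))
```

Each extra bit of difficulty doubles the expected number of attempts, which is why miners keep adding hardware as the network’s difficulty climbs.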
- What is a blockchain? Here’s everything you need to know
- Staff at Russian nuclear facility caught using supercomputer to mine Bitcoins
- What is Litecoin? Here’s everything you need to know
- What is bitcoin? Here’s what you need to know about it
- Giant cryptocurrency mine that runs on green energy coming to Iceland
Daven Mathies/Digital Trends
I was there when Adobe showed off a completely redesigned version of Lightroom CC last year at Adobe MAX. Between the cheers and the applause, beneath palpable excitement of 12,000 creatives in attendance, I found myself feeling just one thing in the wake of the announcement: Relief. I didn’t care about new features or capabilities, I just stared into that simplified, minimalist, matte gray interface like I was watching the sun rise after a cold night.
As a photographer, I had never enjoyed working in the original Lightroom (now called Lightroom Classic). I found it to be a headache-inducing program that was as painful as it was powerful. It ran like an antique car, but without the spit and polish that would have at least made it nice to look at. It was an artifact of the PC age, when beige was an appropriate color for consumer tech products.
The new app took the Lightroom name; the original received a tailpiece that destined it for where legacy software goes to die.
It was time to evolve, and here before me, finally, was a new Lightroom, completely rebuilt from the ground up. Here was the modern user experience photographers had long deserved. Gone was the module-based interface that tried to force users into a linear workflow — and with it, gone were five modules that I never used at all: Map, Book, Slideshow, Print, and Web. The Develop and Library modules were merged into one, and you could begin editing a photo simply by clicking (or tapping) on the editing controls button, without having to reload the image in a new module. It was glorious.
Sure, other photographers no doubt had uses for those other modules, but Lightroom Classic was foremost an image organization and editing tool — one that had grown fat and slow in its old age. Lightroom CC, by comparison, looked modern and streamlined, with a renewed focus on the simple thing that gave it reason to exist at all: Your photographs.
Adobe continues to update Lightroom Classic, but there was a general feeling that day at MAX that the company was gently trying to corral users toward Lightroom CC. After all, it was the new app that took the Lightroom name; the old version received a tailpiece that all but destined it for wherever legacy software goes to die. Why not just call it Lightroom Jurassic? Recycle its binary bones into fuel for software better adapted to the post-PC era. Even Adobe’s Bryan O’Neil Hughes proudly proclaimed in his presentation that he had made the switch a year and a half earlier, and hadn’t looked back. If one of the most experienced Lightroom professionals on the planet had been living happily through buggy, pre-release versions of the new software for that long, then certainly it would work for me.
Naturally, I downloaded the app as soon as I possibly could and started working in it that afternoon from my hotel room. But while I instantly loved aspects of it, I quickly found it lacked features that were indispensable. Disheartened, back to Classic I went — and that’s where I stayed for the next several months.
Fortunately, Adobe scrambled to bring new features to the app over that time, and Lightroom CC has grown into a competent photo editor. I finally decided to try making the switch again, and am pleased to report that I, too, haven’t looked back — even if, at times, I have to force myself not to. If you haven’t made the switch yet, it’s time to at least take a look.
Get your head in the cloud
The first thing to understand about Lightroom CC versus Lightroom Classic is that it presents an entirely new workflow paradigm. The interface is unified (as much as can be) across desktop and mobile platforms, and just about everything — even your RAW files — is automatically backed up to the cloud and accessible from anywhere. You can start an edit on your phone while in the field and finish it from your computer at home without skipping a beat. Adobe demonstrated this live, jumping between an Apple iPhone 8, iPad Pro, and a Microsoft Surface Book 2.
But the thing about product demonstrations is that they only show the awesome parts of something — not the unbearable parts. RAW files take up a lot of space, and if you’re any type of working professional, it’s easy to come back from a shoot with gigabytes upon gigabytes of images. Most internet service providers offer internet service built for the consumption, rather than the creation, of content, with upload speeds that are often many times slower than download speeds. In dense cities, you may have a better option — but in the rural small town where I live and work, I have to deal with an upload speed of just 4 megabits per second.
At 4Mbps, those 300 photos would take 5 hours to upload to the cloud.
On a recent camera review, I shot some 300 photos which amounted to a little under 10 gigabytes of data — not a large shoot, by any means. But at 4Mbps, those 300 photos would take 5 hours to upload to the cloud. That’s five hours before I can use them (at least, all of them) on another device, five hours before Adobe’s AI-powered Sensei search works, and five hours before I can dependably log in to a lag-free game of Destiny 2.
Now, imagine coming back from something like a wedding with not 300, but 3,000 photos. I’ll let you do the math.
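The math is simple enough to sketch. The helper below converts a shoot’s size into upload hours at a given line speed; the 4Mbps figure is from above, while the 9.5GB and 95GB sizes are assumptions standing in for the roughly 300-photo shoot and a 3,000-photo wedding.

```python
# Back-of-the-envelope upload time: convert file size to bits, divide by
# the line rate. Real-world throughput is usually a bit below the
# advertised speed, so treat these as best-case numbers.

def upload_hours(gigabytes: float, mbps: float) -> float:
    megabits = gigabytes * 8 * 1000   # GB -> megabits (decimal units)
    seconds = megabits / mbps
    return seconds / 3600

print(round(upload_hours(9.5, 4), 1))   # ~300 photos: 5.3 hours
print(round(upload_hours(95, 4), 1))    # 3,000 photos: 52.8 hours
```

At 4Mbps, a wedding-sized shoot would tie up the connection for more than two full days.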
Internet slow enough to stifle a fax machine certainly isn’t Adobe’s fault, but it’s something to be aware of before you go all in on a cloud-based workflow. Toting around a portable hard drive and manually syncing Lightroom Classic catalogs between your laptop and desktop — as tedious as it can be — may be the more efficient solution for some users, so long as you don’t care about having RAW files accessible on your mobile device.
Unfortunately, even if you wanted to, you can’t really use Lightroom CC this way. When it comes to file management, you’re more or less stuck in the cloud. Makes sense: Adobe wants to sign you up for paid cloud storage plans, after all. The entire concept of a photo “catalog” has vanished. You can still create albums to separate projects, but Lightroom CC now keeps all of your photos under one umbrella. This isn’t inherently bad — and may be the way most people used Lightroom Classic in the first place — but I prefer to create new catalogs for different projects, or at least project categories, to keep things nimble and organized. I have no need to see product photos from a review shoot alongside portraits from a wedding job.
Lightroom CC’s import and export options are also woefully limited (there aren’t even keyboard shortcuts to bring up the import and export windows). You can’t add any metadata on import, and the only filetypes available on export are original or JPEG. And for the latter, the only control you have is setting the long dimension; there is no ability to set the amount of JPEG compression. You can’t even choose to name and sequence the files on export.
Moreover, not that this exactly matters given the lack of options, but export presets are completely gone. This is bad news for me, as I used various presets in Lightroom Classic for different purposes, from outputting photos sized to Digital Trends’ standards, to highly compressed files for social, to full resolution images for archiving.
But that’s the thing: Lightroom CC wants you to keep all of your photos in the cloud, and use its built-in sharing options to share images and albums with other people. If you continuously archive your work to an external drive and clear it from your catalog, you’re not going to buy upgraded cloud storage plans from Adobe. But I have no need to keep all of my images accessible in the cloud past their due dates. Once I deliver on a job, I’m out — archive, backup, delete. Lather, rinse, repeat.
You could buy several 2TB hard drives for the cost of a single year of the 2TB cloud plan.
To be sure, it’s not impossible to export and delete images from Lightroom CC. The program is just set up in a way that makes doing so less convenient than leaving them where they are. Aside from the annoyance of scrolling through old photos and albums I no longer actively need, this might not be a huge problem — except that cloud storage is also expensive.
Adobe offers several different pricing options, but the standard $10-per-month Photography Plan is arguably the best deal. With it, you get Photoshop, both versions of Lightroom, access to the (really quite cool) Spark mobile apps, and 20GB of cloud storage. It appears that new users can buy into a 1TB plan for $20 per month, but for whatever reason, I was able to upgrade to it for just $15 per month. This is a pretty good deal. A 1TB Dropbox Plus plan, for comparison, is $8.25 per month — and doesn’t come with, like, seven programs.
However, if I wanted to jump up to 2TB — which I would need to, if I wanted to keep even a majority of my photos in the cloud — the price leaps to $30 per month, which makes no sense whatsoever. I could buy several 2TB hard drives for the cost of a single year of the 2TB cloud plan. Call me old school, but given that I don’t have a need to access all of my photos all the time from any device — and that paying for that ability would be considerably more expensive than backing up files locally — there just doesn’t seem to be incentive to do it.
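For the skeptical, the comparison works out like this. The plan price is the $30-per-month figure quoted above; the $60 drive price is an assumed street price for a 2TB external hard drive, used purely for illustration.

```python
# Rough cost comparison: Adobe's 2TB cloud tier vs. buying 2TB drives.

CLOUD_2TB_PER_MONTH = 30   # USD, quoted price for the 2TB plan
DRIVE_2TB_PRICE = 60       # USD, assumed street price of a 2TB drive

yearly_cloud = CLOUD_2TB_PER_MONTH * 12
drives_per_year = yearly_cloud // DRIVE_2TB_PRICE

print(yearly_cloud)      # 360 dollars per year in cloud fees
print(drives_per_year)   # buys 6 separate 2TB drives instead
```

Even if the assumed drive price is off by a wide margin, a single year of the cloud plan still pays for multiple local drives.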
So why not stick with Lightroom Classic?
Here’s the thing: As much as I complain, the truth is I really like Lightroom CC. It is the modern Lightroom I’ve been waiting for Adobe to build for years. The user interface is beautiful and responsive, the editing and organization are more streamlined, and it’s not full of things I don’t need. It offers a much more enjoyable experience than Classic, and while I certainly don’t require cloud access to all of my photos, it is nice to not have to bring an external hard drive with me when I’m on the road.
What’s more, if a client makes a request for an image while I’m out with just my phone, I don’t have to wait until I’m home to deliver. I can pull it up in the iOS app, make any quick edits, and send it off right there before my latte even gets cold. You can even access your Lightroom CC library from any computer via the web app. Editing Nikon RAW files inside of Google Chrome feels a bit like magic.
Sure, Lightroom CC still puts annoying limitations on users — the lack of export options is particularly irksome — but none of those are enough to make me miss Lightroom Classic; not enough to go back, anyway.
My only other complaint is that I simply can’t afford to use Lightroom CC the way it was meant to be used, by storing all of my images in the cloud. Sure, I don’t have a real need for this, but for some people it would really be a simpler solution, and it’s just a shame that the cost of cloud storage will prevent people from using Lightroom to the best of its abilities. For now, the 1TB plan I can afford is sufficient to hold my current working projects, and gives me time to back up files locally before removing them from Lightroom.
It’s not a perfect solution, but what program is perfect? For me, Lightroom CC has reached the point where its imperfections matter less than Lightroom Classic’s. But this isn’t about picking the less bad option; Lightroom CC looks and feels like the future, and I am hopeful that most of my lingering concerns will be addressed in updates down the road.
Of course, that still won’t help with the pitiful upload speeds that pass for broadband at American ISPs. But as more and more consumers begin relying on cloud services, perhaps the increased demand for faster uploads will push those ISPs in the right direction. At least, we can hope?
- If Lightroom is still slow for you, Adobe promises help is on the way
- How to update your Gmail picture
- With Mosaic, you can redesign your Instagram grid without the commitment
- How to take photos of the moon
- The best free photo-editing software
Final Fantasy XV is a gorgeous game. The art design, the detail, every square inch of this game is a feast for the eyes — if you have the right hardware. Unlike its console counterpart, finding a balance between visual quality and performance is a bit of a chore, but we have a few tips that should help you get the most out of the game.
First up, let’s talk about how we tested Final Fantasy XV. The system we used is a desktop PC with an AMD Ryzen Threadripper 1920X 12-core processor. That’s a lot of horsepower, and we’re aware that it’s pretty far outside the norm for all but the most high-end desktop PCs, but we use it for a reason. It’s among the quickest processors you can get your hands on, which means our results won’t be bottlenecked by a slow processor. That’s also why our testing PC has 32GB of RAM, and a lightning-fast 512GB SSD. We want to remove all the speedbumps that could trip up our results.
That means the results we discuss below will be almost entirely dependent on the graphics card, arguably the most important part of any desktop gaming PC. The graphics cards we tested span every price category, as we wanted a wide sample to find out how well the game runs on even entry-level hardware.
On the Nvidia side, we tested four cards spanning the GTX lineup, including two Ti-grade graphics cards: The GTX 1080 Ti, GTX 1070 Ti, GTX 1060, and GTX 1050.
The red team, AMD, has a slightly larger catalog of current-gen graphics cards, so we picked four of our favorites — the RX Vega 64, RX 580, RX 570, and RX 550. For those of you keeping score, that’s a high-end card, two mid-range cards, and an entry-level card. It’s a nice sample from each tier of AMD’s graphics card lineup.
Presenting the presets
Every game’s preset settings differ, but it’s always helpful to take a look at what amounts to the developer’s suggested graphics settings. Like most games, FFXV offers four presets — Low, Average, High, and Highest. Here, there’s a clear difference at each level, but pay extra-close attention to Noctis’ clothes.
FFXV’s bro-squad is the game’s heart and soul, so it’s no surprise they receive lavish detail. Turning up the game’s graphics makes those extras obvious. Buttons, zippers, and other pieces of flair become visible at High detail, and really stand out at Highest.
The realism of each character’s clothing also improves dramatically as detail is ramped up. At Low detail, the team’s leather jackets look fuzzy and lack fine detail, which makes them look like poorly built fakes purchased from eBay. Average is a big improvement, adding fine details that provide a better sense of depth and texture. At Highest, those details become crisp even when close to the camera.
FFXV’s bro-squad is the game’s heart and soul, so it’s no surprise they receive lavish detail.
Low and Average detail also suffer from a lack of shadow detail and depth which, combined with lower texture resolution, hurts the overall presentation’s contrast. Objects further from the camera, like the diner in the background, look flat and washed-out at Low and Average. The High and Highest settings add shadow details that noticeably improve the game’s look.
Still, FFXV tolerates its lower presets with grace, and some elements suffer little from downgrades. Each character’s face, flesh tones, and hair see less degradation than we might’ve expected, even at Low detail. The game’s strong art design helps, as both characters and environments have readily visible themes that don’t rely on fine details to express themselves.
Obviously, it’s best to play at the highest detail possible, but we think FFXV looks great even at Average detail, and even Low is alright, though its texture resolution suffers. This, as you’ll soon see, means the game is enjoyable on a wide variety of hardware.
Let’s start with the best: 4K
Guess what? Final Fantasy XV looks great at 4K. Not a surprise, but it’s worth saying because this game is a sight to behold with every setting — resolution included — pushed to its absolute maximum. However, running the game at 4K requires some serious hardware. In our test rig, even our Nvidia GeForce GTX 1080 Ti had trouble keeping up with Final Fantasy XV’s intense and richly detailed visuals at 4K. Let’s look at the numbers.
Right here, performance breaks along predictable lines. The 1080 Ti is in the lead by a sizable margin, while the 1070 Ti and AMD Radeon RX Vega 64 achieve playable framerates, if only barely.
There are a couple of things we can learn from these results right off the bat. First, look at the GTX 1050 and RX 550. Neither card can run this game in 4K. We could barely get in and out of the game because it ran so slowly on both cards. It was a slideshow. It’s worth mentioning because our GTX 1050 and RX 550 cards both had 2GB of video memory, and that’s definitely not enough to handle FFXV at this resolution.
Unsurprisingly, Final Fantasy XV looks especially great at 4K.
High-end cards like the RX Vega 64 and GTX 1080 Ti have a lot more than that — the Vega 64 has 8GB, and the 1080 Ti has a whopping 11GB. The performance gap between these sets of cards tells us that this game not only requires a lot of graphical horsepower, but also a sizable amount of video memory.
That’s why cards closer to the mid-range like the 1070 Ti and 1060 have an easier time running the game at 4K on Average or Low settings. They have enough video memory to handle those assets like textures and models, alongside the rendering horsepower to keep the game running smoothly.
In the end, if you’re looking to run FFXV at 4K, you’re going to want to stick with upper tier of graphics hardware. The lowest you’ll want to go is the GTX 1070 Ti or Radeon RX Vega 64.
1440p is far more forgiving
At 1440p, the performance picture opens up a bit, and that’s good news if you don’t have a GTX 1080 Ti lying around. This resolution is a great middle ground. You can still enjoy crisp image quality, but you give yourself a ton of wiggle room to keep framerates high.
Look at those framerates! At 1440p Final Fantasy XV still looks great, and it’s easy enough on mid-range hardware that you won’t be stuck in a slideshow every time you draw your sword. Yes, your sword materializes in a shower of blue sparkles every time you draw it. It’s a whole thing.
Looking at our test data, the only cards that couldn’t run the game at 1440p were the GTX 1050 and RX 550. All the others were more than capable of maintaining an enjoyable framerate at Medium or High settings. Even the mid-range GTX 1060 maintained a playable framerate of 36 FPS at 1440p, at the Highest detail preset. The RX 570 had a bit of a tougher time, but plays the game just fine at High settings, and you won’t even notice the loss in detail.
1080p, the people’s resolution
Finally, we come to the most common resolution – 1080p. Here, almost every card in our test suite managed a playable framerate most of the time. Well, with one exception. Let’s get to the numbers.
Naturally, the GTX 1080 Ti, GTX 1070 Ti, RX Vega 64, and RX 580 had no problems here. They just walked right through to the Highest detail setting and asked, “Is that all you got?”
Even the mid-range cards, starting with the GTX 1060, did well enough. At the Highest detail preset, the GTX 1060 hit an average framerate of 49 FPS, and the RX 570 hit a just-barely-playable 29 FPS at the High preset.
The entry-level cards were a mixed bag, though. The GTX 1050 managed a comfortable framerate, 31 FPS, at the lowest detail setting. The RX 550 barely managed 24 FPS on average at the same detail setting. It’s a disappointing result for anyone with an RX 550. If you’re one of those people, try stepping your resolution down to 720p — you should be able to manage a decent framerate at that resolution.
Outperforming the presets
All right, let’s dig into the nuts and bolts of the graphics menu. Like most games, you’re going to find tons of options here. Each one controls a specific part of the game’s visual palette. On their own, most of these settings will only impact your performance a small amount, but taken together, they can make your game look great, or run poorly, depending on your hardware.
If you’re not comfortable tweaking some of these settings, we have some good news for you. It’s not worth it.
No, you read that right. This is our first performance guide where we would actually recommend you stick with the presets if you want the best performance and the best graphics your hardware can handle.
Each preset is so well-tuned, we couldn’t improve on them.
Square Enix really did their homework here. Each preset — Low, Average, High, and Highest — is custom-tuned to such a degree that we couldn’t actually improve on them despite our best efforts. We went through each setting and measured out precisely how big an impact each one had on overall performance.
In the end, we found that the built-in quality presets mitigate those performance hits better than anything we could put together. Still, there are a few settings you should keep an eye on if you find you’re hitting performance snags.
Riding the TRAM
TRAM, or Texture RAM, is one of the biggest performance hogs. Turned all the way up, we lost 11 percent of our overall framerate. This setting governs how much of your video memory is earmarked for game textures, which is why you’ll definitely notice when it’s turned up or down.
Look at how detailed every single piece of Noctis’ outfit is at the Highest TRAM setting. His buttons have little skulls on them, his t-shirt has skulls on it, and even the many extraneous zippers on his outfit have an impeccable level of detail. Stepping down the quality to High, you lose a little detail, but the biggest hit comes at Average. Here, you lose almost all of the fine detail, and moving down to Low, you lose all but the barest suggestion of these details.
What we do in the Shadows
Second place, as usual, goes to the Shadows setting. Like TRAM, this one affects nearly every single frame of Final Fantasy XV, so it’s no wonder that when it was turned all the way down, we saw a 10 percent bump in performance.
Look at those friends, standing in our shadows, messing up our screenshots. They’re actually doing us a favor. Keep an eye on the edges of the shadows as you scroll through the screenshots. At the Highest setting, they’re sharp, detailed, and lush. At the Low setting, they’re fuzzy and amorphous — though they don’t look awful.
In the end, Final Fantasy XV will look great pretty much no matter what you do to it. Even at the lowest settings, it’s still a gorgeous game, and that’s a credit to stellar art direction and a richly detailed world. If you’re having trouble maintaining an acceptable framerate, try stepping down to the next-lowest preset; chances are it will offer better performance and visual quality than digging into the settings yourself.
It’s good the game’s art holds up at lower settings, because you may need to use them. That’s particularly true if you want to run at 60 frames per second. Falling back to 30 frames per second shows some mercy on your hardware.
We can certainly recommend FFXV if you want a graphical showcase, but be prepared. This game can challenge even the quickest PCs.
- The best graphics cards
- Dell XPS 8930 review
- ‘Far Cry 5’ on PC will support multi-GPU configurations, 4K at 60 fps
- Dell Inspiron 15 7577 Gaming Review
- Dell Inspiron 5675 gaming desktop review
Google Lens turns a smartphone camera into a search tool, handling tasks from reading text to recognizing objects and landmarks — and those capabilities are now expanding beyond Google Pixel phones, with wider Android availability and a promised iOS launch, too. On Monday, March 5, Google announced expanded features for Google Lens inside Google Photos on Android that allow the smart camera to save business cards and recognize landmarks. Google says that the tools are also coming soon to iOS users.
Google Lens could already recognize landmarks and save business cards, but before the latest Google Photos update, the feature was only available on Google Pixel phones. The latest update brings those features across Android devices.
With the update, taking a photo of a business card and heading into Lens will create a contact using the information from the card. The app uses text recognition to import the data from the image, saving users from manually typing out the information on the card.
The second new addition for non-Pixel Android users is the ability to recognize landmarks. Inside Lens, the camera will identify popular landmarks, drawing on Google Search data to provide information like a description and even hours of operation. Lens will also pull up reviews of the location and, if that’s not enough, Google Search results on the landmark.
Google’s tweet announcing the new features also confirms a Mobile World Congress announcement that the smart camera mode will also be coming to Google Photos on iOS. Google hasn’t yet shared a timeline for when the update will be available in the App Store.
First available only on Google Pixel phones, Google Lens uses a smartphone camera (or existing photographs) to give the Google Assistant “eyes.” Machine learning allows Lens to identify objects and pull up related search data for when you just can’t think of the name of that flower or another object. The feature can also scan movie posters, barcodes, and book and album covers. Lens can also read text, which lets you skip typing out a link by simply taking a photo of it.
Android users can find the feature inside Google Photos after an app update, while on Pixel phones, the feature is also inside Google Assistant.
- How to use Google Lens to identify objects on your Pixel smartphone
- Get ready for more AR apps — Google brings ARCore to version 1.0
- Everything you need to know about Android 8.0 Oreo
- Everything you need to know about the Google Pixel 2 and Pixel 2 XL
- Insert Stormtroopers into your life with Google’s new AR stickers for Pixel
74% of the world’s population is now supported by Gboard.
Gboard has been my go-to Android keyboard of choice for well over a year now, and a lot more people will now be able to take advantage of its many features thanks to new language support.
Version 7.0 of Gboard is rolling out to the Play Store now for all users, and with it comes official support for Korean, simplified and traditional Chinese, and 20 other languages – including Adlam, Manx, and Maori.
These new additions mean Gboard now covers more than 300 languages, spoken by 74% of the world’s entire population.
Along with the new languages, v7.0 for Gboard also brings auto-complete suggestions for email addresses, a universal search that lets you browse through emojis, stickers, GIFs, and more simultaneously, and the ability to have multiple keyboards selected at once.
Download: Gboard (free)
Select Android phones will also be able to access Lens via the Assistant.
When the Pixel 2 launched last October, one of its exclusive software features was Google Lens. Today, Google Lens is escaping the clutches of the Pixel brand and expanding to all Android devices through the Google Photos app.
To access Google Lens, simply open Google Photos, select the picture you want to use, and then tap the Lens icon that’s in between the trash and edit options. Once you do this, Google Lens will scan your picture and show any information that’s relevant to it.
Google Lens can currently identify buildings/landmarks, company logos, cat/dog breeds, text, paintings, movies, etc.
In addition to this, Google also announced at MWC 2018 that certain phones from Samsung, Huawei, Motorola, LG, Nokia, and Sony would be able to access Lens via the Google Assistant. There’s still no ETA as to when that’ll happen, but in the meantime, Google Photos has your back to get your Lens fix.
Download: Google Photos (free)
This weird square might be the coolest thing Google has done in a long time.
“Okay, but what is it?” my friend asked for the third time during my description of the small teal square I was now actively fidgeting with. This was the third time at this party I’d run into the same problem. Someone would catch me setting Google Clips up somewhere in the room, shoot me a quizzical look, and, when I didn’t offer an immediate answer, ask what I was up to. My friends are all used to me bringing some new gadget to a party to play with, whether it’s a 1W blue laser that looks like a lightsaber or a VR rig to show people what it’s like to shoot zombies drunk. Yet this time, with nothing but this small camera from Google in my hand, I found myself unable to offer a simple answer that would satisfy the person asking.
Long answers for what Google Clips is I can handle. Clips is a camera from Google with AI baked in to take pictures and videos automatically. You set it somewhere in the middle of something interesting, and when you come back for it later there will be memories captured that you otherwise would have missed. It’s not a constant recording; you only get the bits Google’s AI thinks were important. It’s a camera you have basically no control over, so you can enjoy the event you’re supposed to be enjoying instead of walking around with your phone in front of your face the whole time.
This explanation prompted an important follow-up, one that takes even more time to answer: “Does it work?”
See at Best Buy
The Automatic Camera
Google Clips Hardware
There’s really not much to Google Clips. It’s a little square that fits in the palm of your hand, with a camera lens on one side and little else. There’s a USB-C port for charging, a single physical button under the lens, and three LEDs under the white plastic to let you know when the camera is on and doing something. To turn it on, you twist the lens and wait for the lights to pulse. Once that happens, you put the camera somewhere and leave it. That really is it; your job as the human is complete. The rest is up to the AI. Google is the photographer here.
Google’s software seems to work in a couple of different ways. If the camera detects a ton of motion, it will save a clip of whatever just happened. If multiple faces are detected, it will save a clip of whatever just happened. Basically, the camera is always recording but only saves the stuff it thinks you will find interesting. All of this is done locally, with no connection to your phone or your data required. As “smart” things with cameras on them go, it’s remarkably privacy-focused. You are the only one with access to the photos and video captured by Clips. The AI heavy lifting all happens locally, and you can extract photos and video without ever being connected to the internet. The recorded content isn’t stored to your phone or in Google Photos unless you explicitly give permission.
All of that said, Google offers a way to “train” the camera to give you more of the things you might be interested in. If you sync your People and Pets collection from Google Photos, Clips will have a library of faces it knows are important to you. When one of those faces is detected, it will record even when it previously might not have. You can also manipulate Clips by using the physical button on the front of the camera. Use it to take a photo of someone, and Clips will identify that person as a priority for future recordings.
The camera itself is interesting. It’s a 12MP sensor with an ƒ/2.4 aperture and a 130-degree Field of View (FoV) lens, which means everything it captures is wide. This presents an interesting challenge for getting photos and video you actually want to see. If you place the camera on a surface somewhere at the edge of a room, it will capture the whole room with no problem but everyone will appear far away. If you place the camera too close to the action, it risks getting knocked over or not being in the right place if the subject moves. The solution, for the most part, is to use the Live View mode in the Google Clips app so you can temporarily see what the camera sees to ensure the best placement. The inherent problem with that solution, however, is you are now using your phone to control a camera designed to encourage you to put your phone down.
With its 16GB of onboard storage and the promise of three straight hours of “smart capture,” Clips is designed to last long enough to capture memories from your average kids’ party. The onboard storage holds up to 1,400 files, so you can recharge and keep capturing without worrying about sorting through the memories until later.
Maybe a little too simple, but good at what it does
Google Clips Software
When the party has ended and you’re ready to wind down, you can open the Clips app on your phone and see what it captured. The app syncs to the camera even if it hasn’t been used in hours, as long as it is nearby. Once connected, you have two options for browsing: the unfiltered list of 21-second video clips the camera caught, and the AI-enhanced edits, which focus only on the things it found the most interesting. As you scroll through either list, the top file will auto-play so you get a quick look at what was happening.
The results are something you and your friends and family are guaranteed to love.
From here, you have a couple of options. You can save the file straight to your phone, where it will appear as a Pixel-style Motion Photo and be backed up to Google Photos. You can pick a single frame out of the video to save as a photo, which can then be edited like any other photo. Or, my personal favorite, you can edit the file right on the camera. The edit tool in the Clips app lets you crop and trim the video length as you see fit, then save to your phone as a photo, video, or GIF. Saving as a GIF works exactly like the Motion Stills app from Google, which is both familiar and a little confusing.
Despite its 12MP sensor, Google Clips exported at a variety of resolutions I had little control over:
- GIF – 0.3MP or 640×480
- Motion Photo – 6MP
- Video – 2.3MP
For comparison, my Pixel 2 will capture 8MP motion stills with the front-facing camera without issue. The app has some basic controls for image quality, and those settings were cranked all the way up to High for these clips. At lower settings, image and video sizes shrink considerably, which naturally means quality takes a further hit. That’s an issue, because image quality is already questionable in a lot of situations for this camera.
The ƒ/2.4 aperture means it struggles a bit in low or variable light, but Google’s AI does quite a bit to clean up noise before presenting the image to you. This isn’t the kind of thing I would rely on outdoors at night. In a dimly lit room, or a room where all of the light is shining right at the camera, I found the captured photos and videos were okay, but clearly nowhere near as good as the top cameras available on phones today.
What this app does, it does well. There’s really not a whole lot you’re supposed to do, if the whole point is to set up a camera and become part of the experience. Personally I’d prefer the ability to edit directly in Google Photos. Right now you have to jump around from the Clips app to Photos and back if you want to make edits to something you’ve saved. If I’m already syncing my data from Photos to this camera, it’d be nice to have an “edit in Photos” option that takes the image I’m playing with straight to the app to be edited.
Is this supposed to be so much work?
Google Clips Experience
I have four kids running around my house and I love going to small gatherings with my friends. On a high level, Clips seems like it was built for me. Being able to get out from behind the camera and participate without losing the opportunity to capture what could be a precious memory is right up my alley. Google Clips can deliver that experience, and when it does, the end results are fantastic. But it’s not quite as automatic or seamless as it probably could be.
In the week that I used Clips, I found myself constantly hunting for the best place to put it so it could record things. Parties often happen in multiple rooms, so I would have to move the camera to wherever the people were to get the things I wanted. Each time this happened, I went from active participant in the party to passive observer and event documentarian. Several of the things Clips recorded were of me trying to position Clips, and on more than one occasion my big dumb face happened to be blocking part of a really cool thing that was happening behind me.
But like I said, when it works the results are something you and your friends and family are guaranteed to love.
I have a collection of memories from Clips that I either couldn’t have or wouldn’t have captured, and that’s cool. Over time I got used to the “frame” for Google Clips and didn’t have to rely on the live view in the app as much, but there was never a point where I found myself truly setting the camera up and forgetting about it. It’s the kind of thing that makes me wonder whether I’m a fan of Google Clips because I like the underlying idea, or because it’s actually delivering on its promise and making me more present and focused on the moment in front of me.
Either way, this thing I have trouble describing is an incredible exploration of what we think about when taking photos. I enjoy the way Clips has challenged the way I think about what I capture and how, and find myself eager to explore this camera a lot more.
Should you buy it? It depends
At $250, Google Clips is expensive. This is something you buy if you want to try a new way to take photos, not something you buy if you want the best possible photo or if you enjoy the act of taking photos. Google is the photographer with this product, and if that is an idea which excites you I would recommend picking one up.
See at Best Buy
Available now across 66 countries.
As if you didn’t have enough music streaming options to choose from on your Alexa-powered speaker, yet another one is joining the mix as Alexa finally offers full support for Deezer.
In addition to basic voice controls for searching through songs, artists, and controlling the volume and playback of tunes, you can also say “Alexa, play Flow” to stream your Flow playlist of songs that Deezer creates based on what you listen to.
You’ll be able to use Deezer on all of Amazon’s own Echo devices, and if you own something like a Sonos One or another speaker that uses Alexa but isn’t made by Amazon, you’re also fully covered.
Deezer will be available across the 66 countries where Alexa is supported, and it’s rolling out to users now.
See at Amazon
Did you think Airbus’ Pop.Up flying taxi concept was a little drab? So did Audi. It teamed up with Airbus and Italdesign to unveil Pop.Up Next, a reworked version of the two-seat autonomous vehicle concept. The new version is more stylish than the mostly functional original, and borrows more than a few cues from Audi’s current design language. However, it should also be more practical — it’s supposed to be “significantly” lighter than the original, which is rather important for a hybrid passenger drone.
The core concept remains the same. Pop.Up Next revolves around a passenger pod that attaches to a skateboard-like platform that drives around town, but hooks up to a drone for times when flying would be more convenient. As a passenger, you’d stare at a 49-inch touchscreen that uses face recognition, eye tracking and voice recognition for interaction.
It’s still not certain if or when the concept will see production. There are any number of hurdles beyond the technology itself, such as legal frameworks, infrastructure (you’d want safe places for the ground-to-air transition) and, of course, business models. Right now, Pop.Up Next is more about showing what transportation could look like in a fully autonomous future.
Google announced today that Gboard for Android is getting a handful of new languages including Korean and both traditional and simplified Chinese. The company said that those have been the most requested languages for Android — they’re already on Gboard for iOS — and they join 20 others that are rolling out to Gboard for Android now.
Gboard launched in 2016 and it now supports over 300 language varieties. You can check out a full list here. Google said that while a few of the newly added languages are some of the most widely-spoken, it’s also working on including others like Manx, Maori and the Fulani alphabet Adlam that are not as well known.
The new languages are rolling out worldwide and should be available within the next few days.