Lenovo ThinkPad X1 Extreme vs. Dell XPS 15
Small laptops are great, but for a blend of productivity and portability, 15-inch models are hard to beat. Dell and Lenovo are two of the best manufacturers of laptops at that size, so picking between their flagship devices in that range isn’t easy.
Shouldering that task, we pitted the Lenovo ThinkPad X1 Extreme against the Dell XPS 15 in a head-to-head on design, performance, and portability.
Design
Neither the Dell XPS 15 nor the Lenovo ThinkPad X1 Extreme is a style icon compared to more form-focused laptops like the XPS 13 or MacBook Pro, but they’re hardly bad looking either. The ThinkPad X1 Extreme sports thinner bezels than its last-generation counterparts but retains the boxy look classic to the range. It mounts its camera in the top bezel, a much more flattering position for video calls than the XPS 15’s bottom-mounted camera, though that comes at the cost of a thicker top bezel.
The Dell alternative has thinner bezels and an altogether more modern-looking design than its Lenovo counterpart. It’s a little thinner than the ThinkPad but weighs more with the larger battery option. It has a silver exterior compared to the ThinkPad X1’s all-black coloring, which is a matter of personal preference more than anything tangible.
Both laptops feature great keyboards with crisp, responsive keys and comfortable layouts. The Lenovo keyboard is slightly more enjoyable over long sessions, although either would be fine for day-to-day use. The main difference here is the ThinkPad’s TrackPoint, which some people still swear by.
The XPS 15 sports a number of ports on its flanks, offering up a pair of USB-A 3.1 Gen 1 ports (fast, but not that fast), an HDMI 2.0 output, a single Thunderbolt 3-compatible USB-C connector, and a 3.5mm headphone jack. The ThinkPad sports a slightly more expansive selection, with two Thunderbolt 3-compatible USB-C ports, a pair of USB-A 3.1 ports, an HDMI 2.0 output, a smart card reader, an SD card reader, a 3.5mm headphone jack, and a network extension port which, combined with an adapter dongle, can be used for Ethernet connections.
Performance
The ThinkPad X1 Extreme has a starting price of $1,860 and offers an 8th-gen Intel Core i5-8400H CPU, 8GB of DDR4 memory, an Nvidia GTX 1050 Ti graphics chip, and 256GB of M.2 SSD storage. All of that powers a 15.6-inch 1080p IPS display that can hit a brightness of 300 nits. Thanks to some surprisingly favorable online sale prices at the time of writing, you can spend less than $100 more to get 16GB of RAM, 512GB of SSD space, and a Core i7-8750H CPU. Other upgrade options include a more powerful Core i7-8850H CPU, up to a terabyte of PCIe SSD storage, and a 4K display. The most expensive model costs $2,834.
The XPS 15 has a much more modest starting price of $1,000 for its entry-level model, though the hardware configuration is less impressive too. It has an 8th-gen Intel Core i5-8300H CPU, 8GB of DDR4 memory, on-board Intel UHD 630 graphics, and a terabyte of hybrid HDD/SSD storage. For $1,400 you can upgrade to a much more capable Core i7-8750H CPU, 256GB of SSD space, and a GTX 1050 Ti graphics chip. All models apart from the very top ones come with a 1080p display, with options for more storage space and memory throughout the range.
The best configuration is $2,900 and comes with an Intel Core i9-8950HK CPU, 32GB of DDR4 memory, a terabyte of PCIe solid-state storage, and a 4K display.
The XPS 15 certainly offers more value at the lower end, with comparable hardware to the entry-level ThinkPad for a few hundred dollars less. However, once you get up to around the $2,000 mark, the specifications and costs even out, and the two are far more comparable in terms of bang for your buck. Our review configurations both had the Core i7-8750H CPU, and the results slightly favored the Lenovo laptop. Its storage write speeds were also much faster.
The Dell laptop’s display has a better contrast ratio — opting for the 4K panel nets better color accuracy too — but the ThinkPad has HDR support, which certainly gives it the edge for supported media. Games like Battlefield 1 look stunning on the Lenovo notebook. That said, though neither of these laptops is really designed with gaming in mind, their combination of a powerful CPU and an entry-level dedicated graphics chip makes them more than capable. Don’t expect to max out AAA games at 4K, but indies and lower detail settings are more than playable at decent frame rates.
Portability
No 15-inch laptop is as portable as some of the smaller form-factor alternatives out there, but the two laptops in this head-to-head aren’t blocky workstations by any means. Indeed, Lenovo has gone out of its way to make the ThinkPad X1 Extreme much sleeker and more streamlined than the other laptops in its professionally targeted range. It measures 14.24 x 9.67 x 0.74 inches and weighs just 4.06 pounds with the 4K panel. Opt for the 1080p version and you can shave off another third of a pound and 0.02 inches of height.
The XPS 15 is definitely the trimmer device, at 14.06 x 9.27 x 0.66 inches (0.45 inches at its thinnest point), but not by a huge margin. It is a little heavier, though, weighing 4.5 pounds with the larger battery option.
That battery comes in at 97 watt-hours, with a much smaller 56-watt-hour option in some configurations. The version we tested came with the larger of the two and lasted just over 14.5 hours in our video loop test. The Surface Book 2 might beat such a figure, but the ThinkPad falls far behind, with its 80-watt-hour battery delivering just five and a half hours of video looping. That’s a respectable result in isolation, but the XPS 15 is just vastly more efficient.
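Those two results imply a big gap in average power draw. As a rough back-of-the-envelope check (battery capacity divided by runtime, ignoring real-world variables like screen brightness and battery wear), the figures quoted above work out like this:

```python
def avg_draw_watts(capacity_wh: float, runtime_hours: float) -> float:
    """Average power draw implied by a battery rundown test."""
    return capacity_wh / runtime_hours

# Figures from the video loop test above
xps15 = avg_draw_watts(97, 14.5)     # roughly 6.7 W
thinkpad = avg_draw_watts(80, 5.5)   # roughly 14.5 W

print(f"XPS 15: {xps15:.1f} W average draw")
print(f"ThinkPad X1 Extreme: {thinkpad:.1f} W average draw")
```

By this crude measure, the ThinkPad drew more than twice the power of the XPS 15 during the test, which is why a modest capacity gap translates into such a large runtime gap.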
Efficiency and style trump grunt
In our head-to-head testing of comparable hardware configurations, the Lenovo ThinkPad X1 Extreme comes out on top, if only by a little. It also has a better selection of ports, but there’s no denying that it is the more costly of the two — especially at the lower end. It also has much weaker battery life, although few laptops can compare with the XPS 15’s stellar efficiency.
The ThinkPad is certainly worth considering if you like its more professional look and feel and its gorgeous HDR display, but the XPS 15 remains our darling of the 15-inch form factor. It’s the complete package, offering great performance in general usage and gaming, long life on a single charge, and good looks too.
Overall winner: Dell XPS 15
Apple Has Reportedly Acquired Asaii, a Music Analytics Platform ‘Able to Find the Next Justin Bieber’
Apple has acquired San Francisco-based music analytics startup Asaii, according to unnamed sources cited by Axios. The deal, which has not been confirmed by Apple, was reportedly worth less than $100 million.
Asaii built tools that allowed music labels to discover, track, and manage artists using machine learning. The platform pulled data from social networks such as Facebook, Twitter, and Instagram, and streaming music services such as Apple Music, Spotify, YouTube, and SoundCloud, to find hidden talent.
Asaii offered two products specifically: a music management dashboard for A&R representatives to quickly scout and manage talent, and an API for music services to integrate a recommendation engine into their platforms.

“Our machine learning powered algorithms finds artists 10 weeks before they chart,” the startup’s website states. “Our algorithms are able to find the next Justin Bieber, before anyone else,” another page claimed.
The acquisition will enable Apple to bolster its content recommendations to users, and help it compete with Spotify’s efforts to work directly with smaller artists and music labels, according to the report. Apple Music and iTunes are likely to benefit from Asaii’s machine learning algorithms.
Asaii was founded in August 2016 by Sony Theakanath, Austin Chen, and Chris Zhang, who between them previously worked at Apple, Facebook, Uber, Salesforce, and Yelp. As of October 2018, all three work on the Apple Music team at Apple, according to their LinkedIn profiles.
In an email to customers shared by Music Ally last month, Asaii said it would be shutting down operations on October 14, 2018.
Last month, Apple announced that it completed its acquisition of Shazam, a popular music recognition service that can identify the names and lyrics of songs and music videos. Shazam could be more tightly integrated into Apple products and services as a result, ranging from Apple Music to Siri.
Meet the deaf gamers raising awareness for games accessibility
There’s a mission early in the first Destiny game where you are plunged into darkness while fighting a group of enemies called ‘The Hive’. It’s about an hour in and requires players to listen closely for nearby enemies, since the game’s torch and motion sensor are limited. This may seem like a simple challenge typical of a first-person shooter, but for those who are deaf and hard of hearing, it’s an incredibly difficult section to play through and creates a near-insurmountable roadblock to the rest of the game.
“I died about 15 times before I realized I wouldn’t be able to do that part,” Susan, a deaf games critic, told me. “Being that it was maybe an hour into the game, I’d just wasted $60.”
Unable to find any information online that could’ve warned her in advance about how unfriendly Destiny was to the hard-of-hearing, Susan was inspired by her frustration to create a directory of her own. Working with her friend Courtney Craven, the two created OneOddGamerGirl.com, a games review site that specifically rates games on their accessibility.
An early mission in Destiny that can be especially challenging for deaf gamers. The hour-long quest requires players to listen closely for enemies in lieu of a limited-range torchlight and the game’s motion sensor.
Their system is simple: Susan plays for about an hour, notes all the parts she found hard, and Courtney replays those same sections to see if the issues are related to deafness. The two then play together with Courtney relaying all the stuff Susan’s missing because she can’t hear.
Time to set a universal standard
Their reviews are concise, identifying the very specific ways a game fails and succeeds at being accessible. The number of games that are readily accessible from launch is growing, but progress is uneven, with the most basic issues still commonplace.
“… Studios need to start bringing in testers or talk to people with REAL disabilities.”
“Currently subtitles – and only subtitles, not full captions – are the only thing that’s standard in most games,” Susan said. “They’d be much better if there was a universal standard for size (resizable is preferred) and an option for a text background. We’re seeing more and more games that do subtitles right, like the ones that released a patch after feedback that included the ability to scale subtitles.”
Captioning refers to using descriptive text for non-spoken sounds, such as gunfire or creatures growling. Games often build collectibles and mechanics around the sound of the environment – like the chiming nirnroot in The Elder Scrolls V: Skyrim – something that, without captioning, can effectively gate content from the deaf and hard of hearing.
Chris, a deaf streamer, names Mass Effect: Andromeda and Destiny as games that are terrible with subtitling; the former caused him eyestrain after an hour due to its small, unchangeable font sizes.
Chris streaming on Twitch as DeafGamersTV. DeafGamersTV
“To get accessibility in their games done right, studios need to start bringing in testers or talk to people with REAL disabilities,” he explains. “Then do your homework on how they can implement it in games.”
Under the title DeafGamersTV, Chris has been streaming on Twitch for several years, using the platform to spread deaf awareness and build a gaming community around the subject. His channel attracts deaf and non-deaf watchers alike, with his webcam and chat feed allowing him to communicate in sign and text to his viewers.
Chris gave a talk on accessibility at the Games Developers Conference in 2017. Ubisoft reached out to consult with him on improving their options for deaf and hard-of-hearing players. In general, his advice is to include more options across the board.
“Let people access the options menu before starting anything,” he advises. “People should be able to set the brightness, button remapping, subtitle settings, and other options for different accessibility needs. If this were to be a standard in gaming, then I think maybe more gaming companies would become more inclusive.”
A call to listen and get involved
Although some companies have expressed an interest in making their games more accessible, many others have turned a blind eye. That isn’t the case for David Tisserand, UR Process Manager at Ubisoft; Kait Paschall, Project Manager at Epic; and Karen Stevens, Accessibility Lead at EA Sports. All three are praised by Susan for their work advocating for the needs of differently-abled players, each taking part in the Twitter hashtag #a11y to form a better understanding of these communities. But they are few in a big industry.
“The most attention I’ve seen deaf gamers … is when that deaf group did the Destiny raid.”
“[David, Kait, and Karen] have been amazing allies for bringing better accessibility to games and always listen to feedback and are always looking to move game accessibility forward,” Susan said. “Then there’s the studios I, and many others, have reached out to after a game’s release and continually are ignored.” She refused to disclose names but one can gather the list is shamefully long.
Wider games media isn’t any better. Chris has all but given up on most of the standard outlets because they seldom highlight these topics in their criticism or include subtitles in their video content. Nowadays, he prefers to read Susan’s site alongside GameCritics.com, both of which include coverage on accessibility options in all of their reviews.
The truth is, coverage is rare and often tied to highlighting specific efforts or events. “The most attention I’ve seen deaf/hard-of-hearing gamers in general get is when that deaf group did the Destiny raid,” Courtney, OneOddGamerGirl co-founder, explained. This is in reference to the clan of deaf Destiny players who defeated the game’s Leviathan raid. While still an incredible moment worth celebrating, it’s clear that these issues and events, and the voices behind them, could and should be covered and amplified more often. That said, Courtney remains optimistic. “While that wasn’t specific to accessibility, it was a great opportunity to see that there are deaf gamers, which could help take accessibility more seriously.”
Susan, as OneOddGamerGirl on Twitter, recently highlighted Shadow of the Tomb Raider’s accessibility settings, lauding the language settings as ‘the most expansive set’ of options she’s ever seen. @OneOddGamerGirl/Twitter
The hope is that when these needs are highlighted and noticed, creators will start making efforts to include them. Visibility is key, and the more we persist in having these discussions, the more they start to have a real impact.
Multiple teams have reached out to thank Susan after reading her reviews and gaining insight into how they can improve their games. The response to her work, when people find it, has been largely positive, with the only dissent being the occasional “maybe games aren’t for you” comment, typically left by someone upset that she gave one of their favorite games a low score.
Still more work to do
When it comes to games that have the right idea, Fortnite and Minecraft are noted for their broad approachability. More recently, Spider-Man for the PlayStation 4 included size options for subtitling, as well as other mechanical features that make it more playable for the hard of hearing. When a major title like this is so inclusive, it tends to reverberate.
“Spider-Man’s inclusion of a choice of text size and background contrast is a really important step in driving it towards being a standard consideration,” Ian Hamilton, accessibility consultant and curator of the Games Accessibility Guidelines, told me. “Spider-Man himself has a nice in-built accommodation for deaf gamers too; his spidey sense, which can act as a visual cue to replicate information available through sound. When a game of Spider-Man’s popularity and critical acclaim does something [like that], others in the industry really do take notice.”
Ian Hamilton, accessibility consultant and curator of the Games Accessibility Guidelines. Ian Hamilton
Ian has worked in this area for years, trying to improve and broaden perspectives at an industry level. He spoke of a number of special interest and employee resource groups within big studios that help acknowledge these specific kinds of needs, and of more senior staff whose duties include researching accessibility. It’s not much, but it’s something he believes will grow with time.
For now, Susan, Courtney, Chris, and Ian are all in agreement that the most important thing any studio or creator can do is start — start listening, start researching, start trying.
“This is 2018 and technology continues to evolve and these companies need to take advantage of it,” Chris stated. “They just need to listen to know what’s wrong and what to improve.”
“The easiest thing would be for developers to spend some time talking with our community on Twitter,” Susan said. “There’s so many and we all have valuable input, but it seems like it’s the same people over and over that are interested in talking to us.”
Police Told to Avoid Looking at iPhone Screens Locked With Face ID
Police in the United States are being advised not to look at iPhone screens secured with Face ID, because doing so could disable facial authentication and leave investigators needing a potentially harder-to-obtain passcode to gain access.
Face ID on iPhone X and iPhone XS attempts to authenticate a face up to five times before the feature is disabled and the user’s passcode is required to unlock the smartphone.
Elcomsoft presentation slide talking about Face ID (image via Motherboard)
Given the way the security system works, Motherboard reports that forensics company Elcomsoft is advising law enforcement, “don’t look at the screen, or else… the same thing will occur as happened [at] Apple’s event.”
The note appears on a slide belonging to an Elcomsoft presentation on iOS forensics, and refers to Apple’s 2017 presentation of Face ID, in which Apple VP Craig Federighi tried and failed to unlock an iPhone X with his own face, before the device asked for a passcode instead.
Apple later explained that the iPhone locked after several people backstage interacted with it ahead of Federighi, causing it to require a passcode to unlock.
The advice follows a recent report of the first known case of law enforcement forcing a suspect to unlock an iPhone using Face ID. The action subsequently helped police uncover evidence that was later used to charge the suspect with receiving and possessing child pornography.
In the United States, forcing someone to give up a password is interpreted as self-incrimination, which is protected by the Fifth Amendment, but courts have ruled that there’s a difference between a biometric recognition system like Touch ID and a passcode that you type into your phone.
In some cases, police have gained access to digital data by forcing people to unlock mobile devices with their fingerprints. Indeed, before Face ID was in use, law enforcement was advised on how to avoid locking out Touch ID fingerprint-based authentication on Apple’s iPhones.
“With Touch ID, you have to press the button (or at least touch it),” Vladimir Katalov, CEO of Elcomsoft, told Motherboard. “That’s why we always recommend (on our trainings) to use the power button instead, e.g., to see whether the phone is locked. But with Face ID, it is easier to use ‘accidentally’ by simply looking at the phone.”
Here’s how to set up an alternate appearance for Face ID
Thanks to iOS 12, we received a number of welcome improvements to Face ID, the facial recognition sign-in option available on the iPhone X and newer models. One of the best is the addition of an “alternate appearance,” the ability to program in a second face for your iPhone to recognize.
This is an incredibly useful new option, whether you want to make sure that a loved one can open and use your phone, or just want your iPhone to recognize you with goggles or work equipment on (something Face ID is getting better at, but it can still prove challenging). We’ll show you how to set up an alternate appearance for Face ID right here with just a couple minutes of work.
Step 1: Navigate to Face ID & Passcode
Pick a spot with good lighting and no potential glare, and unlock your iPhone. Head to Settings (the gear icon), and look through the Settings menu until you come to Face ID & Passcode. Select it.
Step 2: Start the alternate appearance process
At this point, you will probably have to enter your passcode to continue. Once inside Face ID & Passcode, you should see options for enabling Face ID for App Store purchases, Apple Pay, password autofill, and more. It’s worth scanning through these options to make sure they are enabled or disabled as you prefer, especially if you are adding a second person to Face ID.
When you are ready, look below those options and you will see Set Up an Alternate Appearance. Select it to begin.
Note: This assumes that you have already set up your first Face ID face and a passcode. If you haven’t set up Face ID yet, you will see an option to “Set Up Face ID” instead; select that first. If you haven’t set up a passcode, you will be prompted to create one when opening Face ID & Passcode. We recommend creating a passcode, since having a reliable second method to unlock your phone is useful, especially if the camera ever malfunctions.
Step 3: Scan the face
Now you will need to scan in the alternate face. Whether it belongs to a loved one or is just yours with some obscuring clothing, get ready. A face portrait will appear, and your iPhone will instruct you to move your head in a circle to properly calibrate the sensor. Do this until the iPhone is satisfied and reports that the face scan is complete.
If you have trouble with this process, remember that your face needs to be centered and that your iPhone shouldn’t be angled away. You may need to find better lighting or readjust your position to improve the scan. It usually takes a couple of circles around to fully complete the scan.
When the scan is finished, you’re done. Face ID will now check against both face data sets and unlock for either of them. You can test the function immediately to make sure it works.
Step 4: When necessary, replace your alternate appearance
Now, when you go into Face ID & Passcode, you will see only a “Reset Face ID” option, which replaces the alternate appearance option. Be careful when choosing it: It will erase all your Face ID data, then ask you to scan in two new faces consecutively. However, it’s also the only way to get rid of an alternate appearance and replace it, so make sure both faces you want to scan are ready if you hit reset.
Adobe MAX 2018 Preview: What it is, why it matters, and what to expect
Comedian Jordan Peele, right, co-creator and co-star of Comedy Central’s Key and Peele, and Adobe’s Kim Chambers open Sneaks at Adobe MAX, The Creativity Conference, on Thursday, Nov. 3, 2016 in San Diego. (Denis Poroy/AP Images for Adobe)
Next week, October 15 through 17, creatives from around the world will flock to the Los Angeles Convention Center in California for the annual Adobe MAX conference. For anyone working in a creative industry, it’s kind of a big deal — this year’s speakers include Academy Award-winning filmmaker Ron Howard; musician and 5-time Grammy winner Questlove; actress and comedian Tiffany Haddish; photographer Albert Watson; designer John Maeda; and designer and illustrator Jessica Hische. (You can also sign up to watch it live online.)
Billed as “the creativity conference,” Adobe MAX hosts more than 300 educational sessions across various creative disciplines — but it also provides a stage for Adobe to announce and demonstrate the latest updates for its ever-growing suite of applications.
If you’re at all invested in the Adobe ecosystem, MAX is where you’ll get a glimpse into the future technologies the company has been working on. It provides a first look at the changes coming to the software that drives your creative workflow, whether that’s new features or entirely new apps.
Like a micro CES, the Adobe MAX show floor invites creative tech companies to showcase their latest products. Daven Mathies/Digital Trends
Adobe oversees a huge portfolio of software, with updates rolling out throughout the year, but the best reveals are always saved for MAX. In 2017, more than 12,000 people were in attendance when Adobe made one of its biggest announcements in recent history: the launch of a cloud-based version of Lightroom.
We don’t know what’s coming, but Adobe has left some clues. In September, it shared a sneak peek of a new and improved Content-Aware Fill feature said to be coming to Photoshop CC. Based on what Adobe has shared so far, the tool is about to get a whole lot smarter and more capable thanks to Adobe Sensei, the artificial intelligence that resides in the Creative Cloud.
Sensei took center stage at MAX 2017, and we expect to hear a lot more about it this year beyond Content-Aware Fill.
All the major apps, from Photoshop to After Effects, will likely be addressed, but we’re particularly hoping to learn more about Project Rush, an all-new mobile video editor with an emphasis on cloud storage, social integration, and cross-device compatibility. Rush shares some similarities with Premiere Pro but pares down the complexity to focus on editing for social media. One of its key features is an export option that automatically formats everything for sharing across multiple social networks. Adobe first teased the program at VidCon, and while the company hasn’t shared a launch date yet, it did say Rush would arrive sometime this year. With the calendar running out of months, further details at MAX wouldn’t be too surprising.
Beyond that, it’s a mystery, but we don’t have to wait much longer.
Silicon Valley just got a new automated farm where leafy greens are grown by machines
Did you hear the one about the Google software engineer who packed it all in to start a farm? No, it’s not the setup for a joke. Nor is it the premise for some quirky Sundance comedy, probably telling the story of a stressed-out programmer who rediscovers their happiness by moving to the country. It’s a real, honest-to-goodness farm, which just opened in San Carlos, around 20 miles outside San Francisco. Called Iron Ox, the farm aims to produce leafy greens — romaine, butterhead, and kale, alongside various herbs — at a rate of roughly 26,000 heads per year. Oh yes, and it’s staffed almost exclusively by robots.
“This is a fundamentally different way of approaching farming,” CEO and co-founder Brandon Alexander, 33, told Digital Trends. “Traditionally, the farming process means that you seed, you wait a few months, you come back, you harvest, and you distribute. That hasn’t changed a whole lot in hundreds, if not thousands, of years.” Until now, at least.
“This is a fundamentally different way of approaching farming”
Iron Ox’s indoor farm measures around 8,000 square feet. That makes it paltry compared to the thousands of acres occupied by many traditional farms, but, through the use of some smart technology, it promises a production output more in line with an outdoor farm five times its size. To achieve this, it has a few tricks up its sleeve. For starters, Iron Ox is a hydroponics farm, meaning it grows plants without soil, using mineral nutrient solutions in a water solvent. Unlike regular farms, hydroponic farms grow their produce in vertical and horizontal stacks, with every element minutely controlled through the use of glowing LED lights and jets of water to affect the crops’ size, texture, and other characteristics.
In place of a farmer, Iron Ox employs a giant, 1,000-pound robot called Angus. It’s Angus’ job to move the 800-pound, water-filled tubs of fresh produce without spilling them. A robot arm is used to tend the crops, making this the agricultural equivalent of Elon Musk’s automated Tesla factory in Fremont, California.
“We’ve taken a robotics-first approach to the growing,” Alexander continued, in what can only be described as an understatement. “Everything is designed with that in mind.”
Disrupting the family business
When he was a kid, Alexander was shipped off each summer to his grandfather’s family farm in the Texas and Oklahoma area. Looking back at it today, it’s a cherished memory. At the time, not so much.
“I’ll be honest: I hated it,” he said. “All my friends were going on vacation and I was the one who was stuck on a farm.” When his buddies were sleeping in, he was getting up at the crack of dawn. When they were on the beach, he was on a tractor. Years later, when he and his co-founder and CTO Jonathan Binney, 34, were busy planning out Iron Ox, he called his grandfather, now 83 and still running a farm, and told him about his plans for roboticizing the work that his family had done by hand for generations.
Iron Ox
But this isn’t a story about a guy who decided to take revenge for summers of hard labor by disrupting the industry. Far from it. Alexander has a deep respect for farming, evident from the reverent way that he speaks about a profession that has looked after his family for years.
“[My grandad is] technophobic; he doesn’t know how to use an iPhone [or about machine learning or computer vision],” Alexander said. “But when I explained what I was doing, he said, ‘This is inevitable.’ That kind of surprised me, but it shouldn’t. When he was a kid, and his dad was farming, they managed 40 acres. Now him and his crew are managing 6,000 acres. He’s seen the progression.”
Just-in-time farming
Farming isn’t an industry that’s at the forefront of many people’s minds in Silicon Valley. It probably should be, though, because the emphasis on farm-to-table produce is only growing. When Alexander and Binney speak to chefs, they regularly hear stories about customers wanting to know exactly where a particular bit of produce has been sourced from, or how old it is.
That typically gets an unsatisfactory answer in the U.S., where fresh fruit and vegetables travel an average of around 2,000 miles. “There are relatively few places that have the right conditions for growing,” Alexander explained. “Everyone else gets week-old produce.”
Iron Ox
Iron Ox aims to change that by building farms within easy reach of cities. Using its autonomous technology, customers can get fresh greens grown in their neighborhood. Better yet, they can get it year round, since an indoor farm isn’t subject to the same seasonal conditions as traditional farms are.
“We call this just-in-time farming,” Alexander said. The term is borrowed from manufacturing, where the approach was pioneered by automaker Toyota in Japan during the 1960s and ’70s. What makes just-in-time manufacturing special is that it focuses on making items to meet demand, rather than creating surplus in advance of need. It means less waste from overproduction, less waiting, and less excess inventory. That works well for cars, computers, or smartphones. The Iron Ox team hopes it will work great for crops, too.
A.I. which constantly monitors information relating to nitrogen levels, temperature, and the location of robots.
“In a traditional greenhouse, you’re committed to growing a thousand or tens of thousands of a particular varietal,” Alexander said. “Our system gives us the ability to fine-tune the nutrients for each crop. We’re only committed to growing a hundred of something at a time. That’s important. Previously you would have committed to, for example, kale. ‘Kale’s going great,’ you say. ‘Let’s go all-in on kale.’ But trends change. If we suddenly notice a big demand for purple bok choy or Italian basil, our system can adapt to that consumer demand very quickly.”
Overseeing the farm, like a green-fingered HAL 9000 from 2001: A Space Odyssey, is what Alexander calls “The Brain.” This is a cloud-based A.I. that constantly monitors information relating to nitrogen levels, temperature, and the location of robots. Over time, it will expand this to take into account data pertaining to food orders, or more general information about food-based trends.
Weighing up all this data, it can then make decisions about exactly what should be growing — and in what quantities — in each of the modular tanks.
The road from here
Right now, Iron Ox is starting to take chefs’ orders for the two dozen-plus varieties of leafy greens it is growing at launch. It aims to be in full production by the end of the year. This is still the beginning of the journey, but it’s one that Alexander and his co-founder are happy to be on.
“We had some pretty good, cushy jobs at Google and whatnot,” Alexander said. “We wanted to make sure that, when we took the next step, it was something we were passionate about. It’s not about staying passionate for one year; it’s about whether or not this was something we could put decades of our lives into. That’s a different metric, for sure.”
How does he feel about the impact of automation on jobs in the farming community as a whole?
“I think farming is a fairly unique space in this regard,” he said. “Agriculture is one of the few industries right now where they can’t get enough help. That was something that surprised Jon and myself when we first started. When we quit our jobs, we spent four months roadtripping California, talking to farmers. We talked to dozens of outdoor and indoor farmers. One of the questions we asked was ‘what’s your biggest pain point?’ 100 percent of them said that it was labor scarcity. They could not get enough help for their farms.”
Added to this is the fact that, in the United States, the average age of a farmer is 58. “It’s a bell curve distribution, and it keeps shifting over to older and older,” he said.
“There simply aren’t enough people wanting to do this”
Those jobs are not being replaced in equal numbers by the younger generation. “There simply aren’t enough people wanting to do this,” he continued. “And I don’t blame them. It’s hard, back-breaking work. It’s just where it’s going.”
Iron Ox isn’t the only startup applying the latest technology to farming. Other companies and researchers are building self-driving tractors for farms, using CRISPR gene editing to improve the efficacy of crops, and building robots that are capable of picking a variety of fresh produce without damaging it. But Iron Ox’s business model nevertheless represents an enormous potential step forward for U.S. agriculture and the way that it works.
In 1820, more than half of the United States population lived and worked on farms. Today, fewer than 2 percent of the population do, with the overwhelming majority having moved to the city. Thanks to companies such as Iron Ox, people may no longer have to choose between farm and city. If people won’t leave the city for farms, then the farms will just have to come to them.
How to protect your iCloud account
If you haven’t heard, iCloud security is a hot topic these days. From claims that China infiltrated Apple with hidden spy chips (reports that Apple vigorously denies) to last year’s threats from the “Turkish Crime Family” regarding stolen account passwords, it’s understandable if you’re worried about how safe your iCloud data is.
You can learn more about how Apple uses end-to-end encryption, which has thus far kept iCloud largely safe from hackers. But there’s plenty you can do on your end to make iCloud safer and better protected as well. Here are the basic steps you should take to increase your iCloud security.
Step 1: Create a strong password
The password you use for iCloud is the same as your Apple account password. Apple requires this password to be at least eight characters long, use upper and lowercase letters, and contain at least one number, but we can do a lot better.
Reset your Apple ID password and make it as strong as possible. That means around 15 characters, both upper and lowercase letters, multiple numbers, and symbols. If you’re worried about remembering a random string of characters, a common tactic is to take a familiar phrase or word and exchange letters for numbers and symbols. However, if you want to invest time in a dedicated password manager, the software can come up with very strong passwords for you. Password managers are becoming increasingly important in today’s digital security environment, so if you don’t use one yet it’s certainly worth considering.
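If you’d rather not invent a strong password by hand, a few lines of Python can do it for you. This is just an illustrative sketch (the `generate_password` helper is our own, not anything Apple or a password manager provides); it builds a random 15-character password that satisfies the mixed-case, number, and symbol advice above.

```python
import secrets
import string

def generate_password(length=15):
    """Build a random password containing upper/lowercase letters, digits, and symbols."""
    symbols = "!@#$%^&*"
    alphabet = string.ascii_letters + string.digits + symbols
    while True:
        # secrets (not random) is the right module for security-sensitive randomness
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Redraw until all four character classes are present
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in symbols for c in password)):
            return password

print(generate_password())  # a fresh random password each run
```

Of course, a dedicated password manager does the same job and remembers the result for you, which is why we recommend one above.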
Step 2: Set up your security questions if necessary
If you haven’t visited your Apple ID in a while, you may not have gotten a chance to set up any security questions. These questions work just like the security questions for any thorough online security portal — you set a few specific questions about your life with answers that strangers would never know. Apple will ask these questions when you log into your Apple account or make big changes.
To find your security questions, log into your Apple account with your ID and password and look for the section that says “Security.” On the right-hand side of the page, select the Edit button to expand the section so you can examine the Security Questions heading. If you haven’t added any questions, you will see an option to “Add Questions.” If you have set questions up but want to check and refine them, you will see an option to “Change Questions.”
Note: Some people cannot see an option to set up security questions when they log into Apple ID. If you don’t see this option, you can skip this step: This happens when someone sets up two-factor authentication, which overrides the need for security questions and may erase them from your account info.
Step 3: Enable two-factor authentication
Apple used to offer “two-step verification” but upgraded to “two-factor authentication,” an effective method of making sure that the real you is accessing your account from one of your real devices. Basically, this authentication sets up a trusted device and/or phone number that Apple will send a verification code to when you try to log in from an unrecognized device.
If you haven’t already done so, turning on two-factor authentication is a simple process. If you have already logged into your Apple account online, you can go to the Security section and look at the section for Two-Factor Authentication, which will take you through the process of setting it up. You can also set up the authentication at any time on your iPhone by going to “Settings, Password & Security,” and enabling “Two-Factor Authentication.”
Again, remember that two-factor authentication will probably cancel out your security questions. We encourage you to set up security questions first so that they (hopefully) remain associated with your account in case support staff needs to verify your identity or something goes wrong with the authentication. However, you are perfectly free to skip right to the two-factor authentication if you want.
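Under the hood, the verification-code flow described above boils down to a server generating a short-lived one-time code, delivering it to a trusted device, and comparing it against what you type. Apple’s actual implementation is proprietary; the sketch below (with hypothetical `issue_code` and `verify_code` helpers) only illustrates the general pattern.

```python
import secrets
import time

CODE_TTL = 300  # codes expire after five minutes

_pending = {}  # account -> (code, expiry timestamp)

def issue_code(account):
    """Generate a 6-digit code and record it; a real service would push it to a trusted device."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[account] = (code, time.time() + CODE_TTL)
    return code

def verify_code(account, attempt):
    """Accept a code only once, and only before it expires."""
    entry = _pending.pop(account, None)  # pop makes the code single-use
    if entry is None:
        return False
    code, expiry = entry
    # compare_digest avoids leaking information through timing differences
    return time.time() < expiry and secrets.compare_digest(code, attempt)

code = issue_code("user@example.com")
print(verify_code("user@example.com", code))   # True
print(verify_code("user@example.com", code))   # False: codes are single-use
```

The expiry and single-use checks are why a code texted to your phone is useless to an attacker who finds it later, which is the whole point of the second factor.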
Step 4: Always sign out when not using your devices
Julian Chokkattu/Digital Trends
Finally, always be aware if you are signing into your Apple account on a public device or a device that isn’t yours. This isn’t a very good idea (especially when connected to guest Wi-Fi), but sometimes it may be necessary. Just remember to log out of your account when you are finished.
In a similar vein, don’t give out your Apple ID or password to anyone if you aren’t sure it’s an official Apple login or representative.
Huawei Mate 20 Lite review: Not so smart
The Huawei Mate 20 Lite is a curious case of the “lite” edition predating the “standard” edition, offering an interesting glimpse into the future for fans of the Mate line.
What’s the forecast? Let’s find out in our Huawei Mate 20 Lite review. (Kind of gave it away in the title though.)
Specs and looks
The selling point of the Huawei Mate 20 Lite is the AI goodness it offers, even with mid-range specs (and price, in theory). It’s a promise Huawei has been known to deliver on in the past, and it should translate to some interesting features in the camera department.

Speaking of which, that’s the phone’s other big selling point — a total of four cameras, thanks to an extra lens around the front. It’s got a 20MP f/1.8 lens around the back, and a 24MP f/2.0 lens around the front, both backed up by secondary 2MP depth sensors.
It also comes with 64GB of internal storage, backed up by either 4 or 6GB of RAM, and a 6.3-inch 19.5:9 IPS LCD 1,080 x 2,340 display. It’s got USB Type-C, NFC — something a lot of users will be happy to see — FM radio, a headphone jack, and a fingerprint sensor. There’s no splash resistance or wireless charging, but that’s par for the course for a mid-range device. The decent 3,750mAh battery easily saw me through a day.
On top of all that, it runs Android 8.1 Oreo with EMUI 8.2.

The phone’s metal and glass design is attractive. I’m not actually a fan of my navy version, or the pattern around the vertical camera module, but that’s very much a personal preference (does anyone else think the black and navy clash?). Despite that, I can tell it is well made for its £379 (~$500) price tag. Other colors include sapphire blue and platinum black.
It’s nicely reflective and feels solidly built, but as Android Authority‘s own Kris Carlon pointed out in his hands-on, it’s also a little on the generic side.

That said, this phone’s notch might turn some people off. It’s not just another notch — it’s a super-wide notch. This is a demonstration of OEMs’ true commitment to notches these days.
You might think having two selfie-cameras gives the perfect excuse to drop the divisive feature and return to a brow instead. Rather than go that route, Huawei chose instead to just make its notch r e a l l y w i d e. Cool cool.
As with other Huawei devices, you can at least turn it off in the settings if you prefer.
Huawei has chosen to make its notch really wide
This gives it an 81.7 percent screen-to-body ratio, and in my opinion, it slightly impacts the looks.
Performance
Performance is unfortunately where things start to fall down a little for this Huawei Mate 20 Lite review. This is not a particularly quick phone. It’s rare I feel the need to point that out these days.
The phone’s performance isn’t terrible, or anywhere near Honor 7s territory, but it is occasionally noticeable when you’re navigating the UI. Every now and then, an animation will stutter, or something will take just a little bit longer than it should, and it takes the sheen off the experience slightly.

My natural inclination at this point is to head over to a few benchmark apps and see just how the chipset holds up.
Except neither of the benchmark apps I usually use (Antutu and Geekbench) would install. This was weird since I had no problem with other apps.

Call me a conspiracy theorist, but maybe this has something to do with Huawei’s recent practice of gaming its benchmark scores.
Is it now just banning users from finding out? Or has it been banned?
We already know a bit about the Kirin 710 and can draw a few conclusions about the likely performance. This chipset is not quite as quick as the Snapdragon 660, though its energy efficiency may be better due to the 12nm manufacturing. It is ahead of something like the Helio P60, though not by a huge margin.
The Helio P60 powers the Realme 1, which I actually found performed a little quicker than the Mate 20 Lite in daily use. Whether that’s down to better optimization or lighter software, I’m not sure.
For gaming, the Mate 20 Lite relies on the Mali-G51 MP4, which doesn’t pack much punch. It’s not awful and you’ll certainly be able to play the majority of games. Just don’t expect to run things on the highest settings, or for the phone to be particularly future-proof.

Performance may not be helped particularly by the EMUI layer plastered on top of Android. If you’ve used an Honor or Huawei device before, you’ll know what to expect. It’s not my favorite Android skin, but I think it’s a step-up from ColorOS. Your mileage may vary.
Dual selfie camera and AI features
At the moment then, this Mate 20 Lite review finds the device very much on the back foot. The device will need to impress with its AI and dual lens features to have a chance of clawing itself back into contention.
Turns out two lenses up front combined with AI is a fairly potent combination, at least in principle. It promises improved facial recognition at any angle (AI ability to recognize your face + depth information), as well as some interesting AR features (like Animoji). It also could mean you get the same scene detection and general AI magic brought to photos on the front you get around the back.

Then there’s face unlock, which for me is actually among the slowest implementations of the technology I’ve ever experienced. It certainly doesn’t work at any angle, and it takes a very noticeable one- or two-second pause before it registers me when head-on.
I don’t wear glasses, but maybe I just have a weird face.
Maybe the YouTube comments are right and I am a hobbit.
A device with IR face unlocking like the Pocophone F1 is much quicker already and works in the dark. It’s cheaper, too.
I haven’t been terribly blown away by the general quality of the selfie camera’s photos. The bokeh mode is quite good at cutting me out of the image, but not noticeably better than devices sporting just one lens up front. The general performance of the front camera is also just okay, which is a shame given the high pixel count — something I always welcome on a front shooter.

The animoji were the only things that impressed me. These are the definition of “gimmick” (and not an entirely novel one either), but in fairness they work very well. I can raise one eyebrow and the little chameleon or whatever will do it too. It’s impressive how quick and accurate this is, even compared to the more expensive Note 9. I can’t see myself using it often, but it’s a good showcase for the tech. It’s a fun party trick, too.
This is me as a rabbit, and also a penguin, in case you’ve ever wondered how that might look.

Main camera

The main camera is fine.
I have a bit of a love-hate relationship with Huawei and Honor cameras. The big selling point these days is the AI scene recognition, and getting it on a mid-range chipset is a solid lure.
That scene detection is really hit or miss. A lot of the time it saturates images to a weird degree, making them look unnatural. On the Honor 10 and Honor Play, I noticed this improved over time, eventually becoming occasionally pretty impressive. Here that’s not the case. Several of my photos came out noticeably worse thanks to the AI mode — saturated to the point of being ugly.

Here the Mate 20 Lite accurately recognized this as food, then proceeded to make it look radioactive…
I wonder if the Kirin 710 is just learning everything again from scratch and will also improve with time, or whether it’s just not quite up to the task. It’s noticeably slower at spotting things like cars, which is understandable too.
Turn that scene recognition off and you have a pretty standard Huawei camera. Images are sometimes washed out, or have that strange warm-hue I’ve noticed on other Huawei phones. It also has moments of brilliance where it reacts very nicely to different lighting.

I thought this pic came out pretty nicely
Low-light performance isn’t particularly good. I put this against the Realme 2 Pro, seeing as I was reviewing both at the same time, and that much cheaper device was significantly better at taking photos at night. The Realme 2 Pro has a slightly wider aperture, but I’d still expect more here. The Mate 20 Lite’s darker shots come out a little smudgy and grainy in comparison.

Not fantastic low light performance
It would be remiss of me not to point out that the Huawei Mate 20 Lite wins major points for its incredibly feature-rich camera app. Just like other devices from Honor and Huawei, you get cool stuff like document scanning, a full-fledged pro mode, time lapse, and (my favorite) light painting. While AI scene recognition might not always work as advertised, it’s at least fun to play around with and I am hopeful it will get better. The portrait mode on this side also works quite well.

It’s a fun camera to use and if you really work at it, you can get some good photos. However, this is not a “great” camera by any stretch. I wouldn’t rely on it if you take a lot of photos as part of your job, or as a serious hobby.
For more camera samples, check out the folder here.
Value and closing thoughts
If it seems like I’ve been a bit hard on the phone during my Huawei Mate 20 Lite review, that’s probably due to its price.
Like I said, the Mate 20 Lite will set you back about $500. I just can’t see any justification for paying that much.

For significantly less money you could get yourself a Pocophone F1 and you’d experience drastically faster performance — they’re not even in the same ballpark — as well as much stronger camera performance and loads more features. The F1’s only drawback is its plastic build.
If that matters a lot to you, for just $29 more, you could buy a OnePlus 6, a true “flagship” experience with a beautiful metal build, excellent AMOLED screen, and the same killer performance as the Pocophone or better. You could also get the Honor 8x, with mostly the same specs at a fraction of the cost, as well as a glass build. It uses microUSB and you lose a few megapixels in the camera department, but it’s nearly the same.

Heck, you could even get an Honor Play or Honor 10 for a lot less than this — devices from Huawei’s own sub-brand with all the same scene recognition, light painting, UI layer, and the company’s own flagship Kirin 970 chipset (flagship for at least a little while longer, anyway).
Offering AI smarts in a mid-range processor isn’t really all that impressive when you charge less for your own flagship processor elsewhere!

The only thing this phone has going for it really is the dual lens camera up front. Seeing as the selfie camera isn’t all that and the face unlock is slower than many other options, I just can’t really recommend this device to anyone.
The tech has promise; I’m hoping it will be better realized when we see the real Mate 20. This isn’t a bad phone; it’s just not a compelling option for the money, given what else is out there right now. The mid-range has become incredibly competitive, and you need to do a lot more to stand out.
This feels like a proof of concept more than something anyone should actually buy.



