How to enable cookies
Browser cookies may get a bad rep for helping companies track us online and for enabling intrusive advertising, but they’re also a core component of how many modern websites function. Without them, you would have to log in to every website every time you visit, which can be a real pain.
Although third-party cookies, or tracking cookies, can be blocked to help prevent targeted advertising, first-party cookies are mostly for convenience, and despite what your antivirus software might report, they’re not viruses or malware.
Most browsers enable cookies by default, though we certainly wouldn’t suggest you allow them to store any personal information if you’re using a public or shared computer. If you want to learn how to enable cookies, though, it’s pretty straightforward, no matter which browser you use.
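To see why first-party cookies matter, consider a quick sketch of the mechanics: a server sets a cookie once, and the client sends it back on every later request, which is how a site recognizes you without a fresh login. The Python sketch below uses the third-party requests library and the public httpbin.org echo service, both chosen purely for illustration.

```python
# Minimal sketch of how a stored cookie lets a site "remember" a client.
# Uses the third-party 'requests' library and httpbin.org, a public HTTP
# echo service -- both chosen purely for illustration.
import requests

session = requests.Session()

# First request: ask the server to set a cookie (httpbin echoes it back
# in a Set-Cookie header, which the session stores).
session.get("https://httpbin.org/cookies/set?sessionid=abc123")

# Second request: the session resends the stored cookie automatically,
# just as your browser resends a login cookie so you stay signed in.
response = session.get("https://httpbin.org/cookies")
print(response.json())  # {'cookies': {'sessionid': 'abc123'}}
```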
Google Chrome
Step 1: Launch Chrome and click the three vertical-dot menu icon in the top right-hand corner.
Step 2: Click “Settings” and scroll down to the bottom of the page. Click “Advanced.”
Step 3: Under “Privacy and security,” click “Content settings.”
Step 4: From the list that appears, click “Cookies.” Then, if it’s toggled off, click “Allow sites to save and read cookie data” to enable cookies.
If you want something more nuanced than internet-wide acceptance of cookies, you can block and allow specific sites further down the page, as well as block third-party cookies on any website.
Mozilla Firefox
Step 1: Click the three horizontal lines menu icon in the top right-hand corner followed by “Options” next to the cog icon.
Step 2: Click “Privacy and Security” on the left-hand side.
Step 3: Under the heading “History,” click the dropdown next to “Firefox will” and choose “Use custom settings for history.”
Step 4: If it isn’t already ticked, tick the “Accept cookies from websites” box.
If you want to vary it by website or reduce the use of third-party cookies, tick or adjust the relevant options found nearby.
Microsoft Edge
Step 1: Open the Edge browser and click the three-dotted menu icon in the top right-hand corner.
Step 2: Click “Settings” at the bottom of the list.
Step 3: Click the “Advanced Settings” button.
Step 4: Scroll down to the heading “Cookies” and use the drop-down menu to select “Don’t block cookies.”
Safari
Step 1: Launch Safari and click the “Safari” menu icon in the top left-hand corner.
Step 2: Click “Preferences” and select the “Privacy” icon in the top menu list. Its icon is a flat hand, palm facing the user.
Step 3: Next to the heading “Cookies and website data,” make sure that “Always allow” is enabled.
If you want to enable all but third-party cookies, choose “Allow from websites I visit.” If you only want the site you’re currently viewing to store cookies, tick “Allow from current website only” instead.
Opera
Step 1: Click the “Easy setup” icon in the top right-hand corner of the browser pane.
Step 2: Scroll down to the bottom and click “Go to browser settings,” next to the cog icon.
Step 3: Click “Privacy and security,” in the left-hand menu.
Step 4: Scroll down and, under the heading “Cookies,” make sure that “Allow local data to be set” is selected from the list.
If you want more varied options, you can have cookies stored until you quit a browsing session, block third-party cookies only, or select which sites specifically are allowed to use cookies.
How Nvidia is helping autonomous cars simulate their way to safety
Imagine that you’re the driver of a four-door family sedan approaching a stop sign. When you reach the stop sign, you notice a bicyclist trying to cross the road. Through eye contact, facial expression and body language cues, the bicyclist negotiates their right of way with you. As a result, you decide to let the bicyclist cross the road first, before you proceed to cautiously enter the intersection.
In the autonomous driving world today, there would be no way to “tag” or categorize such an event, said Cognata CEO Danny Atsmon. Current methods allow you to visually identify the bicyclist, but training systems to recognize and understand complex negotiations on the road remains a challenge for the $10.3 trillion autonomous driving industry.
In fact, autonomous driving represents “the single hardest computing problem the world has ever encountered,” as Nvidia CEO Jensen Huang admitted when he unveiled some of the world’s most powerful graphics processors during the GTC 2018 keynote in San Jose, California.
Bridging the Real and the Virtual
“The world drives 10 trillion miles per year,” Huang said in his presentation, but Atsmon pointed out that self-driving cars covered only three million miles of road last year. For self-driving vehicles to drive better, they must learn more, and that is fundamentally the largest challenge the industry faces. To train an autonomous driving system to the competence of a human driver, computers would need to drive roughly 11 billion miles, Atsmon told us.
That figure is calculated based on the 1.09 fatalities per 100 million miles driven in 2015. “So, to say a machine could have as safe a performance as a human being with 95 percent of confidence, you would need to validate for 11 billion miles,” said Atsmon.
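To make the arithmetic concrete, here’s a back-of-the-envelope sketch in Python. It uses the simplest possible bound (how many fatality-free miles you’d need to claim parity with the human rate at 95 percent confidence); demonstrating a meaningful improvement over the human rate, which is what analyses like RAND’s “Driving to Safety” report model, pushes the requirement into the billions of miles Atsmon cites.

```python
# Back-of-the-envelope sketch of the validation-mileage statistics.
# Human fatality rate cited above: 1.09 per 100 million miles (2015).
import math

human_rate = 1.09 / 100_000_000  # fatalities per mile

# Simplest bound: with zero fatalities observed over N miles,
# P(0 fatalities) = exp(-rate * N) <= 0.05  =>  N >= ln(20) / rate.
miles_for_parity = math.log(20) / human_rate
print(f"{miles_for_parity / 1e6:.0f} million miles")  # ~275 million

# Showing the machine is meaningfully *better* than a human (not merely
# no worse) requires far more data -- the regime of the 11-billion-mile
# figure quoted above.
```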
Aside from the time needed to reach that goal, there’s also the expense to consider. Right now, the cost per mile for operating an autonomous car is in the hundreds of dollars — accounting for engineering time, data collection and tagging, insurance costs, and the time of a driver to sit in the cockpit of a car. Multiply that by the 11-billion-mile benchmark, and the massive expense associated with training autonomous cars becomes clear.
Validation is key, and recent accidents involving autonomous vehicles show that incomplete data tests and training scenarios can prove fatal. In one less extreme example, a self-driving shuttle in Las Vegas was traveling at around 0.6 miles per hour, yet still crashed into a truck (Jeff Zurschmeide, a freelance contributor to Digital Trends, was there when it happened). No one was injured, but the puzzling scenario unfolded because the truck was pulling forward, then backing up, as it tried to park. According to Atsmon, the crash happened because the shuttle wasn’t validated for this type of situation and didn’t know what to do, so it proceeded forward slowly and crashed.
Better Simulation for Deeper Learning
The industry’s current solution to bridging the 11-billion-mile gap for autonomous systems to reach human driving competency is to develop simulations to allow cars to learn faster by combining deep learning with a virtual environment.
“Simulation is the path to billions of miles,” Huang said at GTC. Late last year, Alphabet-owned Waymo unveiled Carcraft, its approach to learning by simulation.
Cognata is using the latest advancements in graphics and sensor hardware to create more lifelike, realistic models of the world for autonomous cars to learn from. For the computing brains of a self-driving car, it’s like entering a video game modeled on the real world, and that could lead to more realistic driving scenarios for testing and validating driving data. The company has recently mapped out select cities, like San Francisco, using geographic information system (GIS) data, high-definition cameras, and sophisticated computer algorithms that run over satellite and street-view imagery, resulting in photo-realistic scenes.
To further improve simulations, Nvidia and some of its partners are using data from the sensors of autonomous vehicles to build higher-definition maps. When autonomous vehicles hit the road, these machines will not only rely on the data available through training, but also contribute to data collection by sharing what they capture from their LIDAR, IR, radar, and camera arrays.
When this newly captured data is combined, through deep learning, with existing low-quality data sets, it will make streets and roads look more photo-realistic. Cognata claims that its algorithms can process the data to bring out details in shadows and highlights, much like an HDR photo from your smartphone’s camera, to create a high-quality scene.
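The article doesn’t detail Cognata’s algorithm, but the HDR analogy corresponds to a well-known technique called exposure fusion: weight each pixel by how well-exposed it is, then blend the stack. Here’s a toy, single-scale NumPy version for illustration only; production implementations blend across multiple scales and use additional weights.

```python
# Toy single-scale exposure fusion: blend differently exposed frames,
# favoring pixels that are neither crushed in shadow nor blown out.
# Illustrates the general HDR-style idea, not Cognata's actual code.
import numpy as np

def fuse_exposures(frames: np.ndarray) -> np.ndarray:
    """frames: (k, h, w) stack of grayscale images scaled to [0, 1]."""
    # Well-exposedness weight: a Gaussian bump centered on mid-gray,
    # so each region is drawn from the frame that captured it best.
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * 0.2**2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * frames).sum(axis=0)

# Example: dark, mid, and bright captures of the same 2x2 scene.
stack = np.array([
    [[0.02, 0.10], [0.05, 0.20]],  # underexposed
    [[0.20, 0.50], [0.40, 0.90]],  # mid exposure
    [[0.60, 0.95], [0.85, 1.00]],  # overexposed
])
print(fuse_exposures(stack))
```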
While simulation is an excellent tool, Atsmon noted that it has flaws of its own: it tends to be too simple, and for autonomous driving training to be realistic, it must include edge cases. Cognata claims that it takes only a few clicks to program in an edge case to validate autonomous vehicles against more unusual driving scenarios. Companies building autonomous vehicles will have to be diligent in their search for edge cases that can trick self-driving cars, and creative in crafting solutions for them.
When Self-Driving Fails
Safety is so paramount to autonomous vehicles that Nvidia considers it the single most important thing for the industry. When things fail, fatalities can and do occur, as was recently proven when an autonomous Uber struck and killed a pedestrian in Arizona.
When questioned in a press meeting about the Uber crash — Uber is a partner of Nvidia — Huang deferred to the ride-sharing company for comments, saying that “we should give Uber a chance to understand what has happened and to explain what has happened.”
“I can assure you that [Uber is] equally crushed at what happened,” Huang added.
Because Nvidia develops an end-to-end solution for autonomous driving, different partners, from Uber to Toyota and Mercedes-Benz, may utilize all or some parts of the system. “There are some 370 companies around the world who are using our technologies in some way,” Huang said. At the show, Nvidia also announced Orin, the next-generation computer of its DRIVE platform.
Humans as a Backup
While self-driving cars are getting smarter over time, Huang still believes that there should always be a human backup, even in instances where a car is designed without a driver seat. To achieve this, Nvidia showcased its Holodeck during this year’s GTC keynote, allowing a remote driver to control a physical car in real time through virtual reality.
“It’s teleportation,” Huang said, highlighting that this is possible through Nvidia’s early investments in virtual reality.
During the demo, the driver, Tim, was located in a remote location. When he put on a virtual reality headset, he felt as though he were sitting in a physical car, able to feel the car and see its controls and instrument panel. From that remote location, with the aid of the VR headset, he could take control of an autonomous vehicle, drive it, and park it.
It’s similar to what the military has been doing for a while: allowing operators to fly unmanned drones from a remote location. But in Nvidia’s case, with the power of VR, the driver feels as though he is physically present in the cockpit. The company believes that simulation powered by its GPUs will eventually make autonomous cars nigh-infallible, but until then, the Holodeck can help humans watch over self-driving fleets.
Google finally killed its Finance pages — and everyone hates the new look
Last year, Google announced that it was working on a revamped version of its finance page, which removed the portfolio feature. Many users were not happy with this change, but there was a workaround in place. Users could access the old finance page by navigating to https://finance.google.com/finance. That itself has now changed, and the URL redirects users to the new finance page. User response has been less than kind.
There is currently a change.org petition requesting that Google bring back the old finance page. As of this writing, it has about 700 signatures. The petition was started two weeks ago but appears to be picking up steam.
The #Googlefinance hashtag on Twitter is faring no better, with the vast majority of users expressing their dislike of the page’s new look and lack of features.
Next, could you make #GoogleFinance more useful by deleting the absolutely horrible redesign and giving us back the vastly superior previous version?
— Patty Bodeo Vintage (@PattyBodeo) March 30, 2018
@sundarpichai So disappointed with the new #googlefinance. #Google please bring back the old format!!! I don’t say this lightly, but whoever was in charge of the new Google Finance should be fired.
Lost potential. It is almost as though they had zero understanding of their users.
— Gopinath J (@salemgopi87) March 30, 2018
The new #GoogleFinance sucks!! Big time!!! How can a company like #Google accept such an absolutely useless load of crap?!? I am guessing there is some politics behind this. There’s got to be an app or a website that is benefitting from the disaster that #GoogleFinance is now!
— Dimitre Tchalamov (@DTchalamov) March 22, 2018
Many users were also discussing moving to new platforms, with both Wikinvest and Yahoo Finance suggested as potential alternatives. For our advice on some of the best stock trading and finance apps on iOS and Android, check out our guide.
In addition to a simple distaste for the new look, many users are upset over the removal of key features from Google Finance. One of the issues drawing the most complaints is the loss of the ability to create and download stock portfolios. Many users have complained that the removal of portfolios was simply change for the sake of change.
Google has not yet publicly commented regarding the changes made to Google Finance.
Alexa can now take over DVR duties for services like TiVo and DirecTV
A new update for Alexa will give the Amazon voice assistant even more power in your living room. The Video Skill API update announced on the developer blog will allow Alexa to interact with DirecTV, TiVo, DISH, and Verizon services to record shows with a voice request.
Rather than fumbling through menus with your remote, you can now just say, “Alexa, record the Mariners game” and it will be cued up and ready to view when you return home.
You’ll also be able to jump straight to your favorite apps or streaming services with the network or name of the show. “Alexa, launch Netflix” will let you check out what’s new this month or “Alexa, play episodes of Silicon Valley” will get you up to date on the new season.
Features like pause, rewind, or mute can also be handled with simple voice commands using the new API, without having to specify which service you’re referring to. “Alexa, pause” is a lot easier than “Alexa, pause DirecTV DVR.”
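Under the hood, a video skill is a cloud function (typically AWS Lambda) that receives JSON directives from Alexa and relays them to the provider’s backend. The Python sketch below is illustrative only: the Alexa.VideoRecorder and Alexa.PlaybackController directive names follow Amazon’s Video Skill API documentation for this update, but the payload details and the record_program/pause_playback helpers are hypothetical stand-ins for a provider’s own code.

```python
# Illustrative sketch of a Video Skill API handler. Directive namespaces
# and names (SearchAndRecord, Pause) follow Amazon's Video Skill API;
# payload shapes and the backend helpers below are hypothetical.

def record_program(entities):
    print("Scheduling recording for:", entities)  # provider backend call

def pause_playback():
    print("Pausing playback")  # provider backend call

def lambda_handler(event, context):
    header = event["directive"]["header"]
    namespace, name = header["namespace"], header["name"]

    if namespace == "Alexa.VideoRecorder" and name == "SearchAndRecord":
        # "Alexa, record the Mariners game" -> entities describe the content.
        record_program(event["directive"]["payload"]["entities"])
    elif namespace == "Alexa.PlaybackController" and name == "Pause":
        # "Alexa, pause" -- no service name needed; Alexa routes it here.
        pause_playback()

    # A real handler returns a response event acknowledging the directive.
    return {"event": {"header": {"name": "Response"}}}
```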
“By integrating with Amazon’s Video Skill API, TiVo is making it even easier to control entertainment with your voice,” said Ted Malone of TiVo in a statement. “Simply ask Alexa to change the TV channel using either the channel name or number. Pause, fast forward, or rewind any of your live TV, DVR recordings and popular Internet streaming services.”
Developers still need to catch up and learn how to use the new API tools, so these updated skills will likely appear over the next few months as the various services include them in their firmware.
Amazon added voice control with Alexa to Fire TV last year, including playback and navigation commands.
This latest Alexa update means the battle for control of your living room is just getting started. Google Assistant already integrates with Chromecast and Shield TV, and Bixby 2.0 is controlling Samsung TVs.
DISH was the first company to begin utilizing Alexa as a voice remote with an update in May of last year. “DISH customers love the convenience that our Alexa compatibility brings to their home entertainment,” said DISH’s Niraj Desai. “With Amazon’s updates to the Video Skill API, DISH has the ability to deepen our Alexa integration and continue working toward providing our customers with a completely Hands-Free TV experience.”
Google will broadcast real-time analytical predictions during the NCAA Final Four
Google has teamed up with the NCAA for this year’s March Madness (it’s apparently the “official cloud” of the tournament), bringing machine learning and statistical analysis to decades of game data. But now it’s going even further, with an audacious real-time experiment that could well blow up in its face.
As a blog post from the Google Cloud team explains, a group of basketball enthusiasts, data scientists, and computer techs set out to analyze the data from 80 years of NCAA games and unleash machine learning on this massive pile of data to uncover trends or facts that might not be readily apparent.
For example, if you’re picking an upset, go with the cats — teams with a feline mascot are more likely to bust your bracket.
If you’re a statistics nerd, this is all interesting stuff, but so far the “Wolfpack” (that’s what the team is calling themselves) has focused on historical data culled from thousands of minutes played. But what if they turned their efforts to anticipating what would happen next?
That’s exactly what they’ll be doing this weekend, and they’ll even share their predictions in a commercial set to air during halftime of the Final Four games.
Other sports have been using predictive models for quite some time, of course. For instance, baseball uses WAR (Wins Above Replacement) to estimate how much a player will add to a team’s win total as compared to a hypothetical average player. That’s over an entire season though, with hundreds of at-bats. The Google team is planning to estimate what will happen in the final 20 minutes of a single game, which is a tiny sample size.
During the first half, the team will use all the data they can gather and crunch the numbers up, down, and sideways against historical tournament performances, team tendencies, basic strategies, and anything else they can come up with to produce statistical predictions that they think are highly probable to occur.
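As a rough illustration of what such a model could look like (this is our sketch, not the Wolfpack’s actual pipeline), you could fit a least-squares regression from first-half box-score features to a second-half stat across historical games, then apply it to the live first half. All feature names and numbers below are invented for the example.

```python
# Minimal sketch: predict a second-half stat from first-half features.
# Generic illustration only -- all numbers are invented for the example.
import numpy as np

# Historical games: [first-half 3PT attempts, pace, halftime lead].
X_hist = np.array([
    [12.0, 68.0, 5.0],
    [9.0, 72.0, -3.0],
    [15.0, 70.0, 8.0],
    [11.0, 65.0, 1.0],
    [14.0, 74.0, -6.0],
])
y_hist = np.array([13.0, 10.0, 16.0, 11.0, 17.0])  # second-half 3PT attempts

# Fit ordinary least squares with an intercept column.
A = np.hstack([X_hist, np.ones((len(X_hist), 1))])
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

# Live game at halftime: crunch the first half, predict the second.
first_half = np.array([13.0, 71.0, 2.0, 1.0])  # features + intercept term
print(f"Predicted second-half 3PT attempts: {first_half @ coef:.1f}")
```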
On site at the Alamodome in San Antonio, they will then quickly turn their predictions into a television ad, which will be passed on to CBS and Turner to air on TBS just as the second half begins.
They aren’t planning to predict the game winner, so don’t get your bookie on the phone just yet. The predictions could be anything — number of three-point attempts, offensive rebounding percentage, even minutes played.
“One of the exciting things about running an experiment like this in real time is we don’t know for sure what will happen,” the team wrote on their blog. They could be right on the money or wildly off base. Either way, it will be fun for number-crunchers everywhere to watch the results of their experiment unfold in real time.
The next big ‘Destiny 2’ update involves a revamped mobile app
Bungie only just released Destiny 2’s “Go Fast” update, but it’s already set to talk about its next releases, and this time the most important moves are outside the game itself. The studio is readying updates to both the Companion mobile app and its website that will make them decidedly more useful when you’re away from home. An April 4th update to Companion for Android and iOS will bring back the 3D models available in the original Destiny era. You can not only see what your Guardian looks like with equipped gear, but also explore your weapons, vehicles, and ghosts. It’s helpful for bragging rights with your friends, of course, but it also gives you a better reason to dig through your equipment when you aren’t playing.
Further improvements will also make life much easier if you prefer to fight in fireteams. Betas for iOS and the web (Android is in the pipeline) will let you create fireteams in your clan, and make them private if you prefer. You can send PSN and Xbox Live fireteam invitations from inside the app, and even set phone-native calendar reminders to make sure everyone is on time for that giant raid. Other mobile updates also show all available vendors, what they’re selling, and their potential rewards.
No, Bungie hasn’t forgotten about Destiny 2 itself. Its next large patch (1.2.0) is still due in May, and there are now hints of its plans to make exotic weapons more useful. The Sturm and Drang weapon combo, for instance, will be much more powerful: kill enough enemies with Drang, and its Sturm counterpart’s excess rounds will deliver 80 percent damage. You’ll be very dangerous if you’re regularly swapping between the two guns. As with the “Go Fast” upgrade, the May release is aimed at winning back players convinced that D2 lost some of the original game’s thrills.
Source: Bungie
Amazon turns ‘A League of Their Own’ into a TV series
Amazon still isn’t done turning classic movies into Prime Video productions. The Hollywood Reporter has learned that the internet giant is turning the legendary baseball movie A League of Their Own into a half-hour comedy series. It’s too soon to learn about casting, but Broad City’s Abbi Jacobson and Mozart in the Jungle’s Will Graham are poised to both write and executive produce the show. It’s still focused on the creation of the all-women pro baseball league in 1943 and the story of the Rockford Peaches, although it’s safe to presume that the movie’s cast won’t be reprising their original roles. The series won’t include the central characters of Dottie (played by Geena Davis) or Kit (Lori Petty), either.
The creators reportedly sought the approval of both Davis and director Penny Marshall for the Amazon show, which puts a “contemporary spin” on the social issues of the US in the league’s era (it’s not clear if the show will cover the period beyond World War II).
This isn’t the first time someone has taken a shot at parlaying A League of Their Own’s movie theater success into a TV show — CBS ran a short-lived sitcom in 1993. Amazon’s series may fare better without the pressures involved with conventional TV ratings and time slots, though. And whatever the expectations, the online giant’s goals are clear. Once again, it’s turning a well-known story into a broadly appealing series that (hopefully) becomes an international hit and boosts Prime Video’s stature in a world where Netflix dominates.
Source: Hollywood Reporter
ESA tests its giant Mars mission parachute
The European Space Agency has put one of ExoMars’ landing parachutes to the test for the first time. And while it was deployed from a helicopter less than a mile above the ground, its successful descent is a major milestone for the mission. The ESA’s test focused on the project’s second main parachute, which, at 115 feet in diameter, is the largest ever to fly on a Mars mission. It’s also one of two parachutes designed to safely deliver the ExoMars rover and science platform to the surface of the red planet in 2021.
The 154-pound parachute and its 3 miles of cords had to be folded a specific way in order to deploy properly while attached to a 1,100-pound test load. During the trial, the system successfully triggered its release 12 seconds after a smaller pilot chute was inflated, and it took two-and-a-half minutes to reach the ground, moments of which were captured by GoPros attached to the test vehicle.
This test is just the first in what will likely be a long series of trials, considering the agency doesn’t want to repeat what happened to the Schiaparelli lander. Back in 2016, the first part of the ExoMars mission headed to the red planet with the Trace Gas Orbiter and the Schiaparelli lander. Unfortunately, the lander crashed into the surface because it wasn’t equipped for the extremely harsh, supersonic conditions of a Mars descent.
That’s why this test was conducted in sub-zero temperatures in Kiruna, Sweden, and why the next one will be deployed from a stratospheric balloon around 19 miles above the ground, where conditions are closer to Mars’ low atmospheric pressure. After that? Well, the ESA plans to take the mission’s two main parachutes for a spin and ensure that their deployment sequence works.
“It was a very exciting moment to see this giant parachute unfurl and deliver the test module to the snowy surface in Kiruna,” ESA’s Thierry Blancquaert said, “and we’re looking forward to assessing the full parachute descent sequence in the upcoming high-altitude tests.”
Source: ESA
Huawei’s P20 Pro rivals the best smartphone cameras out there
We’re a skeptical bunch at Engadget, and when Huawei briefed us on its P20 Pro smartphone, listing an endless torrent of specifications and dubbing its Leica Triple Camera system “the most advanced camera on a phone yet,” we collectively rolled our eyes. Forty-megapixel camera sensor? I’ve heard that one before, Huawei.
It was only once I was able to test the P20 Pro away from briefing rooms and technical demos (spending a day shooting around a rain-soaked Paris) that the phone started to win me over — and others. If you like the idea of an accomplished 5x zoom function, and the potential for gorgeous nighttime photography, you have to consider Huawei’s latest phones.

First, let’s summarize some of the major camera specs. All the fun is centered around the “Leica Triple Camera.” That includes an 8-megapixel telephoto shooter with an f/2.4 lens and optical image stabilization, plus a 40-megapixel camera with an f/1.8 lens.
Oh, and there’s an extra 20-megapixel monochrome sensor (with f/1.6 lens) and a color temperature sensor for accurate white balance. If you needed one more lens, don’t worry: There’s a 24-megapixel front-facing camera too.
AI camera
Besides all the hardware numbers, Huawei’s sales pitch on imaging centers on the AI smarts that come alongside the P20’s camera spec sheet, and the standard still photography mode has AI assistance turned on automatically. (You can turn it off in settings.)
However, coaxing the temperamental AI feature to make an appearance was rage-inducing, especially when you’re looking to test whether it’s identifying objects correctly and how it decides to adjust settings when it spots something that warrants it … like in a camera test. Is this better than the misjudged scene modes we saw with LG’s similarly AI-skilled V30S ThinQ? I’m not sure; I just wanted more consistency.

Temperamental behavior wasn’t a deal-breaker, however, simply because I was so pleased with the photos that the AI-assisted camera eventually did capture. They are all Instagram-ready right out of the box: portrait-mode-detected images had smoother skin tone and added gently blurred backgrounds, while the P20 Pro boosted the colors it picked out on food and flower shots.
Huawei’s imaging algorithms and settings, while a little aggressive with saturation and bokeh effects, sometimes employed only a light touch to photos. Here’s the P20 Pro’s AI setting for food photography, compared with a standard shot on the iPhone X.

Left: Huawei P20 Pro; Right: iPhone X.
If anything, Apple’s phone amps up the color on the strawberries a little too far. Note how the white sign stays white in the P20 Pro sample. Huawei installed a color temperature sensor amid all those camera lenses, and it seems to do a pretty good job.
Hybrid zoom
Huawei’s zoom system works so much better than I thought it would. The P20 Pro uses both the telephoto camera lens — capable of 3x optical zoom — and the 40MP primary sensor to offer a hybrid 5x zoom that takes really, really impressive stills.
Interestingly, 3x zoom on the telephoto lens alone would be enough to best the iPhone or the Galaxy series, both of which go with 2x optical zoom. The hybrid option then adds a substantially bigger jump in magnification, which made framing easier and ensured I got all the detail I wanted in my shots. I’d normally shy away from digital zoom, because it usually makes muddier, noisier images. The P20 Pro’s hybrid zoom didn’t have that issue.
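The arithmetic behind why a high-resolution sensor helps is worth sketching (our illustration, not Huawei’s published design): cropping a frame by a linear factor c keeps 1/c² of its pixels, so a 40-megapixel sensor tolerates a meaningful crop before the result drops below the output resolution, and fusion fills in the rest.

```python
# Rough crop-zoom arithmetic: our illustration, not Huawei's spec sheet.
# A linear crop factor c keeps 1/c^2 of a sensor's pixels.
def remaining_mp(sensor_mp: float, crop: float) -> float:
    return sensor_mp / crop**2

# Main camera: 40MP at 1x. A 2x crop still leaves a 10MP image, so the
# high-resolution sensor buys real zoom before quality visibly drops.
print(remaining_mp(40.0, 2.0))  # 10.0

# Telephoto: 8MP at 3x optical. Reaching 5x needs a further ~1.67x crop,
# leaving only ~2.9MP of true optical detail on its own...
print(remaining_mp(8.0, 5.0 / 3.0))  # ~2.9

# ...which is why a hybrid mode fuses telephoto and main-sensor data
# rather than relying on either camera alone.
```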
Nighttime shooting was another situation where the P20 Pro shone. As I mentioned in passing during my preview, the phone comes with long-exposure modes that don’t require tripods. From what I’ve been told, it uses the Kirin chip’s NPU to chew over multiple long-exposure captures and bring them all together, with the high-megapixel primary shooter doing a lot of the legwork.
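The general technique here, multi-frame noise reduction, is easy to sketch even though Huawei’s exact pipeline isn’t public: sensor noise is roughly independent from frame to frame, so averaging k aligned captures cuts it by about the square root of k. A toy NumPy illustration:

```python
# Toy multi-frame noise reduction, the general idea behind handheld
# long-exposure modes (not Huawei's actual pipeline): average several
# noisy captures of the same scene and the random noise cancels out.
import numpy as np

rng = np.random.default_rng(42)
scene = np.full((100, 100), 0.1)  # a dim, flat night scene

k = 16  # number of captured frames
frames = scene + rng.normal(0.0, 0.05, size=(k, 100, 100))  # sensor noise

single = frames[0]
stacked = frames.mean(axis=0)  # the align-and-average step

# Noise (std. dev. around the true value) drops by roughly sqrt(k) = 4x.
print(f"single-frame noise:  {single.std():.4f}")   # ~0.05
print(f"stacked-frame noise: {stacked.std():.4f}")  # ~0.0125
```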
The results were often jaw-dropping: When in Paris, right?
In comparison, taking the same shots on my iPhone X resulted in noisy, hazy photos. Pretty, but simply not as good. Meanwhile, look at the detail on the Eiffel Tower’s cross-struts above: It’s impressive as heck. (I’ve included all my full-size images in a Flickr album below.) Better still, it was easy. No tripod; swiped to “Night mode” in the camera app; tapped on la tour Eiffel; and this was the result. In low light elsewhere, the P20 Pro also delivered reliable photos that were nicely contrasted, with plenty of detail.
Finding the right mode for great photos
There’s a handful of other tricks and modes that you have to actively seek out, and that’s the biggest problem when it comes to Huawei’s otherwise wonderful camera phone: it’s hard to find the settings you’re looking for. The company says the P20 can reach an ISO setting of up to 102,400 (as much as many DSLRs), but I couldn’t test it because the capability is coming in a later software update. It took me a while to figure out the long-exposure settings, too. My tip? Just select Night mode. It’s easier.
I enjoyed unearthing the P20 Pro’s range of shooting tricks (monochrome photography, aperture mode for post-shooting focus, portrait mode, light painting, and many more) and useful features that go beyond image quality, like the fact that the P20 Pro can boot into the camera app and snap an (out-of-focus) still in less than 0.3 seconds.
Those interface issues shouldn’t detract too much from the miraculous photos this phone is capable of taking. The main question is: How patient are you? You really have to pin down the shooting options that work for you, whether that means turning the AI assistant off entirely or sticking to the Pro mode.
Yes, it takes time to learn the ropes on Huawei’s P20 Pro, but the results prove it’s worth it.
Source: Full size sample images (Flickr)
Seeing the light [#acpodcast]

Daniel Bader and Andrew Martonik are joined by associate editor Hayato Huseman to talk about the creepiness of Facebook, the upside of notches, and taptic engine performance on Android devices.
They also discuss the new Acer Chromebook Tab 10 — the first Chrome OS tablet to hit the market. Plus, the crew takes a look at the triple threat Huawei P20 and P20 Pro, the Porsche Design Mate RS, and why software continues to be such a big problem for Samsung.
Listen now
- Subscribe in iTunes: Audio
- Subscribe in RSS: Audio
- Download directly: Audio
Show Notes and Links:
- Hayato Huseman on Twitter
- Facebook kept logs of calls and messages on Android phones, and followed the rules to do it
- Facebook #DeleteYourSoul
- OnePlus 6 will have a notch
- OnePlus explains the rationale behind the notch on the OnePlus 6
- It’s 2018 and Android phones still can’t compare to the iPhone’s Taptic Engine
- Acer debuts the first Chrome OS tablet, the Chromebook Tab 10
- How does Apple’s new push for the education market compare to Chromebooks in the classroom?
- iPad (6th generation): Everything you need to know
- Huawei P20 and P20 Pro: Triple threat
- Huawei P20 and P20 Pro specs
- The Porsche Design Mate RS is a Huawei P20 Pro with an in-display fingerprint sensor and a $2000 price tag
- I enjoy the Galaxy S9+ despite its software, not because of it — and that’s a problem for Samsung
Sponsors:
- Audible: Go to audible.com/acp or text ‘ACP’ to 500-500 to get started.
- Thrifter.com: All the best deals from Amazon, Best Buy, and more, fussily curated and constantly updated.



