3
Mar

The Ray is a stretch of highway paving the way for the future of motoring


The future of motoring is looking toward electrified and autonomous vehicles (and sometimes a combination of both), but there are other technologies that are being developed under a smaller spotlight. “The Ray” is an 18-mile stretch of west Georgia’s I-85 highway and it also includes the surrounding land and community. It was designed to be a “proving ground for the evolving ideas and technologies that will transform the transportation infrastructure of the future.”

The team behind The Ray joined forces with numerous companies to create pilot projects to showcase and test on the high-tech road. It counts Kia Motors Manufacturing Georgia (KMMG), the Georgia Department of Transportation (GDOT), Hannah Solar, Wattway, The Land Institute, Drawdown, The University of Georgia College of Environment and Design, Resilient Analytics, the Georgia Conservancy, The Ray C. Anderson Foundation, and the Chattahoochee Nature Center among its partners.

Georgia ranks among the top 10 states for registered vehicles, but charging stations for electric vehicles are scarce outside metro Atlanta. The state’s first solar-powered PV4EV (photovoltaic for electric vehicle) charging station was installed on The Ray.

Tire pressure is an important factor in fuel efficiency as well as safety. The Ray’s rollover WheelRight Tire Pressure Monitoring System sends drivers a text message with information about their tire pressure. It can measure pressure on vehicles traveling up to 15 mph and is connected to a camera that recognizes number plates. As stated on The Ray’s site, “Simply drive over the monitor to get your text!”

A solar-paved highway can handle all types of traffic while generating renewable electricity. A pilot site by Wattway debuted in France, and the U.S. followed with a 50-square-meter installation of solar paving panels. Wattway can be applied to existing roads, so it can be used to retrofit highways. It has the potential to feed electricity into the grid, which can be used for everything from street lighting to electric vehicle charging.

The Ray’s innovations go beyond the road itself. The land around the interstate, usually used by drivers who need to pull over, can serve other functions without affecting its primary purpose. Kernza has been planted along The Ray to test the feasibility of farming along the highway. These wheatgrasses have deep roots that “enrich the soil, retain clean water, and sequester carbon.” Wheat straw is finding more use in everyday products, such as diapers, paper towels, and toilet paper.

Bioswales are drainage ditches filled with vegetation or compost and are designed to trap pollutants (such as heavy metals, rubber, and oil) and slow water movement during wet weather. As a positive side effect, the plants used in the bioswales add beauty to the highway.

“We’re starting a movement to make The Ray a net-zero highway,” the group states on its website. “Specifically, our vision for the future includes improving the safety, beauty, and ecology of our 18-mile stretch of I-85.”

Editors’ Recommendations

  • Watch a Land Rover climb 999 vertigo-inducing steps to China’s Heaven’s Gate
  • How to maximize electric vehicle range (and minimize range anxiety)
  • Survive the apocalypse (or just a week off the grid) with these zombie-proof rides
  • It’s not self-driving, but this electric VW six-seater is a smart solution for urban mobility
  • Awesome Tech You Can’t Buy Yet: Tiny rafts, self-cooling tents, and more


3
Mar

How to use Remote Desktop


Do you need access to your computer from afar? We have good news for you: Today’s remote desktop solutions are much easier to implement than the remote tools of the past — and a lot more reliable. Whether it’s for business or school, having remote desktop access should make your life much easier.

Let’s go over how to use Remote Desktop functions, step by step. We have recommendations for Mac solutions below as well.

Remote Desktop on Windows

Microsoft makes it easy to enable remote desktop features on Windows 10, as long as you follow the right instructions. Here’s what to do to get started:

Step 1: Head to the computer that you want to control — the one that will be “remote” in this setup. Open the Start menu and search for (or open directly) Settings. Once in Settings, choose “System” and go to the “About” section. Look for the PC name. Write it down or otherwise make sure you will remember it, since you’ll need the exact name to connect later.

Step 2: Look again to the left side menu in the System window and choose the “Advanced system settings” option (if you don’t see it, type in “view advanced system settings” into the search bar and select the Control Panel result). This should open the “System Properties” box. From here, navigate to the “Remote” tab, and look for the heading that says “Remote Desktop.” Make sure the option to “Allow remote connections to this computer” is enabled.

Step 3: Now your remote computer is ready to be controlled from a distant login. However, there’s one more important thing to do. Head back to Settings and go to the “Power & sleep” tab. Here, make sure “Sleep” is set to “Never” for every power option. The last thing you want is for your remote computer to go to sleep when there’s no one around — then remote desktop functions won’t work at all.

Step 4: Okay, it’s time to switch over to your “local” PC that you will be using to control the other computer. If you’re thinking, “Wait, I want to control a remote computer with my mobile device instead,” then fear not! Microsoft provides a remote desktop app for both iOS and Android devices (and also other less compatible Windows devices) that you can download to do the same thing as a Windows 10 computer.

The app isn’t quite as reliable or speedy, but it will get the job done, so download it if you need to and open it up. If you’re on a computer, head over to the search box and search for “Remote Desktop Connection,” which should bring up the option to open a new window.

Step 5: Both the app and the “Remote Desktop Connection” tool on the computer will have an option to add a new desktop. In the app, it’s a plus sign in the upper right corner. On a computer, the Remote Desktop Connection window will have a single fill-out bar next to “Computer:” with a drop-down option for any names you’ve previously used. In either case, you need to enter that PC name that you noted in the first step.

Step 6: Check to make sure the name is accurate, then select “Connect.” Now wait for the connection to complete, and a window should open into the OS of the remote computer, allowing you to use it. Keep in mind that there may be some delays, especially if you have a slow internet connection or are trying to do something complicated. Be patient, and carefully test out the features you want to use remotely to make sure they work.
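If you prefer to script the connection step, the same thing can be launched from the command line: Windows’ built-in Remote Desktop Connection client, mstsc.exe, accepts the target PC name through its /v: switch. Here’s a minimal sketch (the PC name used below is a placeholder for the one you noted in Step 1):

```python
import subprocess

def rdp_command(pc_name):
    """Build the command line that launches Remote Desktop Connection
    (mstsc.exe) pre-filled with the given PC name."""
    return ["mstsc", "/v:{}".format(pc_name)]

# "DESKTOP-EXAMPLE" is a stand-in for the PC name from Step 1. On the
# local Windows PC you would hand this to subprocess.run() to open the
# connection window directly, skipping the dialog.
print(rdp_command("DESKTOP-EXAMPLE"))
```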

What if I have a Mac?

Apple offers its own remote desktop solution (also called Remote Desktop) for those of us who may have Macs instead of PCs. The good news is that it’s compatible with Windows and can be used to control either type of operating system as you need, as long as you have the right information. The bad news is that the Apple Remote Desktop app is… not popular. It has a painfully low 1.6 rating on the Mac App Store, with dozens of complaints about bugs, glitches, and failures. Thus, we are hesitant to recommend it.

But, if nothing else will do, then we suggest that you download the app and take a look at Apple’s fairly clear instructions on how to set up the remote desktop, add clients, and get started. It’s primarily geared toward administrators, but it can work in many different scenarios. Just make sure that everything is updated first.

Editors’ Recommendations

  • How to change the background on a Mac
  • Pebby lets you play with your pet even when you are away
  • How to boot into safe mode in Windows 10
  • How to root Android phones or tablets (and unroot them) in 2018
  • How to fix Microsoft Edge’s most common problems


3
Mar

The Ryzen 7 CPU could see a nice speed increase over AMD’s current chip


A second-generation Ryzen desktop processor recently surfaced in benchmarks: The Ryzen 7 2700X. It’s an eight-core sample distributed to system manufacturers for testing that relies on AMD’s refreshed “Zen” CPU architecture dubbed “Zen+.” It follows another second-generation Ryzen chip, the six-core Ryzen 5 2600 processor, that appeared in January in the SiSoftware database. 

For comparison, here are the numbers yanked from benchmarks regarding these two unannounced processors along with the first-generation CPUs they replace: 

 

                Ryzen 7 2700X   Ryzen 7 1700X   Ryzen 5 2600   Ryzen 5 1600
Architecture:   Zen+            Zen             Zen+           Zen
Process tech:   12nm+           14nm            12nm+          14nm
Cores:          8               8               6              6
Threads:        16              16              12             12
Base speed:     3.7GHz          3.4GHz          3.4GHz         3.2GHz
Max speed:      4.1GHz          3.8GHz          3.8GHz         3.6GHz
XFR:            +100MHz (?)     +100MHz         +100MHz (?)    +100MHz
L2 cache:       4MB             4MB             3MB            3MB
L3 cache:       16MB            16MB            16MB           16MB
Power use:      95 watts        95 watts        65 watts       65 watts

 As the numbers show, the upcoming Ryzen 7 2700X will sport a 300MHz faster base speed than its predecessor along with a higher boost speed at 4.1GHz. With Extended Frequency Range (XFR), this chip could jump up an additional 100MHz for a maximum boost of 4.2GHz. Meanwhile, the jump from AMD’s Ryzen 5 1600 to the upcoming Ryzen 5 2600 won’t be quite as dramatic with a 200MHz difference between the two. 

XFR is part of AMD’s SenseMI technology built into its Ryzen processors. Embedded sensors determine the current temperatures, power use, and current speed of the chip. If the thermal conditions are right, Precision Boost will crank up the chip’s base speed when you need more processing power. The boost can go even higher if using just two cores. 

From there, XFR will increase the speed of two cores even further based on the PC’s cooling system: The lower the temperature, the higher the speed boost. The maximum, however, appears to be locked at 100MHz, at least in the first-generation Ryzen CPUs. That could change with XFR 2.0 in the newer CPUs. Note that for XFR to hit its maximum allowance, PCs need premium air or water-based cooling solutions.
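As a back-of-the-envelope check of the numbers in the table, the effective ceiling is simply the boost clock plus the XFR headroom, granted only when cooling allows it. A quick sketch (illustrative arithmetic, not AMD’s actual boost algorithm):

```python
def effective_max_ghz(boost_ghz, xfr_mhz=100, premium_cooling=True):
    """Boost clock plus the XFR bump; the extra 100MHz only applies
    when the cooling solution keeps temperatures low enough."""
    return boost_ghz + (xfr_mhz / 1000.0 if premium_cooling else 0.0)

effective_max_ghz(4.1)  # Ryzen 7 2700X: 4.1GHz boost + 100MHz XFR = 4.2GHz
effective_max_ghz(3.8)  # Ryzen 7 1700X: 3.9GHz ceiling
```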

AMD’s newer Ryzen-branded desktop chips are reportedly code-named “Pinnacle Ridge,” whereas the current CPUs were code-named “Summit Ridge” prior to their release. Of course, keep in mind that the two second-generation Ryzen chips sampled out for testing are just that — samples. The numbers we see now could be preliminary, meaning AMD could squeeze out a bit more performance before the chips reportedly hit the market in April.

AMD’s first wave of original Ryzen 7 and Ryzen 5 CPUs hit the market in March 2017 followed by the Ryzen 3 units in July 2017. That said, seeing an April 2018 release of the new Ryzen 7 and Ryzen 5 CPUs isn’t out of the question. Second-generation Ryzen 3 chips could appear in April as well, or follow the first-generation’s lead with a summer-bound release.

AMD said in December that the new processors won’t need a new motherboard, as was the case with the original Ryzen launch. AMD plans to support its AM4 motherboard socket through 2020, indicating third- and fourth-generation Ryzen CPUs will be AM4-based as well.

Editors’ Recommendations

  • AMD talks details on second-gen Ryzen chips, teases Vega for mobile
  • The best AMD CPUs on any budget
  • AMD vs. Intel: How does tech’s oldest rivalry look in 2018?
  • CPU, APU, WTF? A guide to AMD’s processor lineup
  • AMD’s upcoming Ryzen refresh won’t require a new motherboard, company confirms


3
Mar

Raspad is a portable Raspberry Pi tablet for bringing creative projects to life


We’re big geeks when it comes to Raspberry Pi, the small education-focused single-board computer that has beaten the odds to become the third best-selling general-purpose computer ever built. Those are some pretty impressive stats, but that doesn’t mean Raspberry Pi isn’t sometimes confusing for beginners.

That is where the Raspad, a portable Raspberry Pi tablet, comes into play. Promising to lower the barrier to entry for creating customized projects, it’s a simple tablet computer that gives you access to all the Raspberry Pi ports you need, along with a simple tablet interface so you can start programming from anywhere. “Raspad is a Raspberry Pi Tablet for makers’ creative ideas,” a spokesperson told Digital Trends. “Every ‘maker’ deserves a personal tablet for developing purposes.”

Raspad boasts a chunky 10.1-inch touchscreen design with stereo speakers. While it’s certainly not the prettiest tablet around, it promises to deliver on its goal of giving Raspberry Pi enthusiasts the opportunity to program a project from anywhere at any time, serving as a mobile workstation. For beginners, there is a graphical user interface (GUI) and a pack of sensor kits to explore. It comes with easy visual programming software, a series of hardware kits for building, and a tutorial book full of projects for inspiration. For more advanced users, there is the opportunity to swap out boards and connect other hardware such as Arduino.

So far, the project has exceeded its funding goal significantly, with 906 backers (and counting) pledging funds hoping to see this become a reality. As always, we recommend caution when it comes to pledging money for a crowdfunded project that doesn’t yet exist. However, Raspad is confident it can deliver.

“Raspad has finished the final tests already,” the spokesperson continued. “We had a trial production in the last month. Now, we are preparing for mass production, at the same time we are crowdfunding at Kickstarter.”

If you are interested in getting your hands on Raspad, you can pledge on the project’s Kickstarter page, where $189 will go toward a finished unit. Other price options come with a sensor kit and even, for $735, a robot arm and vision camera kit. Shipping is set to take place in May. After that, it should just be a matter of getting to building!

Editors’ Recommendations

  • These Raspberry Pi 3 bundles will cover everyone, from coders to gamers
  • 20 major Kindle Fire problems, and how to extinguish them
  • You can build your own smart home hub with Mozilla’s Project Things
  • Meet MHL, the easiest way to wire your phone to your TV
  • 100 awesome Android apps that will transform your tired tablet




3
Mar

YouTubers can soon replace ugly video backgrounds — no green screen required


Google Research

Replacing the background on a video typically requires advanced desktop software and plenty of free time, or a full-fledged studio with a green screen — but Google is developing an artificial intelligence alternative that works in real time, from a smartphone camera. On Thursday, March 1, Google shared the tech behind a new feature that’s now being beta tested on the YouTube Stories app.

Google’s video segmentation technique started out like most A.I.-based imaging programs — with people manually marking the background on more than 10,000 images. With that data set, the group trained the program to separate the background from the foreground. (Adobe has a similar background removal tool now inside Photoshop for still images.)

That data set, however, was trained on single images — so how did Google go from a single image to replacing the background on every frame of video? Once the software masks out the background on the first image, the program uses that same mask to predict the background in the next frame. When that next frame has only minor adjustments from the first, such as slight movement from the person talking on camera, the program will make small adjustments to the mask. When the next frame is much different from the last — like if another person joins the video — then the software will discard that mask prediction entirely and create a new mask.
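That frame-to-frame logic can be sketched roughly as follows. This is an illustrative reconstruction, not Google’s actual code, and the threshold and blend values are assumptions:

```python
import numpy as np

def propagate_mask(prev_mask, fresh_mask, change_threshold=0.3, blend=0.25):
    """Carry a foreground mask from one video frame to the next.

    prev_mask:  foreground probabilities computed for the previous frame.
    fresh_mask: the network's new estimate for the current frame.
    If the two largely agree (minor motion, such as a person talking),
    make only a small adjustment by blending toward the new estimate;
    if they disagree strongly (say, a second person enters the shot),
    discard the old mask and start from the fresh estimate.
    """
    change = float(np.mean(np.abs(prev_mask - fresh_mask)))
    if change < change_threshold:
        return (1.0 - blend) * prev_mask + blend * fresh_mask
    return fresh_mask.copy()
```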

While the ability to separate the background is impressive by itself, Google wanted to go one step further and make that program run on the more limited hardware on a smartphone rather than a desktop computer. The programmers behind the video segmentation then made several adjustments to the program in order to improve the speed, including segmentation, downsampling, and reducing the number of channels. The team then worked to enhance quality by adding layers to create smoother edges between the foreground and background.

Google says those changes allow the app to replace the background in real time — applying changes at over 100 fps on an iPhone 7 and over 40 fps on the Google Pixel 2. The training set, Google says, had a 94.8 percent accuracy rating. All of the examples Google shared are videos of a person — the company didn’t say if the feature works on objects.

Inside the beta test of the feature, YouTubers can change the background in the video by selecting from the different effects, from a night scene to a blank background in black and white. Some of the effects in the sample even added lighting effects, like a lens flare at one corner.

The video segmentation tool is already rolling out — but only as a beta test, so the feature isn’t yet widely available. After taking in results of that test though, Google says it plans to expand the segmentation effects along with adding the feature to other programs, including augmented reality options.

Editors’ Recommendations

  • Bokeh for beginners: How to blur a background in Photoshop in mere minutes
  • Instagram Stories now has an option to add photo-free text with Type Mode
  • Learn how to play YouTube in the background on iOS and Android
  • Google unlocks the Pixel 2’s HDR+ skills in third-party apps like Instagram
  • How to add music to Instagram videos


3
Mar

This $6 LED bias light strip powers via USB to amplify your HDTV setup


Enhance your media setup with this 6.6ft LED light strip.

For a limited time, JackyLED is offering its 6.6ft. TV LED Light Strip for just $5.69 when you enter promo code 6CIF4U7A during checkout. That saves you over $4 off its current price.


Adhering this backlight behind your TV not only helps reduce eyestrain while watching in the dark, but it also adds a classy touch to your home media setup. The strip is flexible, allowing you to fit it along the shape of any surface you want, whether it’s your TV, your desktop computer, or elsewhere in your home. It’s powered by a USB plug and generates very little heat, so you won’t have to worry about children hurting themselves by touching the lights.

See at Amazon

3
Mar

Take a chance on virtual reality with this Aukey headset for just $6


Try VR with just your phone and this super inexpensive headset.

The Aukey virtual reality headset for smartphones is down to $5.89 with code ITP42D4M on Amazon. It’s $10 without the code, and it was selling as high as $18 in early January. This is the lowest price we’ve seen and better than any direct discounts.


This headset works with any smartphone with a screen between 4.7 and 5.2 inches, including the iPhone 7 and Nexus series. It’s a lightweight device with foam cushions and adjustable head straps for comfort. You can immerse yourself in your favorite video games, movies, or just a slideshow of your favorite photos.

All Aukey devices have two-year warranties.

Looking for some other hobbies you’d like to try without spending hundreds to get started? Try drone flying with this $38 Aukey Mohawk quadcopter or add a little green to your desk with this $23 Blok Lamp and small succulent.

See on Amazon

3
Mar

Huawei Watch 3 is coming, but it’ll be a while before we get it


“It will come later.”

The Huawei Watch series has been a mixed bag so far. The original Huawei Watch from 2015 is still considered to be one of the best Android Wear watches ever made, but the Huawei Watch 2 was met with a very lukewarm reception. There hasn’t been much talk from Huawei in regards to its wearable business as of late, but during MWC 2018, the company’s CEO finally broke the silence.


In an interview with TechRadar, Huawei CEO Richard Yu confirmed that a Huawei Watch 3 is officially in the works. However, don’t expect to get your hands on it anytime soon.

According to Yu:

It will come later – there’s no hurry because Huawei Watch 2 sells well. We’re not in a hurry, so we’re launching the new watch later.

Yu didn’t offer any insight as to what we can expect with the company’s third smartwatch, but whatever we get, I certainly hope it goes back to what made the original Huawei Watch so great. The Watch 2 isn’t necessarily a bad product, but its bland design and lack of any compelling features (such as a rotating bezel or crown) ended up making it a tough recommendation.

Whenever Huawei decides to kick out its next smartwatch, what are you hoping to see the most?

Huawei Watch 2 review: No time for this half-baked sequel

Android Wear

  • Everything you need to know about Android Wear 2.0
  • LG Watch Sport review
  • LG Watch Style review
  • These watches will get Android Wear 2.0
  • Discuss Android Wear in the forums!

3
Mar

Here are our first photos taken with the Galaxy S9+


My initial take on what the pair of cameras can do.

With so much shared between the Galaxy S9+ and its predecessor, much focus has been put on the all-new camera setup as a true differentiation between models. So it makes sense that of all the questions I’ve seen about these phones, a majority have been about its cameras. To give you a sneak peek of what to expect, as I’m getting into my full review of the Galaxy S9+ I want to offer up some quick impressions of how I’m liking using the camera.

Having only used the phone for a couple of days, I don’t have full conclusions on what the cameras are capable of. But I want to share some initial thoughts and photo samples so you can see how I’m feeling and judge the images for yourself. And though I’m using the larger Galaxy S9+ here, everything is fully applicable to the standard Galaxy S9 as well, aside from the brief mentions of the secondary 2X lens.

Daylight photos


Daylight shots with the Galaxy S9+ are great. As you’ll see even more dramatically with the low light shots below, this camera takes extremely sharp and smooth photos. Details are fantastic, colors are good and the dynamic range is plenty wide so you don’t feel like you need to tap-to-meter or adjust the exposure on the fly. In all but full daylight brightness, the camera was choosing to shoot at f/1.5 — but the few shots I’ve had so far at f/2.4 looked great as well.

The f/1.5 lens makes you wonder why you’d settle for faux bokeh from Live Focus.

The f/1.5 lens also affords you fantastic bokeh in close-up or macro shots, leaving me to wonder why you’d want to settle for the faux Live Focus blurring in many situations. But in a couple of situations that super-wide aperture caused me issues with the camera “missing” focus, since the depth of field is so shallow — moving to tap-to-focus for more artsy shots where I had a specific subject in mind fixed that right up, though.

The Galaxy S9+ doesn’t feel like it has quite as much opinionated post-processing as something like the Google Pixel 2. What I mean by that is the Galaxy S9+ seems to take a sharp, well-exposed and colorful version of a scene, while the Pixel 2 goes a bit stronger in using HDR-style processing to bring out more highlights, colors and overall dynamic range. The Pixel 2’s photos can be a bit more pleasing to the eye as a result, but don’t let that take away the fundamental soundness and excellent quality the Galaxy S9+ is offering here.

If you’d like to view full-resolution versions of these daylight photos, you can download them right here.

Low light photos


Samsung’s new “Super Speed Dual Pixel” sensor may have a funny name, but the goal of providing greatly improved low light photography is no joke. Its advanced multi-frame processing is clearly doing some amazing things with very little light, aided of course by the f/1.5 lens. Low light shots are incredibly sharp, which is noticeable immediately before you even zoom in to details. Flat surfaces exhibit very little noise, even when shooting scenes that have minimally usable light. Things don’t look unnaturally smooth or fake, either, which is a tough line to walk.

This camera doesn’t care if you’re shooting something with barely any light.

The most impressive part so far, from a photography nerd perspective, is the parameters the Galaxy S9+’s camera shoots with. Even in a super-dark scene, like the shots from the lounge above, we’re looking at between ISO 100 and 300. Even still, they’re shot at a shutter speed of 1/40 second or faster. With those kinds of dark scenes you typically expect something to give out in either blur or softness or grain, and the Galaxy S9+ just doesn’t have any of those things. Low light shots are almost as sharp as the daytime ones, which is amazing.
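To put those parameters in perspective, you can back out the implied scene brightness from the standard exposure equation, EV100 = log2(N²/t) − log2(ISO/100). Plugging in the settings quoted above (a quick sanity check, not data from Samsung):

```python
from math import log2

def scene_ev100(f_number, shutter_s, iso):
    """Scene brightness (exposure value referenced to ISO 100) implied
    by a given aperture, shutter speed, and ISO setting."""
    return log2(f_number ** 2 / shutter_s) - log2(iso / 100)

# f/1.5, 1/40 second, ISO 300 -- the kind of settings reported above.
ev = scene_ev100(1.5, 1 / 40, 300)  # about EV 4.9
```

An exposure value around 5 corresponds to a dimly lit interior, which is why hitting 1/40 second at ISO 100-300 without visible grain points to the fast lens and multi-frame processing doing some heavy lifting.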

One thing worth mentioning when it comes to low light photos is what happens when you switch to the “2X” zoom mode. Just like other phones with dual cameras, the Galaxy S9+ will often choose to use the main camera with a digital crop rather than switching to the secondary camera. With the primary camera’s great sensor and f/1.5 lens, it can better handle low light shots, even at 2X digital zoom, than the dedicated telephoto lens can in many cases. That kind of adds to my initial theory that standard Galaxy S9 buyers aren’t missing out on much by “only” having the main camera.

If you’d like to view full-resolution versions of these low light photos, you can download them right here.

More to come

This is just the beginning of my time with the Galaxy S9+, and of course with its cameras. You’ll see more about its photographic capabilities in our full review, and as I put this phone through its paces you’ll see more photo samples showing up on my Instagram.

Samsung Galaxy S9 and S9+

  • Hands-on with the Samsung Galaxy S9 and S9+
  • Galaxy S9 and S9+: Everything you need to know!
  • Complete Galaxy S9 and S9+ specs
  • Galaxy S9 vs. iPhone X: Metal and glass sandwiches
  • Galaxy S9 vs. Google Pixel 2: Which should you buy?
  • Join our Galaxy S9 forums
