How to check CPU temperature
There are many important stats to keep track of if you’re interested in the working health of your PC, but few are as important as the temperature of major components like your central processor. If you’re not sure how to check it, don’t worry about breaking out the mercury thermometer: there are a number of quick and easy ways to keep an eye on how toasty your CPU is.
In this guide, we’ll walk you through exactly how to check your CPU temperature, from your motherboard’s own reporting tools, to great third-party apps for occasional checks, to software and hardware solutions that keep you in the loop whenever your system’s booted.
If you find your CPU is running hotter than expected, here are some tips on how to keep it cool.
Windows apps
You don’t need to get into the nitty-gritty of UEFI/BIOS to measure your CPU’s temperature. Monitoring applications use the same physical temperature sensors in your system as your UEFI/BIOS, but make them accessible right through Windows. That means you can check the temperature without a restart, and you can also force your CPU to do something demanding to see how warm it gets when it’s working hard.
There are a number of first- and third-party apps out there that you can use to get quick and easy access to your CPU’s temperature and a lot more information besides. Some of them can be a little overwhelming, but if you’re just looking to find out how to check your CPU temperature, our favorites listed below will see you right.
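If you’d rather script a quick check than install a full monitoring app, the same sensor data these tools read is exposed by the operating system. Here’s a minimal sketch that reads the Linux kernel’s thermal zones directly; it’s Linux-only and simply returns an empty result on systems (like Windows or MacOS) that don’t expose this interface:

```python
import glob

def cpu_temperatures():
    """Read temperatures (in degrees C) from Linux thermal zones.

    Returns a {zone_label: temp_c} dict; empty on systems that
    don't expose /sys/class/thermal.
    """
    readings = {}
    for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
        try:
            with open(zone + "/type") as f:
                label = f.read().strip()
            with open(zone + "/temp") as f:
                # the kernel reports millidegrees Celsius
                millidegrees = int(f.read().strip())
            readings[label] = millidegrees / 1000.0
        except (OSError, ValueError):
            continue
    return readings

if __name__ == "__main__":
    for label, temp in sorted(cpu_temperatures().items()):
        print(f"{label}: {temp:.1f} °C")
```

Dedicated apps like the ones below still do this better — they label sensors per core and plot them over time — but a script like this is handy for logging temperatures at regular intervals.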
Intel XTU
If you have an Intel Core processor, then Intel’s Extreme Tuning Utility (XTU) is arguably the best way to check how hot your processor is running. Although designed primarily as an overclocking tool, Intel XTU comes with a number of built-in monitoring functions as well.
Step 1: To find out how hot your CPU runs, download the program from Intel’s download center and install it like you would any application.
Step 2: When you boot it up, you’ll be presented with a lot of information, but in the lower panel of the main screen you’ll see a few key pieces of information about your CPU. Most important for this particular guide, however, is the “package temperature” and its associated graph. That’s your CPU temperature.
Step 3: You can also see how hard your CPU is working by its “CPU Utilization” percentage. The higher that is, the more your CPU is having to do. If you want to see how it does under stress, you can use the XTU’s built-in CPU benchmark under the relevant left-hand tab.
AMD Ryzen Master
Step 1: If you’re running one of AMD’s new Ryzen processors you can make use of AMD’s own Ryzen Master tool. It works in much the same way as Intel’s XTU, but for Ryzen chips instead. Head on over to its download center to install the program.
Step 2: Alongside its core clock tweaking abilities, it also has a CPU temperature monitor you can view on the left-hand side. Like the XTU, there’s also a graph that can plot your CPU’s temperature over time, even breaking it down by the core, so you can see if individual cores are getting warmer than others.
Step 3: The Ryzen Master tool can also give you average and peak readings, so you can see how hot your CPU gets over a long period of time — great for those concerned about time of day or outside forces affecting CPU temperature.
An alternate software option: HWMonitor
A classic PC monitoring solution, HWMonitor can tell you everything about the various components in your system, from the voltages they require to the temperatures they run at. It doesn’t feature any sort of overclocking tools and its interface is bare-bones, but it’s clean, lightweight, and easy to parse at a quick glance.
Hardware monitors
If none of the above methods are quite what you’re looking for when it comes to checking your CPU temperature, you could always opt for a hardware monitor. These typically come as part of fan controllers that slot into one of the 5.25-inch drive bays on desktop systems. They sometimes use your onboard temperature sensors, but many come with their own wired thermometers to give you additional information about how hot your CPU is getting.
Note: These hardware monitors do require installation to some degree, so be prepared to open up your system to fit them, or pay to have it done by a professional. For tips on DIY PC building, check out our guide to building your first PC.
Here are some hardware monitors worth considering:
NZXT Sentry ($34): With a touch-screen interface and bright, 5.4-inch display, the NZXT Sentry offers detailed information on your system’s — and by extension, CPU’s — temperature. Its main function, however, is fan control, whereby you can adjust the speed of up to five fan channels individually, helping you keep your system cool and quiet.
Thermaltake Commander FT ($35): Another touch-screen fan controller, the Thermaltake Commander FT has a 5.5-inch display which gives you temperature readouts for multiple channels and will let you monitor your CPU closely while controlling a number of fans to keep your system cool.
Kingwin Performance FPX-002 ($24): Often on sale for even cheaper, the Kingwin fan controller lets you keep track of three temperatures, including CPU, simultaneously, as well as control three different fans. There’s even a built-in alarm should your CPU get too hot at any point.
Editors’ Recommendations
- Yes — Core i7 is faster than Core i5. But what’s the real difference?
- How to set up multiple monitors for PC gaming
- The best AMD CPUs on any budget
- Stay organized with the best to-do list apps for Android and iOS
- Dominate multiplayer with our ‘Destiny 2’ Crucible guide
Blade emerges from the shadows with a virtual PC gamers will love
France-based company Blade said that its new “Shadow” virtual PC cloud service is now making its North American debut in California. This platform essentially streams a virtual high-end Windows 10 desktop computer to any device so you’re not sinking $2,000 or more into upgrades or a new machine. Blade says this cloud-based PC will stay current with the latest technologies, such as the most-recent processor and graphics card.
That said, the current Shadow cloud-based PC relies on an Intel Xeon processor with eight dedicated threads, Nvidia’s GeForce GTX 1080 graphics chip, 12GB of server-grade DDR4 system memory (2,400MHz), and 256GB of storage. It also has a dedicated internet connection of up to one gigabit per second per customer. It’s fully compatible with fiber-based connections, DSL, 4G LTE, and so on.
Of course, you will never see the physical version. The Shadow service is accessible through apps provided for Windows 10, MacOS, iOS, and Android. But the company knows how many customers desire a dedicated, physical device, so it’s offering a stylish “streamer” packing physical ports for peripherals and controllers, and one of AMD’s APUs for local, real-time decoding of 1080p content at 144Hz, or Ultra HD content at 60Hz.
“Shadow frees PC power users from bulky, loud and unwieldy hardware, and allows them to work and play, with no lag, delay, hardware issues or other major worries, whenever and wherever,” Asher Kagan, co-founder of Blade, said in a statement. “We’re thrilled to debut Shadow to Californians today and can’t wait for all of our American users to experience the freedom and convenience that Shadow provides.”
The company said that it also partnered with Razer to bring its Shadow streaming service to the Razer Phone. That simply means the Shadow service will stream high-end PC games running at a 2K resolution and a 120Hz refresh rate. According to Blade, the Shadow service will “deeply integrate with the Razer Phone in the coming months.”
Shadow made its stateside debut at CES 2018 in Las Vegas. Customers are essentially subscribing to a virtual machine, a software-based environment running on servers located in Blade’s data centers. Shadow had an early run in July 2016 and soon became a full-fledged service in France. Having had time to mature, the service is finally making an entry into the North American market.
Emmanuel Freund, co-founder of Blade, said the company initially targeted the most-demanding audience you can find: PC gamers. “[They] are able to see any quality loss on any image, any latency,” he told VentureBeat. “If we can show them that what we’re doing is exactly the same as on the computer, we can show that this is working.”
But Shadow isn’t just about gaming. Any software that can run on a Windows 10 machine will run on the Shadow service. But Shadow isn’t cheap, costing $35 per month for a 12-month commitment, $40 per month for a three-month commitment, or $50 per month with no commitment.
Shadow will be widely available across the U.S. this summer.
SEC guidelines push for clearer data breach disclosures
American companies haven’t always been forthright about disclosing data breaches in a responsible way, and regulators want to encourage better behavior. The Securities and Exchange Commission has issued “interpretive guidance” that it hopes will both promote clearer disclosures and fewer ethical conflicts. The guidance asks companies to share more information about cyberattacks and other risks, and warns executives against trading securities before they’ve publicly shared the details of a breach — they shouldn’t dump shares knowing a hack will tank the company’s stock price.
Whether or not this makes a difference is another story. Although Democrats at the SEC supported the guidance, they argued that the real solution would be tougher rules requiring better disclosures and improved security standards. The guidance may formalize SEC interpretations that haven’t always been made public, but it doesn’t change those laws to keep pace with modern cybercrime. It’s not uncommon for companies to downplay or cover up incidents, but they won’t necessarily face serious repercussions for their actions.
If nothing else, though, this is a shot across the bow. It’s a reminder that companies shouldn’t sit on news of a breach, jeopardizing the data of their customers for the sake of profit. If companies honor the guidelines (and that’s a big “if”), you may understand the true severity of a breach and have a better chance at mitigating the damage.
Via: Reuters
Source: SEC
Ford president Raj Nair leaves over ‘inappropriate behavior’
Ford will have to adjust its technology strategy, and not for the right reasons. The automaker’s North America President Raj Nair has left the company after an internal investigation determined that “inappropriate behavior” was out of line with the employee code of conduct. While the company wouldn’t actually say what that was, it noted that it was committed to a “safe and respectful culture.”
Nair’s departure took effect “immediately,” and was abrupt enough that Ford doesn’t have a replacement lined up.
The exit comes right as scrutiny over sexual harassment is reaching a fever pitch in many industries. While it’s not certain Nair left for that reason, zero-tolerance policies on harassment are increasingly commonplace — companies don’t want to be seen as tolerating harassment, regardless of a worker’s position or experience.
And in Ford’s case, that experience played an important part in its technology decisions. Nair had previously been Ford’s CTO and head of global product development before assuming the president role in June 2017, and led the brand’s shift in focus toward self-driving cars, phone-savvy infotainment and mobility services. Ford’s technology efforts won’t necessarily go awry, but the brand will have to regroup at the same time as it improves its corporate culture.
Via: Autoblog, Detroit Free Press
Source: Ford
Apple Partners With 2018 BRIT Awards, Shares New Apple Music Feature and Playlists
Apple today announced on Twitter that it was the official digital music partner of the 2018 BRIT Awards, which is celebrated each year with a major awards ceremony in London.
The 2018 BRIT Awards, which just wrapped up, featured performances from artists like Justin Timberlake, Stormzy, Ed Sheeran, Sam Smith, Liam Payne and Rita Ora, Dua Lipa, and Foo Fighters.
To celebrate the BRIT Awards, Apple has a dedicated section in iTunes and the Apple Music app under “Browse” that highlights music from BRIT Award nominees and winners along with exclusive playlists and album compilations. An exclusive live performance from Rag’n’Bone Man is also included in the Apple Music app.
“Get ready for the biggest night in British music! Experience highlights from 30 years of The BRIT Awards. #BRITs2018” — Apple Music (@AppleMusic), February 21, 2018
Stormzy’s Gang Signs & Prayer won the award for British Album of the Year, and Stormzy was also named best British Male Solo Artist. The award for best British Female Solo Artist went to Dua Lipa, and Gorillaz was named the best British Group.
Dua Lipa was named British Breakthrough, and the Critics’ Choice Award went to Jorja Smith. British Single of the Year was Rag’n’Bone Man’s “Human,” Lorde was named International Female Solo Artist, and Kendrick Lamar was named International Male Solo Artist. Billboard has a full list of winners.
The full Apple Music section dedicated to the BRIT Awards is worth checking out if you’re looking for a curated selection of playlists and albums from BRIT Award winners.
Google uses AI to place ads across the internet
Google’s ubiquitous AdSense ads are already heavily automated by their nature (they’re targeted based on a look at a site’s content), but it’s taking that hands-off approach one step further. The search firm has officially launched Auto Ads, a system that uses machine learning to not only determine the types of ads you see, but how they’re placed. The AI technology will decide how many ads are appropriate for a page and where to put them. Publishers have to give up control, but Google is betting they won’t mind the results. A long beta test saw publishers rake in an average of 10 percent more revenue.
There is a concern that trusting AI could create problems. Beta testers complained about ads crowding their pages, and there’s always the concern that it’ll serve fake ads or other dodgy promotions. While this could give small publishers an easy way to reach a wide audience, it raises the possibility of inappropriate ads not only surfacing on a website, but getting prominent placement. You may see more of the ads you’d actually want to click, but only if Google can be sure that its machine learning technology makes good judgment calls.
Via: TechCrunch
Source: Google Support
Taryn Southern’s new album is produced entirely by AI
Music has been made on computers for decades, but the technology has traditionally been much more utilitarian than collaborative when it comes to the music-making process. In recent years, however, artificial intelligence (AI) has evolved to a level where it can help artists actually create music for 50-piece orchestras and even help craft Billboard hits.
Singer-songwriter and YouTuber Taryn Southern has decided to push the limits of AI composition, putting the sound of her new album into the “hands” of four AI programs: Amper Music, IBM’s Watson Beat, Google’s Magenta, and AIVA. Aptly titled I Am AI, the album will be the first of its kind to be fully composed with and totally produced by AI when it releases in May.
While each AI program is unique, they generally create music by following certain parameters (genre, tempo, style). Artists input music for the programs to analyze, and the machines learn the structure in order to create original music in minutes. Specializing in creating classical music, AIVA got so good at composing it became the first non-human to be recognized as a composer.
Ahead of the February 20 release of Life Support, the latest song from I Am AI, Southern spoke with Digital Trends about the album-making process, how time-consuming making music with AI is, and its exciting potential to break down traditional barriers in the music industry.
Digital Trends: A phenomenon of just the last few years, using AI to make music has mostly been on the experimental level to test out the capabilities. What inspired you to make an entire album with AI?
Taryn Southern: Last January, I was reading an article in The New York Times, actually, about the future of artificial intelligence and how it was being used in creative ways. At that point, out of curiosity, I was reading a lot about AI, more for its data applications for enterprise companies. Once I learned it was being used for musical applications, I was really intrigued. So, I started reaching out to companies in the article asking if I could get access to their platform. Within a few months I was experimenting with a few platforms [and] it became evident that I could create what I felt was similar to what I was able to do on my own, before artificial intelligence.
Most musicians need producers to help guide them, but with Watson Beat and Amper you can click a few preset moods and tempos and create a fully composed production. What was the process like for you?
“You can literally make music with the touch of a button.”
I think the cool thing about these technologies is you can literally make music with the touch of a button. Something like my album has been a bit more involved, though. Or maybe, a lot more involved. [Laughs]. I could make songs how I want to hear them and have the right structure in place. With Amper, you can make it as easy or as difficult as you want. I would iterate anywhere between 30-70 times off a song within Amper. Once I’m happy with the song, I download the stems [individual music elements of a multi-track recording], and then I rearrange the various sections of the instrumentation that I really like, and cut what I don’t like. I do that to create the structure of the song, like Life Support.
What do you mean by “upwards of 70 different iterations?”
I started with one mood, then I started converting it to several others. Changing the key. Changing the tempo. I think I downloaded 30 stems, arranged the song, and then created a new template beat that was of the same key and genre, but as a hip hop beat. I think the original beat I went with was a cinematic, uplifting genre. Then once it had a really strong song structure that I really liked, I took the same parameters, popped them into a hip hop beat to get some of the drums, and some of the percussive elements. Basically, [it was] two variations of the song, within different genre parameters with the same rhythmic structure.
You started with one preset/mood, it spit out a beat, then you took the best parts of that beat and mixed it with something else?
Yeah. For the [Life Support] beat, I probably iterated 15-20 times, to get something where I liked the rhythm and the melodic structure. From there, once I had a sound song structure, I went into a different preset and set the genre parameters the same, so I could take sounds to add to the song. That adds to that layered feeling that you get from a song like Life Support, which has about 35 stems.
That must have been time consuming. Is that how it was for the entire album?
Every song on the album has a different process depending on the technology used, [and] depending on how quickly I could get something I really loved. There is another song I did on Amper that I only iterated on three times. A lot of those iterations are around the instrumentation, [and] playing with different instruments.
With something like Watson, I’m basically taking the code, running it through terminal, then taking all of the stems, pushing them through a DAW [Digital Audio Workstation] and changing the instruments myself to whatever I see fit. There’s a lot more involvement in working with a platform like that. On the plus side, [Watson] gives musicians who potentially have more background … in writing music potential opportunity to have more creative license … where Amper might be easier for beginner musicians and early music creators who want a bit more of a full production experience.
How did you get your hands on these programs, and how were they each different?
Magenta is open source, so that was a matter of going on GitHub, reading the documentation, and downloading it. Fortunately, I had some friends at Magenta who have been very helpful answering questions and helping me out. They also have a number of different tools outside of Magenta, like NSynth, that are really cool AI-based tools that can help you customize a sound, or song, or tones even more than you had access to through other programs.
“I’m working on a song right now that’s basically an ode to revolution and I call it my Blockchain song.”
With Watson Beat I just reached out to everyone I could at Watson telling them how I’d love to get my hands on this. They emailed me back last fall, and … [via Google hangout] they set it all up on my computer and walked me through the whole program. They’ve been really helpful and I’ve been in direct contact with them quite a bit. I’m really impressed with the background code they’ve done on this and the program. It’s really intuitive. What I like about Watson is being able to inject the code with any kind of data or inspiration point of music that I’d like.
For instance, I’m working on a song right now that’s basically an ode to revolution and I call it my “Blockchain song.” It’s a song that’s inspired by the blockchain revolution, but I really wanted it to encompass this idea of revolution. So, I’ve been feeding Watson various songs, as far back as the 1700s, that represent revolution, trying to see what it can … glean from those songs to make a new anthemic, revolution song.
I would hope the Beatles’ Revolution made it in there at some point.
[Laughs] I started with revolution songs from the 1700s and 1800s, because there’s no copyright issue with those. Currently, the rules around teaching AI based on copyrighted works are still a gray area. So I’m trying to play within the bounds of what’s legally acceptable at this point. I thought it would also be interesting to have these really old songs as inspiration points. It’s probably 15 songs from the 1700s and 1800s that are old-school anthemic songs, and it was really fun to have the AI algorithm learn from those songs and then force that function through the anthemic pop structure that Watson already designated to see what kind of things it’d come up with.
You mentioned AI being taught copyrighted music and spitting out new compositions. Did someone tell you teaching AI copyrighted material was a legal gray area, or did you figure that out yourself?
I figured it out myself. I think, as is the case with all of these new technologies, you’re writing the rules as you go. Ten years ago, I don’t think there were many people talking about artificial intelligence and copyright infringement. These are conversations that are happening in every single industry, not just music. I actually just did a panel at the copyright society this week that was digging into these predicaments. They asked, “What kind of attributions are given if artificial intelligence is learning off copyrighted works?” A lot of these things have to be figured out, but there aren’t any hard and fast rules on this.
Hypothetically, if someone ran a copyrighted song through AI, would the original song be discernible to the copyright holder?
I have run popular songs through, just to see what would happen. Usually what comes out of it is something that is not even close to resembling the original piece. It depends on the AI. Sometimes it’s just running pattern recognition on the chord structure. Other times it’s running statistical analysis, saying “if there’s an F-chord here, then you’re 70 percent likely to get a G-chord after the F-chord or an E-minor chord after the F-chord.” … If we’re looking at this from a purely theoretical point of view, I think that holding an AI accountable for stealing from copyrighted works would be very similar to holding a human accountable who’s grown up listening to The Beatles their entire life and now writes pop music. [Laughs] …
If we’re looking at a really sophisticated AI program that is built … similar to the way our own neural networks integrate and mimic information, then you could think of an AI as a really sophisticated human. [Laughs]. Even artists joke that some of the best music producers out there, like Max Martin, are just really advanced AI … Many of his songs have repeatable patterns that can be studied and mimicked.
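The “statistical analysis” Southern describes is, at its simplest, a first-order Markov chain over chords: given the current chord, pick the next one according to learned transition probabilities. A toy sketch follows; the 70/30 split echoes her F-chord example, but every other chord and probability here is invented purely for illustration, not taken from any of the programs she mentions:

```python
import random

# Toy transition table: current chord -> [(next chord, probability), ...]
# These numbers are made up for illustration; a real system would learn
# them by counting transitions in a corpus of songs.
TRANSITIONS = {
    "F":  [("G", 0.7), ("Em", 0.3)],
    "G":  [("C", 0.6), ("Am", 0.4)],
    "C":  [("F", 0.5), ("G", 0.5)],
    "Am": [("F", 1.0)],
    "Em": [("Am", 1.0)],
}

def next_chord(current, rng):
    """Sample the next chord according to the transition probabilities."""
    chords, weights = zip(*TRANSITIONS[current])
    return rng.choices(chords, weights=weights)[0]

def progression(start, length, seed=0):
    """Generate a chord progression of the given length from a start chord."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        out.append(next_chord(out[-1], rng))
    return out

print(progression("F", 8))
```

Real composition engines layer much more on top (rhythm, voicing, long-range structure), but this is the core of the chord-level pattern recognition being described.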
So when the album’s done, will the programs be credited as producers?
I look at each of these AI programs as creative collaborators, so they’re produced in collaboration with Amper, AIVA, Magenta and Watson. There are 11 songs in total, although I might be adding two songs.
Have you used multiple programs on one song? Which program have you used the most?
One program per song. I’ve probably used Watson and Amper the most. If I end up with 12-13 songs, those would be additional songs from Amper.
What was your experience with AIVA?
AIVA was trained specifically off classical music. It primarily writes symphonies. I have two songs with AIVA that I love that are really unique. Because they were trained in classical music, it’s like, “How do we take these classical interpretations and turn them into pop songs?” So, they have a very different kind of feel to them, but they’re symphonic in the way that my Amper songs have symphonic and synth sounds.
One of the most expensive aspects of making an album is paying for studio time and producers. If you can do it in your room with a program, this has the potential to reduce the role of the human producers, doesn’t it?
I 100 percent agree. I think that the most exciting aspect of all of this is it will democratize access. I know that’s a really scary thing to the industry, and for understandable reasons. No one wants to lose their job and no one wants to feel like they might be beaten at their own game by a computer. But at the same time, the music industry for so long has been kind of an old-boys club. … It has many gatekeepers.
If you want to produce a really well done album, it’s expensive. You have to find great producers, [and they] are not cheap. Sometimes the artists don’t make any money. As a YouTuber who grew up in the digital content revolution, I love when new tools come along that allow me to be scrappy and create without worrying about how I’m going to pay my bills. That might be the entry point for someone to say, “Wow, I love music. I’m going to do more of this.” … I feel like these kind of things are actually just helpful in widening the creative community and allowing more people to join the creative class.
After this album, will you continue to use AI to make music?
I’m sure I will. I can only imagine these are just the first few technologies to become available and there will be many more and they will evolve. I’m really excited to see how they evolve. But, they really do make my life easier as the artist, because I can focus on so many of the others things that I love to focus on in the creation process.
Cryptocurrency not an ideal long-term investment, warns Ethereum co-founder
Looking for good financial advice? Vitalik Buterin, co-founder of the digital currency Ethereum, jumped on Twitter to offer just that, but it’s probably not the advice you’d expect. He believes traditional assets are still best for those who want to generate lucrative interest from long-term investments. Cryptocurrencies on the whole are still in their infancy, and don’t generate interest.
“Cryptocurrencies are still a new and hyper-volatile asset class and could drop to near-zero at any time,” he states. “Don’t put in more money than you can afford to lose. If you’re trying to figure out where to store your life savings, traditional assets are still your safest bet.”
Buterin, a cryptocurrency researcher and programmer, proposed Ethereum at the end of 2013. He helped get the platform up and running in July 2015 after selling an initial run to early adopters in 2014. Ethereum is the overall decentralized transaction platform, while Ether is its digital coin. Currently, one coin’s value is around $939, but that same coin was worth a mere $13 just a year ago.
The problem with cryptocurrency is that its value is volatile. For instance, on November 19, 2017, one Ether coin was valued at $355 in real-world cash. By January 13, 2018, the value had skyrocketed to $1,139, only to plunge to $591 per coin by February 5, 2018. Investing in cryptocurrency is obviously risky business.
Bitcoin isn’t immune to severe rises and falls either. On November 12, 2017, Bitcoin had a value of $5,969. That number jumped up to a hefty $19,189 on December 16, 2017, and then tumbled down to a $7,000 value by February 6, 2018. Right now, Bitcoin’s worth sits at $11,083, but that could rise or fall at the drop of a digital dime.
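Those swings are easier to appreciate as percentages. A quick check of the figures above:

```python
def pct_change(old, new):
    """Percentage change from an old price to a new price."""
    return (new - old) / old * 100

# Ether: $355 (Nov 19, 2017) -> $1,139 (Jan 13, 2018) -> $591 (Feb 5, 2018)
print(round(pct_change(355, 1139)))    # roughly +221%
print(round(pct_change(1139, 591)))    # roughly -48%

# Bitcoin: $5,969 (Nov 12, 2017) -> $19,189 (Dec 16, 2017) -> $7,000 (Feb 6, 2018)
print(round(pct_change(5969, 19189)))  # roughly +221%
print(round(pct_change(19189, 7000)))  # roughly -64%
```

In other words, both coins more than tripled in under two months, then gave back roughly half to two-thirds of their value in the weeks that followed.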
Cryptocurrency platforms such as Ethereum and Bitcoin rely on blockchains: networks of data blocks protected by cryptography. This data doesn’t reside in a central location, nor is it managed by a single entity. Instead, the blocks are “chained” between participating PCs scattered across the globe. The data is extremely difficult to alter because every block contains a cryptographic hash of the previous block along with its own transaction data and a timestamp.
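That chaining mechanism can be sketched in a few lines. This is a minimal, non-networked illustration of hash-linking only; real blockchains add consensus rules, proof-of-work or proof-of-stake, and digital signatures on top:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps({k: block[k] for k in ("data", "prev")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, previous_block):
    """Create a block that records the hash of the block before it."""
    prev = block_hash(previous_block) if previous_block else "0" * 64
    return {"data": data, "prev": prev}

def chain_is_valid(chain):
    """Valid only if each block's 'prev' matches the prior block's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [make_block("genesis", None)]
chain.append(make_block("Alice pays Bob 1 coin", chain[-1]))
chain.append(make_block("Bob pays Carol 1 coin", chain[-1]))
print(chain_is_valid(chain))   # True

chain[1]["data"] = "Alice pays Mallory 1,000 coins"  # tamper with history
print(chain_is_valid(chain))   # False: the link to block 2 no longer matches
```

Because every block’s hash covers the previous block’s hash, rewriting any historical block invalidates every block after it, which is what makes tampering so easy to detect.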
Cryptocurrency is becoming a popular form of transaction because digital monies aren’t managed by a central entity, like a bank or government. Your transaction data also isn’t stored on a specific server that could be hacked. Even more, data stored in blockchains is highly resistant to tampering, promising a secure, relatively anonymous transactional platform.
Investing in this emerging technology does sound promising, but as Buterin points out, now may not be the time for those looking to build a large investment value over time. Cryptocurrencies appear to be high-risk investments that require constant monitoring to determine the ideal time to purchase and/or sell digital coins.
Meanwhile, Buterin is having problems on Twitter. Scammers are creating fake accounts using his profile picture to make cryptocurrency transactions. More specifically, they request a specific amount of digital currency in return for a larger amount. “Don’t trust anyone asking for or offering money on Twitter,” he warns.
Bokeh for beginners: How to blur a background in Photoshop in mere minutes
Background blur, often called “bokeh” after the Japanese word for blur, is generally associated with high-end cameras with wide-aperture lenses. The effect is popular for portraits, and is emulated — with some limitations — by the “portrait modes” now found on many smartphones. But even without a high-end camera or portrait mode, you can still create beautifully soft backgrounds in Adobe Photoshop.
Beyond simply granting you an ability you may not have had access to in camera, choosing to add blur in Photoshop can give you more control and flexibility over where the blur is applied and how it looks. The program includes a number of different tools to selectively blur the background of a photo, along with many options for controlling the type of blur. One of the easiest ways to go from blah to blur, however, is by using Photoshop’s field blur tool, which creates realistic background blur without requiring you to spend hours in front of your computer.
Before and after comparison. Hillary Grigonis / Digital Trends
Before you get started
Photoshop includes a handful of different options to blur a background, with each option offering a varying level of control — and level of difficulty. After trying everything from detailed selections to a full-on depth map, the field blur tool offered the best, most realistic results in the least amount of time.
Bokeh is a tricky thing to try to imitate in Photoshop because true lens blur is based on many factors, including the focal length of the lens, the shape and size of the aperture, and distance from the subject. Of these, getting the effect of distance correct is perhaps the most important. In Photoshop, you have to tell the computer what objects are closest and farthest from the camera in order to get a blur that resembles the real thing and changes with distance — i.e., objects that are farther away from the subject should have more blur than objects that are closer. You could spend an hour creating a detailed depth map, but the field blur tool lets you approximate this with much less work.
We should note that Photoshop techniques are almost always more work than getting the effect in-camera, but the field blur tool will quickly imitate the bokeh of a more expensive lens. As you work, consider how the blur in a real image looks. A lens focuses on a two-dimensional plane in space, with everything on that plane being sharp. The level of blur increases with distance from the plane of focus — that is, either toward or away from the camera — but any objects that fall on the same plane as your subject should remain in focus.
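For the curious, the core idea — blending a sharp and a blurred copy of an image, weighted by distance from the plane of focus — can be sketched in a few lines of Python. This is a rough one-dimensional illustration of the concept, not Photoshop’s actual algorithm:

```python
def box_blur_1d(values, radius):
    """Simple 1-D box blur: average each sample with its neighbors."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def field_blur_1d(values, focus_index, max_radius=3):
    """Blend sharp and blurred copies: the farther a sample is from
    focus_index, the more of the blurred copy it receives."""
    blurred = box_blur_1d(values, max_radius)
    n = len(values)
    out = []
    for i in range(n):
        # Weight ramps from 0 at the focus point toward 1 far away
        w = min(1.0, abs(i - focus_index) / (n / 2))
        out.append((1 - w) * values[i] + w * blurred[i])
    return out

signal = [0, 0, 10, 0, 0, 0, 0, 0]
print(field_blur_1d(signal, focus_index=2))
```

The sample at the focus point comes through unchanged, while samples farther away are increasingly averaged with their surroundings — the same principle the blur pins below let you control by hand.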
How to blur a background in Photoshop
1. Open up the field blur tool.
With the image open in Photoshop, navigate to Filter > Blur Gallery > Field Blur. Inside the field blur window, you will choose what areas of your image to blur, while the blur tools on the right will control the amount and type of blur.
2. Set your first blur pin.
The blur pins tell Photoshop where to blur and how much. When you opened the field blur window, Photoshop automatically placed that first pin for you. Drag and drop that pin into the background, or the area the farthest from the focal point. On the right, drag the blur slider until you achieve the desired amount of blur. (You can also change the blur amount by clicking and dragging on the partial circle outside the pin.)
Since this first pin is the furthest point from the focal point, this pin will have the most blur. In the sample image, I used a blur of 100, but the numbers will vary based on the effect you are looking for. You can always go back and refine the blur of any pin simply by clicking on it.
3. Set a blur pin on the subject at zero.
When you first open the field blur tool, your entire image will be blurry. Set a pin directly on top of the subject by clicking on it, then dragging the blur slider all the way down to zero. You should now have a generally blurry background and a generally sharp subject.
Continue to place blur pins on the subject, setting each at zero, until the entire subject is sharp. Use as few pins as possible, but don’t worry if the background appears sharper as you place pins.
4. Continue to refine the blur.
At this point in our sample image, the horse’s face was sharp and the background was blurred — but the rest of the horse’s body was just as blurred as the background. To fix this and achieve a more natural result, simply add more pins. Adjust the blur based on the distance from the original background point — objects closer to the background should have a blur closer to that original point (closer to 100, in our case) while objects closer to the subject should have a much lower level of blur (closer to zero).
Continue placing points and adjusting the blur until every part of the image is blurred based on its distance from the subject. If this starts to interfere with the background blur, don’t worry — just place additional background points to ensure the background remains properly blurred. In our sample image, the background just to the left of the horse’s face was still a bit sharp, so we added another point there, setting it to the same blur value of 100.
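The rule of thumb in this step — blur proportional to distance from the subject, capped at the background value — can be written down as a tiny helper. This is just an illustration of the mental math (the names and numbers are ours, not Photoshop’s):

```python
def blur_for_distance(distance, max_distance, max_blur=100):
    """Scale blur linearly with distance from the plane of focus.

    distance:     how far a point is from the subject (any consistent units)
    max_distance: distance at which blur reaches max_blur (the background)
    """
    d = min(abs(distance), max_distance)  # clamp at the background distance
    return round(max_blur * d / max_distance)

print(blur_for_distance(0, 10))   # on the subject: 0
print(blur_for_distance(10, 10))  # far background: 100
print(blur_for_distance(5, 10))   # halfway between: 50
```

In other words, a pin halfway between the subject and the background gets roughly half the background’s blur value, which is exactly how we treated the horse’s body in the sample image.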
5. Adjust the blur effects, if necessary.
Once you are happy with the placement and level of blur on the different distances in the image, you may (or may not) want to use the blur effects options, depending on your image. Here’s what each one does:
- The “light bokeh” control will brighten the brightest points in the out-of-focus area to mimic lens bokeh. Avoid these controls if you don’t have point lights in the background. “Bokeh color” will adjust the color of those bright areas, while “light range” will adjust what tones are included in the bokeh effect.
- The noise tab will restore any blurred noise in order to get the background to match the subject. If you are working with an image shot at a high ISO, for example, you’ll need to use this option so that the subject doesn’t have more noise than the background, which would look unnatural. Use the sliders to change the amount and the size of the grain to best match the grain in the subject. If there simply wasn’t any noticeable noise in your original image, you can leave this setting untouched.
Once you are happy with the level of blur, bokeh effects, and noise, click OK, and Photoshop will render the effect.
There are a number of other ways to add blur in Photoshop, but the field blur tool is a great place to start. It offers flexible, realistic effects without requiring complex masks and depth maps.
Got an old cardboard box? Make your own VR goggles for under $10
Google Cardboard
You know Google Cardboard? The super cheap virtual reality headset made from a sheet of foldable cardboard and a pair of lenses? Ever since it was first announced back in 2014, dozens of companies have developed their own take on the idea, and nowadays you can get your hands on a fully functional cardboard VR headset for about 20 bucks — sometimes even less.
That’s pretty damn cheap by most people’s standards, but what you might not realize is that you can build a DIY version for even cheaper. Google open-sourced the design specifications for the headset shortly after they announced it, so you can easily build your own with a few basic hand tools, a spare sheet of cardboard, and some cheap lenses from Amazon.
You can access all of Google’s technical specifications and design schematics here — but truth be told, Google’s directions are so comprehensive that they’re almost confusing. So, in an effort to keep things simple and easy to follow, Instructables user mnatanagara put together a much more approachable build guide. We like these plans better than Google’s, since they don’t require you to make a bunch of measurements and draw out all the parts. Instead, you just print a template on regular printer paper, glue it to your cardboard, cut everything out, and fold it together. Here’s everything you’ll need to get started:
Tools:
- Utility knife/razor
- Scissors
- Metal edged ruler
- A large, solid cutting surface
Materials:
- Printed templates (download them here)
- Glue (both stick-style and Elmer’s)
- A 2′ x 3′ sheet of corrugated cardboard
- Pro Tip: you might want some extra for your first build, just in case you make a mistake. Also, thinner shoebox-like cardboard is best, but you can make do with the thicker “moving box” variety if that’s all you’ve got. Just don’t expect all the pieces to fold nicely if you use the thicker stuff.
- A pair of 45mm focal length biconvex plastic lenses, either 25mm in diameter (GC 1.0) or 37mm (GC 2.0)
- Copper foil tape
- A small piece of dense foam (roughly 0.25″ x 0.25″ x 1.0″)
- Velcro patches (the cheaper the better — expensive stuff is too grippy, and you only need a weak hold)
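A quick aside on why the 45mm focal length matters: with your phone’s screen sitting slightly closer to the lens than its focal length, each lens acts as a magnifier, producing a large virtual image that your eye can focus on comfortably. The thin-lens equation gives a feel for this — the numbers below are illustrative, not taken from Google’s specs:

```python
def virtual_image_distance(object_mm, focal_mm):
    """Thin-lens equation, 1/f = 1/do + 1/di, solved for di.

    A negative result means a virtual image, which is what a
    magnifier produces when the object sits inside the focal length.
    """
    return (object_mm * focal_mm) / (object_mm - focal_mm)

# Phone screen roughly 40 mm from a 45 mm lens (illustrative numbers):
print(virtual_image_distance(40, 45))  # -360.0, i.e. a virtual image 360 mm away
```

This is why the cardboard spacing between the lenses and the phone tray is not arbitrary — fold sloppily and the screen moves off that sweet spot, making the image hard to focus on.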
Once you’ve got everything together, you can find the full build instructions here. Happy building!