
21
May

8 bold biohacks that blur the line between human and machine


From metrics apps to wearables, many of us are already outfitted with an array of gadgets to help us collect wellness data and streamline our day-to-day lives. However, a group of individuals known as “grinders” are taking this technological integration a step further. These real-world cyborgs are going under the knife and pushing the transhuman envelope for the common, albeit gory, good. From novel body modifications to the pioneering implants of tomorrow, here are eight of our favorite biohacks.

Blood test implant

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have developed an implantable device they hope to use as a personal blood testing laboratory in the future. The implant uses five sensors, a radio transmitter, and a basic power system. Each sensor is coated with a specific enzyme, allowing the device to monitor substances in the body (namely lactate, glucose, and ATP). The implant is recharged through the patient’s skin via an external battery patch.

Once the blood work has been analyzed, the implant transmits this data to the patch via Bluetooth. This information can then be sent to a doctor over a cellular network for further analysis. The device is still in the prototype phase, though EPFL researchers believe the unit will be commercially available within the next few years. This step toward personalized, at-home medicine could greatly diminish the need for regular check-ups for elderly patients and individuals with chronic illnesses.
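The relay chain the researchers describe (implant, skin patch, phone, clinic) can be sketched as a toy data pipeline. The three monitored substances come from the article, but the units, value ranges, and packet format below are invented for illustration and are not the EPFL design:

```python
import random

# Toy model of the implant's measure-and-relay cycle. The substance
# names are from the article; ranges and packet format are assumptions.
SENSOR_RANGES = {
    "glucose": (3.9, 7.8),   # hypothetical mmol/L range
    "lactate": (0.5, 2.2),
    "ATP":     (0.1, 1.0),
}

def sample_readings(seed=0):
    """One cycle: each enzyme-coated sensor reports a value."""
    rng = random.Random(seed)
    return {name: round(rng.uniform(lo, hi), 2)
            for name, (lo, hi) in SENSOR_RANGES.items()}

def relay_to_clinic(readings):
    """The skin patch forwards the packet over the cellular network."""
    return {"destination": "clinic", "payload": readings}

packet = relay_to_clinic(sample_readings())
print(sorted(packet["payload"]))  # ['ATP', 'glucose', 'lactate']
```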

Hearing colors

Artist Neil Harbisson was born with achromatopsia, rendering him completely color-blind. The Human Antenna experiment started as a way for Harbisson to extend his perception of color beyond this limited grey scale. Eventually, Harbisson and his team developed software capable of transposing colors to vibrations using a simple head-mounted camera extension. Initially, Harbisson carried this rather bulky hardware in a backpack, and simply wore a pair of headphones to hear the colors.

Harbisson eventually had this antenna surgically implanted into his occipital bone. The colors he experiences now resound via bone conduction, as the occipital bone reverberates the sensation around his skull. Harbisson was even permitted to wear his headgear in his passport photo. Although Harbisson has certainly enjoyed the advantages of this unique sensory experience, he claims he never intended to use it as a way to conquer his achromatopsia.

“My aim was never to overcome anything. Seeing in greyscale has many advantages,” said Harbisson in a recent interview with National Geographic. “I have better night vision. I memorize shapes more readily… and black-and-white photocopies are cheaper.”

EyeBorg

The Eyeborg Project is the brainchild of filmmaker Rob Spence. Spence wanted the ability to deliver a literal point-of-view filming experience. He accomplished this primary objective by teaming up with ocularist Phil Bowen, who designed the prosthetic camera and electronic eye shell housing. Then, a team of engineers and RF Links, a company that specializes in RF wireless design, created a miniature camera and RF transmitter for the project. This massive collaboration eventually gave us the world’s first literal POV feed. Next, the team hopes to create an “eyecam” that looks like a real eye rather than a Terminator-esque mockup. If you want to ruin your entire day, you can watch part of the procedure here.

The North Sense

Unlike many of these other implantable biohacks, the North Sense from Cyborg Nest is an “exo-sense” model, meaning the unit is attached to the surface of the skin rather than inserted beneath it. The company recommends mounting the unit on the upper chest. Once there, it has a rather basic purpose. As the name suggests, the device gently vibrates whenever the wearer is facing magnetic north.

Couldn’t this basic functionality be replaced by a compass or a rudimentary app? Simply put, yes. However, like the aforementioned Human Antenna project, the whole point is to develop a subliminal relationship with this new stimulus. At first, the gentle vibration may be slightly jarring, but once you’ve become accustomed to the response, you’ll interpret the data almost subconsciously. Cyborg Nest is currently designing similar technologies for panoramic audio and visual sensing, which will enable individuals to interpret what is happening behind them at any given time. Read more about the North Sense here.
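In software terms, the North Sense’s job reduces to a heading check. Here’s a minimal sketch, assuming a two-axis magnetometer and a tolerance band around magnetic north; the real device’s axis mapping, tolerance, and sensor details aren’t public:

```python
import math

def facing_north(mag_x, mag_y, tolerance_deg=10.0):
    """Return True when the magnetometer heading is within
    tolerance_deg of magnetic north (0 degrees)."""
    heading = math.degrees(math.atan2(mag_y, mag_x)) % 360.0
    # Angular distance to north, accounting for the 360/0 wrap-around
    off_north = min(heading, 360.0 - heading)
    return off_north <= tolerance_deg

# A firmware loop would poll the sensor and buzz whenever this is True:
print(facing_north(1.0, 0.0))   # heading 0 degrees  -> True
print(facing_north(0.0, 1.0))   # heading 90 degrees -> False
```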

Biomagnets

Biomagnets are one of the more popular trends in biohacking. This augmentation typically uses either small disc-shaped or bulkier cylindrical magnets, which give humans alternative ways to sense and interact with their environment. As we are all quite familiar, when a magnet encounters a magnetic field, it reacts. Once inserted beneath the skin (ideally along the fingertips or palm), these slight reactions stimulate tactile nerve endings. While the smaller magnets are ideal for tactile sensations, the larger magnets are preferred when it comes to actually lifting and interacting with magnetic objects. You’ll have hours of bar tricks literally at your fingertips, though you might have to be careful not to interfere with your electronic devices.

NFC and RFID tags

RFID and NFC implants are another wildly popular biohack. Most individuals choose to have these implants placed on the back of the hand, directly between the thumb and index finger, or along the wrist. This particular technology is one of the more functional cyborg modifications. Once inserted, these “chips” can be used for a slew of identification purposes, among other functions. These chips can replace many of our keys and passwords, allowing us to use transponders to unlock doors, start vehicles, and even log into computers and smart devices.
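The identification workflow is simple to sketch: a reader looks up the scanned chip’s UID in a registry of enrolled tags and triggers the associated action. The UIDs and actions below are made up, and real deployments typically layer challenge-response cryptography on top rather than trusting a bare UID, which can be cloned:

```python
# Hypothetical door/login controller keyed on enrolled chip UIDs.
AUTHORIZED_UIDS = {
    "04:A2:3B:9C:1F:80:04": "unlock front door",
    "04:77:0E:2D:55:81:04": "log in to workstation",
}

def on_scan(uid):
    """Called by the reader with the UID it just saw."""
    action = AUTHORIZED_UIDS.get(uid)
    return action if action is not None else "access denied"

print(on_scan("04:A2:3B:9C:1F:80:04"))  # unlock front door
print(on_scan("DE:AD:BE:EF"))           # access denied
```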

Circadia implant

In 2013, Tim Cannon, “cyborg” and Grindhouse Wetware co-founder, infamously had a Circadia 1.0 implanted in his arm without professional medical assistance. The bulky implant protrudes rather conspicuously from the bottom of his forearm. Currently, the Circadia is capable of little more than measuring Tim’s body temperature. However, eventually Cannon hopes more advanced versions of the implant will be able to relay this information to smart appliances, enabling his home to react to his internal temperature. Until that glorious day, we suggest using your run-of-the-mill thermometer, and adjusting your thermostat like the rest of us mere mortals.

Bioluminescence

This project from Grindhouse Wetware adds a whole new meaning to “Day-Glo.” The biohack collective inserted these Northstar V1 implants into a few willing grinders. These units are slimmer than the Circadia 1.0 we previously mentioned, though they’re about as limited when it comes to functionality. The implants are designed to emulate bioluminescence using a series of LEDs.




21
May

Tesla extends free Supercharger use to all existing owners


When Tesla put an end to free lifetime Supercharger access for new customers, it left more than a few people crestfallen — one of the nicer perks of ownership just went out the window. However, Tesla has had a (partial) change of heart. As of now, any existing owner has free Supercharger use, even if you bought after the January 15th cut-off. And if you upgrade to a Model S or Model X in the future, you’ll get to keep that gratis electricity. There aren’t many people who’ll need the offer right away (the Model S is only 5 years old), but this could give you a reason to upgrade quickly instead of holding on to your EV for as long as possible. And if you’re a first-time buyer, don’t fret — you’ll have a way of scoring free charging as well.

The automaker’s referral program now lets current Model S and Model X owners give that unlimited free Supercharger use to as many as five buyers through their referral code (which still includes $1,000 off, we’d add). If you know a friend who’s an early adopter, you might just catch a big break.

These moves have been a pleasant surprise to at least a few recent owners, but they make sense in a larger context. While Tesla’s sales are already ramping up even before the Model 3 arrives, they’re still small compared to some mainstream car brands. Whatever financial hit Tesla takes by extending free Supercharger access may pay for itself by keeping sales healthy, especially among potential customers who were kicking themselves for not buying before 2017.

Source: Electrek

21
May

At I/O 2017, Google doubled down on a future built on AI


A few years ago, when a cadre of dudes jumped out of a zeppelin wearing Google Glass, nearly everyone watching had a “holy shit” moment. Company execs had just run through a slew of big, consumer-facing announcements, and then Sergey Brin threw the presentation to a live video streamed by people hurtling through the air. In that moment, Google wasn’t just a terribly smart company — it was a terribly cool one, too.

Fast forward a few years, and I/O now seems a little subdued. Apart from the crowd clamoring to see LCD Soundsystem run through a set, the energy in the air seemed calmer than before. Last year’s fun, open-air demo areas were gone too, replaced mostly by air-conditioned domes after attendees complained about the heat.

So yes, I/O is different now. Google doesn’t have the exact same priorities for its developer conference as it used to. And that’s OK. Arguably, things are as they should be. The shiny new gadgets can come later — Google’s message with I/O 2017 is all about weaving the fabric of the future, with artificial intelligence as the most crucial thread.

Just look at the things Google played up in its keynotes. The search giant made it easier for hardware makers to bake the Google Assistant into their products, for one. (Oh, and there’s iOS support now, too.) That means broader adoption of a feature whose primary purpose is to understand you, be it via your voice or the bits of information that form a digital outline of your life.

The level of computational intelligence required to understand a person writing or speaking (in several new languages, no less) is crucially important as the interfaces on our devices become more personal. And of course, there’s Google Lens, which seeks to understand the physical artifacts in front of a camera and present the user with ways to interact with them. Like we did so many ages ago, AI is crawling out of the dark.

The technological concept isn’t new — just look at Google Goggles. What’s new is the extent to which we’re able to find meaning in data that looks like noise. Consider the announcement of next-generation Tensor Processing Units (woven into Google’s cloud no less). It more-or-less means the massive data crunching needed to train AIs to play poker or tell hot dogs from non-hot dogs won’t take as much time.

TensorFlow is already one of the leading platforms for teaching AIs, and doubling down on hardware that makes such growth easier is both impactful and financially savvy. TensorFlow will likely shape the way you get things done whether you’re aware of it or not. With TensorFlow Lite — a scaled-down version of the software library meant to run on mobile devices — the company believes it has found a way to make Android phones even better at interacting with you.

“We think these new capabilities will help power a next generation of on-device speech processing, visual search, augmented reality, and more,” said Android engineering chief Dave Burke on-stage.

The boundaries between some of these projects are mostly conceptual. Just look at Google’s push for standalone VR headsets. I personally think Google will run into trouble positioning these devices between low-cost fare like Gear VR and the premium, Oculus Rift-level stuff. But squeezing all that intelligence and processing power into a single device is endlessly intriguing, and it doesn’t take a huge leap to see how Google’s Lens and TensorFlow Lite could make future self-contained headsets — ones that focus more on AR, at least — remarkably useful.

Google didn’t rock our faces with new phones; instead, it offered a vision of the future that feels both full of potential and surprisingly imminent. Tell me that’s not exciting.

For all the latest news and updates from Google I/O 2017, follow along here

21
May

LearnPhoto365 app: Our first take


While the LearnPhoto365 app could use a design and customization upgrade, the random assignment generators are worth the price of the app.

From college classes to online tutorials, there are almost just as many ways to learn photography as there are cameras on Amazon. But there’s one method that every photographer uses to hone their craft: practice. Starting a 365-day photo challenge makes that practice a priority, but comes with the risk of burnout after the ideas simply stop coming. Fortunately, this is 2017, and as with so many other aspects of our lives, there’s an app for that: LearnPhoto365 Photography Assignment Generator.

LearnPhoto365 serves as both a platform for sending daily push notifications to remind you to actually do the challenge and a method for generating new challenge ideas when the lack of inspiration strikes. The app includes both pre-written assignments and programs that randomly generate a much larger variety of potential shoots. Available on both iOS and Android platforms, the app has a free version with a smaller number of assignments and a full $4.99 version. For iOS users, there’s also a $1.99 version that focuses on iPhone photography instead of universal challenges that can be shot with any DSLR or mirrorless camera.

A challenge tailored to your style — even if that’s selfies

The app opens with a menu allowing you to choose from a list of photo challenges, randomly generate an assignment, access favorites, or start a long-term challenge. While the name suggests a daily photo challenge, users can also choose less intense 52-week or 30-day challenges.

There’s a good amount of pre-written assignments, but the app’s real gem is the nine random assignment generators.

Along with randomly generated lists of 365, 52, 30, or single challenges, the app also includes a 365-day selfie option, generating a list of new places or things to include in your next shot. A weekly portrait challenge encourages photographers to photograph people from different walks of life with a list of different careers. The 30-day subject challenge picks a subject as random as apple juice and armchairs then suggests 30-plus different ways to shoot it, from lens choice to lighting.

When you select a challenge, you can save that list to the favorites to access again later. But “save” is a sort of misnomer — if you generate a new list within the same challenge category, that favorite list will be replaced with a new one. A pop-up warning reminds you that the new list generation cannot be undone. Along with saving complete lists, you can also save individual challenges.

Each list is randomly generated, which is great for challenging yourself with new subjects and techniques that you haven’t experimented with yet. Clicking on the challenge will bring up a small description which is usually accompanied by a shooting tip — like using a rear or second curtain sync when shooting flash with a slow shutter speed — and a few thumbnail sample photos.

The pre-written assignments also use a list of suggested camera settings, including what mode and exposure settings to use and whether or not a tripod is necessary. The suggested settings aren’t designed to teach a newbie the ropes entirely from scratch, but they serve as a good reminder while out shooting in the field.

While the challenge lists are well-varied, there are few customization options. Inside those lists, you can click the “X” to remove a task, which might be necessary when, say, the portrait challenge asks you to photograph a “milkman,” a profession that is rather scarce in 2017. Somewhat strangely, after removing “milkman” from the list, the app didn’t randomly generate a new challenge to replace it, leaving us with a 51-week challenge.

Put new skills to the test

The app includes a range of challenge options, but the only push notifications are daily, so the reminders cannot be used for weekly or monthly challenges. The notifications also don’t include any advanced features, like choosing what time you want to be reminded of your task. Our notifications came in at 9 a.m. every day — so if you work a 9-5, you’re getting the reminders at the beginning of the work day and may very well forget by the time 5 p.m. rolls around and you can actually go out shooting.

On the home menu and inside the “choose from list” option, users can access all the challenges organized by categories. This option makes it easy to find an assignment not for a daily photo challenge, but to practice specific techniques. If you just learned how to use shutter speed, for example, you can access a list of good subjects to practice using fast shutter speeds with, along with tips, sample shots, and suggested camera settings.


While there’s a good amount of pre-written assignments, the app’s real gem is the nine random assignment generators designed for getting out of a creativity rut. A scavenger hunt option generates a list of items to go out and shoot, while the number of the day assignment asks you to shoot objects found only in groupings of that number. Another sends you out shooting with only a single focal length to use.

The “places” challenge suggests a new shooting location, the “two random objects” challenge generates two unrelated items for you to shoot together, and the “object in environment” challenge pairs an item with an unrelated location. The random assignments do appear to be truly random, picked by software without consideration for practicality. This can be a good thing, as all of the random generators are excellent tools for getting outside of your comfort zone and shooting something different, or generating unique, often odd assignments when you’re at a loss for what to shoot.
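The “two random objects” generator can be approximated in a few lines. The object pool here is invented, since the app’s actual word lists aren’t published:

```python
import random

# Toy "two random objects" assignment generator with a made-up pool.
OBJECTS = ["armchair", "apple juice", "bicycle", "umbrella",
           "teacup", "ladder", "traffic cone", "violin"]

def two_random_objects(rng=None):
    """Pick two distinct items to photograph together."""
    rng = rng or random.Random()
    first, second = rng.sample(OBJECTS, 2)  # sample() never repeats
    return f"Photograph a {first} together with a {second}."

# Seeded for a repeatable assignment:
print(two_random_objects(random.Random(1)))
```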

Conclusion

Unlike most apps, LearnPhoto365 isn’t from a big company, but an individual photographer — Noel Chenier. Chenier is a Canadian photojournalist, part-time photography teacher and blogger at a site with the same name, LearnPhoto365.

While the app is overall impressive for coming from a one-person team, there are a few shortfalls. The design and user interface are very basic and rather boring. While I didn’t experience any crashes using version 3.0 on an iPhone 7, the platform could use some graphic design love (especially since photographers, after all, tend to appreciate strong visual aesthetics). The lack of customization options for push notifications and the inability to add new items after deleting them from a challenge list are weaknesses that should be addressed in the next update.

LearnPhoto365 is different because it actually suggests ideas and challenges designed for serious photography

While LearnPhoto365 is a tad simplistic and could use some design and customization improvements, it does exactly what it advertises. It offers ideas to get enthusiasts out of a creativity rut, gives newbies different ways to try out their new knowledge, and crafts interesting challenges that will inspire photographers of all stripes.

While there are a handful of other 365 photo apps, they simply put your photos on a calendar in a sort of daily journal and are largely designed to be used with the built-in smartphone camera. LearnPhoto365 is different because it actually suggests ideas and challenges designed for serious photography with DSLR and mirrorless cameras.

While less than the cost of lunch, the price may still be a bit high for such a small app that could use a few customization and design tweaks, but there are few options like it available. The free version of the app is worth a download for anyone starting a photo challenge, while the full version is a good option for photographers who need that extra push to get outside their comfort zone with unusual photo assignments. Download LearnPhoto365 if you feel intrigued by the photography challenges or if you want some inspiration — but don’t download it if you’re turned off by outdated designs or need something with full customization. Hopefully, a future update will improve on both of those shortcomings.

Highs

  • Random photography assignment generators
  • In-app shooting tips
  • Wide variety of photo challenges

Lows

  • Push notifications cannot be customized
  • Deleted list items are not regenerated
  • App design is a bit old school




21
May

Weed, booze, guns. This vending machine can sell just about anything


Why it matters to you

If this tech catches on, it could make weed, alcohol, and guns more accessible than ever — for better or worse

What if, the next time you walked up to a vending machine, the options sandwiched between the Cheetos and gum included marijuana, a beer, or a handgun? Well, that is a real possibility thanks to a so-called “smart vending machine” developed by American Green, a Phoenix-based company working on technological advancements in the legal cannabis industry.

Yes, weed is why the company decided the world needs this vending machine — but the machine itself is designed to sell just about anything that you’d otherwise have to buy from a human who can verify you’re of a certain age, have a necessary prescription, or have a clean enough history to purchase a firearm.

How? Well, for starters, you’ll have to register online with an official ID so that American Green knows you are who you say you are. You then couple your profile with your vein architecture. A biometric vein scanner on the front looks like it takes your fingerprint, but the developers insist that vein architecture (how the veins are stacked, shaped, and run through your finger) is an equally unique attribute. Plus, unlike a fingerprint, which can be spoofed in a few ways, vein scanners are harder to cheat, and won’t work if someone decides they’re willing to cut your finger off to get your share of weed or alcohol or whatever.
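The enroll-then-verify flow works like any biometric match: compare a freshly scanned template against the one stored at registration and accept only if they’re close enough. In this sketch, a plain list of numbers stands in for the vein “architecture” and a simple distance threshold stands in for the matcher; both are assumptions, as a real scanner derives a feature vector from near-infrared imagery:

```python
# Toy enroll-then-verify biometric flow; all values are invented.
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

ENROLLED = {"alice": [0.12, 0.80, 0.33, 0.57]}  # stored at registration

def verify(user_id, scanned, threshold=0.05):
    """Accept only if the fresh scan closely matches the enrollment."""
    template = ENROLLED.get(user_id)
    return template is not None and distance(template, scanned) <= threshold

print(verify("alice", [0.12, 0.80, 0.33, 0.57]))  # True
print(verify("alice", [0.90, 0.10, 0.50, 0.20]))  # False
```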

American Green says it’s just producing the vending machines, and that it’s up to the actual vendors to determine what goes inside and fight whatever legal battles come with, say, putting an unmanned box full of guns on the street. Its creators say, though, that with its sturdy construction, extra layers of metal, a camera up front, and an alarm that’s triggered the second someone starts messing with it, this vending machine is at least as safe as a convenience store, if not more so.

The company insists you’ll start seeing these out in the wild in as little as a couple of months as it finalizes talks with casinos, professional sports teams, and marijuana dispensaries.




21
May

Talk it up in this week’s comment thread!


We. Are. Tired.

Google I/O is one of the coolest things and most fun places we’ll visit this year — but we’re tired. While we’re resting our minds and bodies, we can think about the things we saw and people we talked to in Mountain View and all the things we’ll have to say as they move from the “demo” to the real.

And there is a lot to think about this year. It’s easy when Google shows us phones or Chromebooks or any other product. We know what to do and how to do it to give you everything you need to know. I’ll dare say we’re pretty damn good at it, too.


But this year we have to think. This was very much Google looking everyone in the eye and saying, “look what we can do,” and then enjoying the dropped jaws of everyone in the audience. After doing some thinking and talking we agree that Google Lens is going to change a lot of the ways we do things and be a hit on every platform.

While we’re resting, you guys take this time and this space to talk about anything and everything. Then be ready for whatever comes next!

21
May

Recommended Reading: Inside Apple’s new spaceship campus


One More Thing
Steven Levy,
Wired

Apple’s so-called spaceship campus, or Apple Park, has been in the works for a while now, but this week Wired gave us a big update on the progress. The obsessive attention to detail, tunnel entry, modular “pod” sections for employees and more are all detailed here. Like any other Apple product, the company is using a keen eye when designing its biggest project thus far.

Musicians Have Fought Too Damn Hard to Let Trump Kill Net Neutrality Now
Marc Hogan, Pitchfork

Turns out musicians have just as much on the line with regards to net neutrality as anyone.

Will ‘Game of Thrones’-Like Alliances Shape the Future of Driverless Cars?
Nicholas Clairmont, The Atlantic

Amid a big legal battle with Uber, Alphabet’s Waymo announced it’s teaming up with Lyft. Will the two sides do battle in the streets of King’s Landing?

Why We Love — and Need — The Rock
David French, National Review

The Rock for President? Who knows, but there’s no denying Mr. Johnson’s charisma and dominance on the big screen.

Any Half-Decent Hacker Could Break Into Mar-a-Lago
Jeff Larson, Surya Mattu and Julia Angwin
ProPublica

Welp.

20
May

SteamVR makes its launcher ‘more social’ with Home


We’ve seen attempts at free-flowing virtual worlds from the likes of Second Life and the (dearly departed) PlayStation Home, but with VR we’re getting another shot at it. First Facebook Spaces appeared, and now Valve is beta testing SteamVR Home. The Destinations Workshop previously allowed users to build and customize spaces, but opting into Home puts the world up front when they start SteamVR. Now Destinations is Home, and people who have been using it will find all of their avatars and wearables in the new space.

With Home, the virtual environments are higher res “and support animation, sound, games, and interactivity.” If you find one you like, you can set it as your default space, and once it’s set up the way you want, you can choose to open it up to the public or invite friends in for a visit complete with voice chat.

To opt into the SteamVR beta:

  • Open Steam on your desktop
  • Find SteamVR in your Library under Tools
  • Right-click and go to Properties
  • Select the Betas tab and pick SteamVR Beta from the dropdown

Source: Steam Community

20
May

Mind-controlled exoskeletons could let quadriplegics walk again


Why it matters to you

Brain-computer interfaces are making our childhood dreams of mind control come true.

A new device could let paralyzed people control exoskeletons with their thoughts. Developed by researchers at the University of Melbourne, the device plugs directly into the brain without doctors having to perform surgery on the skull.

Stents are used by cardiologists to prop open blood vessels, and this implant is designed around one; it slides through an incision made in the jugular vein up to a blood vessel near the motor cortex in the brain. At the end of the stent, a metallic mesh with electrodes picks up brain activity and relays this information to a recording device in the wearer’s chest, which wirelessly transmits it to an external computer that will control the exoskeleton. The researchers have called their device a “stentrode.”

When a wearer thinks about a certain direction — for example, left — the brain fires in a particular way. By uncovering and detecting the way the brain fires, brain-computer interfaces are able to effectively “read” thoughts and translate them into actions.
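A toy version of that “read and translate” step is a nearest-centroid classifier: record a characteristic activity pattern for each intended direction, then map new recordings to the closest one. The three-number “signatures” below are invented stand-ins for real neural features, and actual BCI decoders are far more sophisticated:

```python
# Toy nearest-centroid decoder for imagined-movement commands.
CENTROIDS = {
    "left":  [1.0, 0.2, 0.1],  # invented activity signatures
    "right": [0.1, 1.0, 0.2],
    "stop":  [0.2, 0.1, 1.0],
}

def decode(recording):
    """Translate a recording into the nearest direction command."""
    def sq_dist(center):
        return sum((x - y) ** 2 for x, y in zip(recording, center))
    return min(CENTROIDS, key=lambda d: sq_dist(CENTROIDS[d]))

print(decode([0.9, 0.3, 0.0]))  # left
```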

The stentrode has advantages over other devices that detect brain signals. Electrodes are non-invasive but not always reliable when attached to the outside of the head, where they have to pick up signals through the skull. Direct implantation requires brain surgery, and since the brain sees the electrodes as foreign objects, it covers them in scar tissue, inhibiting their function.

The stentrode on the other hand is minimally invasive, requiring no open brain surgery, and can pick up brain signals with high fidelity. The device was trialled with live sheep last year.

“The brain doesn’t even know it’s there,” David Grayden, a University of Melbourne engineer who oversaw the device’s development, told New Scientist. “The recordings are not quite as detailed as those from directly implanted electrodes,” he added, “but they’re close.”

Next year, as many as five quadriplegic patients will test the device. The causes of their paralysis vary, and include stroke, spinal cord injury, and muscular dystrophy.

“The end goal is that the person will be able to think about moving and an exoskeleton will obey,” Grayden said.

The new tech is timely — modern exoskeletons are quickly surpassing science fiction’s dreams.




20
May

The best screen protectors for the LG G6


LG recently released the LG G6, ushering in the era of the 18:9 aspect ratio. Until now, we were used to the 16:9 aspect ratio, at least on smartphones. The face of LG’s latest phone is nearly 85 percent screen, and while it’s certainly beautiful, all that glass is vulnerable to drop damage and scratches.

Picking the right screen protector isn’t an easy task, however. Some of them won’t work with cases, while others may affect the clarity or touch sensitivity of your screen. Thankfully, we’ve put together a list of some of the best screen protectors for the LG G6, so you can help protect that beautiful display.

The Best

Otterbox Alpha Glass ($36)

When it comes to drop protection, the name Otterbox comes up again and again, and with good reason. This screen protector does everything that you would want a glass screen protector to do. It maintains the clarity of your screen, it won’t interfere with your use of the smartphone as it doesn’t affect screen sensitivity, and it resists both scratching and shattering.

Buy one now from:

Amazon Otterbox

The Rest

Spigen Glas.tr Slim HD ($30)

Spigen’s G6 screen protector offers decent scratch and shatter protection. The design curves perfectly with the G6’s screen, but it doesn’t go all the way to the edges, which is great if you want to pair it with a case. It has no nasty rainbow effect — unlike cheaper alternatives — and maintains the full clarity of your display.

Buy one now from:

Spigen

BodyGuardz Pure 2 ($40)

The BodyGuardz protector is interesting in that it uses a newer material, aluminosilicate glass. The material addresses a common complaint about tempered glass protectors: their thickness. Aluminosilicate makes for a thinner protector, one that is more protective than its soda-lime-based alternatives. Another advantage to BodyGuardz’s products is that they come with a lifetime guarantee, assuming you purchase them from an authorized reseller.

Buy one now from:

Amazon BodyGuardz

iCarez Tempered Glass (Case Friendly) ($8)

The iCarez screen protector features rounded edges, making it hard to tell if your phone is even using a screen protector. This version doesn’t go all the way to the edges, however, which ensures it will play nice with most LG G6 cases. They also have a regular, 2.5D protector that goes to the edges, if you like to go caseless. Like other quality screen protectors, it also offers the utmost clarity and impact protection, without sacrificing touch sensitivity.

Buy one now from:

Amazon iCarez

Supershieldz Tempered Glass ($8)

Supershieldz offers one of the best deals you can find on LG G6 screen protectors. This bundle of three is a mere $8 on Amazon, and it offers features typically reserved for more expensive options. It is, however, worth mentioning that this tempered glass protector has 2.5D rounded edges and is not case-friendly. It also offers 9H scratch resistance, an oleophobic coating that resists fingerprints, and no rainbow effect, which helps maintain screen clarity.

Buy one now from:

Amazon