Become a Creative Cloud Master with the Complete Adobe CC Training Bundle!
Designing posters for events, making business cards, or creating graphics for YouTube videos all take powerful design software. The Adobe Creative Cloud is the gold standard in the design industry today, used by professionals and amateurs alike.
Learn Adobe Photoshop, Illustrator, InDesign, and more!
While the Adobe CC is loaded with powerful programs like Photoshop, Illustrator, and InDesign, they won’t be much good to you unless you know how to use them, and learning how to use these sophisticated design programs can be tricky — especially if you want to master each program. Lucky for you, Android Central Digital Offers has the perfect solution.

The Complete Adobe CC Training Bundle covers everything you need to know to get started using Photoshop, Illustrator, and InDesign for all your graphic design and publishing needs, along with comprehensive courses on Adobe Flash and Premiere Pro and a dedicated course on animation.
Each course individually is valued at $79, but you’ll pay just $29 for all seven courses — that’s 65 hours of valuable content and tutorials to get you started using Adobe CC.
Get the Complete Adobe CC Training Bundle and save 95% off the regular price!
See at iMore Digital Offers
Best USB-C hubs for Chromebooks

Are you looking to turn your Chromebook into a productivity powerhouse? Check out these USB-C hubs!
Chromebooks are great for simple productivity on the go, but when you get home, it’s nice to move to a larger setup. A bigger screen, a nice mechanical keyboard, a proper webcam, and so on. Yeah, you can plug things in individually, but a USB-C hub means you just need to plug one cable into your Chromebook and all your accessories connect. Newer Chromebooks like the Pixelbook only feature USB-C ports, so you’ll need adaptors to use most accessories.
These are the best USB-C hubs for your Chromebook!
A word on compatibility
Google helps manufacturers design all of the motherboards inside every Chromebook and Chromebox, and builds all of the necessary drivers into Chrome OS. This is why Google can send an update to every Chrome device every six weeks for years on end, and it also means that if an accessory works with one Chromebook, it works with all of them. And if you see a USB-C port on a Chromebook, know that it supports charging, video out, and data transfer. All of these hubs have been used by a member of the Android Central team with their personal Chromebooks.
- ARKTEK USB-C Hub
- AUKEY USB-C Hub
- HooToo USB-C Hub
- QacQoc 8-in-1 hub
- Dell WD15 Monitor Dock
- Plugable USB-C Triple Display Docking Station
ARKTEK USB-C Hub

If you just need a display out and charging port and don’t want to spend too much money, ARKTEK has you covered. It offers a basic hub with a USB-C port for passthrough charging, an HDMI port, and either one or two USB-A 3.0 ports. I’ve had this hub in my bag for about a year now, since it’s so compact and easy to carry around. The HDMI port will output at up to 4K at 30Hz with a compatible HDMI cable and monitor. The lightweight nature of this hub means it’ll slide around on your desk and potentially scratch things, so consider some double-sided tape to keep things in place.
The ARKTEK USB-C Hub costs $16 for one USB-A port, or $22 for two USB-A ports.
See at Amazon
AUKEY USB-C Hub

Stepping up in price and selection is this hub from AUKEY. It offers a USB-C port for passthrough charging, one HDMI port, three USB-A 3.0 ports, one microSD card slot, and one full-sized SD card slot. The HDMI port can output at 4K at 30Hz with a compatible HDMI cable and monitor, and this hub is still light enough to live in your bag if you want to carry it around. That light weight means it’ll slide around a bit, even with the rubberized grips on the bottom of the hub.
The AUKEY USB-C Hub is available for $40.
See at Amazon
HooToo USB-C Hub

This HooToo hub is a bit more expensive than the AUKEY one, but depending on the layout of your desk it may be a better fit. It offers three USB-A 3.0 ports, a USB-C port for passthrough charging, and an HDMI port that can output at 4K at 30Hz. There’s also a full-sized SD card slot, but this one is much faster than the one on the AUKEY model: it transfers at 5 Gbps instead of 480 Mbps. This is another lightweight hub, perfect for throwing in a bag or sliding across your desk.
The HooToo USB-C Hub is available for $48.
See at Amazon
QacQoc 8-in-1 hub

The last truly portable hub is more expensive than the others, but offers more ports. You get a gigabit Ethernet port, a USB-C port for passthrough charging, a microSD slot, a full-sized SD card slot, three USB-A 3.0 ports, and an HDMI port that can output at 4K at 30Hz. This hub also includes a pouch for travel, and is available in a variety of colors to best match your Chromebook.
The QacQoc 8-in-1 hub is available in gray with white, gold, silver, rose gold, and gray with black.
See at Amazon
Dell WD15 Monitor Dock

This will be overkill for most users, but indispensable if you have a lot of accessories. There are three display output options (HDMI, Mini DisplayPort, and VGA), though you can only use two of them with a Chromebook. You also get three USB-A 3.0 ports (two of them on the front of the dock), two USB-A 2.0 ports, an Ethernet jack, and a combination headphone/microphone jack on the front for easy access. The dock itself uses a proprietary charger, since it may need more power than USB-C can provide. Your Chromebook connects and can charge with a single USB-C cable.
The Dell WD15 Monitor Dock is available for $136.
See at Amazon
Plugable USB-C Triple Display Docking Station

This is another super expensive option for power users. It features two HDMI outputs and one DVI output, though you can only use two of these with a Chromebook. Also on the back are gigabit Ethernet, three USB-A 2.0 ports, the USB-C port that charges and connects to your Chromebook, and the proprietary power port for the dock itself. On the front, you get another USB-A port, this time at 3.0 speeds, separate headphone and microphone jacks, and a USB-C 3.0 port. This dock may be a bit overkill for most Chromebook users, but if you have a lot of accessories it’ll be well worth the money.
The Plugable USB-C Triple Display Docking Station is available for $180.
See at Amazon
What say you?
Which USB-C hub do you use with your Chromebook? Let us know in the comments below!
Chromebooks
- The best Chromebooks
- Chromebooks in education: Everything you need to know
- Should you buy a Chromebook?
- Chromebook Buyers Guide
- Google Pixelbook review
- Join our Chromebook forums
Mynt massagers, portable gas grills, Star Wars e-books, and more are all discounted today
Whether you’re looking for new tech gear or household items, we’ve got you covered.
We found plenty of great deals today that include big discounts on various Mynt massagers, Cuisinart’s portable gas grill, a Star Wars e-book and much more! Time’s running out to take advantage of these prices, so hurry!
View the rest of the deals
If you want to know about the deals as soon as they are happening, you’ll want to follow Thrifter on Twitter, and sign up for the newsletter, because missing out on a great deal stinks!
How to Get a MacBook or MacBook Pro Keyboard Repaired Free Under Apple’s Service Program
Apple has initiated a new worldwide service program offering free repairs of MacBook and MacBook Pro models equipped with low-profile, butterfly mechanism keyboards, after the company determined that “a small percentage” of the keyboards may develop one or more of the following issues:
- Letters or characters repeat unexpectedly
- Letters or characters do not appear
- Key(s) feel “sticky” or do not respond in a consistent manner
Apple or Apple Authorized Service Providers will service eligible MacBook and MacBook Pro keyboards free of charge. Apple says the process may involve the replacement of one or more keys or the whole keyboard.
The following MacBook and MacBook Pro models are eligible for the program:
- MacBook (Retina, 12-inch, Early 2015)
- MacBook (Retina, 12-inch, Early 2016)
- MacBook (Retina, 12-inch, 2017)
- MacBook Pro (13-inch, 2016, Two Thunderbolt 3 Ports)
- MacBook Pro (13-inch, 2016, Four Thunderbolt 3 Ports)
- MacBook Pro (15-inch, 2016)
- MacBook Pro (13-inch, 2017, Two Thunderbolt 3 Ports)
- MacBook Pro (13-inch, 2017, Four Thunderbolt 3 Ports)
- MacBook Pro (15-inch, 2017)
All other MacBook, MacBook Air, and MacBook Pro models are not equipped with butterfly mechanism keyboards, and thus are ineligible.
To identify your MacBook or MacBook Pro model to see if it is eligible for this program, click on the Apple logo in the top-left corner of the screen and select About This Mac. A window should open, and in the Overview tab, the model should be listed, such as MacBook Pro (15-inch, 2016).
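If you prefer the command line, the model identifier is also exposed by macOS’s built-in system_profiler tool. Below is a rough Python sketch that reads it and maps a few identifiers to the marketing names listed above; the identifier-to-name mapping is my own illustrative, partial list and worth double-checking against Apple’s documentation before relying on it.

```python
# Rough sketch: read the Mac's model identifier via macOS's built-in
# system_profiler tool, then map it to a marketing name. The mapping below
# is a partial, best-effort list; verify it against Apple's documentation.
import re
import subprocess

ELIGIBLE = {  # illustrative mapping to the eligible models listed above
    "MacBook8,1":     "MacBook (Retina, 12-inch, Early 2015)",
    "MacBook9,1":     "MacBook (Retina, 12-inch, Early 2016)",
    "MacBook10,1":    "MacBook (Retina, 12-inch, 2017)",
    "MacBookPro13,1": "MacBook Pro (13-inch, 2016, Two Thunderbolt 3 Ports)",
    "MacBookPro13,2": "MacBook Pro (13-inch, 2016, Four Thunderbolt 3 Ports)",
    "MacBookPro13,3": "MacBook Pro (15-inch, 2016)",
    "MacBookPro14,1": "MacBook Pro (13-inch, 2017, Two Thunderbolt 3 Ports)",
    "MacBookPro14,2": "MacBook Pro (13-inch, 2017, Four Thunderbolt 3 Ports)",
    "MacBookPro14,3": "MacBook Pro (15-inch, 2017)",
}

def model_identifier() -> str:
    # system_profiler prints a "Model Identifier:" line, e.g. MacBookPro14,3
    out = subprocess.run(
        ["system_profiler", "SPHardwareDataType"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Model Identifier:\s*(\S+)", out)
    return match.group(1) if match else ""

if __name__ == "__main__":
    ident = model_identifier()
    print(ident, "->", ELIGIBLE.get(ident, "not in the eligible list above"))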
Apple or an Apple Authorized Service Provider will examine the MacBook or MacBook Pro prior to any service to verify that it is eligible for this program. If the notebook has any damage which impairs the service, that issue will need to be repaired first, and in some cases, there may be repair fees.
Step-by-step instructions ahead…
MacRumors obtained an internal service document from Apple that outlines exactly which pre-existing damage is eligible for this program:
Eligible physical damage includes:
• keyboard damage due to attempted keycap repair
• physical top case damage unrelated to the keyboard
Ineligible physical damage includes:
• liquid damage
• physical keyboard damage unrelated to keycap repair
Apple’s service program is valid for four years after an eligible MacBook or MacBook Pro was originally purchased, meaning all of the models listed above remain eligible until at least April 10, 2019, four years after the original 12-inch MacBook, the first model with a butterfly keyboard, was released.
An eligible MacBook or MacBook Pro purchased today, meanwhile, would be eligible for coverage under this program until June 2022.
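The coverage window is simple date math. Here is a tiny Python sketch of the four-year calculation; the purchase dates are just examples, and Apple determines actual eligibility.

```python
# Quick illustration of the four-year coverage window described above.
# Dates are examples only; Apple determines actual eligibility.
from datetime import date

def coverage_end(purchase: date, years: int = 4) -> date:
    return purchase.replace(year=purchase.year + years)

# Original 12-inch MacBook release date -> earliest the program can lapse.
print(coverage_end(date(2015, 4, 10)))   # 2019-04-10
# A machine bought in June 2018 -> covered into June 2022.
print(coverage_end(date(2018, 6, 22)))   # 2022-06-22
```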
How to Initiate a Free Repair
There are three ways to take advantage of Apple’s service program: book an appointment with an Apple Store or Apple Authorized Service Provider on the web, book one through the Apple Support app, or mail in the notebook to Apple’s repair center.
Apple Store or Apple Authorized Service Provider via Web

These steps are web-based, so they work across Mac, iPhone, iPad, and other devices.
Navigate to the Get Support page on Apple’s website.
Select “Mac.”
Select “Mac notebooks.”
Select “Hardware Issues.”
Select “Keyboard not working as expected.”
Select “Bring in for Repair.”
Sign in with your Apple ID.
Specify your location.
Sort by “Availability” or “Distance.”
Select an Apple Store or Apple Authorized Service Provider.
Select an available day and time slot for the appointment.
Arrive at the Apple Store or Apple Authorized Service Provider when your appointment is scheduled. Bring your affected MacBook or MacBook Pro and government-issued photo ID.
Apple Store or Apple Authorized Service Provider via Apple Support App

These steps are based on the Apple Support app for iPhone and iPad.
Open the Apple Support app.
If necessary, tap on the “Account” tab and sign in with your Apple ID.
Tap on the “Get Support” tab.
Tap on your affected MacBook or MacBook Pro.
Tap on “Hardware Issues.”
Tap on “Keyboard not working as expected.”
Tap on “See all” to the right of “Get More Help.”
Tap on “Bring in for Repair.”
Select an Apple Store or Apple Authorized Service Provider.
Select an available day and time slot for the appointment.
Arrive at the Apple Store or Apple Authorized Service Provider when your appointment is scheduled. Bring your affected MacBook or MacBook Pro and government-issued photo ID.
Mail-in to Apple Repair Center
Navigate to the Get Support page on Apple’s website.
Select “Mac.”
Select “Mac notebooks.”
Select “Hardware Issues.”
Select “Keyboard not working as expected.”
Select “Talk to Apple Support Now” or “Chat” to speak with an Apple support representative by phone or online chat.
Inform the representative that you would like to mail in your affected MacBook or MacBook Pro to Apple’s repair center.
Follow further instructions as advised.
Apple’s internal document for this program advises that mail-in service will take five to seven business days to be completed, but Apple’s public-facing page about the program notes that service turn-around time may vary depending upon the type of service and availability of replacement parts.
How to Initiate a Refund
Apple’s website notes that customers who believe their MacBook or MacBook Pro was affected by this issue, and who paid to have their keyboard repaired prior to this program, can contact the company about a refund.
Navigate to the Get Support page on Apple’s website.
Select “Mac.”
Select “Mac notebooks.”
Select “Hardware Issues.”
Select “Keyboard not working as expected.”
Select “Talk to Apple Support Now” or “Chat” to speak with an Apple support representative by phone or online chat.
Inform the representative that you would like a refund.
Follow further instructions as advised.
It should also be possible to visit an Apple Store to inquire about a refund in person, but call the store ahead to confirm.
MacRumors first highlighted customer complaints about the 2016 MacBook Pro keyboard over a year ago, including non-functional keys, strange high-pitched sounds on some keys, and keys with a non-uniform feel. The issues are believed to result from small amounts of dust or debris accumulating under keys.
Apple has been hit with at least three class action lawsuits over the keyboards in the United States, after the company charged some customers with out-of-warranty fees as high as $700 to replace the keyboard on affected MacBook Pro models, as the process requires replacing the entire top case assembly.
Until now, Apple had not publicly acknowledged the keyboard issues, and only offered cleaning instructions to affected customers.
More Details on Apple.com: Keyboard Service Program for MacBook and MacBook Pro
Discuss this article in our forums
How L’Oréal designed the UV Sense to fit on your fingernail
Beauty tech is still in its infancy, but lifestyle brands are slowly warming up to the idea of incorporating technology to improve user experience and sometimes, the product itself. We’ve seen a smart hairbrush that can determine the quality of your hair, a smart mirror that can identify the health of your skin, and now there’s nail art that can track exposure to ultraviolet radiation when you’re out and about.
The UV Sense is a battery-free wearable electronic sensor you stick onto your fingernail. It can measure UV exposure, which you can track via a companion app on your smartphone. It’s from L’Oréal’s Technology Incubator, which has created products such as the Makeup Genius app, where people can try different looks using a smartphone camera, and Le Teint Particulier, an in-store device that scans your skin to create custom foundation for you.
The tricky part about the UV Sense is that if people are supposed to wear it 24/7, it has to look good and feel comfortable. We spoke to the team behind this micro wearable to see how they managed the feat.
From a temporary patch to nail art
The UV Sense may be L’Oréal’s smallest battery-free wearable, but it isn’t the company’s first device to measure UV exposure. Back in 2016, the company released My UV Patch (marketed under the La Roche-Posay brand), and it looked more like a temporary tattoo.
L’Oréal Technology Incubator’s Global Vice President, Guive Balooch, and his team knew wearing a sticker daily wasn’t practical unless you’re at the beach or the swimming pool.
To help come up with a new design and form for the UV Sense, Balooch brought back Swiss designer and Fuseproject founder, Yves Béhar, who designed the original My UV Patch.
“There are no moving parts or buttons, and no physical interfaces to the device. It all happens wirelessly to the phone.”
“The original UV Patch was an incredible feat in technology, however it was strongly lacking a design component,” Béhar told Digital Trends. “The product was unnecessarily large, and was being presented as a novelty item. If you’re asking someone to wear a product 24/7, it should be comfortable, beautiful, fit [into] their personal style, and provide necessary insights into our daily [lives].”
But figuring out the formula for a hassle-free and comfortable device required highlighting pain points that exist within wearable technology today. Balooch and his team narrowed it down to two major issues: Having to charge a battery, and the lack of real estate on people’s wrists. Balooch called in a team of engineers from Northwestern University, who brought up the idea of flexible electronics that don’t need a battery. Earlier this year, Digital Trends spoke to John Rogers, a Northwestern engineer who was a part of the team.
“From a user standpoint, it’s hard to imagine anything simpler, in the sense that you never have to worry if your battery is charged up since it doesn’t need one,” Rogers said. “There are no moving parts or buttons, and no physical interfaces to the device. It all happens wirelessly to the phone.”
Rogers said his team did face challenges developing the UV Sense, specifically when it came to developing designs for the antenna and electronic circuit that would enable it to work with a smartphone. But once those issues were solved, it “opened up the ability to mount technology in places that were previously not considered.”
Some of the “places” Rogers is referring to are fingernails. Through testing, the L’Oréal team found that placing the UV Sense on nails returned the most accurate feedback to UV exposure. Since taking the battery out helped create a thinner and smaller sensor, Béhar saw the opportunity to disguise it as nail art. For even more versatility, the team is in the process of creating bracelets and watch accessories with the sensor, as well as clips that can attach to sunglasses.
How the UV Sense works
The UV Sense is dead simple to use. Stick it on your nail, swipe it over your iPhone or Android phone, and it will wirelessly transfer UV exposure data to the companion app using near-field communication (NFC). It’s the NFC chip that also charges the device through the data transfer process.
UV Sense itself strictly measures UV exposure. The app is where you can find additional info like allergens and pollution.
Apply the sensor to a nail using an adhesive; you can reapply it using the additional adhesives that come packaged in the box. Placing it on your thumbnail exposes the UV Sense to optimal sunlight, and the sensor is activated by UVA and UVB rays. Along with your UV report, you’ll also get some advice on avoiding the sun, and recommendations on L’Oreal products to purchase.
The data the sensor collects is accurate, or at least that’s what L’Oreal claims.
“People would wear the sensor and then we’d have a clunky expensive detector that was right next to it, and [we’d] test the data to make sure we’re really measuring the UV correctly,” Balooch said. “It’s very important to us [and] it’s something we really need to be accurate on.”
It’s important to note the UV Sense itself strictly measures UV exposure. The app is where you can find additional information such as allergens, pollution, and other factors in the environment that can affect your skin. While wearing the sensor does provide you with your own personalized metrics when it comes to your UV exposure, you can still use the app without it to get a more general sense of what else is around you.
Pilot program and launch
The UV Sense will launch in the U.S. this summer as a pilot program. The company will continue testing with dermatologists and consumers, which lets L’Oreal gather more feedback and further improve the experience.
L’Oreal plans on launching the UV Sense under the La Roche-Posay brand worldwide in 2019.
“Our question has always been [how] to bring more performance to the product,” Balooch said. “So we took on this challenge of figuring out how could we make a wearable that’s interesting and exciting technologically, [that] could also give the user some tools to understand the personal level of exposure and what they could do with that information.”
After the pilot program, L’Oreal plans on launching the UV Sense under the La Roche-Posay brand worldwide in 2019. A price for the wearable hasn’t been set yet.
Editors’ Recommendations
- Oculus wants to stretch your skin to see what it feels like to be human
- A fleet of delivery robots could soon be coming to a campus near you
- With ChroMorphous technology, clothes may soon change color with a tap of a phone
- 8 Amazing accessories that could make virtual reality even more immersive
- I wore Levi’s smart jacket for three months, and it changed how I use my phone
Google is adding DRM to all Android apps, but it’s for the right reasons

Metadata will be added to application files when they are built and digitally signed. That’s a lot of letters to say DRM.
Earlier this week, Google quietly rolled out a feature that adds a string of metadata to all APK files (that’s the file type for Android apps) when they are signed by the developer. You can’t install an application that hasn’t been signed during its final build, so that means that all apps built using the latest APK Signature Scheme will have a nice little chunk of DRM built into them. And eventually, your phone will run a version of Android that won’t be able to install apps without it.
What the hell? DRM? Why?
DRM is why Netflix used to only work on approved phones. But it doesn’t have to be used for evil.
We can relax (for now). We all hate DRM (technically, Digital Rights Management) because of the way developers and publishers have abused it. DRM means you are being treated like a thief before you buy any software. A great example is having to install the Origin client, which regularly checks in online, to run any games published by EA.
EA doesn’t trust that we paid for the software title, so it forces us to present our papers when demanded. PC gaming is rife with DRM, and applications like Steam or Uplay exist for the very same reason. Other examples come from Sony, Disney, EMI, and every other entertainment publisher that decides where in the world you are allowed to listen to music or watch a movie that you paid for, or how many times you are allowed to do so.
So DRM is bad to the core. But not really. DRM is simply a way for a developer or publisher to keep track of software versions and authenticity. Sometimes you need to do that for the right reason.
As of now, Google’s reason is right. That doesn’t mean the company can’t change its tune and go all out crazy (like EA) in the future and limit how, where, when, and why we can use the apps we paid for, but for now everything is good. Google added this metadata so you can buy an app from any approved distributor and it will work with Google Play Store features like family library and subscriptions.
Apps have to be “signed” to verify their contents. Adding metadata to this signature ensures we will have DRM in every app eventually.
Android can read the metadata automatically inserted into an app and verify that it’s a legitimately sourced version and approved for use by the developer. If it passes these checks, it is added to your Google Play Store library. You’ll be able to update through Google Play, use things like Google Play Games for leaderboards and achievements, or share an app with people in your Family Library. And the developer can change the metadata at any time with a new signing key, which ends support for the current version and creates a new listing in Google Play.
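To make the "signed" part a little more concrete, here is a minimal Python sketch that checks whether an APK file carries an APK Signing Block, the container that APK Signature Scheme v2 and later append just before the ZIP central directory. The file path is a placeholder, and the snippet only detects the block; it does not parse Google’s new Play metadata, whose exact format hasn’t been published.

```python
# Minimal sketch: detect an APK Signing Block (APK Signature Scheme v2+).
# The path is a placeholder; this does not parse Google's new metadata.
import struct

APK_SIG_BLOCK_MAGIC = b"APK Sig Block 42"
EOCD_SIGNATURE = b"\x50\x4b\x05\x06"  # ZIP End of Central Directory marker

def has_apk_signing_block(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    eocd = data.rfind(EOCD_SIGNATURE)
    if eocd < 0:
        return False  # not a valid ZIP/APK
    # The central directory offset is stored 16 bytes into the EOCD record.
    cd_offset = struct.unpack_from("<I", data, eocd + 16)[0]
    # The signing block, if present, sits immediately before the central
    # directory and ends with a 16-byte magic string.
    return data[cd_offset - 16:cd_offset] == APK_SIG_BLOCK_MAGIC

if __name__ == "__main__":
    print(has_apk_signing_block("example.apk"))  # hypothetical file path
```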
Google says it did this for two reasons. The first is a little worrisome: it gives developers more control over how their apps are used. There is certainly potential for abuse there, but we’ll have to wait and see if any developers get bad ideas. The second is straight out of left field for most of us: many people live where mobile data isn’t affordable or readily available, so they share apps using peer-to-peer distribution channels. That doesn’t mean these people are stealing apps. It means they can pay through a portal, then use a peer-to-peer network to get their copy using as little data as possible.
Developers want us all to have access to the apps they create. More downloads mean more exposure and more income via sales or ad revenue. That’s what app developers want.
Google may be using a fancy set of words to disguise the fact that Android apps will soon all have DRM inserted in a way that’s difficult to remove and eventually your phone will need to be able to read it to install them. That’s smart — it kept the internet from erupting in a frenzy of pitchforks and furor normally reserved for lootboxes or Comcast.
But it is DRM, and Google has very good reasons to be adding it. Let’s all hope that everyone involved doesn’t get any ideas about abusing it.
Android P
- Android P: Everything you need to know
- Android P Beta hands-on: The best and worst features
- All the big Android announcements from Google I/O 2018
- Will my phone get Android P?
- How to manually update your Pixel to Android P
- Join the Discussion
Is there a single USB-C headphone adapter that works with all phones?

Part of what we do here is look at all sorts of accessories you might want to use with your phone. Cases, screen protectors, headphones: you name it, and we try to test and recommend products so you know you’re getting your money’s worth. Since the 3.5mm headphone jack is fading away (yes, Samsung will do it too, once it’s more affordable), that also means we need to look at USB Type-C headphones and adapters.
What a mess.
First, the answer to the question in the title: No, there isn’t a single USB-C headphone adapter that will work with every phone. There’s a simple reason why, but it’s just silly that this has to be a thing in the first place.
Passive vs. Active

Cables designed to work with the USB standard can be active or passive. Passive cables are just copper wire from end to end, while active cables add a semiconductor of some sort to boost or amplify the signal. If you have an outrageously long USB cable for anything, it’s probably an active cable: low-voltage data signals aren’t designed to cover eight or ten feet (or more) inside a cable, so they need a bit of a boost.
There are two ways audio can be sent out via the USB port. The onboard DAC and amplifier can convert the digital signal to analog (regular headphones only work with an analog signal) and send it out through the USB-C port. The adapter then passively transmits the analog signal from the USB port to the 3.5mm port on the other end of the cable. This works exactly like your last phone with a 3.5mm jack did, except there is now a dongle in the mix.
More: USB-C audio: Everything you need to know
Digital audio signals can also be sent out through the USB-C port. These signals bypass any DAC or amp that’s inside your phone and are a raw digital signal that something needs to convert before it can play through a set of speakers. That means they depend on a DAC and amplifier inline somewhere. That group of components can (and does) live inside an active USB-C to 3.5mm dongle, which is how most Android phones without a headphone jack handle it.
All devices that can transmit audio and have a USB-C port that sends a signal out must be able to supply the digital signal for an active cable. Unfortunately, the changes that make a passive cable work are optional, and we all know what happens when something is optional — companies don’t like to do it.
Meet Audio Adapter Accessory Mode

Audio Adapter Accessory Mode is the name of the protocol that allows a USB-C port to send analog audio through its connector and into something that’s plugged in — like a 3.5mm adapter. A set of headphones with a USB-C connector will always support Accessory mode, so they can play music that was converted by the phone’s hardware or convert it themselves with circuitry inside them.
Audio Adapter Accessory Mode isn’t complicated. Four connections inside the USB port turn off any digital output and replace it with the four analog connections needed (Left audio, Right audio, Microphone, and Ground). Compliance means that every device that supports Audio Adapter Accessory Mode uses the same four connections in the USB-C plug so it just works if supported by both pieces.
Optionally (there’s that word again: “optional”), a second set of connectors can be used to allow for charging at up to 500 milliamps.
- If your phone supports Audio Adapter Accessory Mode, a cheap $3 USB-C to 3.5mm adapter works perfectly.
- If your phone supports Audio Adapter Accessory Mode and has the optional connectors for charging, a cheap adapter that splits into both a headphone jack and a USB charging port will work perfectly.
- If your phone doesn’t support Audio Adapter Accessory Mode, a passive adapter will just get you an error message along the lines of “Accessory not supported,” so you need a more expensive active adapter with the circuitry inside to convert the digital signal.
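The checklist above boils down to one simple decision, which this tiny Python sketch captures. The function name and example phones are mine for illustration; they aren’t part of the USB-C spec.

```python
# Toy model of the adapter checklist above; names and examples are
# illustrative, not part of the USB-C specification.
def adapter_needed(supports_accessory_mode: bool, wants_charging: bool) -> str:
    if supports_accessory_mode:
        if wants_charging:
            return "cheap passive adapter with the optional charging pins wired"
        return "cheap passive USB-C to 3.5mm adapter"
    # No Audio Adapter Accessory Mode: only a dongle with its own DAC/amp works.
    return "active adapter with a built-in DAC and amplifier"

print(adapter_needed(supports_accessory_mode=True, wants_charging=False))   # e.g. a Moto Z
print(adapter_needed(supports_accessory_mode=False, wants_charging=False))  # e.g. a Pixel 2
```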
Did I mention that this is a mess?
Phones that support Audio Adapter Accessory Mode

Here are the phones that are built to support Audio Adapter Accessory Mode. The dongle that came in the box is just a simple passthrough with no semiconductor inside, and you can order a cheap replacement adapter (or three) as a spare.
- Motorola Moto Z
- Motorola Moto Z Droid
- Motorola Moto Z Force
- Motorola Moto Z Play
- Motorola Moto Z2 Play
- Motorola Moto Z2 Force
This list probably isn’t complete, and Chinese brands like Xiaomi may also support Audio Adapter Accessory Mode in some phones. This is an obsession of mine and I will find any other phones that need to be added to this list. If you know one that’s not there, hit the comments and tell me, please.
What should I buy?

Look at the list above. If your phone is on it, you can save about $10 when buying an adapter. I like this pack of two for $8 from Amazon but almost any type will deliver the same results — it’s just a short length of copper wire that sends the signal out and has little impact.
Motorola makes it easy — it just supports all of the USB-C audio spec so anything will work.
If your phone is not on this list and is not branded by HTC, you need an adapter with some circuitry inside. This means Pixels, Essential phones, Huawei phones, Samsung phones (if you want to use the USB port for audio; it works!) and even old Nokia Lumia phones. I bought this cable as a backup for $15 from Amazon and it sounds as good as or better than the one that came in the box with my Pixel 2. Unlike the adapters above, these do have some circuitry inside and can have an impact on how things sound.
If you have an HTC phone, your best option is to use the JBL headphones that were made for it because they sound great and you don’t need a dongle. If you do need a dongle, try the kind made for phones like the Pixel 2 instead of the cheaper type made for Motorola phones. It might work, depending on how the accessory pins are used (preferably not used at all) in the dongle. Most active adapters will be fine.
You can do even more with the connectors inside a USB port, and HTC does.
One last thing — you can use an active dongle (the ones with circuitry made for phones like the Pixel 2) with your Moto Z Force. Your phone will know when it’s plugged in that it shouldn’t switch to Audio Adapter Accessory Mode and will send the digital signal out like a Pixel or OnePlus 6 does.
This mess will sort itself out. USB was also a mess when it first arrived way back in the 1990s, and we had the same worries about cable construction (I fried a very expensive set of USB speakers with my Tangerine iMac because I used the cable from a USB Iomega Zip drive) and compatibility between USB v1.0 and USB v2.0. It happens when something new arrives with any sort of optional ways to use it. Everything will be fine eventually, and in the meantime, you have resources like this one made by people with an unhealthy obsession with cables and headphones.
See at Amazon
Like a vice principal in the sky, this A.I. spots fights before they happen
We live in a surveillance society: A U.S. citizen is reportedly captured on CCTV around 75 times per day. And that figure is even higher elsewhere in the world. Your average Brit is likely to be caught on surveillance cameras up to 300 times in the same period.
But a lot of existing CCTV networks still rely on people to operate them. Depending on the circumstances, there might be a human being at the other end, watching multiple camera feeds on a bank of monitors. Alternatively, there may be no one watching at all, with the footage only ever viewed in the event that it needs to be.
Two cutting-edge technologies may shake up surveillance as we know it, however. Researchers from the U.K.’s University of Cambridge, India’s National Institute of Technology, and the Indian Institute of Science, Bangalore have published a new paper describing a drone-based surveillance system that uses UAVs as flying security cameras to keep an eye (or several) on large gatherings of people.
“Our system is able to identify the violent individuals real-time.”
What’s more, these drones are equipped with deep learning artificial intelligence algorithms that allow them to identify troublemakers in crowds and take the proper precautions.
The “Eye in the Sky” real-time drone surveillance system could be deployed at events like music festivals, marathons or other large gatherings, where it would be utilized to identify violent individuals — based on their aggressive posture — using the latest pattern recognition technology. It then promises to alert the authorities. Welcome to the future of surveillance!
Identifying attackers in real time
“Our system is able to identify the violent individuals real-time,” Amarjot Singh, one of the researchers on the project, told Digital Trends. “This was quite challenging, and we had to develop unique ways to achieve [it]. The problem was that the standard deep learning algorithms require tens of thousands of annotated images to train these systems. This would normally be fine, if one was to assign just one label to an image — for example, assign ‘cat’ to an image of a cat. But in order to design a system which can detect the human pose from aerial images, the system needs to be trained with aerial images annotated with 14 key points on the human body. Since there is no dataset available for this type of application, we ourself annotated the points on the human body, which was extremely time-consuming and expensive.”
The ScatterNet Hybrid Deep Learning neural network analyzing body language via a complex motion-tracking algorithm inside a drone. University of Cambridge/National Institute of Technology/Indian Institute of Science
By breaking the human body down into 14 different points, the system is able to work out which violent action — if any — is being performed with an accuracy of around 85 percent (and all the way up to 94.1 percent) depending on how many people are being surveilled.
The algorithm was trained by analyzing 2,000 annotated images gathered from a group of 25 participants, who were asked to act out violent attacks such as punching, kicking, strangling, shooting, and stabbing. Accuracy levels drop the more people are being monitored and the further away the drone is, although this could be improved in the future. The finished algorithm, called the ScatterNet Hybrid Deep Learning neural network, can learn using a relatively small number of training examples, while robustly extracting the body’s posture with consistent accuracy.
“Once, the system can do well in the test runs, we will be bringing it to market.”
Just as important, it can do it very, very quickly — within just a few frames of video. This is especially important for security applications, since any potential system designed for this purpose needs to be able to alert authorities of an escalating situation before it has erupted into violence.
“The system detects the violent individuals by first extracting and sending the aerial frame recorded by the drone to the Amazon cloud,” Singh continued. “The human detection algorithm detects each human in the image frame. Next, the pose is estimated for each individual. The pose of the individuals involved in the violent activity are jointly analyzed to identify the violent individual.”
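To make that pipeline a bit more concrete, here is an illustrative Python sketch of the steps Singh describes: detect humans in a frame, estimate a 14-keypoint pose for each, then flag suspicious poses. The detector, pose estimator, and the "violence" check are placeholders I wrote for illustration; they stand in for the team’s real models (including the ScatterNet classifier), which are not reproduced here.

```python
# Illustrative sketch of the described pipeline, with placeholder logic
# standing in for the real detector, pose estimator, and classifier.
from dataclasses import dataclass
from typing import List, Tuple

Keypoint = Tuple[float, float]  # (x, y) in image coordinates

@dataclass
class Person:
    keypoints: List[Keypoint]  # 14 body keypoints, per the paper

def detect_humans(frame) -> List[Person]:
    """Placeholder: a real system runs an aerial human detector here."""
    return []

def estimate_pose(person: Person) -> List[Keypoint]:
    """Placeholder: a real system regresses the 14 keypoints here."""
    return person.keypoints

def looks_violent(keypoints: List[Keypoint]) -> bool:
    """Toy heuristic, NOT the ScatterNet classifier: flag a wrist raised
    above head height as a possible strike (keypoint indices assumed)."""
    if len(keypoints) < 14:
        return False
    head_y = keypoints[0][1]
    wrist_ys = (keypoints[4][1], keypoints[7][1])  # assumed wrist indices
    return any(wy < head_y for wy in wrist_ys)     # image y grows downward

def analyze_frame(frame) -> List[Person]:
    """Mirror of the described pipeline: detect, estimate pose, classify."""
    flagged = []
    for person in detect_humans(frame):
        if looks_violent(estimate_pose(person)):
            flagged.append(person)
    return flagged
```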
But all of this is still quite far away, right? Not necessarily. “We will be flying the drone at the technical festival at NIT Warangal in Andhra Pradesh in India, in October of this year,” Singh said. “The second author is from NIT Warangal, and I am an alumnus. The festival is attended by around 3,000 – 4,000 people and is extremely packed.” The pair also hoped to use the drone at another event in India called “spring spree.”
A figure from the research paper showcasing how the A.I.-equipped drone processes body language by looking at visual cues based on specific points around the body. University of Cambridge/National Institute of Technology/Indian Institute of Science
It won’t be used for alerting the police to violent action at these events, but rather to prove that the technology can accurately predict the outbreak of violence. “Once, the system can do well in the test runs, we will be bringing it to market,” Singh said. “We are also planning to extend this system to monitor the borders of India.”
Possible ethical concerns?
Technology like this will, of course, prompt polarizing responses. Police in the U.S. are already using drones on a regular basis, and this will only ramp up in the years to come, although right now that is done primarily for tasks like assessing crime scenes.
Like predictive policing, A.I.-equipped surveillance drones carry the ominous suggestion of possible “pre-crime.”
The idea of using A.I.-equipped drones for long-term surveillance of crowds runs the risk of verging on the Orwellian. Like predictive policing, it additionally carries the ominous suggestion of possible “pre-crime.” While this particular project focuses on actions carried out, there are examples of smart CCTV cameras with the goal of intervening before an act is carried out.
In the U.K., for instance, an A.I. algorithm is used for spotting potential jumpers on the London Underground train service. This system works by analyzing the behavior of people waiting on the platform, and then looking for those who miss several available trains during that time. This is because such actions have been shown to precede suicide attempts, thereby allowing intervention to be made.
Researcher Amarjot Singh
It’s one thing to use A.I. to more rapidly stop a scuffle that has broken out; perhaps another to surveil someone on the basis of body language suggesting they might do something. Such questions will need to be explored as technology like this becomes mainstream. If it is able to prevent violent acts on the street, we may well consider the tradeoff to be worth it.
Ultimately, concepts such as this recall philosopher Jeremy Bentham’s Panopticon, a proposed prison design in which the presence of a central guard tower makes the prisoners believe they are being watched at all times. The result, writers like Michel Foucault have suggested, is that prisoners wind up behaving as though they are being watched at all times.
As drones increasingly take to the skies, the existence of tools like this could prompt us to act in a similar way. Is that a police drone or an Amazon delivery flying overhead? You’d better straighten up your posture and loosen your shoulders just in case!
Editors’ Recommendations
- Meet the man fighting plastic pollution with a fleet of A.I.-powered camera drones
- Machine learning? Neural networks? Here’s your guide to the many flavors of A.I.
- 9 bizarre drones, from web slingers to lake hoppers
- Crime-predicting A.I. isn’t science fiction. It’s about to roll out in India
- Like a vice principal in the sky, this A.I. spots fights before they happen
10 awesome movies that are leaving Netflix in July 2018

Get ’em before they’re gone.

July is nearly upon us, and that means a new round of movies is about to depart that big free-movie-plane-in-the-cloud we call Netflix. Some are gone come July 1, so you’ve got a week to go. Others will take a little longer.
In any event, here’s your last chance to catch these 10 awesome flicks before you’ll have to use someone else’s login on some other service instead.
- Alive: If you’ve never seen (or read) the story of the soccer team that crashed in the Andes and survived on little more than each other …
- Cocktail: Tom Cruise is eager and charming and good-looking and probably not into Scientology just yet. Also, he’s a bartender.
- Lethal Weapon 1-4: Mel Gibson is crazy. And a cop. And also acting in these movies. (Still gotta love Danny “Dad” Glover, too.)
- Breakfast at Tiffany’s: I recall we both kind of liked it.
- Michael Clayton: George Clooney is a fixer with a heart of gold.
- Scary Movie: The spoof that launched a thousand sequels.
- Scream 3: Scary movie.
- Terminator 3: Rise of the Machines: This is a movie that happened.
- Tropic Thunder: Welcome to the jungle.
- V for Vendetta: We’re basically one bad glass of water from this happening anyway.
See the full list at CordCutters.com
Introducing CordCutters.com
- The hardware you need
- All about streaming services
- What channels are on which service
- FREE over-the-air TV
- How to watch sports
- Join the discussion