Self-correcting quadcopter can keep itself aloft even if one rotor fails
Compared to a vehicle with a single main rotor, like a helicopter, you’d think a quadcopter would be altogether safer. After all, with four rotors, shouldn’t it be able to survive losing one and still stay airborne? Unfortunately, that’s not the case. Most drones will struggle to fly unless all four rotors are operational. That’s a problem researchers from the Netherlands’ Delft University of Technology have been working to solve.
At the recent International Conference on Intelligent Robots and Systems (IROS 2018) in Spain, the team demonstrated a “fault-tolerance controller” that allows a quadrotor to continue flying at high speed even if a rotor has broken or a motor has failed. No, it doesn’t look quite as pretty doing so, but, crucially, it remains stable. Just as importantly, it is able to do so while retaining forward momentum. This is thanks to some smart math on the part of the Delft investigators, who draw on data from the drone’s built-in gyroscope and accelerometer to work out how flight can be maintained using the remaining three rotors.
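The Delft method solves a demanding nonlinear control problem in real time, which is well beyond the scope of a short snippet. But the basic idea of detecting a dead rotor from sensor feedback and reallocating thrust across the survivors can be illustrated with a rough, hypothetical sketch (none of these function names or thresholds come from the actual research):

```python
# Hypothetical sketch of rotor-failure detection and thrust reallocation.
# Not the Delft controller: their approach maintains stability by letting
# the drone spin about its vertical axis; this only shows the detection
# and redistribution idea in the simplest possible terms.

def detect_failed_rotor(commanded, measured, threshold=0.5):
    """Flag a rotor whose measured thrust falls far below what was commanded."""
    for i, (c, m) in enumerate(zip(commanded, measured)):
        if c > 0 and m / c < threshold:
            return i
    return None

def reallocate_thrust(commanded, failed):
    """Spread total lift across the three surviving rotors.

    Yaw authority is sacrificed: with three rotors the torques no longer
    cancel, which is why the real vehicle ends up spinning as it flies.
    """
    total = sum(commanded)
    remaining = [i for i in range(len(commanded)) if i != failed]
    share = total / len(remaining)
    return [share if i in remaining else 0.0 for i in range(len(commanded))]

cmd = [2.0, 2.0, 2.0, 2.0]    # newtons commanded per rotor
meas = [2.0, 0.1, 1.9, 2.1]   # rotor 1 is producing almost no thrust
failed = detect_failed_rotor(cmd, meas)
new_cmd = reallocate_thrust(cmd, failed)
```

In this toy example, rotor 1 is flagged as failed and its share of the lift is split evenly across rotors 0, 2, and 3, keeping total thrust constant.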
“Imagine that when a quadrotor is delivering an important package over water where strong wind blows. All of sudden one motor malfunctions,” Sihao Sun, a researcher on the project, told Digital Trends. “Normally in this case, the drone will crash into the water together with the package. But with our technology, it is able to continue flying at a considerable speed to fly back to a safe landing place. This could save the package and the drone itself.”
In another use case, it’s possible to imagine how a drone with a damaged rotor could be set to return to its base of operations in a safe manner, without endangering people and property.
The team put the drone (a standard, off-the-shelf Parrot Bebop 2) through its paces in a wind tunnel to simulate forward flight without actual forward movement. The quadrotor reached a top speed of 9 meters per second, roughly half the top speed of a Bebop with four functioning rotors. The team hopes to soon extend this to outdoor flights.
“The next step in this project is the combination of this fault tolerant control technology with real-time fault detection and envelope prediction and protection,” Coen de Visser, another researcher on the team, told us. “An important question that we are hoping to answer with our research is this: How can we make the drone ‘aware’ of its own physical limitations and capabilities after a failure has occurred? Without such an awareness, after a failure, the drone cannot make an informed decision about whether it should conduct an emergency landing, attempt to fly back to base, or even continue the mission with reduced performance.”
Editors’ Recommendations
- 12 awesome flying cars and taxis currently in development
- Parrot Anafi drone review
- The best drones under $500
- This is the result when a quadcopter strikes the wing of an aircraft
- This amazingly acrobatic winged robot moves just like a fruit fly
Google will start charging Android manufacturers to use its apps in Europe
It looks like Google is set to significantly change how it works with Android manufacturers, at least in Europe. To date, Android device makers have been able to load Google’s apps and services onto their Android phones free of charge. Now, however, Google will begin charging those device makers that want to use its apps.
The move comes as a response to a July ruling in which Google was ordered to stop “illegally tying” Google Chrome and some search-related apps to Android. Companies will now be able to license Chrome, the Play Store, and other Google mobile apps rather than having to bundle all of them together. Companies will also be able to license Google apps for forked versions of Android, which may make for more phones with alternative versions of Android.
Traditionally, Google hasn’t charged for the use of these apps because of how much money it makes from search and Chrome. Being required to stop bundling them changes how much money Google stands to make from them. We don’t yet know exactly how much Google will charge to license its apps.
“Since the pre-installation of Google Search and Chrome together with our other apps helped us fund the development and free distribution of Android, we will introduce a new paid licensing agreement for smartphones and tablets shipped into the [European Economic Area],” Android head Hiroshi Lockheimer said in a blog post.
It’s important to note that Android as a whole will still be free — it’s just that now the apps that we most often associate with Android may not be. Not only that, but those apps may not come pre-installed on every Android device — so if you want them, you may have to download them separately.
Ultimately, Android device manufacturers may still be tied to Google. They will likely still need to license the use of the Google Play Store, where users can download all of the Google apps that otherwise would have come with their phone. Not only that, but it’s possible Google will be able to continue bundling all of its apps together in the future — the company is appealing the European Commission’s decision. Still, in the meantime, it has to comply with the decision, and as such the changes will go into effect starting on October 29.
Editors’ Recommendations
- Google tried to kill ‘www,’ until Chrome users protested the change
- Daydream VR users can browse with Google Chrome in virtual space
- Nokia 6.1 has Android One, takes funky ‘Bothie’ pictures, and is yours for $270
- Google will announce hardware on October 9, new Pixel phones expected
- Huawei’s new software and chip combo is designed to make the Mate 20 fly
MIT is building a new $1 billion college dedicated to all things A.I.
You would expect one of the United States’ premier tech universities to be on the very forefront of artificial intelligence (A.I.) research — and that’s exactly what the Massachusetts Institute of Technology (MIT) has demonstrated with a massive $1 billion planned investment. The impressive cash lump sum will go toward creating a new college of computing that is intended to offer the best possible education to future machine learning experts.
“As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” MIT President L. Rafael Reif said in a statement. “The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new A.I. tools. Just as important, the college will equip students and researchers in any discipline to use computing and A.I. to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work.”
What makes the new center so exciting is that it will not just look to churn out single-discipline artificial intelligence graduates, but work to integrate machine learning into other fields — whether that’s history, politics, chemistry, or anything else. In his comments, Reif referred to the goal of educating “bilinguals,” people who know about A.I. in addition to another discipline. This is a crucial step because, as even a cursory overview of the most exciting current A.I. projects will reveal, it’s at the intersection of different fields where the really exciting stuff happens.
So far, around two-thirds of the $1 billion sum has been raised. About $350 million comes from Stephen A. Schwarzman, CEO of private equity firm Blackstone, after whom the college will be named. In all, the MIT Schwarzman College of Computing will reportedly create 50 new faculty positions, half of which will focus on computer science. The other half will be appointed jointly by the college and other MIT departments. The college will offer its first program starting in the fall semester of 2019. It will then move into its own dedicated space in 2022.
We’d advise you to get applying as soon as possible!
Editors’ Recommendations
- Machine learning? Neural networks? Here’s your guide to the many flavors of A.I.
- By studying patient data, A.I. can limit toxicity in cancer treatment
- Just like an eagle, this autonomous glider can fly on thermal currents
- Replaced by robots: 10 jobs that could be hit hard by the A.I. revolution
- Smart Reply not smart enough? Desktop Gmail users can soon opt out
Remove photo bombs, other unwanted objects with Photoshop’s new Content-Aware Fill
Before and after images: Hillary Grigonis / Digital Trends
Removing an object in a photograph is one of Photoshop’s most valuable tricks, but the task often requires a bit of finesse and a lot of time to master. To make it less time-consuming for users, Adobe is relying more on artificial intelligence — making those disappearing objects feel a bit more like magic than a headache. Adobe Photoshop CC 2019 launched on October 15 with a newly redesigned tool called Content-Aware Fill, powered by Adobe’s A.I., called Sensei.
Content-Aware Fill isn’t an entirely new feature, but the earlier version didn’t always get the fix right and there wasn’t a way for photo editors to actually correct what the computer was getting wrong. The new Content-Aware Fill mixes A.I. and good old-fashioned human intelligence, allowing you to fix the software’s goofs. Besides removing objects, the tool can also be used to add in a missing piece, as Adobe demonstrated using a butterfly’s broken wing. We used Content-Aware Fill to remove the damaged areas on the leaf in the image above, and a tree branch photo-bombing the image below (click to switch between before and after images). Here’s how we did it.
1. Select the object
Hillary Grigonis / Digital Trends
With the photo open inside Photoshop, select the object that you want to remove. (Alternatively, you can select a gap or hole that you want to fill.) You can use any selection tool, but the lasso tool (the third tool in the left-hand toolbar) is often fastest for selecting odd shapes. You don’t have to be exact with the selection — in fact, it’s probably better to include a small portion of the background in the selection instead of tracing along the exact edges.
2. Open the Content-Aware Fill tool
With the object still selected, open Photoshop’s new Content-Aware Fill tool, which opens in a new window. Go to Edit > Content-Aware Fill.
Note: To use the new version of the Content-Aware Fill tool, make sure you’ve updated to the latest version of Photoshop CC. If you’re looking for the old tool, you can still find it under Edit > Fill, then selecting Content-Aware from the drop-down menu in the pop-up. Know that the previous version of the tool lacks several of the new features, and if Photoshop doesn’t get it right automatically, you’re stuck with the result unless you undo.
3. Sample the area of the image you would like to repeat
Hillary Grigonis / Digital Trends
Once the Content-Aware Fill window opens, you’ll see your photo with green overlaid on parts of the image. (Don’t see green? Make sure the “Show Sampling Area” option is checked.) The green area indicates the reference area — what you want Photoshop to sample from to fill in the gaps where an object was removed. Photoshop will use whatever pixels are inside that green reference area. By adjusting the reference area to include more, or less, of the image, you tell Photoshop which parts of the image to duplicate to fill that gap.
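To see why the sampling area matters, it helps to think about what a fill algorithm does in the abstract: every missing pixel gets its value from somewhere inside the sampled region. As a toy illustration (this is not Adobe’s algorithm — Sensei matches whole patches, not single pixels, and all names here are made up), a naive version might simply copy each hole pixel from its nearest sampled neighbor:

```python
# Toy illustration of the sampling idea behind content-aware fill.
# Real implementations match whole patches of texture; this simply
# copies the nearest single pixel from the allowed sample area.

def naive_fill(image, hole, sample):
    """image: dict mapping (x, y) -> pixel value.
    hole: set of pixels to fill; sample: set of pixels allowed as sources."""
    filled = dict(image)
    for hx, hy in hole:
        # Pick the sample pixel closest to this hole pixel.
        sx, sy = min(sample, key=lambda p: (p[0] - hx) ** 2 + (p[1] - hy) ** 2)
        filled[(hx, hy)] = image[(sx, sy)]
    return filled

# A 4-pixel row; we "remove" pixel (3, 0) and restrict sampling to the
# first two pixels -- the equivalent of painting only them green.
img = {(0, 0): 10, (1, 0): 20, (2, 0): 30, (3, 0): 99}
result = naive_fill(img, hole={(3, 0)}, sample={(0, 0), (1, 0)})
```

Shrinking or growing the `sample` set changes which pixels are eligible as sources — exactly what painting the green area does in the Photoshop workspace, just at a far more primitive level here.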
Once you open the tool, Photoshop has already selected what it thinks is a good area to sample from, but this is where the tool allows you to adjust those automatic errors.
To the right of that green-filled image, Adobe shows you a preview of the results as you make adjustments. Occasionally, this preview will look great from the start and you won’t need to apply any further adjustments.
If that preview isn’t accurate, you can adjust the sampling area to add some human intervention to the A.I. To make sure Photoshop is filling the gap with the right information, use the brush tool on the left. Select the + icon in the top toolbar to add to the selection, and the – icon to erase from it. Next to the + and – icons, you can also change the brush size for faster or more precise work. Paint the parts of the image that you want Photoshop to sample from green; un-paint anything you don’t want included. For example, when we wanted to remove a branch sticking out into the sky, we turned the sky green and left the ground and the area around the branch blank (see the example image above). Had we included the ground, Photoshop would have filled the area of the branch incorrectly.
If you’re having a hard time seeing the selection, you can change the color of the sampling and opacity under the sampling options on the right.
4. To add to the selection, use the lasso tool (optional)
If your original selection wasn’t quite accurate or you want to add to it, you can use the lasso tool on the left to select another area in the image or to expand the boundaries of the earlier section. Both sections are sampled from the same area, so if you want to eliminate two objects but you need to fill them with two different fills, it’s best to follow the Content-Aware Fill steps twice — once for each object to remove. If, on the other hand, you’re filling the gap with the same information, you can fill both gaps in the same process.
5. Fine-tune the results
Hillary Grigonis / Digital Trends
Besides using the brush tool to tell Photoshop what parts of the images to use and what parts to disregard, the new Content-Aware Fill tool also allows for a few more customizations inside the Fill Settings option on the far-right column (or wherever you’ve dragged it to).
Color adaptation allows Photoshop to make gradual color and contrast changes to help the fill better match its surroundings. The default settings work pretty well, but if the colors aren’t transitioning smoothly, you can adjust them via the Color Adaptation drop-down option.
Sometimes, you want to remove an object but don’t have an exact reference point in the image — it’s not the right size, flipped the wrong way, or tilted the wrong way. The Scale, Mirror, and Rotation Adaptation tools can help. These tools tell Photoshop how to resize, tilt, or flip the reference point to fill the gap.
If you need to make that sample larger or smaller to fill the gap, check the box for Scale. If the sampled content needs to rotate to create a seamless fill, select one of the different intensities in the Rotation Adaptation menu, based on whether the fill needs to rotate just a bit or a lot.
Checking the Mirror option will flip the reference section (the green-painted section) horizontally to better fill in that gap. The mirror option is handy for filling a gap in a symmetrical object when you don’t have a reference area that matches. For example, to patch a hole in the leaf image above, the only place to take a sample is from the opposite side, but the lines on the opposite side are going in a different direction. By using the Mirror and Rotation options, Photoshop can fill the gap with the lines in the leaf pointing in the right direction. Note, however, that this tool only works for a horizontal flip.
Finally, check your output settings from the drop-down menu. You can choose to export that fill to a new layer, which will allow you to continue refining the adjustment, or you can export right to the background (which allows you to edit the image as a whole, but can’t work further with that gap you filled). Exporting to a duplicate layer will create two layers — one with the original, and one with the Content-Aware Fill adjustment applied to the image.
Once you are happy with the results in the preview, click OK. You’ll be taken back to the main Photoshop work area, where you can continue to make adjustments to the image.
Hillary Grigonis / Digital Trends
Content-Aware Fill is one of several different ways to remove objects from photos. For example, in the leaf image in this tutorial, after using the Content-Aware Fill on the largest blemishes on the leaf, the healing brush tool made quick work of the smaller spots. Content-Aware Fill is best for larger objects, while the healing brush works well on smaller blemishes like removing acne. Photoshop’s new Content-Aware Fill tool isn’t perfect (it had a hard time fixing a power line running through trees, for example) — but it makes quick work of removing many different types of objects in Photoshop.
Editors’ Recommendations
- Photoshop makes objects disappear with revamped Content-Aware Fill
- You can finally throw away your PC; Photoshop is coming to the iPad
- Adobe Premiere Pro uses A.I. to streamline audio cleanup and other tedious tasks
- PaintShop Pro 2019 is more well-rounded with 360 compatibility, speed boost
- The best free photo-editing software
Congratulations Palm, you’ve launched the stupidest product of the year
Brenda Stolyar/Digital Trends
Stop looking, everyone: the stupidest product of the year has arrived. It’s the Palm (2018), a device that should never have gotten past the point of conception, and certainly never have actually been made. It’s so baffling, even those who made it don’t know what it’s for. However, heads were nodded, budgets were allotted, boxes were ticked, and production lines were fired into life. And now it has been vomited into the world.
All those present at the time looked at it and had absolutely no idea what to do with it. A marketing exec, keen to make their mark, blurted: “I know, how about a phone … that you use instead of your phone?” Genius. Tie it into the whole digital-wellbeing obsession, push it toward the hand-wringers so concerned about being present in the moment, and it would be a sure-fire hit.
Deliberately annoying
If Palm had stopped there, it would have just been a stupid product. But it didn’t. It ran with the idea, fleshing it out into something so utterly laughable, we’re still not sure they’re actually serious. It’s a short-sighted product with a marketing campaign built around it, to excuse the fact someone pressed the Go button on the production line, and now some of the money spent making it has to be recouped.
You’re probably thinking it’s not that bad, right? No, misguided ones, it’s worse. Let’s say you bizarrely really are looking for another phone to help you stop using your phone so much. How does Palm intend to do this? This is my favorite part. According to Digital Trends writer Brenda Stolyar, who spoke to Palm and wrote our hands-on with the device, “The creators believe its small screen will put you off from wanting to use the Palm for long periods of time.”
Read that one more time. This phone has been deliberately engineered to be annoying to use. Coming up with that takes a special kind of genius. It’s not for active outdoor types who don’t want to damage their expensive phone, either, because it’s made of glass, so it’s going to break if you drop it. Don’t worry about the IP68 rating, as that’s not exactly uncommon, and is found on plenty of phones that haven’t been specifically designed to irritate you.
Ill-conceived
You can make calls, send messages, upload all your contacts, take pictures, and download and install any Android app you like. That’s called a smartphone, and it’s no different from the one you already own, just smaller and more infuriating. If it didn’t run apps, or take photos, or do anything else that sucks up our time, then maybe a case could be made for it lowering use. Instead, the only feature even vaguely related to lowering phone use is a jazzed-up Do Not Disturb mode, which every other phone already has.
Palm itself seems to know how foolish it looks, and appears to be backing away from the whole “disconnect from technology” aspect. In an interview with Bloomberg, Palm Ventures co-founder Howard Nuk says the tiny phone is actually for when you don’t want to carry a big phone around but want to remain connected. Confused messaging is rarely the sign of a well-conceived product.
Completely senseless
What if you’re a masochist, want your daily life to be filled with pain, and plan to buy one? Not so fast. It has to sync up with your main phone so it’s not really made to be used on its own, has to come from Verizon, and you need to use the proprietary Verizon Message+ app. That, along with the pathetic eight hours of battery life, will certainly make me want to use the Palm 2018 less, if nothing else.
Astonishingly, we’re not finished. As madness set in at Palm and the horror of what it had created unfolded, someone came up with the idea of making a sleeve that holds the phone and that you wear on your arm. You can already go out and buy one of these cases, because cyclists and runners sometimes use them. What you won’t do is go out and buy another phone at the same time.
Brenda Stolyar/Digital Trends
We’re going to have to stop, as it’s almost too much. But we’ve got to mention the name. It’s called the Palm. It’s also made by Palm, kind of. So it’s the Palm Palm? Did anyone not on day release have anything to do with this product? Clearly not when coming up with the price, which is $350. Or, if you’re really committed, a two-year contract is available. And committed is exactly what you should be.
Just buy a smartwatch
We’ll concede that reducing smartphone use is a worthy endeavor. Apple, Google, Facebook, Huawei, and others all have software solutions to help you do so. Absolutely no-one, until now, has thought about releasing a secondary smartphone to help reduce primary smartphone use. Why? Because it’s beyond witless, like buying a second, small phone to take out with you when your large phone is too large. The Palm Palm (2018) is the product the rolling-eyes emoji was created to represent.
Perhaps the software options to cut down on scrolling through Instagram don’t appeal. Interestingly there’s a product that already does what the Palm aims to do, and in a more attractive, more sensible, and far less ridiculous fashion.
It’s called a smartwatch, and you can buy one for less than half the price of the Palm, you won’t have to sign your life away to Verizon, and you won’t have to explain why you’re using it to everyone you meet. If you want autonomy, there’s the cellular Apple Watch Series 4, and a handful of cellular Wear OS watches too.
There’s genuine mileage in using a smartwatch to minimize smartphone use, if it truly bothers you. You’re still connected, but in a less intrusive manner. No, you probably won’t be able to perform complex tasks, but that’s the point. You either want to step away from your phone, or you don’t. Whatever your reasons, buying another phone will never, ever lower your smartphone use, and no-one who buys a big, expensive phone needs to buy a smaller second one.
Whatever the plan is, you’re way too early for April Fool’s Day, Palm.
The views expressed here are solely those of the author and do not reflect the beliefs of Digital Trends.
Editors’ Recommendations
- Palm (2018) hands-on review
- How to take a screenshot on a Galaxy S9 and other Android phones
- Palm (2018): Everything you need to know
- The best smartphones of 2018
- MIT’s creepy-crawly robot can help monitor your health
The best YouTube channels for sports lovers
yobro10/123RF
Further reading
Best YouTube channels you’ve never heard of
How to create a YouTube channel
Biggest YouTube channels in existence
No matter what your hobbies are, you can find a YouTube channel dedicated to them, and if you’re a sports fan, you’ve got a ton of channels to choose from. There are so many sports channels, in fact, that you might have trouble finding some that are actually worth your time. Here are some of the best sports channels on YouTube, whether you’re looking for incredible highlights, expert commentary, or just funny skits.
BBallBreakdown
For in-depth, strategy-oriented discussion of basketball, BBallBreakdown is one of the best channels around. Host Coach Nick tackles a variety of topics, both timeless and timely. In one video, he examines why the 2017-18 Oklahoma City Thunder — fresh off trading for star players Paul George and Carmelo Anthony — were struggling to succeed despite having three stars (hint: Carmelo set weak screens and jacked up a bunch of terrible shots). Coach Nick makes extensive use of footage to help bolster his arguments, so viewers can see what he is describing in real time. The channel is also a good place to learn basketball history, with interviews with NBA legends and short videos about important events and players of days long past.
The Ringer
Founded by longtime ESPN writer and notorious Boston sports homer Bill Simmons, The Ringer is a site dedicated to covering both sports and pop culture. The site also produces a robust volume of videos and podcasts, which are conveniently assembled on The Ringer’s YouTube channel. Sports fans will find a variety of videos, including analysis of specific basketball players, humorous explainers for a host of NFL concepts, and “table reads,” in which pro athletes recreate scenes from famous movies. The Ringer staff tends to keep a casual tone, but the videos are generally well-produced.
Joseph Vincent
Joseph Vincent makes “mini-documentaries” about various athletes, chopping and splicing bits of archival footage and audio to create remarkable short films. His video Dawn of the G.O.A.T., for example, compiles footage and commentary from Tom Brady’s college career, when Brady had to claw his way up the depth charts at Michigan to become the starting quarterback, fending off competition from younger athlete Drew Henson along the way. It’s enough to make even an ardent Patriots hater tear up a little, and it’s a testament to Vincent’s talent. His documentaries span a variety of sports and eras, too, including basketball, football, and even Muhammad Ali’s fight against Joe Frazier in Manila.
Bleacher Report
As one of the many sports sites to explode in popularity over the last decade, Bleacher Report produces a lot of video content, and while the quality varies, some of it is quite good. B/R covers all the major sports, and viewers will find recent highlights for basketball, football, soccer, and other sports with minimal editing (no obnoxious soundtracks here). Fans can also find video of the Simms & Lefkoe podcast on the B/R channel, but the crown jewel is undoubtedly Game of Zones, an animated series that mashes up news of the NBA season with Game of Thrones. At five seasons and counting, it’s a hilarious treat for even casual basketball fans, and since the drama in the NBA never ceases, the writers probably won’t run out of material any time soon.
Bundesliga
Professional leagues can be awfully stingy with footage of games; more of them should follow the example of Bundesliga, the premier German soccer league, which maintains a YouTube channel swollen with highlights, explainers, and the occasional bout of comedy. The videos all have a high level of polish, but Bundesliga isn’t afraid to let a little personality shine through. If you follow Bundesliga, this channel is a feast of good content; if you don’t, stop missing out on some of the best soccer in the world!
ESPN
Although ESPN has stumbled a few times as audiences increasingly flock away from cable to online entertainment, the sports network remains a titan in the industry, and home to some great sports media talents. In addition to highlights from various major league sports, ESPN’s channel includes clips from the Stephen A. Smith Show, The Jump with Rachel Nichols, and more.
Editors’ Recommendations
- The best free movies on YouTube
- How to download YouTube videos
- FuboTV: Everything die-hard sports fans need to know
- ‘FIFA 19’ review
- The best podcasts of 2018
This gadget lets you sleep on airplanes without snuggling a stranger
Ever drift off on a long flight, only to realize that, once you fell asleep, your head lolled to one side until it was resting on the shoulder of the stranger next to you? An odd Kickstarter gadget is vying to help travelers sleep comfortably on airplanes without snuggling with a stranger. The Napup Fly+ is a mix between a personal travel pillow, a sleep mask, and a pair of headphones all rolled into one — without that head loll. It’s pretty much a “hug for your face,” Napup CEO and founder Ben Cohen-Gazit says.
The Napup Fly is an “in-flight personal sleep system,” says the New York-based company Napup. The sleep system attaches to the back of the headrest and surrounds your head on both sides, with a forehead strap at the front. The result is an ergonomic way to sleep while sitting upright on a plane, the company says.
Head flops aren’t the only problem the odd gadget tackles in trying to create a sleep environment on a crowded airplane. The front of the Napup Fly also has a sleep mask to block out light. The sides of the more advanced Napup Fly+ have a built-in sound system with a 3.5mm headphone jack, replacing the noise of the airplane with your music.
“Napup Fly is a unique, ergonomic, supportive and stable sleep system that makes flights comfortable and pleasant. Napup Fly is the first product in the market that combines comfort with a stable yet non-restraining solution for napping on board,” Cohen-Gazit says. “I’m happy to introduce travelers to the first enhanced in-flight sleep solution. Napup Fly is like a hug for your face.”
When not in use, the Napup Fly folds up for tucking in a carry-on.
Napup launched four years ago with an idea for a car seat strap designed to keep kids’ heads from bobbing around after they fall asleep in the car. The Napup Fly is the company’s first Kickstarter.
If the project is successful and hits its $30,000 goal by November 30, early backers could get the Napup Fly for about $35 and the Napup Fly+ with headphones for $49. The company expects delivery around April 2019.
Diving into Adobe’s cloud-based, edit-anywhere video app, Premiere Rush CC
Adobe has officially announced Adobe Premiere Rush CC, a cloud-first, cross-platform video editing app for Windows, MacOS, and iOS (with Android soon to follow).
The app, which is free to download and try, is Adobe’s next step in its ongoing effort to move creativity to the cloud and make content creation accessible across devices, with an extra emphasis on sharing across social media.
Ahead of the announcement on stage at Adobe MAX 2018, we were able to take it for a spin. Below are some of our thoughts on Adobe’s first iteration of Adobe Premiere Rush CC — essentially a distilled cloud-centric version of its more robust video editing program, Adobe Premiere Pro CC.
The first thing that stood out about Adobe Premiere Rush CC was its onboarding process. Adobe has done a fantastic job of familiarizing users with the app. Using stock footage and onscreen prompts, Adobe walks you through the process of creating a project, importing footage, editing footage, and sharing.
Even if you’re familiar with Adobe’s other Creative Cloud apps, the onboarding process is a great starting point. If you already get the gist of it, though, there is an easy way to skip the intro.
Once familiarized with the app, it’s just a matter of getting content into a project to edit. Importing media — videos, photos, and audio — is as simple as selecting it from the provided media browser. Content can be added from local storage, such as on your computer’s hard drive or an external hard drive, as well as from cloud services, including Adobe’s Creative Cloud and Apple’s iCloud Drive.
Importing went quickly on both an iPhone XS and first-generation iPad Pro, even with large 4K video files. Once content is imported into Adobe Premiere Rush CC, you can start editing while it seamlessly syncs in the background to your Creative Cloud account. This made it easy to keep the focus on editing, with the confidence that the content would be accessible on other devices should we choose to edit elsewhere.
As for the available editing tools, Adobe provides almost everything you could ask for in a consumer-centric app: multi-timeline video and audio editing, title screens, transitions, presets, basic color adjustment tools, audio processing, and transform tools.
As is to be expected for a consumer app, none of the controls offer the fine-tuned adjustments you’d find in Adobe’s professional Premiere Pro CC app. There are only three transitions to choose from, there’s no keyframing for edits and effects, and overall, it’d be nice to see a few more options and capabilities throughout the editing and settings modules. However, you can open Premiere Rush CC files in Premiere Pro CC, which means pro users can start a project in Rush and fine-tune it later in Premiere Pro.
That said, for a 1.0 product, it’s incredibly proficient for what it is. We used it on a MacBook Pro, iPad Pro, and iPhone XS, and on each device, the same tools were available and the content never seemed to struggle to render in real time, even when multiple effects were added.
If you’ve ever used Adobe Premiere Pro, you’ll know how difficult it can be to export content. Adobe Premiere Rush CC manages to take away all of the unnecessary commotion and keep it simple.
Adobe is promoting Premiere Rush as a video editor designed for social media, so there are multiple means of sharing your content online. At launch, it offers support for YouTube, Facebook, Instagram, and Adobe’s Behance, as well as Twitter, Vimeo, and Snapchat. Having these integrations out of the gate means you won’t have to worry about saving files on various devices and uploading them one by one; you can post all your content straight from the app with minimal effort.
Wrapping up, Adobe Premiere Rush CC is a brilliant distillation of the most important features from Adobe’s full-fledged video editing app, Adobe Premiere Pro. On both desktop and mobile, the app feels snappy and responsive. The interface takes a little getting used to, especially if you haven’t used Adobe’s other cross-platform CC apps, but the onboarding process does a great job of lowering the barrier to entry.
Put simply, Adobe Premiere Rush CC is to Adobe Premiere Pro CC what Adobe Lightroom CC is to Adobe Lightroom Classic CC. It’s a great mobile-first app that seamlessly keeps your content in sync across devices and makes it easy to create and share content on social media on the go, without all of the unnecessary fluff.
Adobe Premiere Rush is available on Windows, MacOS, and iOS; however, there are system requirements you’ll need to meet. The Android version is due out in 2019. While it’s free to download and try, you will be limited to exporting three projects before you have to shell out some money.
Adobe Premiere Rush CC is available for $10 per month to individuals, $20 per month to teams, and $30 per month to enterprise customers. It’s also available to Adobe Creative Cloud subscribers with the All Apps, Student, and Premiere Pro CC single app plans. It includes 100GB of Creative Cloud storage space, with the option to upgrade to as much as 10TB of cloud storage.
Huawei and Leica’s monochrome lens is dead, so we celebrate its life
While both the Huawei Mate 20 and the Mate 20 Pro have three camera lenses, none of them shoot only in monochrome, marking the end of a fantastic era for smartphone photography. Ever since Huawei first partnered with camera experts Leica, a monochrome lens has sat alongside the regular camera lens, giving us the wonderful ability to shoot photos in pure, unadulterated black and white. On the Mate 20 series, this lens has been replaced by an ultra-wide camera lens, and while we’re excited for the creative possibilities, the monochrome lens will be missed.
For many people, me included, smartphones have been our entry into the world of photography. While we may have experimented with regular cameras in the past, any untapped photographic talent or burgeoning love will have only manifested because of the phone in our pocket. This means the phone’s camera features will have prompted experimentation, and influenced growth. Hyperbole? No, because that’s what the monochrome lens on the Huawei P9, Huawei P10, Mate 10 Pro, the Mate RS, and P20 Pro did for me, and why I wanted to write what could be called an obituary.
However, like all the best obituaries, it will really be a celebration.
Monochrome and me
My first monochrome photo taken with the Huawei P10 was snapped on February 25, 2017. Two days later, a professional photographer taught me a little about how to capture beautiful shots in black and white. That session resulted in a former member of the DT Mobile team taking what I consider to be one of the best photos of me in existence, and it’s in black and white.
Despite dabbling with the mode on the Huawei P9, this was the start of my love affair with Huawei’s monochrome camera.
Since then, a Huawei camera phone with a monochrome lens has accompanied me on many trips, and my favorite pictures are often those taken in black and white. In China, it captured the super modern buildings in stark detail, while the neon in Las Vegas seemed to shine more brightly. The Grand Canyon looked like the surface of the Moon, the sharp lines of Wall Street were emphasized even more, and the Louvre in Paris took on a magical look. From a late-night whisky bar in London to a nighttime photo tour of Frankfurt, photographing in black and white always captured, or created, a unique atmosphere.
On a photo tour of Frankfurt, photographer Bobby Anwar was teaching a group how to take low-light photos, and monochrome was heavily featured. A lifelong Leica fan, Anwar commented at the time on how the monochrome lens made Huawei’s camera phones stand out. Seeing what was possible was personally so influential that it became a running joke that if I was taking a photo, there was a good chance it was a monochrome shot.
I’m not saying monochrome worked everywhere, at all times. For every monochrome photo, a color one was usually taken at the same time, because the atmosphere changed with each one — and that was its appeal. For me, stripping away the color sometimes revealed more about the scene than keeping it in, and made me want to try it out, just to see what the end result would be like. It sparked creativity in an organic, immensely satisfying way.
I’m not the only one. Digital Trends’ photography writer Daven Mathies used the P20 Pro on a visit to Leica’s headquarters, and wrote, “We enjoyed the monochrome mode so much that we probably used it for well over half of the photos we made on the trip.”
Monochrome, and Leica
The photographers among you will already know the appeal of black and white photos, and most of us will know the many famous, striking images — especially portraits — that have been published in black and white over the years. This is not an article about the history, influence, or impact of monochrome photography. It is a celebration of Huawei and Leica bringing a true monochrome lens to a smartphone.
Leica and black and white photography are synonymous with each other, to the point that Leica produces a digital camera that shoots only in black and white. Those classic black and white portraits we’re all familiar with? Many will have been taken with a Leica camera. Through my own love of shooting monochrome, I can easily see how the format sparked the creativity of all those master photographers.
It also illustrated the close working relationship between Leica and Huawei. While there are technical reasons for including the lens, it’s not essential, and Huawei clearly put its trust in Leica rather than push for an alternative feature that’s more headline friendly. It was right to, as the pair created what is one of the best smartphone cameras we’ve ever used.
However, change was afoot at Huawei. The monochrome photo mode, once an easily accessed feature, was becoming less important with every new version of the software, eventually resulting in the mode being hidden in a submenu. It still worked as usual, and could even shoot in aperture mode for spectacular bokeh-style photos. But you wouldn’t stumble across it. You had to go looking.
What happened?
Leica and Huawei have a long-term partnership, and as we’ve mentioned, monochrome is an important part of Leica’s products. On Huawei phones it wasn’t just there as a feature: the monochrome lens worked in conjunction with the rest of the camera system to gather visual data and enhance photos. So why has the monochrome lens been removed from the Mate 20 and Mate 20 Pro? Huawei told Digital Trends the feature simply wasn’t used much, and that it can now collect more, and crucially better, image data from the other lenses. The monochrome lens had outlived its usefulness, so swapping it out for an ultra-wide lens makes sense.
It’s the end, but not entirely. Dig into the camera app on the Mate 20 and Mate 20 Pro and you’ll still find a monochrome mode, only this time it applies a filter, just like the black and white mode on every other smartphone camera out there. Huawei promises that Leica has tuned the filter exactly to its liking, and that there is very little difference between the filter and a dedicated lens. We believe this, and find it highly unlikely that Leica would put its name on a substandard monochrome filter.
But it’s not quite the same. Put a shot taken with the P20 Pro’s monochrome lens alongside a black and white filter shot taken with another phone, and although the differences are slight, they are there. It mostly has to do with crispness, depth, and pin-sharp contrast. Plus, there’s knowing you’re not taking a picture with a filter. It’s the real thing. Sweeteners make your tea taste sweet, but there’s no substitute for dropping in a real cube of actual sugar.
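For context on why the two approaches differ: a software black and white filter is essentially a weighted luminance conversion of color data that has already passed through a color filter array, whereas a dedicated monochrome sensor records luminance directly at every pixel. The sketch below is purely illustrative, not Huawei’s or Leica’s actual processing pipeline; it just shows the standard Rec. 709 luma weighting that a typical grayscale filter applies.

```python
# Illustrative sketch of a software monochrome filter: convert RGB pixels
# to grayscale using the standard Rec. 709 luma coefficients. A true
# monochrome sensor would instead capture this luminance natively,
# without interpolating color data first.

def to_monochrome(rgb_pixels):
    """Convert a list of (r, g, b) tuples (0-255) to grayscale values."""
    return [round(0.2126 * r + 0.7152 * g + 0.0722 * b)
            for r, g, b in rgb_pixels]
```

Because the three weights sum to 1.0, a pure white pixel (255, 255, 255) maps to 255 and pure black stays at 0, while green contributes far more to perceived brightness than blue.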
It’s the end of an era, and the beginning of a new one. Shooting monochrome photos with the P20 Pro, the P10, Mate RS, and the Mate 10 Pro has genuinely changed my appreciation of photography. All the pictures you see in this article are ones I’ve taken. They’re not masterpieces, but they do represent the amount of enjoyment this one feature brought.
I’m sorry to see it go, but excited for the future. Ultimately, if Leica approves of its new monochrome filter on the Mate 20, I’m sure I will too.
The views expressed here are solely those of the author and do not reflect the beliefs of Digital Trends.