6 Jan

Blade Runner 2049 Speakerhat review: I guess I wear music now



I’m a sucker for cyberpunk, and this thing was screaming my name.

Living in a world where nostalgia is practically weaponized against young adults with functioning wallets, I’m used to skipping over the latest quick attempt to make a buck. I’m a collector of nerdy things, but I try really hard to avoid the cheap stuff that hops on the latest bandwagon. The release of a new Blade Runner movie guarantees a resurgence of poorly made glowing umbrellas and t-shirts with Replicant stamped across the chest, but something in that noise caught my attention in a weird way. It’s a fairly ordinary-looking hat with the Japanese Atari logo stitched across the crown, but put it on and it immerses you in sound from your phone.

This unassuming hat comes with an obvious name, Speakerhat, but what you get out of the experience is fairly unique.

See at Amazon

Wearing your sound


The Blade Runner 2049 Speakerhat is one of several variations made with tech from a company called AudioWear in collaboration with Atari. If you dig the Atari logo but aren’t a fan of the Blade Runner logo on the back, there are other designs to buy. If you’d prefer something a little more focused, there’s a slick Pong version as well. All of these hats share the same basic tech underneath: a pair of speakers on the underside of the visor and a single button on the back near the adjustable strap.

Essentially, this hat is a Bluetooth speaker you wear on your head. The pair of speakers point at your face, and due to their angles, it feels like surround sound. A single microphone sits next to the speakers so you can use your hat for calls. The battery is tucked under the sweatband on the back of the hat, opposite the control button. You charge your hat via Micro-USB, and there’s a 2.5mm jack just in case you feel like using a cable instead of Bluetooth.

Considering all of the tech stuffed in this hat, I was surprised to find it didn’t feel heavier than any of my other hats when sitting on my head. I could tell it was heavier when I picked it up, but everything is balanced around the hat so well that the weight is barely noticeable when you’re actually wearing it. The weight isn’t enough to make the hat any more prone to falling off your head in a heavy wind, either. It really does just feel like a solid ball cap.

This hat is not waterproof at all, which makes sense since it’s just an ordinary baseball cap on the outside but presents a unique challenge given all of the tech on the inside. It’s OK for the hat to get a little bit of spray on it or a mild drizzle, but getting caught in a rainstorm probably means it’s not going to function as a Bluetooth speaker anymore.

Audio experience


Let’s get this out of the way real quick — these are not headphones. These are speakers. You are not using this hat to play music just for yourself. When you press play, everyone around you will hear whatever it is you are playing. This is great if you’re outside and you want to hear the world around you while you are listening to music or catching a podcast, but on a bus or a train you’re going to be blasting your sound to everyone and it is unlikely to be widely appreciated.

I’ve been pleasantly surprised by the reactions from those around me when using the Speakerhat.

That having been said, the audio quality of these speakers is decent for the price point. These speakers are on par with your average thin $60-80 Bluetooth speaker. There’s essentially no bass, but the mids and highs come through without being overly tinny. It’s great for spoken word podcasts or streaming a TV show or movie, but playing your favorite song will probably make you want to tweak the EQ on your phone a bit to get a sound you are happy with.

I’ve been pleasantly surprised by the reactions from those around me when using the Speakerhat. Even though the speakers are pointed at me, the audio is loud and clear enough for others in the room or nearby to enjoy. The audio doesn’t sound muffled or distorted, like you’d get from a cheaper Bluetooth speaker pointed away from you. The same is true of phone calls; everyone I spoke to said I came through clearly and they were unable to hear themselves despite the speakers being so close to the microphone.

The Speakerhat itself has no volume control, since there’s only the one button on the hat. You have to jump in and adjust volume from the buttons on your phone, which is a little inconvenient. That single button does three things, and it does them all exactly the way you’d expect: a long press turns the Speakerhat on and off, a short press pauses or plays media, and a press during a call answers or ends it. It’s super simple, just like the audio cues that let you know when you’re paired to a device. Other than these things, you’re using your phone for control.
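The one-button scheme described above maps neatly onto a tiny state machine. Here is a toy model of that behavior as the review describes it; the class name and return strings are invented for illustration, and this is not AudioWear’s actual firmware logic:

```python
class SpeakerhatButton:
    """Toy model of the Speakerhat's single-button controls,
    inferred from the review (hypothetical, not real firmware)."""

    def __init__(self):
        self.powered = False
        self.playing = False
        self.in_call = False

    def long_press(self):
        # A long press toggles power; everything stops when powered off.
        self.powered = not self.powered
        if not self.powered:
            self.playing = False
            self.in_call = False
        return "on" if self.powered else "off"

    def press(self, incoming_call=False):
        # A short press answers/ends a call when one is active or
        # ringing; otherwise it toggles play/pause.
        if not self.powered:
            return "ignored"
        if self.in_call:
            self.in_call = False
            return "call ended"
        if incoming_call:
            self.in_call = True
            return "call answered"
        self.playing = not self.playing
        return "play" if self.playing else "pause"
```

The point of the sketch: with a call in progress the short press is repurposed, which is why the hat never needs a dedicated call button.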

Should you buy it? Sure


Overall, I’m thrilled with my Speakerhat. It’s a stupidly nerdy hat that fills the world around me with sound when I’m folding laundry or out by the grill, but the design is subtle enough that I can wear it anywhere and no one but me will know it’s a gadget. I wouldn’t go so far as to call it practical, but it is fun and I dig the look.

That having been said, it’s also a $100 hat that I have to stuff under my shirt if I’m caught out in the rain, something I would never need to do with my favorite headphones. It clearly appeals more to my desire to show off some nerd culture than to my desire to own “the best” gadget, and I’m OK with that. If you’re also OK with that, you should grab one of these for yourself.

See at Amazon

6 Jan

How to make a fitness app part of your daily routine


Technology is transforming fitness.


It used to be that fitness apps were primarily used to count calories and check in after workouts. That’s no longer the case. There are dozens of fantastic apps out there, and they cater to whatever you’re specifically looking for. Whether you’re always on the lookout for a fun new app to help motivate you along, or you’ve never been inclined to look into them at all, fitness apps can help make your average day healthier.

Keeping fit is easier than ever


Most of us get some amount of exercise every day, just by living our lives. We walk around, in some cases all day long. Plenty of fitness apps will track this and let you know what your activity level is like each day, including the number of steps taken and calories burned based on your height and weight. There are apps that do this without ever needing to be opened after you initially set them up.
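The steps-to-calories conversion these apps do in the background is simple to sketch. The constants below are common rules of thumb (stride is roughly 0.415 times height; walking burns roughly half a kilocalorie per kilogram per kilometre), not any particular app’s formula:

```python
def estimate_calories_from_steps(steps, height_cm, weight_kg):
    """Rough calorie estimate from a step count.

    Uses two common rules of thumb, not any specific app's model:
    stride length ~ 0.415 x height, and walking burns roughly
    0.53 kcal per kg of body weight per kilometre.
    """
    stride_m = height_cm / 100 * 0.415
    distance_km = steps * stride_m / 1000
    return distance_km * weight_kg * 0.53

# e.g. 8,000 steps for a 175 cm, 70 kg walker
kcal = estimate_calories_from_steps(8000, 175, 70)
```

Real apps refine this with accelerometer cadence, age, and sex, but the shape of the calculation is the same.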

Fitness doesn’t have to be a chore, at least not with these apps.

If you’re just starting a fitness routine, there are apps that can help build workout plans, count calories, give you videos so you can work out at home, and much more. It might seem a little strange at first to have an app tracking your fitness level, but by integrating these apps into your day you can see how active you already are without ever having to hit the gym. That isn’t to say that all fitness apps are made for that purpose. They’ve branched out, and the abundance of choice lets you determine what you need out of a fitness app.

More: 4 interactive apps that will keep you entertained

If you’re looking for something that turns fitness into a game, there’s Zombies, Run!, The Walk, or even Pokémon Go. Charity Miles donates money to a charity of your choice based on the distance you run. Fitness doesn’t have to be a chore, at least not with these apps. They take what you’re already doing and put a spin on it to make it fun and encourage you to do more.

Assistants can help


When it comes to working out, maybe you need to fit things in right in the middle of a busy day. If you have a Google Home, a Samsung phone with Bixby and Samsung Health, or an Amazon Alexa your digital assistants can be of help.

That’s because each of these assistants can launch a workout for you. With Samsung devices, you can build a plan right from inside Samsung Health, whether that’s drinking more water or training for a 5K. Alexa has workout-based skills that you can enable, and Google Home can talk to several workout programs. This means you can trigger a quick workout just by talking to your connected device, making it easy to fit in some activity even when you only have a few short minutes to spare.
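To see what one of those voice-triggered workouts looks like under the hood, here is a minimal Alexa-style skill handler. The intent name (`QuickWorkoutIntent`) and the workout script are hypothetical; only the response envelope follows the Alexa Skills Kit JSON format:

```python
def handle_workout_intent(event):
    """Minimal Alexa-style handler that starts a short workout.

    The intent name and workout text are invented for illustration;
    the response structure follows the Alexa Skills Kit JSON format.
    """
    intent = event["request"]["intent"]["name"]
    if intent == "QuickWorkoutIntent":
        text = ("Starting a five minute workout: thirty seconds of "
                "jumping jacks, then thirty seconds of rest. Go!")
    else:
        text = "Sorry, I can only start a quick workout."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }
```

Samsung Health and Google Home expose the same idea through their own APIs; the common thread is a spoken intent mapped to a canned routine.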

One size fits all fitness is a thing of the past


Even if you aren’t particularly fitness-minded, integrating an app into your life can be a benefit. With unobtrusive tracking apps, you can see your daily activity levels. That might not seem like much, but if you’re a city dweller you could be walking miles every day without ever realizing it. These apps can help with your health as well, outlining when you have more energy for activity or what your stamina is like. Some games even have fitness benefits that are purely accidental, like Ingress, where walking around to capture portals is a game mechanic. Apps like Aqualert can even help make sure you’re drinking enough water. Fitness apps are no longer just for the people who live and breathe getting and staying in shape. They’re now built to be friendly to everyone, no matter what your level of motivation might be.

With the ways that fitness apps have diversified, there really is something out there for absolutely everyone.

  • Sleep Trackers: Sleep as Android, Sleepbot
  • Calorie and Water Trackers: Aqualert: Water Reminder H20, Cron-O-Meter
  • Activity Trackers: Google Fit, Moves, Samsung Health
  • Pedometers: Noom Walk Pedometer: Fitness, Accupedo Pedometer
  • Games that require walking around: Ingress, The Walk, Pokémon GO
  • Run Trackers: Run Keeper, Runtastic
  • Apps that make fitness fun: Zombies, Run!

These aren’t the apps from years ago which were tailored for a specific type of person to use. Rather, they have spread their influence and tried to find new niches for people who might not usually use a fitness app. The analytics and data can be fantastic if you’re a fan of graphs and charts, but even better is the fact that using these apps can actually help you to live a healthier life. We only get one body, so why not treat it right with the help of technology?

Questions?

It doesn’t matter what your activity level is like on a day to day basis. Everyone can benefit from having a fitness app in their life. It can be something small like simply tracking your activity levels, or detailed down to your caloric intake and workout intensity. No matter where you sit on the fitness spectrum, there is an app for you. So are you using any of these apps, or is there a fitness app that you stand by already? Tell us all about it in the comments!

January 2018: We’ve updated this post with information about using the Assistant on your phone to help make fitness a part of your daily routine!

6 Jan

Ben Heck’s Raspberry Pi-based portable MAME arcade



Can you drive an LCD screen directly from the general-purpose input/output pins (GPIO) on the Raspberry Pi? Find out with the team as they build a portable MAME arcade machine using the Model A+ of the Raspberry Pi. There’s no HDMI in use here! Ben takes us through setting up RetroPie on the A+ and its configuration. Would you have approached this build differently? Let us know over on the element14 Community.
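Driving an LCD straight from the GPIO header usually means using the Pi’s DPI (parallel RGB) interface rather than HDMI. For readers curious what that involves, here is a sketch of the `/boot/config.txt` entries such a build typically touches. The timing numbers are placeholders for a small 320x240 panel and must come from the actual panel’s datasheet, not from this example:

```
# /boot/config.txt: route video out over the GPIO header via DPI
# (parallel RGB) instead of HDMI.
dtoverlay=dpi24          # claim GPIOs 0-27 as a 24-bit DPI bus
enable_dpi_lcd=1         # send display output to the DPI interface
display_default_lcd=1
dpi_group=2              # use a custom mode...
dpi_mode=87              # ...defined by dpi_timings below
# Placeholder timings for a 320x240 @ 60 Hz panel; replace every
# value with the numbers from your panel's datasheet.
dpi_timings=320 0 20 30 38 240 0 4 3 15 0 0 0 60 0 6400000 1
```

Note that DPI consumes most of the GPIO header, which is part of what makes a build like this on the Model A+ interesting.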

6 Jan

Computers saw Jesus, graffiti, and selfies in this art, and critics were floored


The Barnes Foundation art gallery, located in Philadelphia, boasts a collection of famous work you’d probably recognize, such as Vincent van Gogh’s Portrait of the Postman Joseph Roulin and Paul Cézanne’s The Card Players.

But its founder, Albert C. Barnes, had a more unconventional perspective on how to interpret the art — and that perspective is now being reflected in machine learning, of all places. Rather than getting caught up in the minutiae of dates and movements, Albert C. Barnes extolled the virtues of making visual connections between different artworks. Apparently computers agree with him.

A century after he started assembling his collection, machine learning is helping push the boundaries of his unorthodox ideas about how art should be viewed.

The Gallery Comes to You

The Barnes Foundation has always been an essential stop for any art fans in the area — and now, you can enjoy its contents from anywhere in the world.

“Dr. Barnes had a very specific point of view about how you view work.”

Under the supervision of chief experience officer Shelley Bernstein, The Barnes Foundation has embarked upon an effort to make its collection available online. This is far more than just a digitization effort; it’s a technologically enhanced expansion of the ethos that the organization was born out of in the early 20th century.

“I’m a big believer in what I call mission-driven tech — thinking about the goals of the institution, like the fact that this was an early teaching institution,” Bernstein explained when she spoke to Digital Trends about the work she’s doing at the Barnes. “Dr. Barnes had a very specific point of view about how you view work, and that you might not necessarily need to know the history behind it.”

Over a period of years, Barnes amassed a collection of paintings, sculptures, jewelry, and furniture. Each wall of each room is meticulously arranged in order to forge connections between pieces that may have been made years apart, halfway across the world from one another.

Barnes tweaked each of his “ensembles” as he obtained new pieces, but they have remained in the exact same configuration since his death in 1951. It makes sense for the physical space to be preserved in this manner, both for practicality’s sake and as a tribute to the foundation’s founder. However, the website allows individuals to forge new links for themselves.

It’s here where machine learning is being used to foster some unusual responses to great art.

An interface on the website allows visitors to access digitized versions of any of the works in the collection. Clicking on a particular piece offers up the opportunity to see it in context as part of an ensemble. There’s also a method of seeing visually related pieces using a slider toggle that stretches from “more similar” to “more surprising.” It’s here where machine learning is being used to foster some unusual responses to great art.
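One plausible way to implement that slider is to rank every work by embedding distance from the current piece and let the slider choose how far down the ranking to look. The sketch below uses made-up feature vectors and is not the Barnes’ actual pipeline, which the article doesn’t detail:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def related_work(query_vec, collection, surprise=0.0):
    """Pick a visually related work from `collection`.

    `collection` maps work id -> feature vector (e.g. from a vision
    model). `surprise` in [0, 1] slides the pick from the most
    similar work (0.0) toward the least similar one (1.0): a sketch
    of a "more similar / more surprising" toggle, not the site's
    actual algorithm.
    """
    ranked = sorted(collection.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    index = round(surprise * (len(ranked) - 1))
    return ranked[index][0]
```

The appeal of this design is that “surprising” results are still grounded in the same feature space, which is exactly the kind of connection the curators found interesting.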

Several different computer vision and machine learning experts were called upon to help the Barnes feed its collection into several different systems. Each one came up with very different results.

“I remember that a developer who was working on the project came to me and said, ‘Shelley this is insane, it’s all wrong,’” Bernstein remembered. “A curator looked at it and said ‘Actually, no — that’s kind of interesting.’”

Automated Art Critic

The machine learning systems that were being fed the Barnes’ collection made observations that might never have come from humans. Given the organization’s history of encouraging visual connections between otherwise disparate works, this seemed like a huge opportunity.

The computers would label a classical artwork as graffiti. They might see soft nudes by Renoir as teddy bears and stuffed animals. Frequently, they would tag bearded faces and other similar shapes as images of Jesus Christ.

The Barnes Foundation

It would be easy to write off these observations as limitations of the computer vision software, but time and again, curators found that the computers had a point. They were simply making visual connections without any of the second-guessing that a human mind might introduce to the process.

Bernstein’s team uses six different tools to analyze the art. She describes Microsoft Azure as offering up the “coolest” results, referring to the most interesting, surprising categorizations. IBM Watson and Google’s offering were apparently successful as well — but much more conservative.

So while machine learning hasn’t removed human curation from the equation, it does offer another perspective that has proven to be pretty unique.

Open Source Art

Obviously, certain parts of the Barnes’ digitization effort are closely tied to its specific ethos and mandate — but all the code is open source, so other institutions can pick and choose the parts they can make use of and build from an established foundation.

The computers would label a classical artwork as graffiti.

Of course, the open source nature of the digitization isn’t just for other museums and galleries. Just weeks after the digitized collection went online, there were already some interesting projects at play on Twitter that take the ideas even further.

Andrei Taraschuk describes himself as a software developer by day and an art hacker by night. When he saw the online version of the Barnes collection, he knew that he had to put it to good use. Having already created Twitter bots that share a particular artist’s work, he created a Barnes bot that would post images of artwork from the foundation’s collection.

“It’s so cool,” said Bernstein with a grin when we spoke about the Barnes bot. The account doesn’t work in a vacuum — it retweets images shared by other institutions and artists that are part of his ever-growing bot network, forging links between seemingly disparate pieces. Albert C. Barnes never got to experience Twitter, but it seems fair to assume that he would approve of this kind of usage.

Roughly half of the pieces that comprise the Barnes collection are in the public domain, and the foundation makes high quality TIFF files of those works available online. This makes it easy for Taraschuk’s bots to share them, or for UK-based illustrator Sarah McIntyre to run a ‘portrait challenge’ using a particular painting, where Twitter users are invited to produce their own take on an existing artwork.

At their core, these efforts are all about opening up art to a broader swathe of people. If you’re not close to Philadelphia, you can still enjoy and appreciate the collection. If you’ve ever felt that the traditional gallery experience was too stuffy, maybe having a computer show you which classic works most resemble a selfie, or getting a drip feed of fine art via Twitter, will be more palatable.

Perhaps more importantly though, it shows that bringing historical fine art into the digital world fundamentally changes how we view and interpret it. Dr. Barnes just may have been ahead of his time.

6 Jan

The Morning After: Weekend Edition


Hey, good morning! You look fabulous.

Welcome to the weekend. The next time you hear from us, CES 2018 will be underway, so check out our preview one more time and browse some of the early pre-show announcements below.

The internet is suing the FCC. The Internet Association joins lawsuit supporting net neutrality


A lobbying group representing Amazon, Facebook, Google, Netflix, Twitter and other heavy hitters will join a lawsuit arguing against the FCC’s move to undo Title II net neutrality protections. In a statement, its CEO said: “IA intends to act as an intervenor in judicial action against this order and, along with our member companies, will continue our push to restore strong, enforceable net neutrality protections through a legislative solution.”

An interesting hybrid. Samsung gives the 13-inch Notebook 7 Spin a few modest updates


The original 13.3-inch Spin debuted in 2016 as a relatively inexpensive laptop that turned into a slightly unwieldy tablet, and this refreshed version doesn’t stray far from Samsung’s original formula.

Some are more equal. Twitter: Banning world leaders would ‘hide important information’

In a blog post on Friday night, Twitter didn’t mention Donald Trump by name, but it responded to people calling for the suspension of his account. The company has decided that “we review Tweets by leaders within the political context that defines them, and enforce our rules accordingly.”

If anyone is listening. Meltdown and Spectre are wake-up calls for the tech industry


Instead of rushing to deliver the fastest chips possible, the next race for Intel, AMD and ARM is to come up with new architecture that will bust Spectre for good.

But wait, there’s more…

  • NBC will stream the Golden Globes live this weekend
  • ‘PUBG’ is quietly changing video games with its 3D replay technology
  • CES 2018: What to expect
  • Pixel 2’s ‘Portrait Mode’ unofficially makes it to non-Google phones
  • Bad Password: Don’t pirate or we’ll mess with your Nest, warns East Coast ISP
  • CBS All Access is now available on Amazon video
  • F-35 may see combat in 2018

The Morning After is a new daily newsletter from Engadget designed to help you fight off FOMO. Who knows what you’ll miss if you don’t Subscribe.

Craving even more? Like us on Facebook or Follow us on Twitter.

Have a suggestion on how we can improve The Morning After? Send us a note.

6 Jan

Vuzix is launching the first Alexa-enabled AR glasses at CES


Vuzix has been making smartglasses for years, but one upcoming model will apparently be extra special. According to Bloomberg, the New York-based company is debuting the first Alexa-enabled augmented reality glasses at CES 2018. You’ll be able to ask Alexa questions the way you usually do, and the glasses will show the results on the display à la Google Glass. If you ask Alexa for directions, for instance, the glasses can show a map on the AR screen, as long as you have an Amazon account. The company told the publication that it’s excited about the device’s “ability to bring Alexa to customers in a new way.”

This device represents the future Amazon envisioned for Alexa when it opened up the voice assistant’s technology. Just recently, Alexa gained the ability to talk to microwaves and electric vehicles, and you can probably expect that list to grow even further. The e-commerce titan also released a mobile accessory kit, making it easy for developers to give their Bluetooth headphones and wearables the power to use the voice-controlled service.

If a pair of shades that can talk to Alexa sounds the most intriguing of the bunch, you’ll have to be willing to shell out $1,000 for them. Vuzix chief Paul Travers knows that’s a pretty high price point, but the company hopes to be able to sell them for under $500 by 2019. He believes “everyone is going to come out with [Alexa-enabled] glasses sooner or later,” so the company’s offering needs to be at a competitive price.

Click here to catch up on the latest news from CES 2018.

Source: Bloomberg

6 Jan

Samsung adds another ally in its battle over HDR standards


This isn’t exactly taking it back to the days of HD-DVD vs. Blu-ray, but Samsung’s fight to push HDR10+ as an alternative to Dolby Vision is heating up. We have more details on how the two standards compare right here, but one main feature is that both improve on regular HDR10 by allowing content makers to dynamically adjust settings from one scene to another, or even from one frame to another.
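The dynamic-metadata idea the two formats share can be illustrated simply: instead of one static tone-mapping target for the whole title, the content carries per-scene brightness statistics so the display can adapt. The following toy sketch computes that kind of metadata from per-frame peak luminance values; it is illustrative only, not the actual HDR10+ or Dolby Vision metadata layout:

```python
def scene_metadata(scenes):
    """Toy per-scene dynamic metadata.

    `scenes` is a list of scenes, each a list of per-frame peak
    luminance values in nits. For each scene we record the peak and
    average so a display could pick a tone curve per scene rather
    than one static curve for the whole film. Illustrative only:
    not the real HDR10+ or Dolby Vision metadata format.
    """
    return [
        {"scene": i,
         "max_nits": max(frames),
         "avg_nits": sum(frames) / len(frames)}
        for i, frames in enumerate(scenes)
    ]

# A dim interior scene followed by a bright outdoor one:
meta = scene_metadata([[80, 120, 95], [900, 1000, 850]])
```

With static HDR10, both scenes above would be mapped against the same mastering peak; with dynamic metadata the dim scene keeps its shadow detail instead of being crushed.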

While Dolby Vision has been licensed by many TV manufacturers, Samsung isn’t one of them and has chosen instead to push HDR10+ as a royalty-free alternative. Now Warner Bros. is joining Samsung, Fox and Panasonic in supporting HDR10+ on its 4K video releases.

We don’t have a lot of specific information, but the team insists that other companies are also interested in using HDR10+, and soon they’ll have access when its certification and logo program opens up. If it takes off, then that could mean there’s an extra sticker/setting to look for on your next 4K TV, Ultra HD Blu-ray player, or movie.

Click here to catch up on the latest news from CES 2018.

Source: Samsung

6
Jan

At long last, researchers develop a wearable fit for plants


Wearables today can tell you just about everything you want to know about your body, and some stuff you’d rather put off until tomorrow. They can monitor your steps, your heart rate, and even let you know how drunk you are by analyzing the alcohol molecules in your skin. There are entire lines of wearables designed for kids and pets. What’s next? A wearable for plants?

Yep.

A team of engineers from Iowa State University has developed wearable sensors, some specially designed for our photosynthetic friends, allowing growers to measure how their crops use water. The innovative new device — which its creators are calling “plant tattoo sensors” — is designed to be low-cost, using the revolutionary material graphene, which allows it to be thin and adhere to surfaces like tape.

“Wearable sensor technologies have been researched and applied in biomedicine, healthcare, and related industries, but are still relatively new and almost unexplored for applications related to agriculture and crops,” Liang Dong, an Iowa State electrical engineer who helped develop the technology, told Digital Trends. “Tape-based sensors can be simply attached to plants and provide signals related to transpiration from plants, with no complex installation procedures or parts required.”

The graphene-on-tape technology developed by Dong and his colleagues can be used to monitor a plant’s thirst, tracking how leaves release water vapor by measuring changes in conductivity. The technology can be used beyond horticulture as well. In a paper published in December in the journal Advanced Materials Technologies, the researchers demonstrated how similar technology can be applied to other wearable sensors to monitor strain and pressure, including in a smart glove capable of monitoring the movement of hands.
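In principle, a resistive vapor sensor like this works by tracking relative changes in electrical resistance as moisture reaches the graphene. A minimal sketch of that idea, with invented example readings rather than the Iowa State team’s actual signal processing:

```python
# Minimal sketch: treat the relative resistance change of a
# leaf-mounted sensor as a proxy for water-vapor release.
# Baseline and readings are hypothetical values in ohms.

def relative_change(baseline_ohms, reading_ohms):
    """Fractional change in resistance versus the dry baseline."""
    return (reading_ohms - baseline_ohms) / baseline_ohms

baseline = 1200.0                    # resistance on a dry leaf
readings = [1200.0, 1260.0, 1380.0]  # rising as vapor is released

changes = [relative_change(baseline, r) for r in readings]
```

Here the fractional changes come out to roughly 0, 5, and 15 percent; in a real deployment those values would be calibrated against known humidity levels before being read as transpiration rates.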

Water is key to crop productivity, so it is a top concern for farmers who want to make sure their plants are properly quenched. Given the value of the resource and its scarcity in many regions, efficient water use is vital to a functioning and sustainable operation.

“Water is a seriously limited factor in agriculture around the world,” Patrick Schnable, an Iowa State plant scientist who worked on the technology, said. “A first approach to overcoming this challenge of a water-limited world is to breed crops that are more drought tolerant and water efficient. Current practice is to conduct expensive replicated yield tests under various levels of drought stress. Our ‘plant tattoo sensors’ will enable breeders to identify hybrids that are likely to perform better under drought stress prior to conducting large-scale yield tests.”

By identifying the plants that perform best under these stressful conditions, breeders have a better chance of developing more drought-tolerant crops, which will come in handy as climate change sweeps the globe.

Editors’ Recommendations

  • A language for legumes: Can the Internet of Food help us know what we eat?
  • Gene-edited corn has nutrients usually found in meat — here’s why that’s huge
  • Great balls of graphene: New Samsung tech could charge phones five times faster
  • GardenSpace will water your garden and keep away pests with its robotic head
  • It’s alive! These ‘living tattoos’ may someday monitor your health




6
Jan

XYZprinting’s da Vinci Nano is a cute plug-and-play 3D printer for the masses


The 2018 Consumer Electronics Show (CES) hasn’t even kicked off yet, and already some exciting announcements are being made. The latest comes from leading 3D printer manufacturer XYZprinting, which is using the event to debut three new products.

Of these, the one we’re most hyped about is the da Vinci Nano, a $229 portable single-color, plug-and-play 3D printer that promises to lower the barrier to entry for those wanting to hop on the additive manufacturing bandwagon. In addition to a cutesy design (in some ways reminiscent of the friendly original 1984 Macintosh), it boasts features like auto-calibration and an autofeeding filament system to make it as easy to use as possible.

The device also comes packaged with an XYZmaker Mobile app, allowing users to print directly from their mobile devices (initially just tablets), via a Wi-Fi connection. In order to make it appeal to the education market, XYZprinting is setting up a microsite with teaching materials for use in school or the home.

In addition to the da Vinci Nano, XYZ is also unveiling a $3,999 full-color, fused filament fabrication (FFF) desktop 3D printer called the da Vinci Color AiO (no, we don’t know how to pronounce its name, but we imagine it sounds a lot like an excited yell!). Aimed at a more professional audience than the Nano, the AiO combines inkjet printing with color-absorbing PLA filament to create whatever filament color you need, at a tiny fraction of the cost of other multicolor printers.

“By seamlessly integrating an asymmetrical, full-color laser scanning unit, and a 360-degree rotating platform without compromising its full-color 3D printing capabilities, the da Vinci Color AiO has accomplished a feat that no other prosumer 3D printer manufacturer has,” XYZprinting U.S. director Vinson Chien told Digital Trends.

“[It also] provides a solution for designers who have not yet mastered the digital modeling skill-set needed to begin the production cycle. With the da Vinci Color AiO, they can full color scan prototypes and have a 3D file that is easy to modify.”

Finally, the company is hoping to snare some young future 3D printing fanatics with a $45 device called the da Vinci 3D Pen Cool, which allows youngsters (or, well, anyone) to add extra dimensions to their images with a 3D pen. Earlier editions are already available from the company, but this version adds a temperature modification feature to improve safety.

All three products will be out later this year.

Editors’ Recommendations

  • The best 3D printers you can buy for under $1,000 right now
  • The best 3D printers you can buy (or build) in 2017
  • The new-and-improved Mod-T 3D printer isn’t just better — it’s cheaper, too
  • Ultimaker 3 review
  • Monoprice Mini Delta review