

3 Aug

Gravity-measuring quantum device could help find buried oil or shale gas


Why it matters to you

Gravity-measuring quantum device could be used to help locate all kinds of materials underground.

Want to make ultra-precise measurements of the strength of gravity? There is an app for that — or at least a quantum device called the “gravimeter,” which functions as a scaled-down version of the technology used to detect gravitational waves triggered by collisions between black holes.

As sci-fi as the technology might sound, it comes with practical uses. The main one is working out unknown underground conditions: finding sinkholes or buried pipes for construction workers, carrying out oil or mineral exploration, or establishing ground truth and long-term monitoring at shale gas and carbon sequestration sites.

“If you go even deeper, our sensors might open new possibilities to monitor magma flows, and provide input to earthquake and volcanic activity models, playing a role in the prevention of natural disasters,” Kai Bongs, a professor of Cold Atom Physics at the University of Birmingham, who helped develop the device, told Digital Trends.

Bongs explains that the gravimeter makes use of something called the superposition principle, which allows atoms to travel along two trajectories at once (consider the Schrödinger’s cat thought experiment, where the cat is simultaneously alive and dead). Laser pulses then separate and recombine the trajectories along the direction of gravity, producing interference fringes that are highly sensitive to the value of gravity.

“The exciting thing about our instruments are that they promise to speed up gravity measurements by over 100 times as compared to conventional technologies, and at the same time allow [us] to see things not currently visible,” Bongs continued. “The reason for these improvements is that they allow absolute, drift-free, measurements of gravity, and can be built to interrogate two vertically separated atomic clouds with the same laser beam, leading to an outstanding common-mode suppression of vibrational noise.”
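Bongs’ description maps onto the textbook phase relation for a Mach-Zehnder atom interferometer, where the phase accumulated between the two trajectories is phi = k_eff * g * T². The sketch below uses generic illustrative numbers (a rubidium-like wavelength and a 100-ms pulse spacing); these are assumptions for the sake of the example, not specifications of the Birmingham instrument.

```python
import math

# Illustrative atom-interferometer phase: phi = k_eff * g * T**2
# (standard Mach-Zehnder atom interferometry; all numbers are
# generic assumptions, not parameters of the Birmingham device).
wavelength = 780e-9                      # m, rubidium D2 line
k_eff = 2 * (2 * math.pi / wavelength)   # two-photon effective wavevector, rad/m
g = 9.81                                 # m/s^2, local gravity
T = 0.1                                  # s, free-fall time between laser pulses

phi = k_eff * g * T ** 2                 # interferometer phase, radians
print(f"phase = {phi:.3e} rad")

# A change in g shifts the fringe pattern; the change in g that moves
# the output by one full fringe (2*pi of phase) is:
dg_per_fringe = 2 * math.pi / (k_eff * T ** 2)
print(f"delta-g per fringe = {dg_per_fringe:.2e} m/s^2")
```

The millions of radians of accumulated phase are why atom interferometers resolve tiny fractions of g, and why interrogating two vertically separated clouds with the same beam (so vibration noise cancels in the difference) matters so much in practice.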

At present, the gravimeter prototype is around one cubic meter, although the researchers hope that this could be shrunk down to make it more portable — and, therefore, versatile. According to Bongs, the first commercial prototype should be available in the next two years. Future versions could even be mounted on drones, which is something the team has already explored.

While this is not the only example of innovative technology we have come across that allows for the detection of information not readily accessible to the human eye, it certainly represents a significant step forward. Quantum technologies may not be mainstream just yet, but the gravimeter shows we are definitely moving toward a place in which robust quantum technologies are used in real-world environments.

3 Aug

Vizio SmartCast TV update lets users stream their shows with no app required


Why it matters to you

Vizio’s SmartCast TV update lets you access your favorite streaming apps using the included remote, rather than requiring you to cast from a mobile phone or tablet.

On Wednesday, Vizio SmartCast TV hit select models in Vizio’s 2017 TV lineup, an interface update that allows viewers to access their favorite streaming services straight from their remote control. Previously, users had to cast content to their Vizio TVs from an included tablet or their smartphone via the Vizio SmartCast Mobile app, a mobile control interface that our reviewers felt was clunky when we were testing the 2016 Vizio M-Series.

The return to a more classic remote-driven TV lineup for 2017 came with the news that Vizio was planning to launch this built-in Smart TV interface as an update, and buyers are likely very excited that it is finally here. That said, new Vizio SmartCast models still feature built-in Chromecast functionality (accessed via the Vizio SmartCast Mobile app), so those fond of casting content to their TV still have that option.

“The introduction of Vizio SmartCast TV is all about giving consumers additional options when it comes to accessing their favorite content,” Vizio’s Chief Technology Officer Matt McRae said in a press release. “We focused on developing a best-in-class smart platform for all users; from traditional TV fans who enjoy a dedicated remote control, to digital natives and second screen users who love using their mobile devices to search and control their viewing experience. 2017 is all about expanding SmartCast to give users multiple ways to stream the entertainment they love, the way they want it.”

One cool element of the new SmartCast TV interface is that it pairs up with the SmartCast Mobile app, meaning that viewers can pick a show from their remote control in the living room, and play and pause the show on their mobile device or tablet from any room in the house. SmartCast TV also lets users of multiple streaming apps search for content across multiple apps at once, making it easy to find that film or TV series you have been looking for, regardless of what service is hosting it.

The automatic internet update is currently rolling out to eligible 2017 Vizio SmartCast P-Series and M-Series models, but it will hit other models — including the 2017 SmartCast E-Series — later this summer. It is worth noting that some owners will need a new SmartCast remote, which features a new button for quick access to the SmartCast TV interface. The good news is that owners of eligible TVs can get that remote for free or for a nominal fee, via a special page on Vizio’s website.

More information about whether your Vizio TV will get the new functionality can be found on Vizio’s SmartCast TV website.

3 Aug

Graphene made out of wood could help solve the e-waste problem


Why it matters to you

So long e-waste? Rice University’s wood-based graphene could pave the way for biodegradable electronics.

When you think about electrical conductors, the first material that comes to mind probably isn’t wood. That could soon change, however, thanks to the pioneering work of scientists at Rice University, who have successfully made wood into an electrical conductor by transforming its surface into all-around wonder material graphene.

To do this, a team led by Rice chemist James Tour used an industrial laser to blacken a thin film pattern onto a block of pine. The patterned material is something called laser-induced graphene (LIG), a technique for creating flexible, patterned sheets of multilayer graphene without the need for hot furnaces and controlled environments. The technique was discovered at Rice in 2014, but was initially applied only to sheets of an inexpensive plastic called polyimide. This marks the first time it has been applied to wood. Pine works as a substitute for polyimide because of a similar mechanical structure, courtesy of an organic polymer called lignin.

The LIG process (or, in this case, pine laser-induced graphene, aka P-LIG) is carried out in an inert argon or hydrogen atmosphere. The lack of oxygen means the heat from the laser does not burn the pine, but instead transforms its surface into wrinkled flakes of graphene foam bound to the wood. Following experimentation, the researchers found that 70 percent laser power produced the highest-quality graphene.

Compared with polyimide, the advantage of turning wood into graphene is that wood is an abundant and renewable resource. Given the astonishing number of potential applications for graphene — from highlighting structural defects in buildings to creating new types of speakers to, yes, detecting cancer — this could prove to be a significant breakthrough.

For now, however, it seems the main application the Rice researchers are interested in relates to electronics. Specifically, they are hoping to harness the conductive properties of the pine laser-induced graphene to create supercapacitors for energy storage. With the massive amount of electronic waste that is produced every year, the idea of biodegradable, eco-friendly wooden electronics carries obvious benefits. When the project gets a little further down the line, someone needs to hook the Rice researchers up with these hardwood PC makers, stat!

A paper describing Rice University’s research was recently published in the academic journal Advanced Materials.


3 Aug

Apple returns to pro market, will be exclusive seller of $15K Red Raven camera


Why it matters to you

With this new partnership, Apple seems to be making a statement that it is still serious about the professional video market.

Red Digital Cinema has announced a partnership that will make Apple the exclusive seller of its Raven cinema camera. The Raven, which has been unavailable for the past few months, will be sold in a single, $15,000 kit configuration that gives users everything they need to get shooting right out of the box, including a monitor, memory, batteries, and even a Sigma 18-35mm f/1.8 lens. It also comes with a license key for Final Cut Pro X, Apple’s professional editing software, which retails for $300 on its own.

The $15,000 price is likely to give sticker shock to the average Apple customer, but the Raven is actually Red’s least expensive cinema camera. It is also Red’s most compact camera “brain,” weighing 3.5 pounds, yet it can still shoot 4.5K footage at up to 120 frames per second. Final Cut Pro has native support for Redcode RAW files, and the Raven can also shoot simultaneously to Apple’s ProRes format.

There was a time when Apple computers were the machines of choice for all manner of creative professionals and Final Cut Pro was the de facto standard for professional video editors. While the Mac has remained a prominent fixture in the creative industry, many professional users have criticized Apple for losing focus on the pro market in the wake of the iPhone and iPad (which make much more money for the company). The Mac Pro went years without an update, and Final Cut Pro X initially rubbed professional editors the wrong way with its iMovie-like interface and slimmed-down feature set. More recently, however, FCP X has gained many features, and with the powerful iMac Pro and secretive new Mac Pros on the way, an exclusive deal with Red leaves no doubt that Apple has returned to form, of sorts, with a renewed focus on video professionals.

Still, offering a single kit (not to mention only one camera) is a bit of a head-scratcher. It’s as if Apple wants to advertise its pro status without diving too deep into the world of high-end video gear. Apple is a brand built on simplicity, after all, something that Red has never been known for. The Raven kit is a solid option for advanced users looking for their first digital cinema camera, but it does not make much sense for those who already own lenses, accessories, or other editing software.

3 Aug

Instagram Stories overtakes Snapchat in daily users as well as time spent


Why it matters to you

Instagram may not have gotten there first but it appears to be growing faster than Snapchat and that will only draw more users.

Instagram only launched Stories last August and yet in April of this year, the company reported more than 200 million users of the feature, which was developed to take on Snapchat. While it may have started off as a “me too” play, now that a year has passed since Instagram copied Snapchat with Stories, the feature has become a “me first” component of the app. Instagram announced that 50 percent of businesses on the platform created a Story in the last month, and more importantly, people are now spending more time on Instagram than they are on Snapchat. So while Snapchat might have done it first, it looks like Instagram is doing it better.

Today, the average 25-and-under Instagram user spends 32 minutes a day on the app, while those over 25 spend 24 minutes browsing photos and videos on the Facebook-owned platform every day. There are now 250 million people using Instagram Stories every day, whereas Snapchat boasts just 166 million. What’s more, Snapchat’s monthly active user growth rate has seen a rather precipitous decline, falling from 17.2 percent per quarter to just 5 percent, and its stock price is following suit — when the company went public earlier this year, it was trading at $17 a share. Now, after this latest Instagram news, it has fallen to just $12.67, an all-time low.
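For scale, the gaps implied by those figures are easy to check. All of the numbers below are taken from the article itself; this is rough arithmetic, not independently sourced data.

```python
# Sanity-check the user-base and share-price figures quoted above
# (all numbers from the article; arithmetic only).
ig_daily = 250e6          # Instagram Stories daily users
snap_daily = 166e6        # Snapchat daily users
lead = ig_daily / snap_daily
print(f"Instagram Stories daily users: {lead:.2f}x Snapchat's")

ipo_price, latest_price = 17.00, 12.67   # Snap share price at IPO vs. now
drop = (ipo_price - latest_price) / ipo_price
print(f"Snap share-price decline since IPO: {drop:.1%}")   # roughly a quarter
```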

The news of Stories’ success is not necessarily surprising, but it is impressive, considering Snapchat claimed around 150 million users when it filed for its initial public offering in February. It took the startup five years to get to that point and growth reportedly slowed over the fourth quarter of 2016 — one would imagine, in no small part, thanks to its Facebook-owned competitor.

Instagram commemorated its 200 million user milestone by adding an assortment of new stickers and sticker-related features. Now selfies can be made into stickers, and users can decorate them with different frame styles. Stickers can also be pinned to three-dimensional objects in videos — a feature the app has borrowed from Snapchat. It works with text as well.

Another Snapchat-like feature is Instagram’s “recent” page, which puts your favorite stickers more easily within reach.

None of these features are unique to Instagram. Snapchat boasts a seemingly endless number of geostickers, for example, and allows you to make new stickers out of anything you can see (in addition to your face) through its scissors tool. The fact that Stories has been able to accomplish the reach that it has, and continue to grow despite being routinely late to the party, speaks to the power of Instagram’s install base — which was estimated at 400 million back in February.

Update: 50 percent of businesses created an Instagram Story last month as the feature continues to surpass Snapchat in popularity. 

3 Aug

Photo industry turnaround? Image sensors propel Sony to record profits


Why it matters to you

Photographers concerned over the industry’s decline can rest a little easier with Sony’s latest numbers, which show growth in image sensors largely influenced by smartphone photography.

As camera sales stabilize after a multiyear decline, the imaging sensors inside everything from cameras and smartphones to cars are propelling Sony to record profits. Sony’s first-quarter profits for 2017 are the best the company has ever reported for that quarter, according to a statement released on Tuesday, August 1, due in part to camera sensor sales.

Imaging sensors were responsible for about 55.4 billion yen of the company’s total 157 billion yen ($1.43 billion) profit in the first quarter, or over a third. Sony is now estimated to hold about 45 percent of the image sensor market, according to DigiTimes.
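The “over a third” characterization checks out from the yen amounts reported; a quick arithmetic sketch using the article’s own figures:

```python
# Image sensors' share of Sony's record first-quarter profit
# (yen figures from the article; rough arithmetic only).
sensor_profit_yen = 55.4e9
total_profit_yen = 157e9

share = sensor_profit_yen / total_profit_yen
print(f"sensor share of quarterly profit: {share:.1%}")  # a bit over a third
```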

The record numbers are important to note because just five years ago, Sony was struggling before making the decision to focus on three areas, including image sensors alongside gaming and mobile. According to the latest numbers, that focus appears to be working for the company.

Image sensors account for around a third of the company’s profits — but these numbers don’t count just the sensors inside Sony’s popular cameras, like the a7 series and the new a9. Sony attributes much of that growth to the sensors powering smartphone photography, and in particular, dual-camera smartphones.

The numbers are also good news for the industry after an earthquake last year near a major manufacturing center in Japan disrupted several imaging companies, delaying camera releases, as the factories that produced those cameras became inoperable due to the catastrophe. Sony was one of several companies affected and the company says its latest numbers show a healthy recovery from the earthquake.

Sony, however, is careful not to rely too heavily on these new record profits. As more smartphone companies, particularly those from brands based in China, move toward cheaper hardware, the company expects image sensor sales to decline by around three percent.

Sony researchers are responsible for a number of innovations affecting digital sensors, including the stacked sensor that gives the Sony a9 its headlining feature: speed. The company also recently shared the development of a two-millimeter sensor that could bring cameras to smartwatches, a 1,000 fps sensor for robots and autonomous cars, and a prototype sensor that features a built-in polarizer.

If Sony continues in the same pattern, the company could post its best profit margin in about 20 years.

3 Aug

Watch this AI learn to walk through trial and error, just like a child


Why it matters to you

The ability to learn how to walk in a variety of environments could be crucial for future robots.

Do you remember the adorable scene in Bambi where Thumper the rabbit teaches Disney’s lovable deer how to walk? Well, computer scientists from the University of British Columbia and National University of Singapore just did that with a bipedal computer model (read: essentially a pair of animated legs) — only instead of a cute cartoon rabbit, the teacher is a deep reinforcement learning artificial intelligence algorithm.

Called DeepLoco, the work was shown off this week at SIGGRAPH 2017, probably the world’s leading computer graphics conference. While CGI has been capable of mimicking realistic walking motions for years, what makes this work so nifty is that it uses reinforcement learning to optimize a solution.

Reinforcement learning, for those unfamiliar with it, is a school of machine learning in which software agents learn to take actions that will maximize their reward. Google’s DeepMind, for example, has used reinforcement learning to teach an AI to play classic video games by working out how to achieve high scores.
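The reward-maximization loop is easy to see in miniature. The toy below is plain tabular Q-learning on a one-dimensional “walk to the goal” task; it illustrates the general idea only — DeepLoco itself uses hierarchical deep reinforcement learning on a simulated biped, and every state, action, and number here is an assumption made for the sketch.

```python
import random

# Minimal tabular Q-learning on a 1-D "walk to the goal" task.
# A toy illustration of reward-maximizing agents, not the DeepLoco
# algorithm; all names and hyperparameters are assumptions.
random.seed(0)
N = 6                  # states 0..5; reaching state 5 ends the episode
ACTIONS = (-1, +1)     # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for _ in range(2000):
    s = 0
    while s != N - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)          # clamp to the track
        r = 1.0 if s2 == N - 1 else -0.01       # reward for reaching the goal
        # standard Q-learning update toward the bootstrapped target
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right from every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)
```

The agent is never told “go right”; it discovers that policy because rightward steps maximize long-run reward — the same principle, scaled up enormously, that lets DeepLoco discover balancing and walking.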

In the case of DeepLoco, the reward is getting from Point A to Point B in the most efficient manner possible, all while being challenged by everything from navigating narrow cliffs to surviving bombardments of objects. As it does this, it learns from its environment in order to discover how to balance, walk, and even dribble a soccer ball. It’s like watching your kid grow up — except that, you know, in this case, your kid is a pair of disembodied AI legs powered by Skynet!

Nonetheless, it is another intriguing example of the power of reinforcement learning. While the technology could be applied in any number of ways (such as by animators wanting to more easily animate giant computer-generated crowd scenes in movies), its most game-changing use would almost certainly be in robotics. Applied to some of the cutting-edge walking robots we have seen from companies like Boston Dynamics, DeepLoco could help develop robots that are able to more intuitively move through a range of environments.

A paper describing the work, titled “DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning,” was published in the journal ACM Transactions on Graphics.


3 Aug

Astronomy Photographer of the Year contest attracts out-of-this-world images


Why it matters to you

Whether you’re a photographer or not, the shortlist from the Insight Astronomy Photographer of the Year contest will take your breath away.

The Insight Astronomy Photographer of the Year contest may attract thousands of entries every year, but the 2017 event brought in several shots judges had never seen during the event’s nine-year history. The Royal Observatory Greenwich recently announced the shortlist for the competition, including the first images of asteroids and Uranus ever submitted. The final winner will be announced on September 14.

Along with selecting the photographer of the year, the contest also awards prizes in nine different categories, as well as two special prizes. The 2017 categories include skyscapes, aurorae, people and space, our sun, our moon, planets, comets and asteroids, stars and nebulae, galaxies, and the Young Astronomy Photographer of the Year for participants under 16.

The Sir Patrick Moore Prize for Best Newcomer will go to a photographer who only took up astrophotography within the past year. A separate prize will also recognize shots taken with computer-controlled telescopes.

This year’s contest attracted more than 3,800 images from amateurs and pros in 91 different countries. Shortlisted entries range from the supermoon and the Northern Lights to telescopic images of the Jellyfish Nebula.

Judges for the event include Rebecca Roth, NASA Goddard Space Flight Center; Jon Culshaw, comedian and amateur astronomer; Chris Branley, editor of BBC Sky at Night Magazine; and Dr. Marek Kukula, Royal Observatory public astronomer.

The top photographer will receive 10,000 British pounds. Category and special prize winners will also receive cash prizes. The winning shots will be included in a gallery display at the observatory’s Astronomy Centre.

Both the shortlisted photographers and the final winners will be recognized in the contest’s official photo book, which will be available beginning in November. The book includes more than 140 images, along with the technical details and judges’ comments, and will be available from the Royal Observatory Greenwich’s online store.

The Royal Observatory Greenwich is located at the home of Greenwich Mean Time and the Prime Meridian, and has long played a part in important astronomical discoveries. The observatory is partnering with Insight Investment and BBC Sky at Night Magazine for the contest.

Additional images from the shortlisted winners can be found on the contest’s website.