How to upload 360-degree video to Facebook, YouTube, and Vimeo
For consumers, it’s becoming easier and more affordable than ever to capture 360-degree video. Thanks to pocket-sized devices like Nikon’s KeyMission 360 and the upcoming second-generation Samsung Gear 360, a few hundred dollars will get you 4K, 360-degree video. Capturing the video is only half of the equation, however, and arguably the less important half. After all, what’s the point of capturing 360-degree footage if no one is able to watch it?
More: 5 cool new 360-degree cameras that will turn your head
Thankfully, three major platforms currently support 360-degree video: YouTube, Facebook, and Vimeo. We’re going to explain how to share your 360-degree video with the world by uploading it to any of the three, where viewers can enjoy it on a computer, mobile device, or even a virtual reality headset. For the sake of brevity, we’re going to assume you’ve already captured and edited the 360-degree video you want to upload. (Twitter’s Periscope supports live broadcasting of 360-degree video through a compatible camera, as do YouTube and Facebook. It’s a nascent technology that we will cover in a future article.)
YouTube
Uploading 360-degree video to YouTube is a bit more convoluted than uploading to Facebook (see below), as there are a few extra steps involved. Most notably, YouTube won’t recognize a file as 360-degree video unless the 360-degree metadata is already embedded in it. If your camera doesn’t add this information automatically, you’ll need to download YouTube’s Spatial Media Metadata Injector app, which is available for both MacOS and Windows.
Once the app is downloaded and installed, launch it and select the video file you want to add the metadata to. A dialog box will appear; check the Spherical video box and click Save As. YouTube says to make sure you don’t select the 3D Top-bottom checkbox, otherwise your video won’t be formatted as intended. After clicking Save As, give your video a name and save it. The new file, complete with the required metadata, will be saved in the same location as the original.
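For the curious, what the injector actually does under the hood is write a small block of XML metadata into the file’s MP4 header. Here’s a rough Python sketch of that metadata, based on Google’s published Spherical Video V1 spec; the helper function is our own illustration, and actually writing the XML into the MP4 container is what the injector app (or Google’s open-source spatial-media tool) handles for you.

```python
# Sketch of the Spherical Video V1 metadata the injector embeds.
# This builds the XML block only; the injector app takes care of
# writing it into the MP4 file itself.

SPHERICAL_XML = (
    '<rdf:SphericalVideo '
    'xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" '
    'xmlns:GSpherical="http://ns.google.com/videos/1.0/spherical/">'
    '<GSpherical:Spherical>true</GSpherical:Spherical>'
    '<GSpherical:Stitched>true</GSpherical:Stitched>'
    '<GSpherical:StitchingSoftware>{software}</GSpherical:StitchingSoftware>'
    '<GSpherical:ProjectionType>equirectangular</GSpherical:ProjectionType>'
    '</rdf:SphericalVideo>'
)

def spherical_metadata(software="Spherical Metadata Tool"):
    """Return metadata XML for a monoscopic (non-3D) spherical video."""
    return SPHERICAL_XML.format(software=software)

print(spherical_metadata())
```

Note that there is no stereo/3D tag here, which matches the advice above: leave the 3D Top-bottom option alone for a standard 360-degree video.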

From here, the process for uploading your 360-degree video to YouTube is no different than any other video. Make your way to the YouTube homepage, click the Upload button in the upper-right corner, choose your newly-created video file, and include the title and tags you see fit.
It can take upward of an hour for your 360-degree video to be formatted, so consider uploading it ahead of time to ensure everything is in working order before the video goes live.

It’s also worth noting that 360-degree video on YouTube is supported only within Chrome, Opera, Firefox, and Internet Explorer. So, if you’re a Safari user, consider downloading Chrome or Firefox for MacOS. If you plan on viewing the video on a phone or tablet, make sure you’ve downloaded the latest update for the YouTube app on your respective device.
Panasonic’s latest super-zoom camera also shoots selfies
Point-and-shoot cameras have had a bad rep lately, since they’re neither as convenient as a smartphone nor as powerful as a mirrorless cam or DSLR. However, there are a few areas where they still excel: they can include ridiculously long-zoom lenses in small bodies, and take selfies that would be difficult or impossible to manage on your phone. And Panasonic, at least, is determined to make the most of those advantages. The company has just introduced the Lumix ZS70, a successor to the ZS60 that crams more into its compact frame. It now has a 20.3-megapixel sensor (up from 18MP), an even longer-ranged 24-720mm equivalent lens, and — most importantly — a flip-out 3-inch touchscreen. If you’ve ever wanted to take a high-quality selfie (there’s even a new 4K selfie mode) without sacrificing your ability to shoot far-off subjects, you might want to consider this model.
Otherwise, you’re looking at some fairly familiar hardware… not that this is entirely a bad thing. It can still shoot 4K video at 30 frames per second, and take 8MP photos at a similar speed. You’ll also find RAW capture, a very fast autofocus that promises a lock-on in about 0.1 seconds, and a slew of modern camera tricks such as after-shot focus selection, a beauty mode and creative filters. WiFi helps you share photos to your phone for those all-important Instagram posts.
As is often the case with super-zoom cameras, you’ll be paying a fair amount. Panasonic will ask $450 when the ZS70 arrives at the end of May. That’s competitive for the class, but consider this: Canon’s upcoming PowerShot SX730 HS will offer a similarly sharp sensor, 40x zoom and a swiveling LCD for $50 less. And if you’re more interested in image quality than distance, you could spring for either the PowerShot G9 X Mark II or an older camera like Sony’s original RX100. You might be happy with the ZS70 — just know that it’s not the only game in town.
Source: Panasonic
Tumblr’s new video app essentially lets you watch YouTube with friends
Tumblr has a new app, and strangely, it’s all about YouTube videos.
The Yahoo-owned company was apparently interested in catering to YouTube users, because it has launched Cabana, an app that allows you to watch YouTube videos with up to six friends simultaneously. The idea is that, instead of having to pass your phone or tilt your laptop toward a friend so that they can watch an online video with you, you can now watch it with friends who aren’t even in the same room.
- Tumblr instant messaging: Here’s how the threaded conversations work
- YouTube launches live mobile streaming: Here’s how it works
Strangely, at launch, Cabana isn’t linked to Tumblr. So, to find all your friends, you must look up their phone numbers. The app launches users into their own broadcast and sends an alert to their friends via push notifications. You can jump in and out of video chats, watch videos with friends, or just watch videos of people watching videos. The app only integrates with YouTube for now.
It’d be neat to see other services added in the future. We could see ourselves using the app to watch a Netflix show with family from across the world, but who knows if Netflix would even allow that. The ability to record video chats would also be a nice addition, though that raises questions about copyright and whatnot. Anyway, if Cabana interests you and your friends, it’s now available for free in Apple’s App Store.
An Android version is due to release in a few weeks.
Source: Tumblr
HBO Go makes it easier to binge watch on your phone
You’d think that HBO would make marathon viewing a priority in all its apps when its shows practically beg for it, but no. Until now, the HBO Go mobile app has asked you to play episodes one at a time. At last, though, the network has seen the light. Both the Android and iOS HBO Go apps have introduced the binge watching features you take for granted with Netflix and other streaming services. The software will offer the next unseen episode for a given show (or ask you to resume a paused show), and it’ll automatically play the next installment without asking for your input.
To say this is overdue would be an understatement. The stand-alone HBO Now service has had comparable functionality for a while, to say nothing of rival services. Still, it’s a big deal if you’re a conventional TV subscriber who wants to watch HBO Go away from the TV — you won’t have to touch your phone while you catch up on Game of Thrones.
Via: The Verge
Source: App Store, Google Play
Facebook Spaces finally delivers on social VR
Three years ago, when Facebook bought Oculus for $2 billion, many scratched their heads in befuddlement. Social networks and virtual reality seem like strange bedfellows; one is about connecting you to the world, while the other appears to do the opposite. But CEO Mark Zuckerberg envisioned a world where VR would be a place for communication, not isolation. And, three years later, that vision is much closer to reality. Facebook Spaces is the company’s answer to social VR, and it is, as I found, surprisingly compelling.
I had the chance to try out Spaces for myself a few hours after it was announced at F8. As soon as I placed the Oculus Rift headset on my head and took the Touch controllers in my hands, I was transported to what looked like a beautiful park with cherry blossom trees. On my right was my virtual helper, Justin, who appeared in the form of an animated cartoon avatar. In front of me was a tableau of sorts, with a little dashboard.
Justin told me to select Appearance, and voila, I could customize the appearance of an animated cartoon avatar of myself. You could design one from scratch by customizing individual features like your nose or your hair, but I decided to just have one automatically generated for me. I grabbed one of my profile photos, which were already on display, and Spaces was smart enough to translate it into a cartoon version of me. From there, I also changed the color of my glasses and my shirt, which you can do in the Appearance tab too.

What I noticed almost immediately is how real it seemed, which is really weird considering I was speaking to an animated avatar. Mike Booth, the product manager leading the Spaces development team, says that’s because the Rift and the Touch controllers create a little motion-capture studio. “You get your actual body language,” he said. “It captures head movements, even hand gesticulation.”
What’s more, Spaces also infers what your eyes are looking at, creating what appears to be eye contact, which is integral to face-to-face communication. One reason Spaces can do this so well is that the avatars are stylized and cartoon-like. “They’re not hyper-realistic, where you can find every little flaw,” says Booth.
The same goes with mouth movement. It actually listens to the voice coming through the Rift microphone and then it tries to guess what mouth shapes you’re making. It’s not always accurate — it sort of snaps the mouth around like a Wallace & Gromit cartoon — but that’s entirely on purpose. “We’re not trying to be super photorealistic,” says Booth. “We just want to show that you’re talking.”

There are other fun things you can do with your avatar, too. If you point both thumbsticks up, your avatar will laugh. Point them out, and you’ll smile. Place both controllers on your face and you’ll make an “OH” face. Turn them outward, and your avatar will shrug and look confused. Gestures are to VR what emojis are to text. “You have to invoke them,” said Booth. “They’re not supposed to be accidental.”
Next, Justin showed me how to share different kinds of media. I could share photos and videos from my own albums, from my News Feed, or from anywhere on the web I found interesting. Once selected, I could resize them any way I wanted and have them displayed in the background. What’s especially neat is that when you share 360-degree photos and videos, you can sort of throw them into the middle of the tableau, and the image will completely envelop the world. It sounds silly, perhaps, but when Justin shared a 360-degree video from a CNN documentary on Iceland, I almost felt like we were right there, taking a tour of the glaciers.
You also have the option of using a marker to draw silly doodles, turning them into three-dimensional art. You can toss them around, make duplicates of them, or just play silly games with them. And of course, I couldn’t not take a selfie. Yes, there’s a virtual selfie stick, and yes, it works just like it sounds: grab the selfie stick, frame the scene, and snap the shot. From there, you can share it with your friends just like any other photo.

Last but not least, Spaces is also tied in with Messenger, so you can initiate video calls right from VR. Your friend’s video chat screen shows up in a little floating square (no, they don’t need a VR headset to participate), and only you can hear and see what he or she says; none of your other VR buddies can eavesdrop on the conversation. They can still hear what you say, of course, but they can’t hear your Messenger friend. Facebook tells me there’s no way to enable that just yet, but the functionality might come in the future.
As cool as all of that is, though, what really sold me on Spaces was the animated avatars. They also happen to be one of the things that were hardest for Facebook to get right. “The biggest challenge was how the avatars were going to look,” said Booth. “You know the uncanny valley? Well, the uncanny valley in VR is a lot wider. Anything that’s attached to your head is going to have biological motions; it’s going to seem alive.” In other words, anything too lifelike would look weird.
“Finding the right balance of being charming and recognizably human, without being too realistic and creepy, was harder than expected,” said Booth. “One of our early experiments was to have the lips smoothly blend with the shapes, thinking it would be realistic. And it was just way too creepy.” There were also experiments with non-human avatars, like you’d see in Zootopia, but those were nixed as well. “Facebook is about authentic identity, which is fundamentally about humans.” Still, that doesn’t mean costumes won’t be an option later on.

“The core of Spaces, the reason it exists, is so you can feel like you’re in person with your friends,” said Booth. “And then it’s having interesting things you can do with your friends. It’s not a chatroom where you’re just talking.”
Of course, Facebook Spaces isn’t the only social VR app out there. Oculus even has its own version called Oculus Rooms. The difference between the two, Booth says, is that Oculus’ version is made just for Oculus hardware and is made to drive that particular platform. Facebook Spaces, on the other hand, is supposed to be much more widespread. That’s why even though it’s an Oculus exclusive right now, Booth wants it to be on all VR hardware. Yes, even the Vive.
When I asked what Booth would say to skeptics of social VR, he said he doesn’t quite know. But he thinks it’s social that will finally persuade skeptics to give VR a try. “There are people that are looking at VR, thinking it’s not for me, because they think it’s all about gaming,” he said. “But what we’re trying to build is for everyone.”
“We want to bring you and your friends to VR,” said Booth. “I hope it’ll make more people look at VR as something that people will actually want.”
Ay caramba! Scientists think they’ve found Bart (and hidden ice) on asteroid
Why it matters to you
By studying these formations on celestial bodies, scientists can uncover more about geologic processes in our solar system.
Scientists have discovered a variety of landslides on the Texas-sized asteroid Ceres thanks to images captured by NASA’s Dawn spacecraft. Among the geological flow features was a familiar face.
Led by Dawn researcher Britney Schmidt, the study has categorized three types of landslides, each with unique characteristics. Large and round landslides fall into Type I, identifiable by thick trunks and toe-like formations at the end. The most common landslides, Type II, are thinner, shallower, and longer than Type I, resembling avalanches found on Earth and Mars. Finally, Type III landslides occur at low latitudes and in the asteroid’s impact craters, seeming to take shape due to melting ice.
“When we first started seeing all the landslides, it was well before we had crater names and good maps for everything,” Schmidt told Digital Trends. “So Heather Chilton, my Ph.D. student, gave all the big ones nicknames so that we could discuss them a bit more easily among the team.”

NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
Chilton noticed that one of the landslides was adorned by two impact craters side by side and jagged edges at the top. A rough outline revealed a familiar character: Bart Simpson. “[The name] stuck,” said Schmidt.
Ceres contains more landslides than researchers expected; they were detected in 20 to 30 percent of craters wider than six miles. Features like these, made of mixed rock and ice, have previously been seen only on Earth and Mars. By analyzing the formation and distribution of the landslides, Schmidt and her team estimate that the asteroid’s upper layer may contain as much as 50 percent ice by volume.
“By looking at their shapes, we can learn something about how the landslides move, which helps us understand what the materials inside the landslide are made out of, and in this case, suggests that ice is involved in how the landslides are moving,” Schmidt said. “While they don’t directly tell us anything about Earth landslides, it gives us one more place to look to try to understand how this fundamental geologic process occurs across the solar system.”
Graphene-based coating that changes color as it fractures is ideal for bridges
Why it matters to you
A graphene coating that can change color when it cracks isn’t just cool materials science — it could actually save lives.
All-round miracle material graphene has yet another transformative use case to add to the collection: Quickly and easily revealing when a man-made structure is in need of repairs.
That’s thanks to researchers at Germany’s Leibniz Institute of Polymer Research, who have developed a smart graphene coating which shows breaks and fractures by changing color.
“Extensive research efforts all over the world focus on single-layer graphene, and graphene enables a wide array of functional coatings and paints for many possible applications,” Shang-Lin Gao, a scientist in Leibniz Institute’s Department of Composite Materials, told Digital Trends. “However, our work only considers multilayer graphene nano-platelets with a widely distributed size and thickness. Variable structural coloration is achieved for the first time by overlapping these graphene nanoplatelets. The color changing is sensitive to nanoscale mechanical deformation. It provides the possibility for the early warning of microcracks prior to a material’s failure.”
Inspired by the way that fish scales reflect light, the scientists involved in the project designed a coating which amplifies particular wavelengths of light but dulls others. The graphene flakes are placed at certain angles so that, if compromised in some way, they’ll bounce back red, yellow, and green light, while noncompromised areas won’t. Cleverly, the color of the light changes according to the severity of an area’s stress, so structures could conceivably be color-coded to show how severe a particular patch of damage is.
“The potential industrial applications for this graphene coating could be not only structural materials [on] vehicles, ships [etc.] for checking nanoscale deformation, but also smart house, textiles for fashion, [and more,]” Gao continued.
At present, the work is still in its early stages. Gao noted that much research and investment is still needed to solve challenges around scaling up manufacturing, parameter control, and more before this becomes a reality.
However, if these are solved effectively, color-changing graphene coatings could potentially be an invaluable tool in the arsenal of designers, structural engineers, and more.
Eizo’s 31.1-inch DCI-4K display is perfect for video and film professionals
Why it matters to you
If you’re a video and film professional who needs DCI-4K support and the best HDR around, then Eizo’s newest monitor is for you.
The PC monitor market has produced some quality products lately, with a number of new technologies like 4K resolutions and high dynamic range (HDR) grabbing much of the limelight. That does not mean there is no room for improvement, however, as merely including a buzzword technology does not guarantee the best image quality.
Eizo would likely agree with that, given the company’s focus on producing high-performance displays. It has taken things to a new level, however, with the new ColorEdge Prominence CG3145 monitor that offers the qualities needed for a truly professional post-production workflow.
First up on the CG3145’s spec sheet is its DCI-4K support, which delivers a 4,096 x 2,160 resolution on a 31.1-inch panel that meets the needs of film and video professionals. In addition, Eizo upped the ante with brightness hitting 1,000 nits, making this the world’s first LCD monitor to offer a 1,000,000:1 contrast ratio and deep, true blacks.

Next up is the monitor’s HDR support, with Eizo avoiding the Auto Brightness Limiter (ABL) and local dimming methods used by other HDR displays and creating a “true” HDR experience. ABL is used in OLED monitors to extend their lifespans, but it can limit a monitor’s ability to show light scenes while maintaining full tonal range. Local dimming adjusts brightness by display sections, which can result in a “halo” effect when an on-screen object falls outside the backlit area.
Further, the ColorEdge Prominence CG3145 utilizes two gamma curves to create HDR video: hybrid log-gamma (HLG) and perceptual quantization (PQ). HLG is optimized for live television broadcasting, while PQ matches human vision in terms of color and light perception. Between the two gamma curves, the CG3145 is designed for the best film, streaming, and other video content experiences.
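To give a sense of what PQ actually does, here’s a rough Python sketch of the PQ transfer function, the EOTF defined in SMPTE ST 2084, which is the standard behind PQ. The constants come from that spec; the function maps a normalized signal value to an absolute luminance, topping out at 10,000 nits.

```python
# Sketch of the SMPTE ST 2084 (PQ) EOTF: maps a normalized signal
# value in [0, 1] to absolute luminance in nits (cd/m^2).
# Constants are taken from the ST 2084 specification.

M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.84
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.85
C3 = 2392 / 4096 * 32      # ~18.69

def pq_eotf(signal: float) -> float:
    """Decode a PQ-encoded signal (0.0 to 1.0) to luminance in nits."""
    n = signal ** (1 / M2)
    return 10000 * (max(n - C1, 0) / (C2 - C3 * n)) ** (1 / M1)

# Full-scale signal hits the 10,000-nit ceiling, while half-scale is
# only around 92 nits: PQ spends most of its code values on darker
# tones, roughly matching human contrast sensitivity.
print(pq_eotf(1.0))   # 10000.0
print(pq_eotf(0.0))   # 0.0
```

This is why a 1,000-nit, 1,000,000:1 panel matters for PQ content: the curve describes absolute luminance levels, so a display that can’t reach them has to compromise somewhere.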
Other specifications include a wide color gamut with support for 98 percent of the DCI-P3 color space. Smooth gradations are provided by a 10-bit display driven by a 24-bit look-up table. The included light shielding helps video professionals isolate their content during production. Finally, the CG3145 supports a wide range of video formats via HDMI (10-bit 4:2:2 at 50/60p) and DisplayPort (10-bit 4:4:4 at 50/60p).
Eizo has not yet released pricing for the ColorEdge Prominence CG3145, although availability is set for late 2017. Anyone interested in seeing the monitor in person can visit Eizo’s booth at the National Association of Broadcasters Show 2017 in Las Vegas from April 24 to 27.
Latest Windows Insider update targets battery hogs with ‘Power Throttling’
Why it matters to you
The Windows Insider Program acts as a test bed for new features, so it’s important to keep an eye on what Microsoft has in the works.
The Creators Update is live, but work continues behind the scenes, with Windows Insiders getting a taste of some brand-new features on the horizon. Today’s Insider build includes the usual assortment of bug fixes, and just might extend your laptop’s battery life.
Insiders installing Build 16176 can expect some slight improvements in overall battery life, thanks to Microsoft’s new “Power Throttling” feature. Designed to make background applications more efficient, Power Throttling can reportedly decrease your CPU power consumption by about 11 percent in strenuous use cases.
“To give great performance to the apps you’re using, while at the same time power throttling background work, we built a sophisticated detection system into Windows,” said Bill Karagounis, director of program management for the Windows Insider Program.
Windows will automatically detect which applications are most important to you — foreground applications and the like — and start throttling the power consumption of other, less important processes. Power Throttling works as an add-on to the existing battery saver settings, and users can control it through the Power Slider, scaling throttling up and down depending on their workload.
Aside from the Power Slider, users can also control how Power Throttling works through the Battery Settings menu. From there, it’s easy to exempt certain applications if Power Throttling is misbehaving or throttling an important application’s CPU usage.
It’s important to note that Power Throttling only works on systems running processors with Intel’s Speed Shift technology — which is available in sixth-generation Intel Core processors and beyond. Microsoft has stated, however, that it has plans to roll out support to other processors in the future.
Microsoft also set about fixing a few major bugs. For instance, some apps and games were crashing unexpectedly due to a misconfigured advertising ID introduced in a previous build; Microsoft has reportedly fixed the issue.
For a full list of all the bug fixes and known issues in this latest Insider build, check out the Microsoft blog here.
New technology uses light instead of electricity to move information quickly
Why it matters to you
A material that enables more power-efficient and faster light-based memory makes all sorts of electronic devices more potent.
The hunt for new and improved technologies never ceases, which is a good thing for the future of computing. As long as our hunger for faster and more powerful PCs continues, researchers appear to be up to the task of delivering the advancements we demand.
One area where work continues unabated is in the quest for new and faster memory, where mundane electronic technologies remain at the heart of even the fastest random-access memory (RAM). Researchers at the University of Victoria in British Columbia, Canada, hope to change all of that with their development of a new material that could make RAM perform faster and more efficiently.

The material is dubbed light-induced magnetoresistive RAM (LI-RAM), and it essentially “allows computer chips to exist at a molecular level,” as the university’s blog describes it. Chemist Natia Frank is behind the effort, which aims to reduce the power consumed and heat produced by modern PC processors — to break through the “power wall,” as it is called.
LI-RAM utilizes 10 percent less power than current RAM, produces virtually no heat, and is more durable. On top of all that, it is also faster. What makes LI-RAM special is that it uses light rather than electricity to move information around a system.
As Frank puts it, “The material in LI-RAM has the unusual quality of rapidly changing magnetic properties when hit with green light.” Information is processed and stored on single molecules, making once hypothetical “universal memory” technology possible.
According to the researchers, about 10 percent of all electricity is consumed by information and communications technology, while discarded e-waste amounts to 3 million tons of hazardous materials globally. LI-RAM would help alleviate some of those concerns by using less power and lasting longer.
Researchers plan to use the LI-RAM for more than just mobile phones, PCs, and consumer electronics. As Frank explains, “Potentially, this material could have other uses in medical imaging, solar cells and a range of nanotechnologies. This is just the beginning.” It’s anticipated that the technology could make its way into consumer products within the next decade and it is already in the hands of international electronics manufacturers with just that goal in mind.