
13
May

A smart home mega sensor can track what goes on in a room


Creating a smart home currently requires either linking every connected device one by one or adding sensor tags to old appliances to make a cohesive IoT network, but there might be an easier way. Researchers at Carnegie Mellon have developed a concept for a hub that, when plugged into an electrical outlet, tracks ambient environmental data — essentially becoming a sensor that tracks the whole space. With this in hand, savvy programmers can use it to trigger their own connected-home routines.

The researchers introduced their sensor nexus — dubbed Synthetic Sensors — this week at ACM CHI, the human-computer interaction conference. As the video demonstrates, just plug it into a USB wall port and it automatically collects information about its surroundings, uploading it to a cloud back-end over WiFi.

Machine learning on the device parses results into recognizable events, like recognizing a particular sound pattern as “dishwasher is running” — making them “synthetic” sensors. Folks can use them as digital triggers for other IoT behaviors. For example, one could use “left faucet on” to activate a room’s left paper towel dispenser — and automatically schedule a restock when its supply runs low.
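As a loose illustration of that trigger pattern, here is a minimal event-dispatcher sketch: recognized events are mapped to callbacks that fire connected-home routines. The event names and the dispatcher API are hypothetical, not Synthetic Sensors' actual interface.

```python
# Hypothetical sketch: map synthetic-sensor events to home-automation
# routines. Nothing here reflects the real Synthetic Sensors API.
class EventDispatcher:
    def __init__(self):
        self._handlers = {}

    def on(self, event_name, handler):
        """Register a routine to run when the hub recognizes an event."""
        self._handlers.setdefault(event_name, []).append(handler)

    def fire(self, event_name):
        """Invoked when the classifier labels a sound/vibration pattern."""
        return [handler() for handler in self._handlers.get(event_name, [])]


dispatcher = EventDispatcher()
dispatcher.on("left faucet on", lambda: "left towel dispenser activated")
```

A real integration would presumably subscribe to the hub's cloud back end rather than fire events locally, but the mapping from recognized event to routine is the same idea.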

There’s one sensor missing from the device’s suite, though: a camera. Its creators are sensitive to privacy issues, which is also why raw environmental data isn’t uploaded to the cloud — just the analyzed results. The Synthetic Sensor is still in a prototype phase, but it’s a promising replacement for the jumble of individual tags needed to hook up old appliances or proprietary smart devices.

Source: Wired

13
May

12 countries hit in massive cyber-heist


England’s healthcare system came under a withering cyberattack Friday morning, with “at least 25” hospitals across the country falling prey to ransomware that locked doctors and employees out of critical systems and networks. The UK government now reports that this is not a (relatively) isolated attack but rather a single front in a massive regionwide digital assault.

#nhscyberattack pic.twitter.com/SovgQejl3X

— gigi.h (@fendifille) May 12, 2017

The attack has impacted hospitals and transportation infrastructure across Europe, Russia and Asia. Britain and 11 other countries, including Turkey, Vietnam, Spain, Portugal, the Philippines, Japan and Russia, have all been hit with the same ransomware program, a variant of the WannaCry virus, displaying the same ransom note and demanding $300 for the encryption key.

The virus’s infection vector appears to be a known vulnerability, originally discovered and exploited by the National Security Agency. That information was subsequently leaked by the hacking group known as the Shadow Brokers, which has been dumping its cache of purloined NSA hacking tools onto the internet since last year. The virus was spread via email as a compressed file attachment so, like last week’s Google Docs snafu, make sure you confirm that your email’s attachments are legit before clicking on them.
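In that spirit, a minimal and purely illustrative filter might flag attachments with executable extensions before anyone opens them. The extension list below is an assumption for the sketch, not an exhaustive or authoritative rule.

```python
# Illustrative only: flag common executable attachment extensions,
# including double extensions like "invoice.pdf.exe".
RISKY_EXTENSIONS = (".exe", ".scr", ".js", ".vbs", ".bat", ".jar")

def is_suspicious(filename):
    """Return True if the attachment name ends in a risky extension."""
    return filename.lower().endswith(RISKY_EXTENSIONS)
```

Real mail gateways do far more (content inspection, sandboxing, signature checks), but extension checks like this are a common first line of defense.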

Source: NY Times

13
May

Windows 10 will seamlessly run legacy apps on ARM


As we learned last month, ARM-powered Windows 10 devices should start hitting the market by the end of 2017. Unlike previous mobile-friendly versions of Windows, though, Microsoft is working hard to make sure the ARM release will be able to properly support full-fledged desktop apps, rather than the mishmash of apps that showed up in Windows RT and on devices like the Surface 2. At the Build 2017 conference this week, Microsoft showed off the new seamless experience by downloading, installing and running x86 Win32 applications on an ARM machine.

For end users and app developers, there’s effectively no difference between an Intel-based machine and one with a Snapdragon processor under the hood. As PC Magazine notes, the ARM build of Windows 10 works its magic using a built-in emulator that translates instructions in real time. Those translations are also cached, so Win32 apps should get a performance boost over time. The setup also means that users with ARM-based Windows 10 machines won’t be restricted to Windows Store apps, so they’ll get a bit more variety than even the limited Windows 10 S platform. If manufacturers are able to hit the right price point when the devices debut later this year, an ARM-based Windows machine could even become a more attractive low-cost alternative to Chromebooks and tablets.
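The caching idea can be sketched abstractly: translate a block of code once on first encounter, then serve the cached result on every later execution. The toy "translator" below is a stand-in; Microsoft's actual x86-to-ARM emulator is of course far more involved.

```python
class TranslationCache:
    """Toy model of cached binary translation: slow first run, fast reuse."""

    def __init__(self, translate_fn):
        self._translate = translate_fn   # the expensive translation step
        self._cache = {}
        self.misses = 0                  # counts slow-path translations

    def run(self, block):
        if block not in self._cache:
            self.misses += 1
            self._cache[block] = self._translate(block)  # translate once
        return self._cache[block]        # served from cache on every reuse


# Stand-in "translator" for demonstration purposes only.
emulator = TranslationCache(lambda block: block.upper())
```

Repeatedly running the same block only pays the translation cost once, which is why cached apps should feel faster the second time around.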

Click here to catch up on the latest news from Microsoft Build 2017.

Via: PC Magazine

13
May

Google expedites Android updates with Project Treble


We’re about a week away from Google probably revealing the next version of its mobile OS, but as most Android users already know, actually receiving an update might take forever. The company is aware of this, and has unveiled a new system called Project Treble that could eliminate some major delays in pushing updates to existing devices. Project Treble will come to all new devices launched with Android O and beyond, and is already running on the developer preview for Pixel phones.

Google is calling this a “modular base for Android,” and says it is the biggest change to the low-level system architecture of the operating system to date.

The company explained in a blog post that feedback from its device-maker partners shows that “updating existing devices to a new version of Android is incredibly time consuming and costly.” It doesn’t just involve implementing new code and making sure the software works with the hardware: there are several rounds of approval involved.

After Google publishes the open-source code for the latest release, chip manufacturers have to modify that code for their hardware. They then pass the edited code to device makers, who incorporate the new software, making changes as needed. When a final version is ready, device makers have to get approval from carriers before pushing that update out to your phone.

Project Treble cuts down the amount of hardware-accessing code that needs to be written by wrapping all of it in one package that developers can just re-use across each new version of Android. This eliminates the need for chip makers to modify the original open-source code that Google releases and speeds up the approval process.
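The separation can be sketched in a few lines: the framework talks to hardware only through a stable interface, so the OS side can be swapped for a new release without touching vendor code. The class and method names below are illustrative, not Android's real vendor interfaces.

```python
from abc import ABC, abstractmethod

class CameraInterface(ABC):
    """Stable vendor interface: frozen across OS releases."""
    @abstractmethod
    def capture(self): ...

class VendorCamera(CameraInterface):
    """Chip maker's implementation, written once per device."""
    def capture(self):
        return "raw-frame"

class Framework:
    """OS-side code: replaceable each release without editing vendor code."""
    def __init__(self, camera):
        self.camera = camera

    def take_photo(self):
        return self.camera.capture()
```

Because `Framework` depends only on `CameraInterface`, a new framework version can ship against the same vendor implementation, which is the delay Treble aims to remove.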

Since the company is calling this a “modular base,” it is likely that more bundles are coming to the base Android code. There aren’t many other details on how this system will work just yet, but given Google I/O is happening next week, we expect to hear more very soon.

Via: 9to5Google

Source: Android Developers Google Blog

13
May

I finally believe in Microsoft’s mixed reality vision


For years, Microsoft has been talking about its dream of mixed reality — the idea that AR and VR headsets can work together in harmony across virtual and physical environments. But until now, it really has just been talk. At its Build developer conference this week, I finally saw a mixed reality demo that made Microsoft’s ambitious vision seem achievable. It was more than just shared VR — something we’ve seen plenty of already — it was a group experience that brought together HoloLens and Acer’s virtual-reality headsets in intriguing ways.

To be clear, I was running through Microsoft’s Mixed Reality Academy at Build, a simplified version of a developer class meant for journalists and analysts. It goes over the basics of building a virtual experience in Unity, while also giving you time to actually explore those environments. I went through an earlier version of the class two years ago that was entirely focused on HoloLens, but this week’s version showed how HoloLens (and future AR headsets) can coexist with Acer’s Windows 10 VR gear.

The centerpiece of this year’s demo was a puzzle-filled island, which featured a beach, a small forest and a large volcano. Like all-seeing gods, the HoloLens users could see the VR headset wearers’ tiny avatars running through the island. Everyone wearing an Acer headset, meanwhile, had an on-the-ground view, with the HoloLens viewers appearing as menacing clouds on the horizon. The demo is basically an example of how these different types of devices could be used to observe a single environment.

After stepping through the Unity setup once again, which mostly involved enabling a bunch of scripts and templates that Microsoft set up for us, I was able to slip on a HoloLens headset and view the floating island. I used the Air Tap gesture to select it, which involves holding my fist up and raising my index finger, and then gazed at different locations around the room to move the island around. While it was cool, it was also standard HoloLens fare.

Things got a bit more interesting once I put on the Acer VR headset, which let me see the island much more clearly than HoloLens. That’s partially because of the higher resolution screen on the Acer headset, but VR also has the advantage of letting you focus more clearly on virtual objects than HoloLens’ AR. Microsoft didn’t have its new Mixed Reality Controllers on-hand for the demo, so I was stuck using an Xbox One gamepad to move around the environment.

Moving into the shared mixed reality experience made the differences between the two headsets even more palpable. Wearing a HoloLens headset felt oddly powerful, for example. We learned how to throw lightning bolts into the island by bringing two fingers together, a gesture that mainly seemed to exist to freak out the VR headset wearers on the ground. When I donned the Acer device, I found myself on the beach and was immediately struck by the shift in scale. Instead of peering down from high above, I was now walking amongst the trees.

I made my way to a passcode-protected door with a not-so-subtle hint: “I love pie.” After entering 3.14, the door opened and I was presented with another task, which involved stacking colored blocks in the right order. While I was solving puzzles, other VR-headset-wearers on the island were going through trials of their own. Eventually, a doorway opened into the volcano at the center of the island, where a large rocket was housed. Technically, I was supposed to work together with my teammates to launch the rocket. But, somehow, I ended up teleporting underneath it, which made it impossible for us to complete the puzzles.

I’ve been through several shared-VR experiences, most recently at this year’s Tribeca Film Festival, but Microsoft’s demo felt completely different. The mixture of HoloLens augmented reality and traditional VR allows for multiple perspectives that wouldn’t be possible with virtual-reality headsets alone. The shift in scale could be particularly helpful. I could see remote teams using a combination of the two technologies to collaborate on projects.

Each headset also has its strengths and weaknesses. HoloLens gives you a wireless digital lens over the real world, but it’s expensive and, due to its limited field of view, it’s not ideal for letting you explore immersive environments. The Acer VR headset, meanwhile, is far cheaper at $299 and lets you completely surround yourself in VR worlds, but it also needs to be tethered to a PC. Throughout the demo, I was mostly struck by the Acer VR headset’s comfort and clarity. I’ve already had some hands-on time with it, but I’m still surprised at just how light it is.

The big takeaway, at this point, is that there’s plenty of potential for AR and VR headsets to work together. And compared to others dabbling in these fields, Microsoft is in a unique place. It’s the only company with a single platform that can support both AR and VR, and it also has plenty of hardware experience that others don’t. The company is already planning to bring MR into living rooms next year via the Project Scorpio console. The big question for Microsoft: Can it make developers believe in its mixed reality vision, too?

13
May

The eerie stop-motion game that’s ‘better than sex with Jesus’


When I first talked with Anders Gustafsson and Erik Zaring in 2012, they promised their creepy, psychedelic, stop-motion game, The Dream Machine, was going to be “better than sex with Jesus.” They had a lot of work ahead of them — they were building the game by hand, with physical materials, and the stop-motion process was inherently time-consuming. Plus, they had to wrangle episodic installments of an intimate yet sprawling story inspired by LSD trips and theories of alternate realities.

Five years later, as the sixth and final installment of The Dream Machine finally lands on Steam, I ask Gustafsson and Zaring if they think their game delivers on its sacrilegious promise.

“I think we under-promised and over-delivered as far as sexual congress with the lord and savior is concerned,” Gustafsson says. “Playing The Dream Machine is clearly better than that. The Dream Machine is more like being baby Jesus, on Christmas Eve, just lying there in the crib, toasty warm, getting gold and myrrh poured all over yourself, then realizing people will still be celebrating this very moment 2,000-plus years into the future.”

This is how Gustafsson and Zaring approach game development: with a dry humor that pushes the boundaries of what’s accepted, what’s traditionally done. The Dream Machine is an award-winning testament to this perspective. It’s a point-and-click adventure game following a newlywed named Victor as he moves into a mysterious apartment complex and discovers a literal machine that infiltrates the dreams of people all around him. His wife falls victim to the machine and Victor fights to shut it down the only way he knows how — by solving puzzles in the connected dreams all around him.

In the fifth episode, Victor explores the minds of two neighbors: One a Tron-like digital space and the other a mossy forest filled with mushrooms, fairies and a thief who steals the villagers’ organs while they sleep. The Dream Machine is an exercise in balance, blending the macabre with beauty and comedy.

The game’s tone is inspired by the drug-fueled exploits of famed neuroscientist, author and philosopher John C. Lilly, and it plays like a page ripped out of House of Leaves. This isn’t a game about shock value, though — it’s smoothly paced and introspective, transforming extreme, drug-addled inspiration into an eerie and beautiful fantasy world.

Back when he was in animation school, Gustafsson became enthralled by Lilly’s experiments with LSD and Ketamine, and his attempts to draw the human subconscious. Lilly believed he visited an alternate reality while he was tripping, and he was determined to map out its geography. He kept a pen and paper beside his bed so he could jot down coastlines and landmarks as he came down, and he encouraged his friends to do the same.

“They believed that if they had enough of these drawings, they could start piecing them together in an effort to create a coherent chart of the human subconscious,” Gustafsson says. “I thought that was one of the best ideas I’d ever heard, beautifully concrete and wonderfully naive. I just had to make something of it. In the game we treat dreams in a similar fashion.”

The Dream Machine was episodic before episodic gaming was cool. Gustafsson and Zaring launched the first installment in 2010, nearly two years before the debut season of Telltale’s The Walking Dead, the series that thrust episodic games into mainstream consciousness. But, The Dream Machine had a slower update cycle than other episodic games, largely because of its daunting stop-motion process.

Luckily, Zaring already had expertise in this field — he once ran a stop-motion animation studio in Sweden. At first, Gustafsson was hesitant to build a hand-made game, but Zaring convinced him it could be done by crafting four or five miniature sets in a single night. They got to work shortly afterward.

The Dream Machine came to life on a workbench the size of a standard IKEA desk, so set space was limited. The developers at times had to cheat, using Photoshop to create depth in some scenes, and any major set or story changes had to be carefully considered.

“I sometimes miss the possibility to build really vast landscapes or to be able to easily multiply trees or whatnot,” Zaring says. “Of course you can achieve a lot by using Photoshop, but sometimes I kind of miss other mediums such as good ‘ol 2D or 3D. We also ended up with a physical look that can be a tad overwhelming at times, all the wrong angles and imperfections makes me a bit tired.”

But those imperfections are a large part of The Dream Machine’s unsettling charm. The developers used a vast array of physical media to build these imperfect angles; Zaring lists a handful of the game’s strangest materials as follows: pubic hair, broccoli, skull of an elk, dried peas, pasta penne, pork trimmings, baby tomatoes, tonic water, lichen, driftwood and condoms.

Despite nearly a decade of development and a difficult medium, Gustafsson and Zaring say they never felt the urge to give up on The Dream Machine. Quitting simply wasn’t an option.

“Once we started selling the whole game — after Chapter 1 and 2 were finished — that option was taken off the table,” Gustafsson says. “If we’d bailed after that, the game would be seen as a scam. That would be our legacy. We’d be those guys.”

Now that the journey is complete, Gustafsson and Zaring are mostly just excited to see their completed masterpiece out in the world. Mostly.

“I feel both sad and elated at the same time,” Zaring says. “I’m not even sure if I’m going to be the same person when this baby is finally delivered. However, there is quite a lot of stuff to do post-release that’ll keep me busy and keep my Post Production Depression at bay.”

13
May

NASA study finds first SLS launch should be unmanned for safety


It’s an exciting time for spaceflight, for sure. Both NASA and SpaceX have plans in place to send rockets and humans into our solar system. Elon Musk’s company wants to use the moon as a pit stop on its way to Mars, and NASA wanted to include a human crew on its now-delayed launch to test a new rocket and companion capsule. Today, however, a study by NASA has concluded that sending astronauts on the first flight is not feasible, as the costs of keeping them safe are just too high.

NASA’s acting administrator Robert Lightfoot Jr. requested the review back in February when he announced the agency’s intention to add the crew to its 2018 launch. “I know the challenges associated with such a proposition,” he wrote in a memo to NASA employees, “like reviewing the technical feasibility, additional resources needed, and clearly the extra work would require a different launch date.” In April, the launch date was pushed back to 2019 due to technical problems.

Whether NASA or the Trump administration will take the recent study into account or not is still up in the air, according to a Bloomberg source. Still, adding a human crew to a potentially catastrophic maiden voyage like this can only increase the chances of a national tragedy, which would definitely slow down the current rush to return to space.

Source: Bloomberg

13
May

Your personality may predict whether you choose a job that gets automated


Why it matters to you

If you're worried about automation, you might be better off developing new personality traits than learning new skills.

Workers have worried about automation for generations, but with the coming of the “fourth industrial revolution,” circumstances seem more dire now than before. Algorithms are getting smarter, robots are becoming more capable, and humans from the factory to the newsroom are already being replaced with machines.

Interestingly, the best defense against automation may not be a particular skill set, but personality traits, according to a new study. The researchers found that, although education was important, character traits, vocational interests, and intelligence played a major role in determining whether a person will select an easily automated job.

“There has been a lot of research in economics recently about the dangers of automation and what it can do to the labor market, but no psychological research had yet examined how individual differences in intelligence, personality traits, and vocational interests predict job computerizability outcomes,” Rodica Damian, lead author of the study and psychologist from the University of Houston, told Digital Trends. “This is important because if we want to enhance workforce readiness — for example, train the new generations to be prepared for the future labor market — we need to know where we must intervene.”

Damian and her team analyzed data on 346,660 people, looking at things like personality traits in adolescence and socioeconomic status over a 50-year period. They found that, regardless of social background, a person was more likely to select a less computerizable job if they displayed higher levels of intelligence, maturity, and extroversion, while being more interested in the arts and sciences.

The results are perhaps not that surprising. After all, intelligence goes hand in hand with higher levels of education, as well as more complex and creative professions that aren’t as easily done by machines. Extroverted people meanwhile tend to select jobs that require more social skills, which chatbots haven’t quite mastered yet.

There are things susceptible people can do to prepare, said Damian, but it could mean changing some pretty fundamental parts of who they are.

“I would try to obtain as high of an education level as possible,” she advised. “I would try to develop complex social interaction skills and leadership, artistic and scientific interests, creativity, and in general a mindset out to solve complex problems and be flexible, a mind that likes to learn constantly.”

The job market will change drastically in the coming decades. It’s not something easily predicted or prepared for, and so the best security may simply be flexibility. Indeed, Damian even admits her study may become outdated as new technologies emerge.

“These results can certainly change if we look forward 50 years,” she said, “because no one knows what technological revolutions await.”

A paper detailing the study was published this week in the European Journal of Personality.








13
May

25 million PC gamers now have systems that are ready for virtual reality


Why it matters to you

Building a VR-capable PC costs less than ever, and the growing number of people with those systems reflects that.

There are now 25 million Steam PC gamers who have systems that are capable of hitting recommended specifications for consumer-grade virtual reality headsets like the HTC Vive and Oculus Rift. This is almost double the cited figure of around 13 million at the start of 2016 and shows how much consumer graphics have improved since then.

There have been several hurdles faced by virtual reality developers when it comes to actually getting people to buy their games, and having a large enough audience with headsets is only one of them. For that audience to exist in the first place, people need PCs that can actually meet the minimum specifications, which are reasonably steep.

Fortunately, though, 2016 saw a massive drive from the likes of Nvidia and AMD in producing not only faster and more powerful graphics processors, but more affordable ones, too. That’s why the likes of HTC’s head of Vive, Daniel O’Brien, have said that graphics hardware advances have had some of the biggest impact on VR adoption.

Looking at Steam hardware statistics today, we can see that more than 14.5 percent of all Steam users have a DirectX12-compatible graphics card that’s above the minimum threshold for VR gaming (thanks RoadtoVR). With a little bit of speculative math based on official Valve numbers from a couple of years ago, we can estimate just shy of 170 million Steam users.

That works out to just shy of 25 million people with VR-ready PCs.
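Spelled out, the back-of-the-envelope math uses both of the article's estimates (neither is an official Valve figure):

```python
steam_users = 170_000_000   # estimated total Steam users
vr_ready_pct = 14.5         # share with a VR-capable GPU, in percent

# 170 million x 14.5% = 24,650,000, i.e. "just shy of 25 million"
vr_ready = steam_users * vr_ready_pct / 100
print(f"{vr_ready:,.0f}")   # prints 24,650,000
```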

Of course there are a lot of guesses and estimations going into that figure, as RoadToVR highlights, but it’s certainly an interesting number to consider. It shows that although virtual reality was once seen as a relatively high bar — and to some extent, its 90FPS mandate still is — it’s coming down very quickly.

The launch of AMD’s RX series forced down prices on graphics cards targeting 1080p-and-above gaming. With Nvidia’s continued drive at the top end with Pascal, and AMD’s upcoming Vega graphics chips, we may see even further improvements.

Although most expect big leaps in virtual reality screen resolution in the years to come and therefore a requirement for even more powerful graphics to support it, entry-level virtual reality is becoming cheaper by the day. Now, with a potential audience that stretches into the tens of millions, the job falls on the VR hardware developers and software content creators to bring them on board.