Archive for 15 Feb

IMAX opens first VR theater in Los Angeles


The first of many planned IMAX theaters dedicated to virtual reality has opened in Los Angeles. Trading large, wraparound screens for small, immersive headsets, the facility allows anyone to experience VR without buying a high-end gaming PC or video game console. As UploadVR reports, the LA center has a mixture of HTC Vive and Starbreeze StarVR headsets. They’re stored in 14 isolated “pods,” which also contain a D-Box cinema chair, a vibration-emitting SubPac vest and a variety of physical controllers. You can buy experiences individually, such as John Wick Chronicles, or grab a “sampler” if you want a broader taste of VR.

IMAX has been teasing its VR centers for almost a year now. Since then, we’ve seen other cinema chains, such as MK2 in Paris, construct similar installations dedicated to virtual immersion. The idea is simple: most people have never tried the Oculus Rift or HTC Vive, due to a lack of funds or space in their living room. There’s also a group of people who just want to try the occasional experience — if you’re a climbing fan, for instance, you might want to play Sólfar Studios’ Everest VR, but little else. With IMAX VR, you can now access these experiences on a pay-as-you-go basis.

Credit: IMAX

At launch, IMAX has a mixture of immersive games and movies including Star Wars: Trials on Tatooine, Eagle Flight and The Walk. To stand out, IMAX is also working on exclusive content. It’s not ready just yet, but the company has raised $50 million to aid its development and the construction of additional VR centers. These are planned for New York, California, China and the UK. Before they’re built, IMAX needs to prove that the business model works. While there’s interest in VR, it’s not clear how many people want to try and pay for the experience on a regular basis.

Source: UploadVR, Variety

15 Feb

Microsoft hikes UK Surface prices because Brexit


Thanks to the Brexit vote, the weakened pound is causing many companies to adjust prices to cover the shortfall, and today you can add Microsoft to that list. The cost of the company’s enterprise software and cloud services increased at the start of the year, but this morning Microsoft quietly hiked prices of some of its consumer devices and software too, as spotted by a TechCrunch tipster.

On Microsoft’s online store, you’re now expected to pay at least £150 more for a Surface Book, though the price of the top, Core i7 configuration has risen by £400 — over 15 percent — from £2,649 to £3,049. That’s quite the jump for an already expensive product. While not as significant, Surface Pro 4 prices have also increased by up to £150 depending on configuration.

For whatever reason, Microsoft isn’t sharing a complete list of affected products, but the official line is: “In response to a recent review we are adjusting the British pound prices of some of our hardware and consumer software in order to align to market dynamics. These changes only affect products and services purchased by individuals, or organisations without volume licensing contracts and will be effective from February 15, 2017. For indirect sales where our products and services are sold through partners, final prices will continue to be determined by them.”

If you were considering a Surface purchase, then, you might want to shop around now before third-party retailers have a chance to react to Microsoft’s adjustments.

Just a few days ago, Sonos confirmed it would be hiking the prices of its products by up to 25 percent next Thursday. Sonos and Microsoft are reacting fairly late to the post-vote currency fluctuations, though. OnePlus kicked things off last July, followed by HTC in August and Apple in September. Apple has actually reacted twice, first by increasing hardware prices and last month, putting app costs up by 25 percent. Already this year, Tesla’s EVs have become that bit more expensive, and we’d put money on more companies following suit in the near future. Not that we have any left.

Via: TechCrunch

Source: Microsoft

15 Feb

‘Team Fortress 2’ patch fixes decade-old bug


Video games with a dedicated developer team periodically release software patches to fix broken things. Sometimes these come at the behest of the title’s community, and dedicated users can be counted on to pick apart janky or erratic flaws faster than developers can address them. Unless everyone misses something for, say, a decade. That’s how long a particular bug had been in the shooter Team Fortress 2 — since its release in 2007 — if a pair of modders are to be believed, an issue that studio Valve finally fixed in yesterday’s game update.

The phenomenon would’ve flown under history’s radar if Reddit user sigsegv__ hadn’t called attention to a particular fix nestled in this recent patch’s unusually dense list of addressed issues. In a comment, sigsegv__ pointed out that he’d reported this bug to Valve after a developer of the TF2 Classic mod, Nicknine, first exposed the issue in a video a few weeks ago and claimed it had been in the game since its original release. sigsegv__ made his own video exploring its severity:

To explain the bug simply, if someone started the game as one of three particular classes and then shifted to any of the six others, their character would have one set of hitboxes and the server would register a slightly different one to other players. The mismatch between animated model and actual hittable zones means a decent amount of shots taken over the game’s ten-year lifespan were likely near-misses when they should’ve been hits, assuaging the ego of anyone who swore they’d gotten that sweet headshot that one time.
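The mechanism can be sketched in a few lines of Python. To be clear, everything below — class names, coordinates, sizes — is our own invention for illustration, not Valve’s actual code; it only shows how a single missing update after a class change can leave the server testing stale hitboxes:

```python
# Illustrative sketch, not Valve's code: how stale hitbox data after a class
# change makes the server's hit test disagree with what players see on screen.

HITBOXES = {
    "scout":  [(0.0, 1.6, 0.15)],  # (x, y, radius) of the head hitbox
    "sniper": [(0.0, 1.8, 0.15)],
}

class Player:
    def __init__(self, cls):
        self.cls = cls
        self.hitboxes = HITBOXES[cls]  # registered with the server at spawn

    def change_class(self, cls):
        self.cls = cls  # clients now render the new model...
        # BUG: omitting this one line leaves the old class's hitboxes on the
        # server, mirroring the kind of one-line fix described in the patch:
        # self.hitboxes = HITBOXES[cls]

def server_hit_test(player, shot):
    """Does the shot land inside any hitbox the server has registered?"""
    sx, sy = shot
    return any((sx - hx) ** 2 + (sy - hy) ** 2 <= r ** 2
               for hx, hy, r in player.hitboxes)

p = Player("scout")
p.change_class("sniper")
# A shot aimed squarely at the sniper model's head (y = 1.8) registers as a
# miss, because the server still tests the scout's hitboxes at y = 1.6.
print(server_hit_test(p, (0.0, 1.8)))  # False
```

In this toy version the model the shooter aims at and the hitboxes the server checks simply drift apart, which is why the real bug produced “near-misses” that looked like clean hits.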

Fixing it was as simple as changing one line of code, sigsegv__ claims in the Reddit comment, but knowing which line and where was the real trick. We’ve reached out to both modders and Valve, and will update when we know more about the bug. For now, rest easy that your shots are slightly more accurate and a long-overlooked wrong has been righted.

Source: Reddit

15 Feb

Google Assistant can share your personal info in Allo chats


For now, Allo is the one place where regular Android users can get a taste of Google Assistant, the AI helper that’s otherwise reserved for Google’s own Pixel phones. You can call on it during a chat with one or more folks, and it can do a search, set reminders and even tell a joke. Google has just given it a new trick that should make it more useful — letting you share contacts, calendar appointments and other personal info.

To use it, you just type “@google,” and then tap the assistant when it pops up. You can then ask for a meeting date or airline reservation, for instance, and it will privately show you any of the information that it can find. From there, you can tap “share now” to show the info to other parties in the Allo chat.

That makes it easy to quickly give pals a contact number without having to look it up, for instance. Or, if you’re trying to coordinate a work trip, you can have Google Assistant give your hotel reservations to colleagues during a chat.

Google Hangouts, the predecessor to Allo (and still the preferred chat app of many Google users), has had similar capabilities for a few years now — it’ll even listen in to conversations and let you share your location if someone asks where you are. Facebook recently launched similar capabilities for its “M” Messenger AI assistant, too. It’s debatable whether they’ll make you more productive, but Google needs to draw interest to Allo however it can.

Via: Android Police

Source: Google

15 Feb

AppGratis Shuts Down


AppGratis announced today that, after seven years of operations and more than 50 million app installs globally, it has shut down.

The service offered hand-picked apps for free or up to 90% off on a daily basis through its website and Android app, and formerly through its iPhone app, but its popularity had seemingly faded over the years.

Apple removed AppGratis from the App Store in April 2013 as part of a broader crackdown on apps which might be “similar to or confusing with the App Store.” AppGratis said it was “far from finished” despite the huge blow, and it went on to last nearly four more years until its shutdown today.

App discovery remains an issue on the App Store, which has some two million apps according to our sister website AppShopper. Last year, Apple introduced search ads on the App Store to help developers promote their apps, but the feature is not particularly helpful to those with limited marketing budgets.

(Thanks, Joe!)
Discuss this article in our forums


15 Feb

Apple in ‘Early Stages’ of Discussions for Apple Pay in South Korea, Android Pay Likely to Launch First


A few Apple executives have had meetings with South Korean financial authorities, spurring a rumor that the Cupertino company is beginning to make headway in its attempt to launch Apple Pay in the country. Sources speaking with The Korea Herald said that Apple’s legal director and senior counselor met with the country’s financial authorities back in November to discuss Apple Pay, but the company has yet to have another meeting with the government.

As it stands, Android Pay is believed to launch in South Korea ahead of Apple Pay. According to an anonymous source from a local card company, “the technology development with Google for Android Pay is in full swing,” with Google having already partnered with card companies like KB Kookmin, Shinhan, Lotte and Hyundai in order to develop online and offline payment systems.

For Apple, the country’s involvement and work on Apple Pay “is still in an early stage,” potentially due to South Korea’s lack of wide NFC terminal adoption in retail stores. Google is said to be developing an online payments feature for Android Pay that could circumvent the need of an NFC terminal. For Apple Pay to be widely adopted, the company may have to wait for more NFC support — which its mobile wallet requires — in the country.

Apple’s executives recently held a meeting with South Korean financial authorities, a move that can be viewed as the company testing the waters before fully reviewing a potential launch here, sources said Wednesday.

“Apple said they will partner with local credit card companies in the future but did not elaborate on the specific details,” the source said.

To start such a mobile payment service in Korea, the company should have another meeting with the financial authorities to decide whether it will be registered as an electronic financial business operator. Apple is not yet scheduled to have such a meeting with the government.

In participating retail locations and apps, Apple Pay is currently available in the U.S., UK, China, Australia, Canada, Switzerland, France, Hong Kong, Russia, Singapore, Japan, New Zealand, and Spain. Apple previously had trouble introducing another of its services, Apple Music, in South Korea due to the country’s strict copyright laws.

Related Roundup: Apple Pay
Tag: South Korea
Discuss this article in our forums


15 Feb

Amazon’s delivery drones could use parachutes to drop off packages


Why it matters to you

Although some of its delivery-drone ideas may seem a bit wacky, they show Amazon’s determination to explore all facets of the platform in order to make it a reality.

Amazon has been working on the design of its delivery drone for several years now as it continues its bid to launch a regular service for customers living close to its fulfillment centers.

At the end of last year the company trumpeted the completion of its very first drone delivery in the United Kingdom. This was a significant milestone, for sure, though strict regulations — similar to those in the U.S. — mean the service is currently limited to only a few customers and is unlikely to be rolled out more widely anytime soon.

One of the challenges continuing to occupy engineers is how to drop off the packages safely. The current design of Amazon’s Prime Air delivery drone sees it landing on the ground and automatically plopping out the ordered item from a compartment. However, in urban areas, with their myriad of obstacles, such a delivery method may prove more difficult, and could also expose the flying machine to additional risks such as excited pets or even ne’er-do-wells looking to intercept deliveries as the drone nears its destination.

Amazon’s drone team has evidently been exploring different ways of dropping off ordered goods, with a recently granted patent focusing on landing guidance systems such as parachutes, compressed air, and landing flaps. Put simply, the drone would assess the landing site from up high before releasing the package from the optimum position. With the package headed presumably for someone’s yard, the drone would monitor its descent. If a sudden breeze kicked up, the drone could send a signal to technology on the package to deploy one of the aforementioned systems in order to steady the consignment, or steer it to its precise landing point if it begins to veer off course.
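That monitoring loop can be sketched roughly as follows. The threshold, positions, and command names here are invented for illustration and don’t come from the patent text; the point is only the shape of the logic, where the drone watches the falling package and radios it to deploy a steadying system if it drifts too far off target:

```python
# Hypothetical sketch of the descent-monitoring idea in Amazon's patent.
# The tolerance, track, and command names are invented for illustration.

DRIFT_TOLERANCE_M = 1.5  # how far off target before the drone intervenes

def monitor_descent(positions, target):
    """For each observed package position, decide whether to radio the
    package to deploy a steadying system (parachute, compressed air, flaps)."""
    tx, ty = target
    commands = []
    for x, y in positions:
        drift = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
        commands.append("deploy" if drift > DRIFT_TOLERANCE_M else "ok")
    return commands

# A gust pushes the package roughly 2 m off target mid-descent:
track = [(0.1, 0.0), (2.0, 0.5), (0.3, 0.1)]
print(monitor_descent(track, target=(0.0, 0.0)))  # ['ok', 'deploy', 'ok']
```

The real system would presumably work on continuous sensor data rather than a fixed list of positions, but the drone-watches-package-and-signals-a-correction loop is the core of what the patent describes.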

With all the extra kit involved, it sounds like a costly solution, though the concept may ultimately lead to a refined, more efficient method along similar lines. Yes, this is just a patent, so it’s far from certain that Amazon will be parachuting products into people’s yards anytime soon. Or possibly ever.

More: Drones are now delivering pizza to paying customers in New Zealand

It should be noted that Amazon has patented a range of ideas for its drone technology, some wackier than others. One idea is to turn church steeples and the top of street lights into docking stations so the machine can recharge and therefore fly greater distances to customer addresses, while another proposed a seemingly off-the-wall idea involving an “airborne fulfillment center” — effectively a flying warehouse suspended from a blimp — that its drones could also use as a base.

15 Feb

Robots and AI are coming for our jobs, but can augmentation save us from automation?


The American truck driver is soon to be an endangered species. Some 3.5 million professionals get behind the wheel of these vehicles in the United States every year, making it one of the most common jobs in the country. In a couple decades, every last one may be out of work due to automation.

Industry giants around the world are investing in autonomous vehicles. In Australian mines, Rio Tinto employs hundred-ton driverless trucks to transport iron ore. Volvo wants to ferry volunteer passengers around London’s winding streets. MIT researchers recently determined the most efficient way for driverless trucks to transport goods. The guy behind Google’s first self-driving car now runs an autonomous trucking startup called Otto in San Francisco.

Truckers may be among the most vulnerable to automation but they’re certainly not alone. Over the past year we’ve seen an AI attorney land a job at a law firm, Hilton hire a robotic concierge, and even — ahem — “robojournalists” cover the U.S. election. As far as we know, none of these bots have caused a human to get laid off — but they’re telling of things to come.

“We’re trying to blur the distinction between electronic circuits and neural circuits.”

The so-called Fourth Industrial Revolution will transform the job market, eliminating over five million jobs in the next five years, according to the World Economic Forum.

So what do we, as humans, do? Augment ourselves.

Augmentation was the running theme of this year’s Bodyhacking Conference in Austin, Texas. Attendees lined up for RFID implants, speakers demonstrated bionic body parts, grinders exhibited artificial senses, and an entire fashion show put “smart” apparel on display. Most of the augmentations were idiosyncratic and wouldn’t make a potential employee more competitive in the future job market (except, perhaps, for documentary filmmaker Rob “Eyeborg” Spence’s prosthetic eye camera). With this in mind, we explored the ways in which augmentation may safeguard us from automation.

Augment our brains

Humans have extraordinary brains — the best in the animal kingdom — but in AI we’ve created minds that exceed our own in many ways. Sure, humans still hold the title for outstanding general intelligence, as today’s AI systems excel at the specific tasks they’re designed for, but algorithms are advancing fast. Some are even learning as they work. A year ago, AI experts thought it would take at least another decade for an algorithm to defeat a top-tier Go player. And then this happened.

Entrepreneur, futurist, and headline-staple Elon Musk is so concerned about AI he co-founded the billion-dollar nonprofit OpenAI to promote “friendly” AI in December 2015. Six months later, he told a crowd at Recode’s annual Code Conference about his desire to develop a digital neural layer — colloquially called a “neural lace” — to augment humans on par with AI. He echoed these comments at the World Government Summit in Dubai on Monday, suggesting that such a symbiosis could potentially solve the “control problem and the usefulness problem” likely to face future humanity.

Digital Neural Lace

This rolled electronic mesh can be injected through a glass needle.

Harvard University

The concept is relatively simple: A neural lace is some sort of material that boosts the brain’s ability to receive, process, and communicate information. It’s an extra layer, perhaps a kind of electronic mesh, that physically integrates with the brain and turns the mind into a kind of supercomputer.

If this sounds like science fiction, that’s because it is. Or it was. The term was first coined by sci-fi author Iain M. Banks in his Culture series.

But almost exactly one year before Musk made his comment at the conference, a team of nanotechnologists at Harvard University published a paper called “Syringe-injectable electronics” in the journal Nature Nanotechnology, in which they described an ultra-fine electronic mesh that can be injected into the brains of mice to monitor brain activity and treat degenerative diseases. The possibility for such a material to augment the brain’s input-output capacity was too enticing to overlook.

“We’re trying to blur the distinction between electronic circuits and neural circuits,” co-author Charles Lieber told Smithsonian Magazine. “We have to walk before we can run,” he added, “but we think we can really revolutionize our ability to interface with the brain.”

Elon Musk
Heisenberg Media

Musk hasn’t kept completely quiet about his neural lace aspirations either. In August he told an inquisitive Twitter follower that he was “Making progress” on the project. In January he said an announcement may come this month.

A functioning neural lace is still realistically many years off but, augmented by such a device, humans could conceivably compete with AI at computational tasks currently left to machines, while maintaining our high levels of intuition, decision making, and general intelligence. We’re already cyborgs. With smartphones and the internet as external brains, we boast superhuman intelligence. But analog outputs like typing and speech are slow compared to digital speeds. Imagine listing under the skills section on your résumé the ability to query a database, receive a response, and relay that information to a colleague in the fraction of a second it takes Google to display search results. It would make you a desirable candidate, indeed.

Augment our bodies

As robust as we are in mind, humans are desperately delicate in body. We’re fleshy, fragile things, prone to break and tear under pressure. Robots, on the other hand, are rugged, and capable of tackling strenuous tasks with relative ease.

But robots are also fairly inflexible. Where a human can seamlessly transition from one action to another, machines tend to do just one thing well and need to be recalibrated to perform new tasks.

Enter exosuits. Fitted with these powered external skeletons, humans assume superhuman strength while limiting risk of injury associated with bending and lifting. Think Iron Man or the metallic gear worn by Tom Cruise and Emily Blunt in Edge of Tomorrow.

We’re fleshy, fragile things, prone to break and tear under pressure. Robots are rugged and capable of tackling strenuous tasks with relative ease.

But, like the neural lace, these suits aren’t stuck in science fiction. Engineers at Hyundai, Harvard, and the United States Army are actively developing systems to serve paraplegics, laborers, and soldiers alike.

“What I’ve been working on in my lab for years is to combine the intelligence of the [human] worker with the strength of the robot,” Homayoon Kazerooni, director of the Berkeley Robotics and Human Engineering Laboratory, told Digital Trends. “Robots are metal, they have more power than a human. Basically, the whole thesis is to combine human decision making, human intelligence, and human adaptability with the strength and precision of a robot.”

Through his robotics research, Kazerooni founded SuitX, a company that created the PhoeniX medical exoskeleton for patients with spinal cord injury and a modular, full-body exosuit called the Modular Agile Exoskeleton (MAX).

“We use robotic devices where we have repetitive tasks,” Kazerooni said. “Anything that’s dangerous we also automate. These are structured jobs.”

MAX features three components: backX, shoulderX, and legX, each of which assists its titular region, minimizing torque and force by up to 60 percent.

“These machines reduce forces at targeted areas,” Kazerooni said. “So, it’s basically supporting the wearer, not necessarily from a cognitive point of view by telling workers how to do things, but by letting the workers do whatever tasks they’ve done in the past with reduced force.”


Kazerooni recognizes that machines may someday be so cheap and efficient that human workers simply become an expensive liability. However, until then, the best way to keep laborers safe, productive, and employed may be to augment their physicality.

“The state of technology in robotics and AI is not to the point that we can employ robotics to do unstructured jobs,” he added, “which require a [human] worker’s attention and decision making. There’s a lot of unstructured work we can’t yet fully automate.”

Across the country, in the Harvard Biodesign Lab, a team of researchers is developing the softer side of exosuits.

Packed with small motors, custom sensors, and microprocessors, these soft wearable robots are designed to work in parallel with the body’s muscles and tendons to make movement more efficient. In a recent paper published in the journal Science Robotics, the interdisciplinary Harvard team demonstrated an almost 23 percent reduction in effort with its exosuit compared to unaided walking.

“It’s going to be a very difficult time for all human workers.”

The Biodesign Lab has so far been working with DARPA to develop exosuits to help soldiers carry heavy loads over long distances. However, project lead Ignacio Galiana thinks the suit can find applications beyond the battlefield.

“Factory workers in the automotive, naval, and aircraft industry have to move around very large and heavy parts,” he told Digital Trends. “Having a simple system they can wear under their normal pants can give them extra strength.

“There’s now even a need for people to get packages delivered the next day, and so postal service personnel have a burden to move heavy packages around quickly,” he added. “If they could wear an exosuit that makes them faster and stronger, that could make their work much easier.”

Galiana doesn’t think humans and robots will compete directly for the same jobs. Instead, he sees them working in parallel — so long as humans can keep up with increasing physical demands.

“Human intelligence and decision making is critical in a lot of factory jobs, and the human brain is really hard to imitate in robots,” he said. “That will be key to keeping workers in the workplace. If you give extra strength to a factory worker who has that decision making and intelligence capabilities, you could see them being more effective and staying in work for longer, working alongside robots.”

Augment our skillset

Despite the progress that’s been made in the past few years, superhuman strength and intelligence lie somewhere in the hazy futurescape, inaccessible to most of today’s workforce and not exactly helpful when trying to figure out what humans should do now to safeguard themselves against automation.

For an immediate answer, we turned to Tom Davenport, co-author of Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. In 2015, Davenport and co-author Julia Kirby published “Beyond Automation” in the Harvard Business Review, in which they laid out five practical steps workers may take to improve their employability against machines.

In their list, Davenport and Kirby encourage humans to stand out, whether by developing skills outside the realm of “codifiable cognition” (such as creativity) or learning the ins and outs of the machines themselves. (After all, someone needs to fix these things when they break down.) However, the authors’ advice is primarily meant for “knowledge workers,” not physical laborers, whom Davenport thinks will have a much more challenging transition in the future job market.

“I try to be optimistic,” Davenport told Digital Trends, “because I do think there are some valuable roles that humans can still play relative to these smart machines, but I don’t think it’s a time to be complacent about it. Any type of worker will need to work hard to keep up the right kinds of skills and develop new skills.”

Freightliner Inspiration

Freightliner was the first truck manufacturer to obtain the right to test an autonomous vehicle in Nevada.

As an example, Davenport points to our friends the truck drivers. “I don’t know how many of them will be willing to develop the computer-oriented skills to understand how autonomous driving works,” he said. And, even if they did take an entry course in programming, what good would it do? Driving in general is a dying profession.

“I think it’s going to be a very difficult time for all human workers,” Davenport said. “I’m optimistic that many of them will make the transition but not all of them will. I’m definitely more pessimistic about certain jobs than others. Even for knowledge workers there will be some job loss on the margins but I believe there are a number of viable roles that they can play. That’s what a lot of my writing has been about — roles that knowledge workers can play that either involve working alongside smart machines, or doing something the machines don’t.”

Note, when Davenport says “smart machines,” he means narrow AI: systems that do a few specific things really well, such as recognizing faces, playing board games, and creating psychedelic art.

There’s another evolution of AI, though, the kind that keeps Elon Musk and Stephen Hawking up at night: artificial general intelligence, which can do basically anything a human can, intellectually.

What happens when these arise?

“All bets are off,” Davenport said.

15
Feb

Robots and AI are coming for our jobs, but can augmentation save us from automation?


The American truck driver is soon to be an endangered species. Some 3.5 million professionals get behind the wheel of these vehicles in the United States every year, making it one of the most common jobs in the country. In a couple decades, every last one may be out of work due to automation.

Industry giants around the world are investing in autonomous vehicles. In Australian mines, Rio Tinto employs hundred-ton driverless trucks to transport iron ore. Volvo wants to ferry volunteer passengers around London’s winding streets. MIT researchers recently determined the most efficient way for driverless trucks to transport goods. The guy behind Google’s first self-driving car now runs an autonomous trucking startup called Otto in San Francisco.

Truckers may be among the most vulnerable to automation but they’re certainly not alone. Over the past year we’ve seen an AI attorney land a job at a law firm, Hilton hire a robotic concierge, and even — ahem — “robojournalists” cover the U.S. election. As far as we know, none of these bots have caused a human to get laid off — but they’re telling of things to come.

“We’re trying to blur the distinction between electronic circuits and neural circuits.”

The so-called Fourth Industrial Revolution will transform the job market, eliminating over five million jobs in the next five years, according to the World Economic Forum.

So what do we, as humans, do? Augment ourselves.

Augmentation was the running theme of this year’s Bodyhacking Conference in Austin, Texas. Attendees lined up for RFID implants, speakers demonstrated bionic body parts, grinders exhibited artificial senses, and an entire fashion show put “smart” apparel on display. Most of the augmentations were idiosyncratic and wouldn’t make a potential employee more competitive in the future job market (except, perhaps, for documentary filmmaker Rob “Eyeborg” Spence’s prosthetic eye camera). With this in mind, we explored the ways in which augmentation may safeguard us from automation.

Augment our brains

Humans have extraordinary brains — the best in the animal kingdom — but in AI we’ve created minds that exceed our own in many ways. Sure, humans still hold the title for outstanding general intelligence, as today’s AI systems excel at the specific tasks they’re designed for, but algorithms are advancing fast. Some are even learning as they work. A year ago, AI experts thought it would take at least another decade for an algorithm to defeat a top-tier Go player. And then this happened.

Entrepreneur, futurist, and headline-staple Elon Musk is so concerned about AI he co-founded the billion-dollar nonprofit OpenAI to promote “friendly” AI in December 2015. Six months later, he told a crowd at Recode’s annual Code Conference about his want to develop a digital neural layer — colloquially called a “neural lace” — to augment humans on par with AI. He echoed these comments at the World Government Summit in Dubai on Monday, suggesting that such a symbiosis could potentially solve the “control problem and the usefulness problem” likely to face future humanity.

Digital Neural Lace

This rolled electronic mesh can be injected through a glass needle.

Harvard University

The concept is relatively simple: A neural lace is some sort of material that boosts the brain’s ability to receive, process, and communicate information. It’s an extra layer, perhaps a kind of electronic mesh, that physically integrates with the brain and turns the mind into a kind of supercomputer.

If this sounds like science fiction, that’s because it is. Or it was. The term was first coined by sci-fi author Iain M. Banks in his Culture series.

But almost exactly one year before Musk made his comment at the conference, a team of nanotechnologists at Harvard University published a paper called “Syringe-injectable electronics” in the journal Nature Nanotechnology, in which they described an ultra-fine electronic mesh that can be injected into the brains of mice to monitor brain activity and treat degenerative diseases. The possibility for such a material to augment the brain’s input-output capacity was too enticing to overlook.

“We’re trying to blur the distinction between electronic circuits and neural circuits,” co-author Charles Lieber told Smithsonian Magazine. “We have to walk before we can run,” he added, “but we think we can really revolutionize our ability to interface with the brain.”

Elon Musk. Credit: Heisenberg Media

Musk hasn’t kept completely quiet about his neural lace aspirations either. In August he told an inquisitive Twitter follower that he was “Making progress” on the project. In January he said an announcement may come this month.

A functioning neural lace is still realistically many years off, but, augmented by such a device, humans could conceivably compete with AI at computational tasks currently left to machines, while maintaining our high levels of intuition, decision making, and general intelligence.

We’re already cyborgs. With smartphones and the internet as external brains, we boast superhuman intelligence. But analog outputs like typing and speech are slow compared to digital speeds. Imagine listing under the skills section on your résumé the ability to query a database, receive a response, and relay that information to a colleague in the fraction of a second it takes Google to display search results. It would make you a desirable candidate, indeed.

Augment our bodies

As robust as we are in mind, humans are desperately delicate in body. We’re fleshy, fragile things, prone to break and tear under pressure. Robots, on the other hand, are rugged, and capable of tackling strenuous tasks with relative ease.

But robots are also fairly inflexible. Where a human can seamlessly transition from one action to another, machines tend to do just one thing well and need to be recalibrated to perform new tasks.

Enter exosuits. Fitted with these powered external skeletons, humans assume superhuman strength while limiting risk of injury associated with bending and lifting. Think Iron Man or the metallic gear worn by Tom Cruise and Emily Blunt in Edge of Tomorrow.


But, like the neural lace, these suits aren’t stuck in science fiction. Engineers at Hyundai, Harvard, and the United States Army are actively developing systems to serve paraplegics, laborers, and soldiers alike.

“What I’ve been working on in my lab for years is to combine the intelligence of the [human] worker with the strength of the robot,” Homayoon Kazerooni, director of the Berkeley Robotics and Human Engineering Laboratory, told Digital Trends. “Robots are metal, they have more power than a human. Basically, the whole thesis is to combine human decision making, human intelligence, and human adaptability with the strength and precision of a robot.”

Through his robotics research, Kazerooni founded SuitX, a company that created the PhoeniX medical exoskeleton for patients with spinal cord injury and a modular, full-body exosuit called the Modular Agile Exoskeleton (MAX).

“We use robotic devices where we have repetitive tasks,” Kazerooni said. “Anything that’s dangerous we also automate. These are structured jobs.”

MAX features three components: backX, shoulderX, and legX, each of which assists its titular region, minimizing torque and force by up to 60 percent.

“These machines reduce forces at targeted areas,” Kazerooni said. “So, it’s basically supporting the wearer, not necessarily from a cognitive point of view by telling workers how to do things, but by letting the workers do whatever tasks they’ve done in the past with reduced force.”

SuitX’s backX, PhoeniX, and shoulderX exoskeletons. Credit: SuitX

Kazerooni recognizes that machines may someday be so cheap and efficient that human workers simply become an expensive liability. However, until then, the best way to keep laborers safe, productive, and employed may be to augment their physicality.

“The state of technology in robotics and AI is not to the point that we can employ robotics to do unstructured jobs,” he added, “which require a [human] worker’s attention and decision making. There is a lot of unstructured work we can’t yet fully automate.”

Across the country, in the Harvard Biodesign Lab, a team of researchers is developing a softer kind of exosuit.

Packed with small motors, custom sensors, and microprocessors, these soft wearable robots are designed to work in parallel with the body’s muscles and tendons to make movement more efficient. In a recent paper published in the journal Science Robotics, the interdisciplinary Harvard team demonstrated an almost 23 percent reduction in effort with its exosuit compared to unaided walking.


The Biodesign Lab has so far been working with DARPA to develop exosuits that help soldiers carry heavy loads over long distances. However, project lead Ignacio Galiana thinks the suit can find applications beyond the battlefield.

“Factory workers in the automotive, naval, and aircraft industry have to move around very large and heavy parts,” he told Digital Trends. “Having a simple system they can wear under their normal pants can give them extra strength.

“There’s now even a need for people to get packages delivered the next day, and so postal service personnel have a burden to move heavy packages around quickly,” he added. “If they could wear an exosuit that makes them faster and stronger, that could make their work much easier.”

Galiana doesn’t think humans and robots will compete directly for the same jobs. Instead, he sees them working in parallel — so long as humans can keep up with increasing physical demands.

“Human intelligence and decision making is critical in a lot of factory jobs, and the human brain is really hard to imitate in robots,” he said. “That will be key to keeping workers in the workplace. If you give extra strength to a factory worker who has that decision making and intelligence capabilities, you could see them being more effective and staying in work for longer, working alongside robots.”

Augment our skillset

Despite the progress that’s been made in the past few years, superhuman strength and intelligence lie somewhere in the hazy futurescape, inaccessible to most of today’s workforce and not exactly helpful when trying to figure out what humans should do now to safeguard themselves against automation.

For an immediate answer, we turned to Tom Davenport, co-author of Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. In 2015, Davenport and co-author Julia Kirby published “Beyond Automation” in the Harvard Business Review, in which they laid out five practical steps workers may take to improve their employability against machines.

In their list, Davenport and Kirby encourage humans to stand out, whether by developing skills outside the realm of “codifiable cognition” (such as creativity) or learning the ins and outs of the machines themselves. (After all, someone needs to fix these things when they break down.) However, the authors’ advice is primarily meant for “knowledge workers,” not physical laborers, whom Davenport thinks will have a much more challenging transition in the future job market.

“I try to be optimistic,” Davenport told Digital Trends, “because I do think there are some valuable roles that humans can still play relative to these smart machines, but I don’t think it’s a time to be complacent about it. Any type of worker will need to work hard to keep up the right kinds of skills and develop new skills.”

Freightliner Inspiration

Freightliner was the first truck manufacturer to obtain the right to test an autonomous vehicle in Nevada.

As an example, Davenport points to our friends the truck drivers. “I don’t know how many of them will be willing to develop the computer-oriented skills to understand how autonomous driving works,” he said. And, even if they did take an entry course in programming, what good would it do? Driving in general is a dying profession.

“I think it’s going to be a very difficult time for all human workers,” Davenport said. “I’m optimistic that many of them will make the transition, but not all of them will. I’m definitely more pessimistic about certain jobs than others. Even for knowledge workers there will be some job loss on the margins, but I believe there are a number of viable roles that they can play. That’s what a lot of my writing has been about — roles that knowledge workers can play that either involve working alongside smart machines, or doing something the machines don’t.”

Note, when Davenport says “smart machines,” he means narrow AI: systems that do a few specific things really well, such as recognizing faces, playing board games, and creating psychedelic art.

There’s another evolution of AI, though, the kind that keeps Elon Musk and Stephen Hawking up at night: artificial general intelligence, systems that can do basically anything a human can, intellectually.

What happens when these arise?

“All bets are off,” Davenport said.

15
Feb

Meizu’s first phone of 2017 is here, and it’s very cheap and cheerful


Why it matters to you

The older Meizu M5 wasn’t the phone for international buyers looking for a bargain. The updated M5s is.

It’s one small step for a man… Well, it’s just a small step period.

Popular Chinese manufacturer Meizu has announced its first smartphone of 2017: The Meizu M5s. If the name sounds familiar, that’s because Meizu announced the M5 in October 2016, making the M5s a slightly — and we really do mean slightly — updated version of that phone. One feature in particular makes this a more desirable version for potential international buyers, however.

The design is typically Meizu, a firm that really does stick to a theme with its hardware. It’s no bad thing. The M5s is attractive, and comes wrapped in an aluminum body, with a 2.5D curved glass panel over the 5.2-inch screen. The screen only has a 1,280 × 720 pixel resolution though, which is a shame, but does help keep the price low.

More: Xiaomi’s Redmi Note 4x is a slight update over the Note 4, with one very cute difference

Meizu’s being coy about the processor inside the M5s, claiming it’s a 64-bit, octa-core chip, but stopping short of saying which in the press release. Exploring the official M5s page reveals it’s the MediaTek MT6753, a small step up from the MT6750 used in the M5. It’s supported by 3GB of RAM, and there’s a MicroSD card slot to boost the internal storage space.

There’s a Sony 13-megapixel camera with an f/2.2 aperture and phase-detection autofocus on the back of the M5s, and a 5-megapixel selfie cam with a wider f/2.0 aperture above the screen. Also on the front of the phone is Meizu’s mTouch fingerprint scanner, which has always been a strong performer, and promises to unlock the phone in just 0.2 seconds. Power comes from a 3,000mAh battery, and with it the first major difference between the M5s and the older M5: fast charging. The 18W system takes the M5s’s battery from flat to 56 percent capacity in 30 minutes, almost matching phones like the OnePlus 3T.

Finally, there’s the reason anyone wanting a cheap Meizu phone would be advised to pick the M5s over the older version: the presence of Android. The M5 used Yun OS, which is developed for China by Chinese e-commerce giant Alibaba, as its operating system. Both phones have Meizu’s Flyme user interface over the top.

Meizu will release 16GB and 32GB versions of the M5s, in a choice of four colors, with prices converting to a very reasonable $115 and $145, respectively.