Google’s $5000 Jamboard collaborative whiteboard is now available in the U.S.
This is the future of office collaboration — but it’ll cost you.
Google’s enterprise-focused smart whiteboard, called the Jamboard, is now available for those in the U.S. who are ready to increase collaboration and move beyond smelly dry erase markers. The Jamboard at its core is a 55-inch 4K touchscreen panel powered by an NVIDIA Jetson TX1 board, focused on super-fast touch response with low-latency stylus input as well.
The Jamboard hooks into G Suite accounts to pull in files and documents from Google Drive, Docs, Sheets and Slides, letting everyone work and draw together at the same time. Everything is saved constantly to the cloud, so you can pull in members to collaborate remotely, or move between Jamboards and computers seamlessly.
The Jamboard is a cool $4999 to buy, but that’s really just the starting price. If you want to use it on a rolling stand rather than mounting it on a wall, that’ll set you back another $1199 (or $1349 after September 30). There’s also a yearly service fee, which comes in at $300 if you sign up early (again, before September 30) or $600 per year thereafter. That’s of course above and beyond the fees you’re paying for your G Suite enterprise app platform.
$5000 is a small price to pay if it dramatically improves collaboration at work.
That’s a whole lot of money from an individual’s point of view, but for even a small business it could be just a drop in the bucket of the yearly technology budget. It only takes a couple of minutes of video from Google to show just how much something like the Jamboard could improve collaboration by removing the friction of moving data between platforms and having it visualized in a room.
You’ll have to be in the U.S. to buy a Jamboard today, but it’ll be coming to the UK and Canada this summer with an expansion further in the coming months.
You won’t empty your wallet for studio-grade headphones with the Status Audio HD One at $30
Our friends at Thrifter are back again, this time bringing you studio-grade headphones for much less!
Wireless headphones have grown in popularity lately, but wired headphones are still invaluable for many since they don’t need to be charged and offer a more reliable connection. Over-the-ear headphones can range in price from around $10 to over $300, depending on what you are looking for. If quality is your main concern, but you don’t want to go broke buying a new set of headphones, we’ve got just the deal for you. Status Audio, a company that specializes in high-quality affordable headphones, is currently offering its HD One wired headphones for just $29.92 with coupon code THRFTR32.

So, you’ve never heard of Status Audio? Don’t worry: these headphones have a 4.3 out of 5-star rating at Amazon with over 700 reviews, and the company’s other headphones, the CB-1, maintain a 4.8 out of 5-star rating with over 250 reviews. With the HD One you’ll get:
- THE STATUS AUDIO ADVANTAGE – No logos, no celebrities, just sound. Join the 100,000+ customers enjoying premium sounding, beautifully constructed headphones at a fraction of the price of the big name competition.
- SLEEK AND MINIMAL – Prefer not to be a walking billboard? The HD One headphones are 100% free of logos and branding, and are coated in a rubberized matte finish that doesn’t show fingerprints or reflect light. Perfect for a minimal look or for customizing.
- STUDIO GRADE – 40mm drivers deliver an exciting sound signature that responds to all genres of music. Deep, articulated bass with a sizzling high end. Exceptional detail and clarity throughout the entire frequency spectrum. This is the big, crisp sound you’ve been looking for.
- FIT AND FINISH – On-ear design that is lightweight and comfortable for long-term use. A unique foldable design collapses into a small form factor, ideal for travel and commuting. Substantial, yet highly portable, these are your go-to headphones for your morning commute and throughout the day.
You don’t have to take our word for it that you’ll be impressed with the quality of these, though. Try them for yourself now for just $29.92, which happens to be the best price we’ve tracked on them, and let your ears show you just how good they are.
See at Amazon
For more great deals be sure to check out our friends at Thrifter now!
Twitch adds video speed controls to slow down that amazing play
Now that Twitch is making a bigger deal out of pre-recorded videos, it’s competing more directly with services like YouTube — and that means adding the kind of playback controls you take for granted elsewhere. Accordingly, the service has introduced a reworked video settings menu that adds speed controls. You can slow all the way down to 0.25X to relive a game-winning moment, or ramp up to 2X if you’re skipping through the slow parts.
The revamp also streamlines access to the most common settings and groups some that were otherwise scattered. These additions aren’t likely to change your viewing habits in the near future, of course. Twitch may be diversifying beyond games, but the odds are still high that you’ll head elsewhere if you’re watching non-gaming clips. However, you could see this as laying groundwork. The more Twitch expands its catalog, the more people it will attract beyond Twitch’s core gamer audience. It needs to be ready when those newcomers arrive, and even little features like this could help win them over.
Source: Twitch (Medium)
Facebook defends content policy after guidelines leak
Facebook has caught its share of flak from critics of its leaked content rules. Some are worried that it’s being too lenient on abusers, while others are convinced it’s engaging in heavy-handed censorship. And apparently, Facebook has had enough — it wants to explain its rationale. Global policy head Monika Bickert has published an article that defends the social network’s guidelines while acknowledging the tricky nature of governing a site that serves nearly 2 billion users. The company doesn’t always get things right, Bickert explains, but it believes that a middle ground between freedom and safety is ultimately the best answer.
Bickert contends that Facebook has to be “as objective as possible” in order to have consistent guidelines across every region it serves. Universally accepted legal standards are rare, she says, and Facebook has to balance “tensions” on delicate issues like violent imagery or nudity. What’s acceptable in one region can be horribly offensive in another. If anything, the uproar from both pro- and anti-censorship camps suggests Facebook is getting things right — it’s a sign the company isn’t “leaning too far in any one direction,” according to Bickert.
That isn’t to say that the internet giant has found a perfect balance. It’s aware that it can be difficult to understand the context behind a post: is that violent video going to produce copycats, or should it stay up to raise awareness of a problem? And Bickert stresses that Facebook’s guidelines aren’t set in stone. It can and will change its standards as it learns new things. For example, it will leave a live suicide attempt running so that people know the streamer needs help. And Facebook will still “end up making the wrong call” at times, Bickert says.
There are definitely points of contention in the piece. The policy chief maintains that Facebook keeps some of its guidelines secret so that it doesn’t prompt people to “find workarounds.” That’s a real concern, to be sure, but it also means that Facebook’s decision-making process sometimes appears arbitrary. Also, Bickert likely won’t satisfy everyone when she says that Facebook is fine with “general expressions of anger,” but not specific threats. What if that generic rage amounts to a tangible threat based on where it’s posted? Just because someone isn’t calling you out by name doesn’t make it any less scary when they talk about violence in your comments.
Still, the piece at least recognizes that this isn’t a simple problem with simple answers. If it’s even possible to find a harmonious set of content rules, it may take a long time to create them. And Facebook is at least doing something about its imperfections — the tech firm is hiring 3,000 new moderators who could improve its review process. The concern is that Facebook’s concept of neutrality might not hit the mark. While it has a strong incentive to play it safe and respect your freedom of expression as much as possible, it could inadvertently put some people at risk in the process.
Via: Business Insider
Source: Facebook Newsroom
New Microsoft Surface Pro (2017) vs Surface Pro 4: What’s the difference?
Microsoft has updated its Surface Pro line, with the introduction of the new Surface Pro.
The latest model, which would be the Surface Pro 5 if Microsoft had kept the numbers, packs in more power and a better battery than its Surface Pro 4 predecessor, but what else has changed? Here’s how the new Surface Pro (2017) compares to the Surface Pro 4.
Microsoft Surface Pro (2017) vs Surface Pro 4: Design
- New Surface Pro said to be lightest Pro ever at 768g
- Expected to have similar footprint to Pro 4 at 292.1mm x 201.4mm
- New hinge system on new Surface Pro
The new Microsoft Surface Pro looks similar to its predecessor, but it has a new hinge system on the kickstand which allows it to be used in Surface Studio mode. That means it lays flat to work directly with the new Surface Pen.
It’s also lost a bit of weight, touting itself as the lightest Surface Pro ever created at 768g. Presumably this applies to the more powerful models too, as otherwise the base Pro 4 is actually lighter by a couple of grams, not that we are counting.
The Pro 4 weighs 766g, or 786g for the more powerful models. This is for the tablet only with no Type Cover included, making it marginally lighter than the older 794g Pro 3. Measurements have yet to be revealed for the new Surface Pro but we suspect they won’t be too far off the Pro 4’s front-on footprint of 292.1mm x 201.4mm.
The Pro 4 features a magnesium chassis measuring 8.4mm thick, which was 0.6mm thinner than the Pro 3. It could have been slimmer still, Microsoft claimed at launch, but for the sake of keeping the full-size USB 3.0 port intact that’s about as slim as it could be, which is likely to be the same for the new Surface Pro.
- Microsoft Surface Pro 4 review
Microsoft Surface Pro (2017) vs Surface Pro 4: Display
- New Surface Pro and Pro 4 have same 12.3-inch screens
- Same 2736 x 1824 pixel resolution on new Surface Pro and Pro 4, 267ppi
- Pro 3 had 12-inch screen, 2160 x 1440 resolution
The new Surface Pro has the same size screen as the Surface Pro 4, measuring 12.3 inches diagonally, which is larger than the 12-inch panel in the Pro 3. So if you’re considering upgrading from the Surface Pro 3, you’ll get a slightly larger screen in a similar footprint.
The resolution of the new model is 2736 x 1824 pixels, which results in a pixel density of 267ppi. This is also the same as the Surface Pro 4 but slightly sharper than the 2160 x 1440 resolution of the older Pro 3. There are no varying resolution options available for any of the models.
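As a quick sanity check, the quoted pixel densities follow directly from each panel’s resolution and diagonal size: divide the diagonal resolution in pixels by the diagonal in inches. A minimal Python sketch (the function name is ours, not Microsoft’s):

```python
import math

def pixel_density(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

# New Surface Pro / Pro 4: 2736 x 1824 on a 12.3-inch panel
print(round(pixel_density(2736, 1824, 12.3)))  # 267

# Surface Pro 3: 2160 x 1440 on a 12-inch panel
print(round(pixel_density(2160, 1440, 12.0)))  # 216
```

Both figures round to the densities quoted above, confirming the new model and the Pro 4 share an identical 267ppi panel while the Pro 3 sits at roughly 216ppi.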
Microsoft Surface Pro (2017) vs Surface Pro 4: Hardware
- New Surface Pro has Intel Core 7th-gen rather than Pro 4’s 6th-gen
- RAM options between 4GB and 16GB on both Pro models, SSD between 128GB to 1TB
- New Surface Pro has 50 per cent more battery life
The new Surface Pro features Intel Core 7th-generation processors, while the Surface Pro 4 features Intel Core 6th-gen Skylake processors, which was said at the time of launch to offer 30 per cent more power than the Pro 3.
The base model of the new Surface Pro features an Intel 7th-gen Core m3 processor, configurable up to a faster Core i7 option. The base model of the Surface Pro 4 features an Intel 6th-gen Core m3 processor, also configurable up to a Core i7 version.
RAM options come in abundance too: the entry-level new Surface Pro and Pro 4 both feature 4GB RAM, configurable up to 16GB. This wasn’t an option for the Pro 3, as its memory was part of the main board and couldn’t be upgraded, even manually.
The new Surface Pro and Pro 4 storage options start at 128GB SSD, through 256GB, 512GB and even a maximum 1TB option. There is also a microSD card slot on both the new Surface Pro and the Surface Pro 4, however, so paying the extra for the 1TB option might not be worthwhile for many.
In terms of battery, the new Surface Pro is claimed to offer 50 per cent more, at 13.5 hours. The Surface Pro 4 by comparison offers 9 hours and it was one of the things that let it down. There is an 8-megapixel rear camera and 5-megapixel front camera on both the new Surface Pro and the Surface Pro 4, both of which are capable of 1080p video recording.
Microsoft Surface Pro (2017) vs Surface Pro 4: Accessories
- New Surface Pen
- New Type Cover
- Old Type Cover still available, though not clear if compatible with new Surface Pro
The new Surface Pro has the option of several accessories, all of which cost extra with none included in the box. There is a new Type Cover called the Surface Pro Signature Type Cover, which is made from Microsoft’s Alcantara material. It has a magnetic interface, doubles up as a protective cover, comes in three colours and it is compatible with the new Surface Pro, as well as the Pro 4 and Pro 3.
The Type Cover that launched alongside the Surface Pro 4 is also still available. This keyboard also clips onto a magnetic port and doubles-up as a protective cover too. It will also work with the Pro 3 and Pro 4 models, but it is not clear if it will work with the new Surface Pro as yet.
There is also a new Surface Pen, which comes in four colours and offers four times the sensitivity of its predecessor.
The new pen is said to be better than ever, offering precision ink on one end and a rubber eraser on the other, as well as tilt for shading, greater sensitivity and “virtually no lag”. The new Surface Pro also offers on-screen support for the Surface Dial.
Microsoft Surface Pro (2017) vs Surface Pro 4: Software
- Windows 10 Pro software on both
The new Surface Pro and the Surface Pro 4 are up to date and therefore come with the latest Windows 10 Pro operating system.
That means the full bevy of software, from Office to Cortana and beyond, so neither is better than the other when it comes to software.
Microsoft Surface Pro (2017) vs Surface Pro 4: Price
The new Surface Pro (2017) will start at £799 and will ship on 15 June.
The Surface Pro 4, by comparison, did start at £749 in the UK and $899 in the US, but it has seen a price drop since the recent announcement, now starting at £675. Both are still more expensive than the Pro 3 however, which started at £639.
Incremental upgrades will each add additional cost, although to what degree it’s not yet known – the official Microsoft Surface Pro page is live, but it was not accepting pre-orders at the time of writing.
Microsoft Surface Pro (2017) vs Surface Pro 4: Conclusion
The new Microsoft Surface Pro is said to be rethought from the inside out. It offers a similar design to its predecessor but it delivers more power and a better battery life, while also being lighter in build.
It is more expensive though and the Surface Pro 4 has also seen a reduction in price so for some, the older model might be the better option for budget.
Blizzard launches an ‘Overwatch’ league for amateurs
Blizzard’s colorful shooter may be one of the most popular multiplayer games out right now, but its eSport scene hasn’t exactly got off to the best start. With rumors circulating that a slot in Overwatch’s official League costs between $15 million and $25 million, pro teams have started to abandon the game in droves. Now, in a bid to win them back, Blizzard has announced a new entry-level league for the game called Overwatch Contenders.
Blizzard describes Contenders as a tournament for “aspiring Overwatch League professionals”. Instead of just showcasing pros, this new battleground allows any amateur team to sign up and duke it out in an online-only qualifying match. These games then decide the best eight teams from each region, earning the winners a place in Overwatch Contenders season zero. Once locked in, competing teams battle it out for a juicy prize pool of $50,000 — so it might be time to start studying that meta.
If you haven’t already got a killer squad though, you might be out of luck, as season zero registration has already started. Qualifying matches take place next month in North America and Europe, and if you’re not playing in them, you can watch people’s dreams get crushed live on Twitch.

Unsurprisingly, the Contenders league doesn’t end at season zero. The best six North American teams will then join existing teams Envy and Rogue (who competed in South Korea’s APEX Tournament) for the launch of Overwatch Contenders season one. After squaring off, the four best squads will then go on to fight for a chance to win a hefty $100,000 prize. Luckily for EU players, they also have their own season one tournament, with participants whittled down into the four best teams from each region. These EU squads will then battle it out in an offline tournament, where the winning European team will also earn $100,000.
With the main Overwatch League losing teams because of exclusivity, this all sounds like fairly promising stuff. It also seems likely that talented players in the Contenders could potentially land themselves a place on the pricey teams of the big boy Overwatch League, which could help to inject more value into the money-haemorrhaging main league. Whether the Contenders will be entertaining enough to draw in the viewers and therefore the sponsors, however, remains to be seen. Still, given the lengths Blizzard has gone to before, we’re feeling fairly optimistic that these matches will be just as fun to watch as the game is to play.
Source: Overwatch Blog
Apple plans to make Safari scrolling a lot smoother
Apple is making changes to its mobile Safari browser that will make scrolling work smoother across all websites, according to posts on Hacker News and Daring Fireball. Right now, regular pages scroll differently on iOS Safari than sites like Reddit that use AMP (Google’s accelerated mobile pages). That’s because Google uses an iOS technique that allows AMP pages to override the default scrolling, making pages slicker and faster to browse.
The discussion came up, amusingly enough, because Daring Fireball’s John Gruber bashed AMP in a separate post, saying “other than loading fast, AMP sucks.” To make his point, he highlighted Safari’s inconsistent behavior depending on whether pages use AMP encoding or not.
In a discussion on Hacker News, an AMP team member pointed out that “we didn’t implement scrolling ourselves,” but rather just made it so that scrolling operates within the website (in an “overflow”), rather than the browser itself. It actually filed a bug with Apple, asking it to make the scroll inertia for AMP pages the same as for regular sites.
To the team’s surprise, Apple agreed to do the opposite: make built-in Safari scrolling behave as it does with AMP pages. The reason for that, an Apple engineer called OM2 said in the Hacker News discussion, is because Google’s implementation actually matches scrolling behavior elsewhere in iOS. Scrolling as implemented by Apple engineers in Safari, however, is slower — “an intentional decision made long ago,” said OM2. The team decided that original reason is no longer valid, so scrolling in the next version of Safari will match what Google does with AMP.
While that will no doubt soothe Safari users, Apple’s team still isn’t thrilled about how AMP implements scrolling. As OM2 points out, AMP breaks certain key features in Safari, like tapping on the top of the screen to scroll to the top of the page and auto-hiding the top and bottom bars.
And while it saves data and boosts speed, AMP has its share of users and developers who dislike it, not only because of tech issues but the fact that it puts so much control into Google’s hands. As Gruber puts it, “if you are a publisher and your web pages don’t load fast, the sane solution is to fix your fucking website so that pages load fast, not to throw your hands up in the air and implement AMP.”
Via: Daring Fireball
Source: Hacker News
Twitter’s new interactive cards help you slide into brands’ DMs
Twitter has been working hard to find new ways for brands and advertisers to connect with its users, especially since becoming a public company in 2013. Hey, shareholders need to make that money. But while customer service experiences are clearly a major focus for the social network, its latest business feature is more about brands luring you into their DMs with “fun and engaging” promoted tweets. With the new Direct Message Card, advertisers can create up to four customizable actions and pair them with images or videos to, hopefully, get you to click and see their pitch.
For example, Patrón Tequila, one of the brands experimenting with Direct Message Cards, is launching a campaign that lets people engage with its messaging bot, Bot Tender, by simply answering a few questions about where they’d like to consume some cocktails. If you click to respond, Patrón then creates personalized drink recommendations based on your answer to whether you prefer to drink tequila at a pool, party, cookout or somewhere in the mountains. What’s next? Wendy’s asking if you dip french fries in a vanilla or chocolate shake?

Facebook can keep your trash talk private during live events
Facebook wants to be a serious destination for online video, and it’s fleshing out its Live streaming experience to help it get there. Consider the process of talking about up-to-the-minute events unfolding in a Facebook Live stream. Rather than just throwing your comments into a huge, messy pool of commingled conversations, you’ll soon be able to privately chat with others in a separate space while the Live feed plays on.
Facebook unimaginatively calls this feature “Live Chat with Friends,” and it works sort of like a real-time, text-based viewing party. Switching between these private conversations and the broader, public one is said to be dead-simple, but it’ll be a while before we get to judge that for ourselves. Though the feature is conceptually very simple, Facebook is still testing it in its mobile apps and expects to push it live for more people later this summer.
And if you’re at the helm of a Facebook Live broadcast yourself, you can now drag viewers or guests into the stream. (To be clear, the person you’ve invited can decline.) Facebook Live aficionados might remember that the company launched this feature for public figures with verified accounts last year, but that restriction is officially finito. Go forth and stream with friends — your Live will turn into either a picture-in-picture or a side-by-side affair, depending on how your guest is holding their device.
These features are arguably helpful to content creators and streaming junkies, but the biggest questions surrounding Facebook Live right now still deal with proper curation and policing. The first five months of 2017 alone have seen multiple horrific crimes streamed live to Facebook viewers, leaving many to wonder about the processes in place for pulling offensive content and what constitutes appropriate responsibility for such a massive company. For what it’s worth, CEO Mark Zuckerberg has said the social giant will add 3,000 people to Facebook’s community operations team “to review the millions of reports” submitted weekly, and wants to “improve the process for making it happen fast.” It’s nice that Facebook wants to make communication — both public and private — easier, but it’s also clear the changes Facebook Live really needs are behind-the-scenes.
We’re not getting Luke Skywalker’s prosthetics any time soon
In 1937, robot hobbyist “Bill” Griffith P. Taylor of Toronto invented the world’s first industrial robot. It was a crude machine, dubbed the Robot Gargantua by its creator. The crane-like device was powered by a single electric motor and controlled via punched paper tape, which threw a series of switches controlling each of the machine’s five axes of movement. Still, it could stack wooden blocks in preprogrammed patterns, an accomplishment that Meccano Magazine, an English monthly hobby magazine from the era, hailed as “a Wells-ian vision of ‘Things to Come’ in which human labor will not be necessary in building up the creations of architects and engineers.”

The robot Gargantua – credit: Meccano Magazine
In the 80 years since, Gargantua’s progeny have revolutionized how we work. They’re now staples in agriculture, automotive, construction, mining, transportation and material-handling. According to the International Federation of Robotics, the United States employs 152 robots for every 10,000 manufacturing employees — though that lags behind South Korea’s 437, Japan’s 323 and Germany’s 282. Over the past two decades, this push toward automation helped boost US manufacturing output by almost 40 percent, an added value worth $2.4 trillion annually. Even after losing roughly 5.6 million manufacturing jobs between 2000 and 2010 — only 15 percent of which were the result of international trade — America’s manufacturing base manages to produce more products (and more valuable products) than ever.
But as efficient as these machines are, they still can’t hold a candle (literally) to the dexterity, sensitivity and grip strength of the human hand. Take the industrial robots that work on automotive-manufacturing lines, for example. Their jobs are almost exclusively coarse assembly work — lifting body panels into place, spray painting and welding — while final assembly tasks, like wiring and installing interior panels, fall to more-dextrous humans. But what if these industrious automatons were able to perform the tasks that only humans can currently do? What if we could design a robotic hand that’s just as good as a biological one? A number of researchers around the world are looking to do just that.
The primary challenges hampering the creation of a truly human-like robotic hand can be broken down into three distinct parts: the dexterity and strength of the physical hand itself; its sensitivity; and the system that moves it. Dexterity, when it comes to robot hands, determines how readily it can manipulate objects around it. Similarly, sensation refers to how responsive the hand is to stimuli, and grip strength ensures that the robot can hold onto whatever it’s trying to grab without crushing it. These three capabilities must operate in unison if the overall system is to work efficiently.
Like this, but with robots.
The human hand’s dexterity — specifically, the opposable thumb — has proved nearly as important to our evolution as our cognitive abilities, according to researchers from Yale University, and could potentially impart the same benefits to robot-kind. However, providing mechanical grippers with the same flexibility and adaptability as the human hand has been incredibly difficult and expensive. That’s why a vast majority of current robotic manipulators still use the two-pronged pincer system or are highly specialized to a singular task. “You can get a relatively simple gripper that’s inexpensive, or you could get a very dextrous hand for two orders of magnitude more,” said Jason Wheeler, a roboticist at Sandia Labs who helped develop the hand used by Robonaut 2 aboard the International Space Station. “And there aren’t a whole lot of options in between.”
There are a number of engineering limitations slowing the development of truly dextrous robot hands. First, modern actuators — the components that activate the fingers to open and close — are still pricey and often prohibitively heavy. This limits the degrees of freedom (DoF) a robo-hand can have. The human hand has 27 DoF: four in each finger, five in the thumb and another six in the wrist. As Wheeler points out, if you want 27 DoF in a robot hand, you’re going to need 27 individual actuators, as well as somewhere to put them. “Most of the muscles for the human hand are up in the forearm, so if you’re designing a forearm to go with it, you can put most of those actuators up there,” he explained.
What’s more, Wheeler continued, “the force capability and the force density of human muscle is still outpacing what we can do with electromagnetic actuators, so the size and weight of the hand and the force and torque are limited because we don’t have actuators that can match human muscles.” Simply put, the greater the flexibility of a robo-hand, the weaker it is going to be, because we can’t pack as much strength into the same amount of space as human muscle provides. Plus, the more components you pack into a robotic hand, the more often things will wear down, fail and require service.
Despite these challenges, a number of public and private research groups are working to improve the strength and dexterity of robot hands. The Sandia Hand from Sandia National Labs, which was born out of Darpa’s Revolutionizing Prosthetics program, is one such example. It was initially designed as a stand-in for IED (improvised explosive device) disposal technicians so they wouldn’t have to don advanced bomb suits and put themselves in harm’s way. In this case, anthropomorphic function was a key design point because the robot hand had to operate in a chaotic real-world environment rather than in an automotive assembly line or warehouse. “Because of the number and diversity of those tasks that were required and they were all things that humans would normally do, having an anthropomorphic hand made sense,” Wheeler explained.
But even with a high degree of dexterity, robotic hands can’t achieve the same level of performance as biological ones if they don’t possess a sense of touch. The human hand is jam-packed with roughly 17,000 nerves that allow it to recognize not only that it is physically touching an object, but also nearly instantly sense that object’s weight, firmness, shape, temperature and texture.
“What your hand can sense is a combination of the vibrations when you stroke the object, the deformation of your fingertip, which informed shear and roll forces, as well as points of contact, like the curvature of an object,” said Gerald Loeb, CEO and co-founder of robotics startup, Syntouch. “You can also tell how heat is exchanged between your hand and the object.”
Our sense of touch is what enables us to feel when a glass is slipping from our grasp and tighten our grip without squeezing down hard enough to shatter it. Not so with robotic hands, Dr. Ravi Balasubramanian of Oregon State University explained during a recent interview. “Even if there are 10 or 20 sensors on a robot hand, the designer will be extremely nervous when it is interacting with the outside world.”
Part of this concern with operating outside of lab conditions grew from DARPA’s 2006 attempt to develop more-functional prosthetics. Dubbed “Revolutionizing Prosthetics,” this program effectively sought to build a robotic hand as capable as Luke Skywalker’s arm from The Empire Strikes Back. DARPA highlighted the differences between the current state of robotics and what science fiction thinks is possible “and said ‘these should do everything that that does,’ essentially, everything that a human hand should do,” Loeb explained.
But the problem is that a decade ago, the state of the art was still quite crude. “They weren’t just providing crude information, they’re providing the wrong information,” Loeb said. The early touch sensors employed in the DARPA program were measuring force and profilometry, which, as the name implies, physically maps a surface’s profile. It’s the same premise as pin art impression boards. “That’s interesting,” Loeb quipped, “but not actually what human hands do.”
As such, Loeb and his team at Syntouch have set about tackling the challenge of getting all that sensory information from a single device that’s the same size as a fingertip and robust enough to be used in the real world. The result is a proprietary system called the BioTac Toccare. The BioTac’s hardware is an anthropomorphic mechanical finger. All of the tactile sensors are embedded where the bone would be and are surrounded by an elastic skin. The beauty of this setup is that the external skin can easily and inexpensively be replaced when it wears down without having to swap out the sensors, which makes it far more durable than other similar systems.
The raw data generated by the BioTac are then interpreted by Syntouch’s software, called the Syntouch Standard. It’s a lot like the conventional RGB standard, but for tactile data. “Developing a number of quantifiable dimensions [of touch] … allows us to quantify each of the critical elements that make up what something feels like,” Loeb explained. The standard measures 15 metrics in all, from coarseness and resistance to thermal cooling and adhesiveness.
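A loose analogy in code: like an RGB triple for color, each “feel” can be reduced to a fixed-order vector of scores, and two materials can be compared by the distance between their vectors. The dimension names and numbers below are illustrative assumptions (four dimensions instead of the standard’s 15), not Syntouch’s actual, proprietary scales:

```python
import math

# Illustrative sketch only: four of the article's named dimensions,
# each scored 0..1, stand in for the 15 metrics of the real standard.
DIMENSIONS = ["coarseness", "resistance", "thermal_cooling", "adhesiveness"]

def feel_vector(readings):
    """Order raw per-dimension scores into a fixed-length vector."""
    return [readings[d] for d in DIMENSIONS]

def feel_distance(a, b):
    """Euclidean distance between two feels; smaller means 'feels more alike'."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

denim  = feel_vector({"coarseness": 0.7, "resistance": 0.4,
                      "thermal_cooling": 0.3, "adhesiveness": 0.1})
silk   = feel_vector({"coarseness": 0.1, "resistance": 0.2,
                      "thermal_cooling": 0.5, "adhesiveness": 0.2})
velvet = feel_vector({"coarseness": 0.2, "resistance": 0.2,
                      "thermal_cooling": 0.4, "adhesiveness": 0.2})

# With these made-up scores, velvet lands closer to silk than to denim.
print(feel_distance(velvet, silk) < feel_distance(velvet, denim))
```

The point of such a standard is exactly the RGB parallel: once every material is a vector on agreed axes, “what does this feel like?” becomes a reproducible number rather than a description.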
And while this may sound like science fiction, the company has already integrated its tactile sensors into prosthetic arms. “We at Syntouch have been focusing on reflex loops,” Loeb said. Those loops cover actions like picking up a glass or grabbing your phone without crushing it: all the stuff you do without actively thinking about it. With funding from the Congressionally Directed Medical Research Programs, Syntouch is developing those reflexive subroutines, building them into arms and giving the prosthetics to military veterans, who take them home and try them out in real-world situations before keeping the arm long-term (essentially a try-before-you-buy arrangement). “Our preliminary work at our facilities has shown that amputees respond very well to having a reliable grasp,” Loeb said. “It’s convenient, it doesn’t distract them, and they’ve been really happy with it.”
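As a rough illustration of what such a reflex loop means in control terms (Syntouch’s actual subroutines aren’t public, and the threshold and step values here are invented), a slip reflex can be as simple as “vibration spike, so tighten grip slightly” running inside the control loop, with no deliberate decision step:

```python
# Toy slip reflex: a spike in fingertip vibration is a common slip
# signature, so each control tick nudges grip force up when one appears.
SLIP_VIBRATION_THRESHOLD = 0.6   # invented value, normalized 0..1
GRIP_STEP = 0.1                  # how much to tighten per tick
MAX_GRIP = 1.0                   # never squeeze past this

def reflex_step(vibration, grip):
    """One control-loop tick: react to a slip cue, otherwise hold steady."""
    if vibration > SLIP_VIBRATION_THRESHOLD:
        grip = min(MAX_GRIP, grip + GRIP_STEP)
    return grip

grip = 0.3
for v in [0.1, 0.2, 0.7, 0.8, 0.1]:   # simulated vibration readings
    grip = reflex_step(v, grip)
print(round(grip, 2))  # two slip events, so grip tightened twice: 0.5
```

The key design point is that the loop runs continuously and locally, the way a human grip reflex fires before you are consciously aware the glass moved.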
The ability to train artificial hands to react reflexively has proved a surprisingly difficult challenge to overcome. “If you are holding a glass and it begins slipping, we don’t know how to mimic that in a robot hand,” Balasubramanian said. This is a symptom of a larger roadblock in the development of human-like robotic hands: our general inability to control them, especially when used as prosthetics.
For pure robotic applications, command and control is fairly straightforward. If you want the hand to have a certain number of degrees of freedom, perform a dextrous task or incorporate sensory-feedback data, you simply increase the processing power of the computer controlling it. Even in applications where physical space, high-bandwidth feedback or computational power is limited, there are technical workarounds.
For example, the Robonaut 2, which is serving aboard the ISS, is “under-actuated”: it has more joints than it can actively control. By coupling these extra joints to actively controlled ones in preprogrammed ways, its hands gain relative dexterity without needing more processing power. Still, as Wheeler points out, “there’s not a lot of value in adding more than three or four degrees of freedom per finger unless you have a really good, smart way to control them. So for most applications, some of those more subtle degrees of freedom used for very fine dextrous tasks, we don’t have the capability.” And for tasks that do require a high level of dextrous control, teleoperation systems like Shadow Robot’s “cyberglove” can drive the hand remotely, using motion capture to precisely mimic the user’s hand motions and relay those commands to the hand.
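The coupling idea can be sketched in a few lines: one actuator command fans out to several joints through fixed ratios, so the extra joints move in a preprogrammed way rather than each needing its own motor and controller. The ratios below are invented for illustration, not Robonaut 2’s real kinematics:

```python
# Under-actuation sketch: one actuator drives three finger joints.
# The passive joints follow the driven one at fixed ratios, so the
# finger curls naturally from a single command.
COUPLING = [1.0, 0.8, 0.5]   # hypothetical per-joint follow ratios

def finger_joint_angles(actuator_angle_deg):
    """Map a single actuator command to all three joint angles (degrees)."""
    return [actuator_angle_deg * r for r in COUPLING]

print(finger_joint_angles(40.0))  # → [40.0, 32.0, 20.0]
```

One command, three coordinated joint motions: that is the trade being described, trading independent control of each joint for simplicity and a lighter computational load.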
Unfortunately, when it comes to using robotic hands as prosthetics, neither of these solutions is really feasible right now. “The single biggest issue with a prosthetic hand is control,” Wheeler explained. “Right now, there’s almost no sensory feedback. And there’s only two to three control channels at the most.” That, he said, is the single biggest limiting factor: “How do you get the command signal and sensory feedback information back and forth?” That’s where neural interfaces come in.
Interfaces like the one that Elon Musk has been making headlines with in recent weeks. He’s touting a partial brain-machine interface (BMI) that, when it is released in the next four to five years, could be used to treat Parkinson’s, depression and epilepsy. However, he is far from the first to work on bridging the divide between robotic hardware and the squishy bio-computers that reside in our skulls.
In fact, there have been a number of significant developments over the past 20 years for connecting neural interfaces to prosthetics. In 2015, researchers from the EPFL created the e-Dura, an implantable “bridge” that physically crosses the severed point of a patient’s spinal cord. Last November, the EPFL improved upon that design with a wireless version that “uses signals recorded from the motor cortex of the brain to trigger coordinated electrical stimulation of nerves in the spine that are responsible for locomotion,” David Borton, assistant professor of engineering at Brown University, said in a press statement. This is very similar to the system that a University of California, Irvine, team used to partially restore a paralyzed man’s ability to walk. And just last week, researchers at Radboud UMC in the Netherlands wired up a Bluetooth connection to an amputee’s arm stump to create the world’s first “click-on” prosthetic.
But don’t get too excited for our Ghost in the Shell future, at least not yet. While robotic hand fabrication techniques and cutting-edge sensory capabilities are beginning to put these devices on equal footing with their human counterparts, the ability to integrate them into biological systems and seamlessly control them remains in its infancy. Even though Luke Skywalker’s robo-hand has been a cornerstone of the science-fiction universe since The Empire Strikes Back, don’t expect it to become part of scientific reality for at least another couple decades.
Welcome to Tomorrow, Engadget’s new home for stuff that hasn’t happened yet. You can read more about the future of, well, everything, at Tomorrow’s permanent home and check out all of our launch week stories here.