Android phones are safer than you think, says Google’s head of Android security
The idea that the Android platform is insecure is popular and persistent. And quite possibly wrong.
Barely a week goes by without a new headline about a freshly uncovered vulnerability or new malware affecting millions of devices.
These issues are exacerbated by the fact that the Android ecosystem is complicated. Fragmentation makes it incredibly difficult to update the platform. A glut of different device manufacturers build thousands of different phones and tablets running different versions of Android. As a result, updates with security fixes in them take months to roll out — or worse, never do at all. Too many manufacturers only update their flagships, leaving known vulnerabilities in older and lesser devices that could put users at risk.
Consider a vulnerability like Stagefright, which could give hackers control of an Android device through malicious code in an audio or video file. Reports suggested up to 95 percent of devices were vulnerable. But how many were actually affected?
“Here we are a year and a half in, almost going on two years since we first found out about it and we still don’t know that anybody’s actually affected,” Adrian Ludwig, Director of Android Security, told Digital Trends.
The concern was the rollout. Google worked out fixes relatively quickly and pushed them to its own Nexus line of devices immediately, but patches for other devices came out at the discretion of the manufacturers.
That means, if you have a Google Pixel with the latest Android 7.0 Nougat, you’re benefitting from the latest security, but someone with a phone running KitKat (20 percent of Android devices) that hasn’t seen an update for a year or more could have been at risk.
It’s a thorny issue that’s not easily solved, but the Android security team has worked hard to reduce the risk for users. Scary statistics make for good headlines, but does Android deserve the reputation it has for insecurity?

Android Security Chief, Adrian Ludwig
“I do think we have a bit of a perception problem, but it’s very different from actual user risk,” Ludwig explained. “The cryptographic work that we’ve been doing, the sandboxing that we’ve been doing, and a lot of the work to make exploitation more difficult is all coming together nicely.”
Digital Trends talked with Ludwig on Google Hangouts to find out the current state of Android security, ask whether people should really be concerned about headline vulnerabilities and malware, and learn what Google is doing about fragmentation to enable wider security updates.
Digital Trends: Is Android really insecure?
Adrian Ludwig: No, it’s not insecure. There are a lot of things we’ve done that have moved expectations forward over the last couple of years.
For Mac or Windows, you had to have third-party antivirus protection, but we said we’re going to do that for everybody and make it free.
Application sandboxing is a relatively new concept in the world of Android security – the idea that applications don’t have access to all your user data, but only to their own data, is entirely new. It’s not something that exists on Mac, and it’s not something that exists on Windows.
“We have a bit of a perception problem, but it’s very different from actual user risk.”
Then there’s device encryption. Most enterprises don’t have it turned on all the time. An expectation has been set in the mobile space that everything should be encrypted all the time, and there’s even an expectation that it’s going to be encrypted so well that it’s going to be difficult even for a sophisticated attack to get access to that data without user authorization.
We’ve also learned a lot about how the bad actors work and what they’re trying to do, and we’re now at a little bit of an inflection point. For the first few years we were learning, building our understanding, and improving our technology stack. Now we can keep up with the bad actors. Malware rates, for example, are relatively flat across the last three or four years, but I think this is the year where we’re going to see them drop, perhaps drop significantly, because we’ve gotten to the point where we have enough skill and experience. We’re now able to move more quickly than the actors, catch them sooner, and take action more effectively across the entire ecosystem than we could before.
I think we’re at a turning point where even by Android standards we’re going to start to see pretty significant improvements with regards to malware.
There’s still more to do, but it’s easy to forget how far we’ve come over the last five years.
We see a lot of reports about vulnerabilities with frightening statistics. What’s the realistic risk of your Android device being exploited or hijacked? For example, something like Stagefright was said to potentially impact 95 percent of Android devices. Do we have an idea how many have actually been hijacked using that vulnerability?
Here we are a year and a half in, almost going on two years since we first found out about it and we still don’t know that anybody’s actually affected. There are rumors that a small number of devices might have been affected, but even those we haven’t got any substantiated evidence for.
And trust me, whenever we hear a rumor like that we try to chase it down. We go talk to the company that’s making that statement. We ask if there’s data that they can share. We’ve never been able to substantiate any of those numbers. I can say definitely that there weren’t 900 million devices affected.
Certainly, the headlines that ran and the excitement they generated were disproportionate to reality, and it may be that nobody was affected, which I think is incredible. Even looking back myself, there’s always a concern that there may be something you’re not seeing, but time seems to be the thing that’s revealing those blind spots.
I’ve been working on Android security for the last six years, and every time you look in an area where someone has said “that’s a blind spot,” we don’t find anything. Early on it was “there’s tons and tons of malware in Google Play”; we looked, there was some, and we removed it. Then we heard “it’s outside of Google Play”; we looked, there was some, and we put pretty good protections in place. Then it was “it’s going to climb next year,” and that didn’t happen either. Now it’s “vulnerabilities are going to be exploited,” but we don’t see that.
Time and time again we’re moving forward in where we’re looking and the checks that we’re doing and the services we’re providing to look for bad actors, but we’re just not seeing any actual harm.
That said, we want to be as cautious as we possibly can and so we’re investing in services to look in all those little dark alleyways. We’re also working with partners to make sure that they’re able to respond as quickly as possible, so that’s where we’ve invested a lot in security updates, not because we’re seeing a lot of actual exploitation, but because we don’t want that to be a risk that ever gets realized.
A lot of it is about staying ahead and never getting to a point where there’s a problem.
Why do you think this narrative about Android being a “toxic hellstew” of vulnerabilities persists?
There are a few reasons. One is that complexity is often very scary, and the narrative for the Android ecosystem is a complex one. There are lots of different OEMs [phone and tablet makers] in the ecosystem, and lots of different device models.
“[Machine learning] is one of the main reasons we’ll get ahead of the attackers.”
Very succinctly describing what’s happening in the Android ecosystem is difficult, in much the same way that describing human anatomy or the population of humanity is very difficult. But we know that medicine is getting better, and we know that people are living longer. We know that people are getting healthier, but we still read lots of stories about people dying, bad things happening, and diseases.
I think that’s a mirror of what we have going on in the Android ecosystem. It’s complicated, so there’s not often a satisfying, super simple answer, but overall it’s getting more and more secure and robust.
We also see a lot of malware stories, but is the average Android user, who never downloads apps outside of the Play Store, in danger?
From Play the malware number is about 0.05 percent which is 5 out of 10,000 apps, so that’s pretty low. In terms of what percentage of devices get infected, that’s in the range where if we weren’t talking about it, no one would know it was even happening.
We talk about it to make sure there’s transparency about the level of risk. Often platforms don’t want to talk about things. They turn a blind eye. We like to have transparency into external actors and our policies and processes, so we can build trust. We don’t want people to trust blindly.
My guess would be, certainly in the Android ecosystem, the Play Store is the cleanest app store. I would imagine it compares similarly to other app stores with ecosystems that are more closed. [We believe Adrian is referring to the Apple App Store.]
Having discussed it with a lot of people, anecdotally, we don’t know anyone who has had an Android malware problem, but I’ve had Windows problems myself. Why is everyone talking about Android security?
I think we’ve gotten bored of Windows malware and so it’s not fun to talk about it anymore. Android was sort of the new, exciting thing.
Everything I’ve seen shows that, across the Android ecosystem, the hundreds of millions of devices that install from Google Play are an order of magnitude cleaner than a managed corporate fleet of Windows devices. Our infection rate is about half a percent globally, whereas for managed Windows devices it’s higher, and for consumer households the infection rate for Windows devices is higher still.

But Android is exciting. It’s a growing market. It’s a growing market for consumers, but I think it’s also a growing market for the security industry, so they’re very interested in making sure people are aware and thinking about those things. That’s the shape of communication around the platform.
When you do find malware, what type is most common?
Most of what we’re seeing is commercial in nature. They’re typically trying to make money and the mechanism to monetize on mobile is to install applications. We do see niche cases of apps that go after banking passwords or things like that, but the simplest way to monetize is to install an app. A very large percentage is related to what we call hostile downloaders.
What’s interesting is that the apps they install are not themselves harmful. It might be a game that’s looking to get a promotion, or it might be another service where they benefit from having market distribution. The end result is not the types of things people think about when they think about malware. It’s often not somebody trying to steal your data.
There is spyware. I don’t want to suggest that it doesn’t exist. We even did a post this week describing a very high-end spyware that we found, but that was on 25 devices. It’s certainly not the type of thing that’s common or most popular across the ecosystem.
Is there anything inherently less secure about Android compared to other mobile operating systems?
I don’t think there’s anything inherently less secure about the platform. I think the complexity makes it more difficult to make statements at a platform level.
People love to compare the iPhone to Android. The iPhone is a device with an operating system from a single manufacturer; in fact, it’s about five different devices. If you look at one Android manufacturer — Samsung is the biggest — they have hundreds of different device models. Merely comparing Samsung to iOS, you’re roughly 20 times more complex already, in terms of this device versus that device. It’s not a reasonable comparison.
Perhaps comparing the Pixel and Nexus line to iPhone might be fairer?
Yes, very similar hardware-wise – similar security properties. The app stores have similar security properties, verified apps, application isolation — very similar security properties. Both have a commitment to rapid updates.
“Comparing Samsung to iOS you’re roughly 20 times more complex already, in terms of this device versus that device.”
Where you get into differentiation is in transparency. Android is open source. That information is available to everybody. We encourage third-party research through our security rewards program, so we know that not only are we looking for issues in the platform, but other people are as well and that makes a big difference.
I think the services make a huge difference as well. We have intentionally designed in visibility and the ability to check on devices in the field, whereas that doesn’t exist on any other platform. It means we get feedback on a lot of little things that are happening and we can respond to that.
How do you combat the slow roll out of security updates for non-stock Android devices? Is it frustrating?
We really appreciate how many people have adopted Android and how many devices have Android on them. The reality of that sheer diversity of the ecosystem is that some manufacturers will move very quickly and others move more slowly.
We’ve spent a lot of time over the last year trying to help those that are moving more slowly to solve some of their technology challenges, some of their engineering challenges, and in some instances organizational challenges. They may lack a staff of engineers to provide updates. Perhaps they didn’t think about that, so we ask: what can we do to get you to a point where you have thought about it and it does make sense?
It definitely makes things more complicated, but it’s also at the core of why Android has been so successful, because a lot of different people were able to jump in and start building devices.
What action has the Android team taken to make the platform more secure? And what’s the next area you’d like to tackle or improve?
I think all the pieces are coming together really nicely. It’s been a multi-year journey, but the cryptographic work that we’ve been doing, the sandboxing that we’ve been doing, a lot of the work to make exploitation more difficult is all coming together nicely, so those are the areas that we’re going to keep working on.
Why is sandboxing important?
Sandboxing, at a fundamental level, is about how you isolate one application from another. A game is a perfect example, one where people don’t think about it, but on a PC, games are often networked. They’re one of the few things on that sort of device that opens a network port, so a game is one of the scariest pieces of software running on most consumer devices. If you compromise a game, the game author might be perfectly benign, but that game has access to everything on your PC.

Whereas on Android that’s not at all the case. You have to then also compromise the core operating system to be able to go beyond that. For us, that was really, really important to make sure that you always have to compromise Google’s code, Android’s code, to get to the point where you can do something that really hurts a user.
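For readers who want a concrete picture of that isolation, here is a minimal Kotlin sketch of an ordinary Android app using its private storage. Each installed app runs under its own Linux user ID, and files written to its private data directory are readable only by that app; this is standard platform behavior rather than a peek at Google’s internal tooling, and the file name used here is just an example.

import android.content.Context
import android.os.Process
import java.io.File

// Every Android app runs under its own Linux UID. Files written to the app's
// private data directory (/data/data/<package>/files/) are owned by that UID,
// so another app cannot read them without a separate platform compromise.
fun writePrivateNote(context: Context) {
    val note = File(context.filesDir, "note.txt")
    note.writeText("Only this app (uid=${Process.myUid()}) can read me.")
}

// Reading the file back works only from inside the same app (same UID).
fun readPrivateNote(context: Context): String =
    File(context.filesDir, "note.txt").readText()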
How important is the third-party research program for finding bugs and vulnerabilities?
It’s really important actually. Last year we paid almost a million dollars to researchers. I think there were about 120 different researchers that found issues and reported them to us. Dozens come in every month, so it’s really important for us.
One thing that has happened actually that’s really interesting is that we started to get more and more reports of issues, not in Android, but in other components that are in the device. For example, this week there was a report of an issue in Broadcom’s Wi-Fi drivers that affected Android, iOS devices, and anybody else who was using those types of drivers. That’s the kind of thing we’re seeing more and more.
Is machine learning starting to play a role? Do you have enough data for it to be effective?
We do have a huge amount of data now and we’ve started to find some machine learning techniques that work really well for different types of things. One thing machine learning works really well for is finding other applications that are also malware. When we find one bad app, we might be able to take down a thousand or more applications that same day that we know are related based on machine learning techniques.
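Google hasn’t published the details of those models, but the general idea of expanding from one confirmed-bad app to its relatives can be illustrated with a toy Kotlin sketch that scores apps by how closely their requested permissions overlap with a known-bad sample. Real protections rely on far richer signals (code similarity, signing keys, runtime behavior), and every package name and threshold below is invented for illustration.

// Toy similarity-based expansion: given one known-bad app, flag other apps
// whose permission sets are nearly identical (Jaccard similarity).
data class AppProfile(val packageName: String, val permissions: Set<String>)

fun jaccard(a: Set<String>, b: Set<String>): Double {
    if (a.isEmpty() && b.isEmpty()) return 0.0
    return (a intersect b).size.toDouble() / (a union b).size
}

fun findRelated(knownBad: AppProfile, corpus: List<AppProfile>, threshold: Double = 0.9) =
    corpus.filter {
        it.packageName != knownBad.packageName &&
        jaccard(it.permissions, knownBad.permissions) >= threshold
    }

fun main() {
    val bad = AppProfile("com.example.dropper",
        setOf("INTERNET", "REQUEST_INSTALL_PACKAGES", "RECEIVE_BOOT_COMPLETED"))
    val corpus = listOf(
        AppProfile("com.example.dropper2",
            setOf("INTERNET", "REQUEST_INSTALL_PACKAGES", "RECEIVE_BOOT_COMPLETED")),
        AppProfile("com.example.flashlight", setOf("CAMERA"))
    )
    println(findRelated(bad, corpus).map { it.packageName })  // [com.example.dropper2]
}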
And you expect that to improve over time? Obviously, it’s learning so it should get better?
“Machine learning lets us develop protection capabilities much more quickly.”
It’s one of the main reasons that in the next couple of years we’ll get ahead of the attackers. Machine learning lets us develop protection capabilities much more quickly than a human can improve their hiding, which is ultimately why malware in the past has been persistent — because even very small changes can hide it effectively. That’s not going to be the case anymore.
Does tightening security mean losing some of the openness and customizability that has helped make Android the most popular mobile OS in the world?
Not at all. The openness, customizability, and security of Android are all among its greatest strengths. We think it’s possible to continue to improve on all three.
When we are confronted with a feature that appears to put these principles in conflict, we’ll go to great lengths to find an approach that is balanced. One common strategy is to have the default be more secure (to protect as many users as possible) while allowing users choice (to allow for customization).
We do the same thing with OEMs [device makers], defining a security model that is robust, but also providing a myriad of opportunities to innovate and customize. The resulting diversity is itself a security enhancement, as monocultures are known to be more susceptible to systemic risk. And in some cases, that customization leads to innovative security enhancements, which is a boon for the ecosystem.
Do you think that antivirus, anti-malware, and other third-party Android security apps are needed?
We are committed to making the free protections provided by Google Play the best protection in the world. We already think we’ve accomplished that, and we’ll continue to publish information that makes it possible for others to double-check and confirm it for themselves.
What advice would you give an Android user with security concerns? What actions potentially put them at risk and what can they do to stay safe?
We’ve published a help center article on this topic, here.
Airbnb fights off account hijackers with new security tools
If you get hacked on Airbnb, you won’t only have to worry about criminals getting ahold of your credit card details. You’ll also have to fret about internet scammers knowing exactly where you’re staying on a particular date, or the addresses of the properties you own. In a blog post, the company announced that it has added new security measures to protect your account from hijackers. Starting today, you’ll have to authenticate every new phone, tablet, or computer you log in from by typing in the unique code Airbnb sends you via text or email. It’s not quite two-factor authentication, but it can at least lessen the chances of a rando getting into your account.
In addition, you’ll now get text messages whenever changes are made to your account, so you’ll get a heads up if someone else is tinkering with things in there. The company says it already uses a machine learning model that predicts whether it’s the true owner or a hijacker trying to log in, based on locations and IP addresses. If the system thinks it’s a hijacker, it will require additional information.
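To make the flow concrete, here is a minimal, hypothetical Kotlin sketch of the device-verification pattern described above: issue a short one-time code when a login arrives from an unrecognized device, deliver it out of band by SMS or email, and only trust the device once the user echoes the code back. This is not Airbnb’s code, and every class and method name in it is made up.

import java.security.SecureRandom

// Hypothetical device-verification flow (illustration only).
class DeviceVerifier {
    private val rng = SecureRandom()
    private val pending = mutableMapOf<String, String>()   // deviceId -> one-time code
    private val trusted = mutableSetOf<String>()            // deviceIds already verified

    // Called when a login comes from an unknown device; the code would be sent
    // to the account owner via SMS or email, never handed to the new device.
    fun beginVerification(deviceId: String): String {
        val code = "%06d".format(rng.nextInt(1_000_000))
        pending[deviceId] = code
        return code
    }

    // The device is trusted only if the user types the code back correctly.
    fun confirm(deviceId: String, code: String): Boolean {
        val ok = pending[deviceId] == code
        if (ok) { trusted += deviceId; pending -= deviceId }
        return ok
    }

    fun isTrusted(deviceId: String) = deviceId in trusted
}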
Account takeover remains one of the biggest problems on the internet today, since hackers have more and more password dumps from massive security breaches to consult. If their targets’ passwords aren’t in those dumps, they can also turn to phishing or to infecting people’s computers with malware. Airbnb evidently felt that its machine learning model was no longer enough on its own and added the extra layer to keep interlopers out.
Source: Airbnb
Microsoft says it already patched ‘Shadow Brokers’ NSA leaks
Yesterday, the mysterious “Shadow Brokers” posted some hacking tools for Windows that were allegedly stolen from the NSA. All of them were at least a few years old, but they exploited flaws in several versions of the operating system to move across networks and infect systems. Early Saturday morning, Microsoft responded with a blog post saying it has evaluated all of the exploits listed. Its response to the release is surprisingly simple: most of them have already been fixed.
In a statement to Reuters yesterday, Microsoft said that “Other than reporters, no individual or organization has contacted us in relation to the materials released by Shadow Brokers,” but that may not be the entire truth. For three of the exploits, Microsoft says they don’t affect supported platforms (read: any operating system recent enough that it’s still receiving security updates; if you’re on something older, you need to upgrade to Windows 7 or newer). For the other seven, the company says all of them are addressed by existing updates and patches (notably, the patches reveal these exploits also impacted Windows 10 and Windows Server 2016).

What’s particularly curious is that four of the exploits — EternalBlue, EternalChampion, EternalRomance and EternalSynergy — were fixed in an update just last month, on March 14th. Because “The Shadow Brokers” listed what tools they had in January, it seemed like the NSA had to know this release could happen. Despite a long list of acknowledgments for security issues discovered and fixed in the March 2017 update, as @thegrugq points out, there’s no name listed for the MS17-010 patch that fixed these.
So it’s unclear how that happened, but the timeline looks like this: January reveal –> February Microsoft skips its usual “Patch Tuesday” security update –> March Microsoft spontaneously fixes several flaws that no one knew existed for several years prior. Clearly, someone said something. Security researcher Mustafa Al-Bassam has a possible explanation, musing that Microsoft paid up and quietly bought the exploits, while Zerodium CEO (and purchaser of vulnerabilities) Chaouki Bekrar also suggests the Shadow Brokers gave Microsoft the info.
Developing…
Source: MS17-010, Microsoft TechNet Blog
Shadow Brokers release also suggests NSA spied on bank transactions
Besides a cache of potentially damaging zero-day exploits against many versions of Windows, another element of today’s Shadow Brokers release is a folder titled SWIFT. Inside, it has documents listing the internal structure of EastNets, a Dubai-based SWIFT service bureau and anti-money laundering firm. Banks use the SWIFT messaging system to transfer trillions of dollars every day, and if the documents released are accurate, it appears the NSA wanted access to monitor transfers between banks in the Middle East.
NSA completely hacked @EastNets, a global anti-money laundering company, inside out. pic.twitter.com/PP55fjBy4r
— Mustafa Al-Bassam (@musalbas) April 14, 2017
Security researcher Mustafa Al-Bassam tweeted that the NSA hacked EastNets “inside out.” Curiously, despite the detailed information released, EastNets put out a statement claiming its systems are secure. According to the company, “The EastNets Service Bureau runs on a separate secure network that cannot be accessed over the public networks. The photos shown on twitter, claiming compromised information, is about pages that are outdated and obsolete, generated on a low-level internal server that is retired since 2013.”
Nice catch: 2013 archive confirms #NSA hacked the EU’s SWIFT network, violating data-sharing agreement. Any comment yet from EU? https://t.co/p86jgSqtj8
— Edward Snowden (@Snowden) April 14, 2017
Reuters reports that SWIFT also claims there’s no evidence its network has been accessed. Meanwhile, Matt Suiche looked through the documents and writes about what they show, and why EastNets would be such a good target. Back in 2013, Der Spiegel reported that documents released by Edward Snowden showed the NSA targeted SWIFT and Visa, and set up its own financial database to facilitate the spying program.
Source: The Shadow Brokers, Eastnets
Facebook busts up international spam operation
While Facebook has spent significant time fighting fake news on its network, it continues to battle another plague on its social platform: fake accounts. These are often used to spread low-quality content, so the internet titan has been ramping up its crackdowns. Hot on the heels of banning 30,000 profiles earlier this week, Facebook announced it has disrupted a massive international spam operation the network had been combating for the last six months.
In this case, Facebook’s blog post stated, the profiles weren’t generated through “traditional mass account creation methods”; instead, the operators used sophisticated techniques to hide their coordinated efforts. The ring was apparently trying to build a network of friend connections by using the accounts to like and interact with Pages, with the eventual intent to unleash a torrent of spam. But the accounts hadn’t started friending real users before Facebook disrupted the network by wiping away every “inauthentic like” they were responsible for. Presumably, it also eliminated the fake profiles.
The social network caught wind of the false profiles thanks to its improved detection protocols that look for suspicious behavior, like rapid posts of the same content. Detecting and curbing “inauthentic” behavior is important to the social network: “Improvements in this area make our community stronger for everyone, including advertisers, publishers, and partners,” the company noted.
Source: Facebook
‘Shadow Brokers’ dump of NSA tools includes new Windows exploits
Earlier this year “The Shadow Brokers” — an entity claiming to have stolen hacking tools from the NSA, then offering them for sale — seemed to pack up shop, but the group has continued on. Today, it made a new post containing a number of working exploits for Windows machines running everything from XP up to at least Windows 8. As for Windows 10, the stolen data appears to be from 2013 and predates the latest OS, so it isn’t immediately apparent whether it’s vulnerable, but early results indicate at least some of the tools don’t work on it.
This is really bad, in about an hour or so any attacker can download simple toolkit to hack into Microsoft based computers around the globe.
— Hacker Fantastic (@hackerfantastic) April 14, 2017
WINDOWS 10 does not appear impacted by ETERNALBLUE or ETERNAL exploit series in my lab test.
— Hacker Fantastic (@hackerfantastic) April 14, 2017
Releasing this information ahead of a holiday weekend may make it harder for Microsoft and IT workers to respond, as anyone with bad intentions now has access to a number of previously unknown exploits. As security researchers like Matthew Hickey (aka @hackerfantastic) scan through tools with names like ETERNALBLUE (a remote exploit for XP and above) and FUZZBUNCH (a framework that helps control use of the other attacks), Marcy Wheeler notes that the NSA has known these tools were out there since January, when The Shadow Brokers listed them for sale.
Lost in Translation — Steemit https://t.co/OH5UexWJsG enjoy!
— theshadowbrokers (@shadowbrokerss) April 14, 2017
For now, the response from a Microsoft spokesperson is that “We are reviewing the report and will take the necessary actions to protect our customers.”
So what is there to do if you’re not a network admin and just use a Windows computer, whether at work or at home? In a quote to Motherboard, one hacker said to have formerly worked for the Department of Defense says plainly that “It’s not safe to run an internet-facing Windows box right now.”
Of course, your PC is — or should be — behind a router/firewall. I spoke to Travis Smith, a Senior Security Research Engineer at Tripwire, and he explained that the released tools largely rely on local network protocols that attackers use to move from one compromised PC to others across a network. As he put it, “even if you aren’t running the latest greatest operating system and you don’t have antivirus, if your Windows laptop isn’t plugged directly into the internet, then your risk profile greatly diminishes.” If you do have antivirus, like Microsoft’s Windows Defender or products from McAfee, Kaspersky, and the like, it should update quickly to recognize these executables now that they’re known.
Contacted via email, Matthew Hickey expressed a similar outlook, saying that “most home users will not be directly impacted by these vulnerabilities as an attacker needs to connect to services on their computer. The risk is much bigger to enterprise and businesses who rely on these services to connect online.”
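If you want to see whether a machine you own exposes the file-sharing services these tools lean on, a rough Kotlin sketch like the one below (just a TCP connection test against the NetBIOS and SMB ports, not a real vulnerability scanner) illustrates the kind of check Smith and Hickey are describing. The address is a placeholder, and you should only probe systems you own; a home PC behind a router should normally show these ports as closed from the outside.

import java.net.InetSocketAddress
import java.net.Socket

// Returns true if the host accepts a TCP connection on the given port.
fun isPortOpen(host: String, port: Int, timeoutMs: Int = 2000): Boolean =
    try {
        Socket().use { it.connect(InetSocketAddress(host, port), timeoutMs); true }
    } catch (e: Exception) {
        false
    }

fun main() {
    val host = "192.0.2.10"              // placeholder address (TEST-NET range)
    for (port in listOf(139, 445)) {     // NetBIOS session / SMB
        println("$host:$port open = ${isPortOpen(host, port)}")
    }
}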
Now that these 0days are public, disabling SMB might work as a workaround security patch on #Windows https://t.co/m13iFZdVTF #EquationGroup
— x0rz (@x0rz) April 14, 2017
@GossiTheDog You are people, Kev!
Worth noting that every version of Windows since Vista has SMB server svc blocked inbound by firewall by default also
— Ned Pyle (@NerdPyle) April 14, 2017
For folks at home, this isn’t a big deal. Install the Windows Updates when Windows Update says “install me!”. But you should do that anyway.
— Pwn All The Things (@pwnallthethings) April 14, 2017
@JukesSitus No SMB, no remote desktop, and not sure if that’s enough. These should not be reachable from Internet, but could rip through institutions.
— Nicholas Weaver (@ncweaver) April 14, 2017
No matter what software you’re running though, making sure you’re up to date with the latest patches will be one of the best things you can do to defend yourself. Also, as Travis explains, it’s possible the code could eventually be modified to attack newer systems including Windows 10 and Windows Server 2016, but that will likely take more than a couple of days. Even if remote exploits or a worm don’t arise from the use of these tools, now that they’re out in the wild they could still be delivered by the web, email or even a USB stick. Matthew closed out his email by noting that “Microsoft will need to release fixes for several of the ETERNAL exploits and customers should ensure they apply them as soon as available.”
Here is a video showing ETERNALBLUE being used to compromise a Windows 2008 R2 SP1 x64 host in under 120 seconds with FUZZBUNCH #0day 😉 pic.twitter.com/I9aUF530fU
— Hacker Fantastic (@hackerfantastic) April 14, 2017
Source: The Shadow Brokers
GOP rep. on ISP privacy rules: ‘Nobody’s got to use the internet’
The internet is a ubiquitous part of our daily lives. It’s where many of us turn when we need to file our taxes, apply for jobs or search for housing. But one Republican lawmaker who voted to roll back FCC privacy regulations last month said, “Nobody’s got to use the internet” when asked about his decision at a town hall meeting, displaying a staggering amount of ignorance about how the internet affects the modern world.
“If you start regulating the internet like a utility, if you did that right at the beginning, we would have no internet,” US Rep. Jim Sensenbrenner (R-Wis.) told the crowd. “I don’t think it’s my job to tell you that you cannot get advertising for your information being sold. My job, I think, is to tell you that you have the opportunity to do it, and then you take it upon yourself to make the choice that the government should give you.”
Last month, the US House of Representatives passed a resolution to overturn a rule that forced internet service providers to get your explicit permission before selling your personal data. The resolution has also passed the Senate, and President Donald Trump said he plans to sign it. Sensenbrenner’s statement was in response to a constituent who argued that ISPs should have stricter requirements than websites like Facebook.
“Facebook is not comparable to an ISP,” the woman said. “I do not have to go on Facebook. I do have one provider. I live two miles from here. I have one choice. I don’t have to go on Google. My ISP provider is different than those providers.”
“[People] ought to have more choices rather than fewer choices with the government controlling our everyday lives,” Sensenbrenner said before moving on to the next question. The exchange was caught on video and posted to Twitter by American Bridge 21st Century, a PAC that claims it’s committed to “holding Republicans accountable for their words and actions.” You, of course, need the internet to do this.
Via: Ars Technica
Source: Twitter
Google Photos can now stabilize your shaky handheld videos on Android
Why it matters to you
Shaky video? Google Photos can help, but you lose a bit of resolution in the process.
Google Photos is far from being just about cloud storage — the latest Android app, which began rolling out to users Tuesday, can reportedly stabilize videos after you shoot them.
Google shared that it was working on adding stabilization last summer, but now the feature has officially arrived inside the app. After opening a video, tapping the pen icon brings up editing options — a quick tap of “stabilize” starts the process, which can take some time depending on the size of the video file.
Because it’s applied after the fact, the stabilization is electronic, not optical, which means the system crops the footage to align each frame, but it may help salvage some shaky shots. Early users are reporting noticeably smoother results with occasional artifacts, though not comparable to shooting the footage with a gimbal in the first place.
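As a rough illustration of why that costs resolution, consider a toy Kotlin sketch (the shift values are invented): if each frame has to be shifted to cancel camera shake, the output can only keep the region that stays inside every shifted frame, so the final crop is smaller than the original by the worst-case shake in each direction.

import kotlin.math.abs

// Per-frame shift (in pixels) needed to cancel the estimated camera shake.
data class Shift(val dx: Int, val dy: Int)

// The largest crop that remains valid across all shifted frames.
fun stabilizedCrop(width: Int, height: Int, shifts: List<Shift>): Pair<Int, Int> {
    val maxX = shifts.maxOf { abs(it.dx) }
    val maxY = shifts.maxOf { abs(it.dy) }
    return (width - 2 * maxX) to (height - 2 * maxY)
}

fun main() {
    val shifts = listOf(Shift(12, -8), Shift(-20, 5), Shift(7, 15))
    val (w, h) = stabilizedCrop(1920, 1080, shifts)
    println("Stabilized output: ${w}x$h")   // 1880x1050, slightly under full 1080p
}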
Along with the stabilization, the latest version of the app includes new smart filters along with a “Deep Blue” slider that helps enhance the color of the water and sky. The app’s automated movies built from photos also see a number of new options.
The stabilization joins a number of other editing tools inside Google Photos, including filters and contrast and color tweaking. The app also turns photos into movies as well as crafting collages, animations, and panoramas from still photos.
The editing features join one of the app’s biggest assets, free unlimited photo storage. Auto-tagging and object recognition software also makes photos searchable without manually adding tags, while automated albums assemble photos from one event into one place.
While a new Google Photos update rolled out to iOS on Monday, the only video editing option it currently lists is rotation, and the App Store changelog for the latest version mentions only performance improvements.
Google Photos is a free download from both Google Play and the App Store.
Portable power station The River can hold its 500-watt charge for a year
Why it matters to you
Going off the grid doesn’t have to mean being without power — at least, not with the River, an eco-friendly portable power bank.
If you’re looking to go off the grid for a while without being completely without power, there’s finally a long-term solution for you. It’s called the River, and it promises smart, clean, mobile power for up to a year. You can charge it from a car’s 12V port, a wall socket, or solar power, and in turn, it’ll charge all your various devices for you. So if you want to leave civilization but somehow stay plugged in, you may want to check this out.
Thanks to the River Portable Power Station‘s 500-watt battery, you’ll be able to recharge a smartphone more than 30 times, or send enough electricity to a small refrigerator to keep it operational for up to 10 hours. And because the portable device is waterproof, weighs just 11 pounds, and will work in temperatures ranging from minus 4 to 140 degrees Fahrenheit, there really isn’t any place you can’t take the River.
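As a back-of-the-envelope check on that 30-plus-charges claim, here is a short Kotlin calculation. It assumes the quoted 500 figure refers to watt-hours of stored energy, a typical smartphone battery of roughly 12 Wh, and about 85 percent conversion efficiency; all three numbers are assumptions rather than published specifications.

// Rough estimate only; capacity, phone battery size, and efficiency are assumed.
fun main() {
    val capacityWh = 500.0        // assumed pack capacity in watt-hours
    val phoneBatteryWh = 12.0     // a typical smartphone battery (~3,000 mAh at 3.8 V)
    val efficiency = 0.85         // assumed inverter/charging losses
    val charges = capacityWh * efficiency / phoneBatteryWh
    println("Approximate full phone charges: %.0f".format(charges))   // ~35
}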
Perhaps the most compelling aspect of the River is the number of charging ports it contains. That means that no matter how many gadgets need juicing, this generator can probably find a way to accommodate them. With a total of 11 outlets, including two USB-C ports, two standard AC outlets, two DC outputs, a 12V car port, and four fast-charging USB ports, you can simultaneously charge your family’s smartphones, tablets, cameras, laptops, and sure, even a drone.
Not only will the River maintain its charge for a long time, it also takes very little time to recharge itself. It’ll take just six hours if you plug it into a wall socket, nine hours from a car’s 12V port, or 10 to 15 hours with its solar panel (depending on the amount of sunlight).
With a month left in its campaign, the River has already raised well over $180,000, blowing past its original funding goal of $30,000. Over 300 backers have already pledged their support, and if you’d like to join their ranks, you can pre-order a River for $459, with an expected delivery date of July 2017.
Drones and robots wove the University of Stuttgart’s otherworldly new pavilion
Why it matters to you
The future is here: This striking new pavilion at Germany’s University of Stuttgart was constructed entirely by mechanical workers.
If you want evidence of the innovative technology-related work being carried out at Germany’s University of Stuttgart, all you have to do is take a stroll around campus. That’s where the university recently unveiled a new carbon-fibre pavilion, named the ICD/ITKE Research Pavilion 2016/7.
Resembling a piece of otherworldly landscape from Ridley Scott’s classic movie Alien, the 40-foot-long pavilion was constructed using a combination of cutting-edge drones and robots.
Its design was modeled on the silk hammocks created by moth larvae, and produced using more than 180 kilometers of woven resin-impregnated glass and carbon-fiber.
“Creating a long span structure, beyond the working space of standard industrial fabrication equipment, required a collaborative setup where multiple robotic systems could interface and communicate to create a seamless fiber laying process,” the University of Stuttgart’s website explains. “A fiber could be passed between multiple machines to ensure a continuous material structure. The concept of the fabrication process is based on the collaboration between strong and precise, yet stationary machines with limited reach and mobile, long-range machines with limited precision.”
The construction process involved two stationary industrial robotic arms with the strength and precision necessary for the fiber-winding work, while a drone carried out the fiber-laying process.
“The UAV could fly and land autonomously without the need of human pilots, the tension of the fiber was actively and adaptively controlled in response to both the UAV and robot behaviors,” the website continues.
Sure, we’re unlikely to reach a point any time soon when robots and drones carry out the bulk of building work. However, work like this shows that it’s certainly an available option if called for. “The series of adaptive behaviours and integrated sensors lay the foundation for developing novel multi-machine, cyber-physical fabrication processes for large scale fibre composite production,” the creators note.
Hey, it’s hard to argue with the quality of the results. Or to think of a way a human team could’ve so easily carried out the task!