Google wants Android O to make users of accessibility services more productive
Why it matters to you
Enhancements to these services will help extend the benefits of Android apps to those with accessibility needs.

The Android accessibility services team took the stage at Google’s I/O developer conference Wednesday to discuss a number of changes coming in Android O that aim to make the platform much more user-friendly for everyone.
Increasing productivity for accessibility services users was job one in preparing for Android O, according to Victor Tsaran, technical program manager on the Accessibility development team. To that end, the upcoming version of the mobile operating system delivers several critical improvements to TalkBack, an Android accessibility service that reads screen content to users who are visually impaired.
First, Android O introduces a separate volume stream for times when the system reads back to you. In other words, media like music and YouTube videos no longer have to play at the same volume as TalkBack, making it easier to distinguish between the two.
An even bigger addition is support for multilingual text-to-speech. Tsaran demonstrated the feature by having the system read an email out loud that contained phrases in several different languages. Android was intelligent enough to differentiate between them and adjust on the fly.
Android O will also allow fingerprint sensors on devices to support basic gestures so users can swipe between options. In tandem with TalkBack, this means a user who is unable to see the screen can swipe successively between menu items, hearing each one individually read back to them.
Finding and triggering accessibility services was another major focus for Android O. The update will bring a dedicated, context-aware accessibility button to the bottom-right of the navigation bar that can trigger different actions depending on what's visible on the screen and which services you have enabled.
For example, if you’re browsing the home screen, pressing the button will trigger magnification. If you’re using text-to-speech, it will bring up a remote control that allows you to start and stop screen reading, and determine the speed that the system reads to you.
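The dispatch logic described above — one button whose behavior depends on the current screen and the enabled services — can be sketched in a few lines. This is purely an illustrative model; the context names and action strings are made up and bear no relation to Android's actual accessibility APIs.

```python
# Illustrative sketch only -- not Android's real accessibility API.
# Models how a single context-aware button might choose an action based
# on what's on screen and which services the user has enabled.

def accessibility_action(screen_context, enabled_services):
    """Return the action the context-aware button would trigger."""
    if "text_to_speech" in enabled_services and screen_context == "reading":
        return "show_tts_remote"      # start/stop controls, reading speed
    if "magnification" in enabled_services and screen_context == "home_screen":
        return "toggle_magnification"
    return "open_accessibility_menu"  # sensible fallback

print(accessibility_action("home_screen", {"magnification"}))
# -> toggle_magnification
```

The point of the design is that the user memorizes one button, and the system resolves the ambiguity from context rather than asking the user to navigate a menu each time.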

The focus on making accessibility services easier to understand has made its way to the settings menu as well. Gone are the vague category descriptors, like "System" and "Services." The menu now groups features based on the actions they perform, and also contains descriptions of what each service does. What's more, a new shortcut has been added to turn accessibility services on and off on the fly by pressing both volume buttons.
During the event, the development team stressed that Google arrived at many of these improvements by testing them, in iterative fashion, with real users. Likewise, the company is imploring third-party developers to perform their own accessibility research.
Last year, Google released an app called Accessibility Scanner that could examine developers’ apps and suggest changes to help enhance accessibility, like improving text contrast. Since that time, the company says developers have used the app to find over one million opportunities to improve their apps’ functionality for users with accessibility needs.
Google Home just leapfrogged Amazon Echo at I/O 2017

Google just took the lead with a 2-hour keynote address.
Google I/O 2017 marked a massive improvement in Google Home's capabilities, the importance of which should not be underestimated. In less than a 30-minute slice of the two-hour keynote address, Google rolled out fresh Google Home features that improve the daily functionality of the connected speaker and completely change the possibilities for both requesting and receiving information from it.
Amazon should take note.
Adding push information

It becomes harder and harder to ignore Google Home’s presence.
In what may initially have come across as a small development, Google made an important change to the way Google Home works by introducing what it calls "proactive notifications." Up to now, Google Home was always listening and waiting for your input — now, it can pulse its lights to let you know it has something to tell you. When you notice the lights, simply say "Hey Google, what's up?" and it will give you the timely information that you'll hopefully find useful. Google says what it pushes will be limited to only the most important information, and if done correctly, this could be extremely useful.
This is a huge change to the way you’re expected to interact with Google Home, and has the potential to dramatically increase use by the average Home owner. By proactively pushing useful information, it becomes harder and harder to ignore Google Home’s presence, which creates a loop of using Home more often.
Calling without a catch
One large feature that caught everyone's eye in the wake of Amazon's recent Echo announcements was free calling from Google Home. You can now simply ask Google Home to call any of your contacts, so long as they have a phone number associated with their contact entry in your Google account. This critically bests the Echo in that it actually dials a phone number — you can call any mobile or landline, rather than dialing someone else's Google Home or phone via the Home app. The outgoing calls from Home can even be masked to look like they're coming from your phone, which makes the experience 100% seamless for the person on the other end.
Call any number at any time — no strings attached.
An important function that really makes voice calling effective is Google's recent implementation of multi-user functionality based on voice recognition. If you say "call mom," it's going to dial your mom … and if your spouse says the same query, it's going to call their mother instead. It's a decidedly personal experience that just makes sense, but one that's a difficult technological problem to solve.
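Once the hard part — recognizing whose voice issued the command — is solved, the per-user resolution itself is straightforward: the same phrase looks up a different contact book depending on the speaker. The sketch below is a toy model with made-up names and numbers, not anything resembling Google's implementation.

```python
# Toy sketch of voice-based multi-user resolution: the same phrase,
# "call mom", resolves to a different contact depending on which
# recognized voice issued it. All names and numbers are invented.

CONTACTS = {
    "alex": {"mom": "+1-555-0101"},
    "sam":  {"mom": "+1-555-0202"},
}

def resolve_contact(speaker, phrase):
    """Map a recognized speaker plus a relationship word to a number."""
    relation = phrase.removeprefix("call ").strip()
    return CONTACTS[speaker][relation]

print(resolve_contact("alex", "call mom"))  # -> +1-555-0101
print(resolve_contact("sam", "call mom"))   # -> +1-555-0202
```

The lookup is trivial; the engineering challenge lives entirely in the speaker-identification step that produces the `speaker` key.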
An entirely new interface paradigm

Google Home can respond on your phone or TV, too.
The final part of the latest Google Home announcements has less to do with Home itself and more with how it fits into your entire life. Now Google Home is no longer operating in a silo — it’s simply the contact point for your voice, and can then give you information on other devices. Google Home can now send content to your phone or TV when applicable, whether that means sending Google Maps directions to your phone when you ask or playing a YouTube video on your nearby TV.
You could easily see this as a direct shot across the bow of the new Amazon Echo Show, which made the important jump to using a screen in addition to voice so that it can always offer you information no matter your query. Google Home and Google Assistant’s strength over Amazon here is that Google has potential for deeper integration with more of your screens. Chromecast and Android TV give more options for your big screens and multi-room audio, while Google Assistant being built into just about every Android phone offers a deep hook in billions of devices.
Of course this is only a big feature if you’re a household that already has Chromecasts or Android TVs — which isn’t necessarily a given — but the potential is there in ways that Amazon can’t yet offer.
Your move, Amazon
With these fresh Google Home features, the ball is back in Amazon's court to step up and match what Google Home is now capable of. Amazon may have a larger, longer-standing install base of Echo devices, with new hardware coming, but Google's superiority in software and platforms is winning right now.
AI-powered Google for Jobs has work for everybody
The technology industry may be a goldmine of employment, but for anyone not developing an app or working on AI, finding a job can be tough. This is especially true for folks looking for entry-level positions. Craigslist decimated the classified section of newspapers, and while sites like Monster, LinkedIn, and others are helpful if you have an established career, entry-level work is harder to find. According to Google, it's also hard for employers to find people to fill those positions.
So, in partnership with LinkedIn, Monster, CareerBuilder, Glassdoor, and — surprisingly — Facebook, the tech giant will be launching Google for Jobs, an AI-powered search engine that combines Google search, machine learning (to delve into career sites), job boards, staffing agencies, and applicant tracking systems to help you find work in your area. You can even set an alert for your search. That means if the barista position you're looking for isn't available today, Google will notify you when it surfaces.
That's outstanding if you're one of the 4.4 percent of the population that's unemployed. It was also good to see Google CEO Sundar Pichai demo job search with retail jobs instead of developer jobs. There are plenty of ways for a computer science major to find a gig. Finding a spot as a cashier in a store can be a bit trickier because of inconsistent job titles and some companies sticking to the same job posting plan year after year, even though better solutions are out there. Google takes all the information from various sources, throws its AI at it, and spits out an easy-to-read list that's beneficial to everyone involved.
This announcement is also a timely reminder of the importance of smartphones for those struggling to make a living. The ubiquitous device has become an important tool for finding work, as for many it's their primary means of getting online. With Google's announcement, that task is now just a bit easier.
Google’s AI initiative will power some pretty cool technology like Google Lens and Assistant, but it’s good to know the company is also making sure that artificial intelligence is helping folks that need a paycheck more than they need to know what kind of flower they’re looking at.
For all the latest news and updates from Google I/O 2017, follow along here
Watch Google’s I/O 2017 keynote in under 16 minutes
If you missed out on Google's I/O 2017 keynote earlier today, don't fret. We've cut down all of the noteworthy news on Google Lens, AI, Google Assistant, Google Home, Daydream, Android O and more into a quick 16-minute clip. Just sit back, relax and catch up on all of the news in way less time than we spent taking in the 2-hour presentation this afternoon.
Google Assistant on the iPhone is better than Siri, but not much
Google’s Assistant is finally ready to take on Siri on Apple’s own turf: the iPhone. Yes, you could already play around with the AI-powered chatbot if you downloaded Allo — Google’s mobile-only messenger app — but its functionality was limited. Today, that changes thanks to a new standalone Google Assistant app available on Apple’s App Store (though it’s US-only for now). Eager to check it out, we downloaded it right away and spent some time commanding our Google-branded phone butler around. After a few hours, I’ll say that while I find Google Assistant a lot friendlier and smarter than Siri, it doesn’t quite replace it. At least, not yet.
The first obvious barrier is that while Siri is baked right into iOS, you’ll need to download Google Assistant as a separate app. Plus, accessing Siri is as easy as holding down the iPhone’s home button — with Google Assistant (as with Cortana, Alexa and all other third-party assistants), you’ll need to take the extra step of launching an app. If you have an Android phone, Google Assistant is ready to go without having to download anything at all.
As you might expect, when you first launch Google Assistant on the iPhone, it asks you to log in with your Google account. After you do, it introduces itself to you and invites you to ask it anything you wish. Press the microphone icon at the center to offer a voice command, or if you’d rather not disturb the people around you, you can hit the keyboard icon to type your query.
The first thing you might wonder is if you can make a call or send a message on the iPhone with Google Assistant. The answer is: You can, but it’s not any easier than it would be with Siri. When I say, “Call Mom,” for example, it brings up her name and triggers a phone call, which you can then cancel or confirm. When I say, “Text Mom,” it asks me for my message and then kicks me over to the Messages app on my phone, where I can choose to send it off or not. At least Siri can send messages without me having to open the app.

I also tried to play music on Google Assistant to see how the experience compares to Siri. It was a little, well, uneven. When you first tell Google Assistant to play music, it'll ask you to choose between Apple Music and YouTube as your default. I chose YouTube and then said, "Play LCD Soundsystem." It kicked me over to the YouTube app, where it played a random song from the band. Then I went back and said "Play Radiohead," and it just gave me a list of albums. I then tried to switch the default choice to Apple Music, which I was somehow able to do by saying "Play on Apple Music." From then on, whenever I said "Play [name of song]," it would play the song on Apple Music. Unfortunately, it doesn't appear that I can switch back to YouTube as the default, despite multiple attempts. Sometimes it says it's playing a song, but nothing happens. Clearly, this feature is still pretty buggy.
As you might expect, Assistant plays particularly well with Google’s own apps. So sending email through Gmail is a snap — say who you want to send the email to, and it’ll kick you over to the Gmail app to follow through. Similarly, it’ll offer directions with Google Maps rather than Apple’s own.
What I found particularly intriguing about the Google Assistant app on iOS is that there’s a whole Explore page full of suggestions on what you can do with it. There’s a list of the usual suggestions, like “How many pounds in a kilogram?” or “What sound does a dog make?”
But interestingly, there’s also a slew of third-party chatbots you can try out. Examples include Genius, a bot that’ll guess the name of a song based on a lyric snippet, or the Magic 8 Ball, which will offer pithy responses to yes-or-no questions. Google Home users likely already know about some of these third-party chatbots, but to mobile users, this is new.

Aside from Explore, there’s also a Your Stuff tab that lists your Reminders, Agenda, Shopping List and quick Shortcuts that you can add to customize Assistant. So, for example, you can say “Late again” to trigger an automatic text to your best friend that you’re running five minutes late. “Cheer me up” will automatically bring up a list of kitten videos on YouTube.
I then tried to do a number of things on both Google Assistant and Siri to compare the two. I discovered that due to iOS restrictions, Google Assistant isn't able to set alarms, take selfies, launch apps, post to Twitter or Facebook, call Ubers or Lyfts, or use third-party apps like WhatsApp for sending messages. Siri, however, was able to do all of these tasks without issue.
At the same time, Google Assistant was vastly superior when it came to translating languages (Siri often faltered) and remembering context clues. For example, when I asked, "Who's the president of the United States?" and followed it up immediately with "How tall is he?" Google Assistant immediately responded with "Donald J. Trump" and "6 feet, 2 inches tall." Siri, on the other hand, could answer the first question, but not the second (it responded with "I don't know"). Google Assistant also was smart enough to respond to set-a-reminder requests with the place and time in which I wanted to be reminded — Siri just placed them on a Reminders list. Siri was also sometimes just plain wrong — it erroneously said the population of Egypt was 85,800 (it's actually 91.51 million).
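The "context clue" behavior described above boils down to the assistant remembering which entity the previous answer referred to, so a follow-up pronoun like "he" can be resolved. Here is a minimal, purely illustrative sketch; the hard-coded fact table and phrase matching stand in for the knowledge graph and language understanding a real assistant would use.

```python
# Minimal sketch of conversational context carry-over: the assistant
# stores the entity from the last question so a follow-up like
# "How tall is he?" can be resolved. Facts are hard-coded for
# illustration only.

FACTS = {
    "president of the united states": {
        "name": "Donald J. Trump",
        "height": "6 feet, 2 inches",
    },
}

class Assistant:
    def __init__(self):
        self.last_entity = None  # context carried between turns

    def ask(self, question):
        q = question.lower().rstrip("?")
        if q.startswith("who's the "):
            self.last_entity = q.removeprefix("who's the ")
            return FACTS[self.last_entity]["name"]
        if q == "how tall is he" and self.last_entity:
            return FACTS[self.last_entity]["height"]
        return "I don't know"

a = Assistant()
print(a.ask("Who's the president of the United States?"))  # Donald J. Trump
print(a.ask("How tall is he?"))                            # 6 feet, 2 inches
```

An assistant with no stored context would hit the final branch on the follow-up and answer "I don't know" — which is essentially the Siri failure described above.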
In many ways, Siri pales in comparison to Google Assistant. It can’t understand voice commands as well as Google, and it doesn’t remember your preferences like Google can. Siri makes so many errors that there’s even a Reddit group called “Siri fails” that documents its many mistakes. But as long as it comes preinstalled in every iPhone out there and does a good-enough job, Google Assistant — and all other rivals — will have a hard time replacing it.
Android O has emoji you’ll actually recognize
Ever since KitKat, Android’s standard emoji have used minimalist blobs to represent people. They’re unusual and cute, but that gumdrop look isn’t usually what you associate with emoji — just about everybody else uses circular shapes. And that can create real problems if you send an emoji that doesn’t convey the same meaning on your friend’s phone. Thankfully, Google has seen the light. Android O will include more conventional (not to mention more recognizable) emoji, complete with gradients and a wider range of colors. They’re not as distinctive, but they make considerably more sense.
The O update will also include the new emoji characters due to arrive this summer, such as a vomiting face, an orange heart and critters like dinosaurs and a giraffe. More importantly, there’s a better chance that you’ll see those characters elsewhere: Google is promising an upgrade that will keep you up to date with emoji on older Android releases. You won’t have to replace a years-old phone just to see everything your friends are saying.
It's a largely cosmetic change, and you might not even notice it if your phone manufacturer customizes its icon set (LG and Samsung tend to do this, for instance). You may well miss the blobs, for that matter. Even so, the shift is significant. Google is accepting that it doesn't control the common visual language of emoji, and that it may be wiser to use relatively humdrum emoji than to risk annoying users.
Source: Emojipedia
Cabs in Washington, DC are replacing meters with Square readers
If nothing else, Uber has permanently disrupted the ride-for-hire system that has traditionally been served by taxis. Grabbing a ride has never been easier (at least where services like Lyft and Uber are allowed to operate), and paying with a credit card number stored in an app ensures that none of the drivers or riders need to worry about cash. Taxi companies have been trying to push back, however. Square is helping the fight, too, with a partnership to process payments for cab drivers in Washington, DC.
Square is one of the original mobile payment services, with an app and plug-in dongles that let anyone take credit cards with a mobile device. The company isn't getting any money from Washington, DC for this partnership, according to Bloomberg, and it's even charging a reduced transaction rate. Taxi drivers will need to download a meter app approved by the Department of For-Hire Vehicles, which will let riders swipe, dip or tap their payment at the end of the ride. Tips will happen via the app as well, and electronic receipts can be sent via email or text message, just like the standard Square app. Drivers will need to move to the new platform by August 31, 2017.
DC taxis already have their own app, so this new partnership is yet another way to stay competitive with ride-share services like Lyft and Uber.
Via: Bloomberg
Source: Department of For-Hire Vehicles
Nintendo’s Switch continues to outsell the competition
Despite its launch issues and rollercoaster early sales projections, Nintendo's portable console can safely be declared a legitimate hit for the company at this point. In March, the Switch became the best-selling piece of game hardware, outselling Nintendo's own projections, and selling more copies of Breath of the Wild than there were consoles to play them on. Although sales slowed in the month after launch — dropping from 906,000 units in March to 280,000 units in April — the Switch still continued to top the charts.
Before launch, outside analysts estimated Nintendo would move about five million devices in the first year, so the roughly 1.2 million it has sold so far puts it right on track. On the other hand, the Wall Street Journal reported in March that the company ramped up production from 8 million to 16 million units for 2017 after the console started flying off of shelves, so Nintendo might be hoping the Switch shows up on a lot of holiday wish lists.
In the meantime, new game releases should help Nintendo sell a few more consoles, especially if the not-so-surprising success of Mario Kart 8 Deluxe is any indicator. In only two days, the latest entry into the beloved franchise became the top-selling video game for the entire month, moving over 550,000 physical and digital copies. If Nintendo’s own early advertisements are to be believed, every backyard barbecue will be gathered around a game of Mario Kart or Splatoon 2 this summer.
Via: VentureBeat
Source: Businesswire
ARM targets your brain with new implantable chips
Elon Musk isn’t the only one getting into the wetworks game. Chip manufacturer ARM announced on Wednesday that it is pairing with the Center for Sensorimotor Neural Engineering (CSNE) at the University of Washington to develop a line of brain-implantable systems-on-a-chip that can interface between our squishy bits and the next generation of powered prosthetics.
One of the primary challenges facing prosthetic development today is the lack of sensory feedback. Sure, organizations are developing smart hands that can see and "think" for themselves, but a prosthetic appendage's ability to transmit sensory information back to its user remains woefully inadequate compared to its biological counterpart. That's where ARM's implantable SoCs come in.
ARM will be leveraging its existing Cortex-M0 processor, the smallest one the company makes, as the basis for its wetware. "They have some early prototype devices," ARM's director of healthcare technologies, Peter Ferguson, told the BBC. "The challenge is power consumption and the heat it generates. They needed something ultra-small, ultra-low power."
These chips are designed to act as intermediaries. They’ll work to decode the complex signals emanating from the brain and transcribe them into digital signals that computers can understand, and vice versa. It’s a more permanent version of what researchers at Braingate2 and the Cleveland Functional Electrical Stimulation (FES) center have already developed (without the giant OBD2 plug sticking out of the patient’s head, natch). It’s also a more direct connection than what the Johns Hopkins Applied Physics Laboratory and Ossur created, both of which rely on the patient’s peripheral nervous system.
Engineering challenges aside, the potential upside for this technology is enormous. ARM hopes that these chips will eventually be able to help patients suffering from everything from seizures to Parkinson’s disease, spinal injuries to strokes. There’s no word on how long it will take to mature the technology but for the people it can potentially help, that day can’t come soon enough.
Via: BBC
Source: ARM
If flippers aren’t enough for you, check out this handheld underwater jetpack
Why it matters to you
This powerful underwater booster will make your swimming experience more enjoyable, whether you’re splashing in a pool or exploring the open ocean.
Propelling yourself through the water by moving your arms and kicking your legs is so 2016. If you really want to channel your inner Aquaman (or, we guess, Namora for the ladies?) you'll want to pick up Sublue's new Whiteshark MIX, a lightweight, powerful underwater booster that means you'll never have to do anything as strenuous as actually swimming again.
"Sublue has developed an underwater device that glides you through the water in an exhilarating, easy, and fun way," Jiancang Wei, co-founder and CEO of Sublue, told Digital Trends. "It's equipped with double propellers, which ensures speed and power as well as carefully thought-out safety features. The device is the smallest underwater scooter in the world, and can easily fit in a backpack. It is the ultimate James Bond-style swimming accessory."
The Whiteshark MIX boasts a top speed of 3.45 miles per hour, and can survive at depths of up to 130 feet. Its battery lasts up to one hour, and takes 2.5 hours to fully charge. If you’re in the mood for recording your travels, it also includes a built-in GoPro mount, especially positioned to capture underwater images and videos.

“We spent a lot of time researching to get the right type of blades for the propellers and decided on Kaplan four-leaf blades, which greatly reduced the high speed shock to ensure a smooth but still powerful glide through the water,” Wei continued.
Whether you’re a serious scuba diver or snorkeler who wants to explore the ocean or just a holidaymaker looking to have fun at the pool, the Whiteshark MIX is definitely worth a look. It’s currently available for pre-order on Indiegogo, with a price tag starting at $349 — which represents a 30-percent saving on the $499 it will eventually cost.
Shipping is set to take place in October, just in time for a freezing winter swim.