Google Photos makes it easier to share your shots, create physical photo books
Why it matters to you
Do you forget to send your photos to friends and family? Google Photos will now suggest images to share — all you have to do is tap a button.

With a hefty 500 million users, Google Photos is one of Google’s most-loved products, and Wednesday at Google I/O the company announced a number of new features to the service. Most notably, Google is making it much easier to share photos with your friends, both automatically and manually.
Perhaps the biggest new feature is “Suggested Sharing,” which will allow users to share photos with their friends as they are taken. For example, if you take a photo of a friend, Google Photos will use facial recognition and a series of other smart guesses to work out who that person is, then ask if you want to share the photo with them.
Google Photos will also now feature a new tab at the bottom of the Photos UI. Titled “Sharing,” the tab will offer a number of suggestions on how to share photos with your friends and family. From there, you can review the photos you’ll be sharing, then hit the share button. It’s really rather quick and easy.
Google Photos can also share photos automatically, if you choose to let it. For example, if you want your photo library to be shared with your significant other, Google Photos can automatically update their library every time you add new photos to your own.
Google knows photos aren’t just digital — many prefer to have actual printed photos — and so it’s launching a new feature to help with that. The feature is called Google Photo Book, and it’s rolling out to Android and iOS in the United States, though it will expand to more countries soon. Through the feature, you’ll be able to select the photos you want included, then organize them into a nice-looking, high-quality printed book. Those books will range from $10 to $20, depending on whether you want a softcover or hardcover.
On top of those features, Google is also baking Google Lens into Photos. Google Lens is a newly launched service from the company that uses AI to intelligently recognize a range of different pieces of information. For example, you can point Lens at a sign in another language, and it will translate it for you. Point it at a painting, and you can get historical information about that painting.
Google Photos is likely to continue to evolve over the next few years, especially considering how popular it is.
Windows Insider build supports creatives with improved My People feature
Why it matters to you
If you’re a Windows Insider on the Fast Ring, then you’ll want to go check out the latest build for some improvements to My People and system settings.
Microsoft’s Build 2017 developers conference was last week, and one of its stars was Windows 10 Fall Creators Update. The next major Windows 10 revision will feature more functionality aimed at helping creative professionals get their work done along with ways to make Windows 10 a user’s technological hub.
One of the most visible features coming with Windows 10 Fall Creators Update is the Story Remix app, which has already arrived on the desktops of Windows Insiders in the first post-Build 2017 preview build. Microsoft has released the next preview build, 16199, and it offers quite a few updates to features that weren’t heavily covered at Microsoft’s event.
Before getting into what’s new in build 16199, Microsoft took a moment to explain why some Windows Insiders have received Story Remix and others have not. The answer is simple: Microsoft is testing the experience and needs a control group to be able to determine how Story Remix is affecting systems.
If you don’t have Story Remix on your system — and the easiest way to tell is to check for the Create pivot option in the Windows 10 Photos app — then you’re in that control group. If you want to get into the experimental group and get Story Remix installed, then send the app team an email for instructions on how to sign up.

In terms of build 16199, Microsoft’s emphasis seems to be on polishing some of Windows 10’s core functionality along with improving the My People feature. Some of the changes to the latter include:
- See emoji from pinned contacts: The My People feature lets you pin contact icons to the taskbar, and now Windows 10 will play emoji animations in pinned icons. You can turn the feature off in taskbar settings by right-clicking on the taskbar.
- Notification Badging: The contact icons will now show a counter of the number of notifications you’ve received. This feature requires Skype version 11.16.556.0 to work.
- People-first Sharing: There are now two ways to kick off a conversation with a contact. You can drag a file onto a contact icon to create an email and you can share directly to a contact by choosing from a list of recommended people in the share picker.
Next up are some improvements to the Windows 10 gaming options in Settings > Gaming and the Windows Game Bar:
- Broadcast using game-only audio: You can now go to Settings > Gaming > Broadcasting and set your audio to “Per-app” rather than sending the entire PC’s audio through Beam.
- Audio settings are now Game DVR in the Game bar: Audio settings have been renamed to Game DVR, and all of those settings now live in that section.
- If you notice a “Game Monitor” page in Settings > Gaming, ignore it for now. It is a placeholder for future functionality.
Settings has received a few other updates as well. Here are the highlights:
- Settings > System > About is now integrated with System Health. The About page will now show information from Windows Defender Security Center for a quick glance at your PC’s status.
- Tips and Videos in Settings: Microsoft is adding tips and videos to help you learn more about the options you can configure in Windows 10. The first examples are in Settings > Ease of Access and Settings > Update & Security.
- Expanding Storage Sense’s abilities: You can now configure your system to automatically clean up files that haven’t been changed for 30 days. Go to Settings > System > Storage to configure this option.
Microsoft also updated Cortana on Android to support incoming call notifications. On your Android smartphone, go to the Cortana app, then Settings > Sync Notifications, and turn on the complete list of Cortana cross-device features. Then, whenever you receive a call on your Android device, you will get an immediate notification on your Windows 10 PC with options to decline the call or send a text response.

Build 16199 also includes the usual list of fixes, changes, and improvements. Highlights include:
- Windows Defender Application Guard now works on touch-enabled PCs.
- An issue with third-party antivirus programs that causes Windows desktop apps to fail when launching has been fixed.
- Disabled drivers will no longer show up as an issue in Windows Defender Security Center.
- The Note quick action has been removed due to low usage.
Of course, no Windows 10 preview build would be complete without some known issues to keep in mind. Here are a few from build 16199, along with some that have carried over from the past couple of builds:
- Outlook 2016 may hang on launch due to a spam filter issue.
- Microsoft is investigating issues with opening PDFs in Edge.
- Some games, including Civilization VI, may not launch in this build.
- Windows Mixed Reality doesn’t work in this build, and developers should hold off on updating if they’re actively working on a Windows Mixed Reality project.
As always, you will need to be a Windows Insider in the Fast Ring to receive this build. And be sure to keep those known issues in mind before hitting the update button. If you want to get even more involved with the development of Windows 10 Fall Creators Update, then you can check out the ongoing Bug Bash.
Google wants Android O to make users of accessibility services more productive
Why it matters to you
Enhancements to these services will help extend the benefits of Android apps to those with accessibility needs.

The Android accessibility services team took the stage at Google’s I/O developer conference Wednesday to discuss a number of changes coming in Android O that aim to make the platform much more user-friendly for everyone.
Increasing productivity for accessibility services users was job one in preparing for Android O, according to Victor Tsaran, technical program manager on the Accessibility development team. To that end, the upcoming version of the mobile operating system delivers several critical improvements to TalkBack, an Android accessibility service that reads screen content to users who are visually impaired.
First, Android O introduces a separate volume stream for times when the system reads back to you. In other words, media like music and YouTube videos no longer has to play at the same volume that TalkBack does, so it’ll be easier to distinguish between them.
An even bigger addition is support for multilingual text-to-speech. Tsaran demonstrated the feature by having the system read an email out loud that contained phrases in several different languages. Android was intelligent enough to differentiate between them and adjust on the fly.
Android O will also allow fingerprint sensors on devices to support basic gestures so users can swipe between options. In tandem with TalkBack, this means a user who is unable to see the screen can swipe successively between menu items, hearing each one individually read back to them.
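For developers, the fingerprint gestures described above surface through a new accessibility-service API in Android O (API level 26). The minimal, hedged Kotlin sketch below shows how a service could cycle through spoken menu options on fingerprint swipes; the service class, menu items, and the speak() helper are illustrative stand-ins rather than anything TalkBack itself ships.

```kotlin
// Minimal sketch of an accessibility service reacting to Android O's
// fingerprint swipe gestures (API 26). The service's XML configuration must
// declare android:canRequestFingerprintGestures="true" and the manifest needs
// the USE_FINGERPRINT permission; speak() stands in for real spoken feedback.
import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.FingerprintGestureController
import android.view.accessibility.AccessibilityEvent

class SwipeMenuService : AccessibilityService() {

    private val menuItems = listOf("Read from top", "Spell last word", "Change language")
    private var index = 0

    override fun onServiceConnected() {
        fingerprintGestureController.registerFingerprintGestureCallback(
            object : FingerprintGestureController.FingerprintGestureCallback() {
                override fun onGestureDetected(gesture: Int) {
                    when (gesture) {
                        FingerprintGestureController.FINGERPRINT_GESTURE_SWIPE_DOWN ->
                            index = (index + 1) % menuItems.size                  // next option
                        FingerprintGestureController.FINGERPRINT_GESTURE_SWIPE_UP ->
                            index = (index - 1 + menuItems.size) % menuItems.size // previous option
                    }
                    speak(menuItems[index]) // read the currently focused option aloud
                }
            },
            null // deliver callbacks on the main thread
        )
    }

    private fun speak(text: String) { /* hand off to a screen reader / TTS; omitted */ }

    override fun onAccessibilityEvent(event: AccessibilityEvent) {}
    override fun onInterrupt() {}
}
```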
Finding and triggering accessibility services was another major focus for Android O. The update will bring a context-aware, dedicated accessibility button at the bottom right of the navigation bar that can trigger certain actions depending on what’s visible on the screen and which services you have enabled.
For example, if you’re browsing the home screen, pressing the button will trigger magnification. If you’re using text-to-speech, it will bring up a remote control that allows you to start and stop screen reading and set the speed at which the system reads to you.
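On the developer side, Android O also lets an accessibility service claim that navigation-bar button through a new AccessibilityButtonController API. A hedged Kotlin sketch of the registration flow is below; the toggleMagnification() action is a placeholder for whatever context-aware behavior a real service would run.

```kotlin
// Sketch of an accessibility service claiming Android O's navigation-bar
// accessibility button (API 26). toggleMagnification() is a placeholder for
// whatever context-aware action the service wants to run when tapped.
import android.accessibilityservice.AccessibilityButtonController
import android.accessibilityservice.AccessibilityService
import android.accessibilityservice.AccessibilityServiceInfo
import android.view.accessibility.AccessibilityEvent

class ButtonAwareService : AccessibilityService() {

    override fun onServiceConnected() {
        // Ask the system to show the accessibility button for this service.
        serviceInfo = serviceInfo.apply {
            flags = flags or AccessibilityServiceInfo.FLAG_REQUEST_ACCESSIBILITY_BUTTON
        }

        accessibilityButtonController.registerAccessibilityButtonCallback(
            object : AccessibilityButtonController.AccessibilityButtonCallback() {
                override fun onClicked(controller: AccessibilityButtonController) {
                    toggleMagnification() // react based on what is currently on screen
                }

                override fun onAvailabilityChanged(
                    controller: AccessibilityButtonController,
                    available: Boolean
                ) {
                    // The button can be unavailable, e.g. when another service owns it.
                }
            }
        )
    }

    private fun toggleMagnification() { /* service-specific behavior; omitted */ }

    override fun onAccessibilityEvent(event: AccessibilityEvent) {}
    override fun onInterrupt() {}
}
```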

The focus on just making accessibility services easier to understand has made its way to the settings menu as well. Gone are the vague category descriptors, like “System” and “Services.” The menu now groups features based on the actions they perform, and also contains descriptions for what each service does. What’s more, a new shortcut has been added to turn accessibility services on and off on the fly, by pressing both volume buttons.
During the event, the development team stressed that Google arrived at many of these improvements by testing them, in iterative fashion, with real users. Likewise, the company is imploring third-party developers to perform their own accessibility research.
Last year, Google released an app called Accessibility Scanner that could examine developers’ apps and suggest changes to help enhance accessibility, like improving text contrast. Since that time, the company says developers have used the app to find over one million opportunities to improve their apps’ functionality for users with accessibility needs.
Google Home just leapfrogged Amazon Echo at I/O 2017

Google just took the lead with a 2-hour keynote address.
Google I/O 2017 marked a massive improvement in Google Home’s capabilities, the importance of which should not be underestimated. In less than a 30-minute slice of the two-hour keynote address, Google rolled out fresh Google Home features that improve the daily functionality of the connected speaker and completely change the possibilities for both requesting and receiving information from it.
Amazon should take note.
Adding push information

It becomes harder and harder to ignore Google Home’s presence.
In what may have initially come across as a small development, Google made an important change to the way Google Home works by introducing what it calls “proactive notifications.” Up to now, Google Home was always listening and waiting for your input — now, it can pulse its lights to let you know it has something to tell you. When you notice the lights, simply say “Hey Google, what’s up?” and it will read out the timely information it has for you. Google says what it pushes will be limited to only the most important information, and if done correctly this could be extremely useful.
This is a huge change to the way you’re expected to interact with Google Home, and has the potential to dramatically increase use by the average Home owner. By proactively pushing useful information, it becomes harder and harder to ignore Google Home’s presence, which creates a loop of using Home more often.
Calling without a catch
One large feature that caught everyone’s eye in the wake of Amazon’s recent Echo announcements was free calling from Google Home. You can now simply ask Google Home to call any of your contacts, so long as they have a phone number associated with their contact entry in your Google account. This critically bests the Echo in that it actually dials a phone number — you can call any mobile or landline, rather than only reaching someone else’s smart speaker or its companion app. The outgoing calls from Home can even be masked to look like they’re coming from your phone, which makes the experience 100 percent seamless for the person on the other end.
Call any number at any time — no strings attached.
An important function that really makes voice calling effective is Google’s recent implementation of multi-user support based on voice recognition. If you say “call Mom,” it’s going to dial your mom, and if your spouse says the same thing, it’s going to call their mother instead. It’s a decidedly personal experience that just makes sense, but one that is a difficult technological problem to solve.
An entirely new interface paradigm

Google Home can respond on your phone or TV, too.
The final part of the latest Google Home announcements has less to do with Home itself and more with how it fits into your entire life. Now Google Home is no longer operating in a silo — it’s simply the contact point for your voice, and can then give you information on other devices. Google Home can now send content to your phone or TV when applicable, whether that means sending Google Maps directions to your phone when you ask or playing a YouTube video on your nearby TV.
You could easily see this as a direct shot across the bow of the new Amazon Echo Show, which made the important jump to using a screen in addition to voice so that it can always offer you information no matter your query. Google Home and Google Assistant’s strength over Amazon here is that Google has potential for deeper integration with more of your screens. Chromecast and Android TV give more options for your big screens and multi-room audio, while Google Assistant being built into just about every Android phone offers a deep hook in billions of devices.
Of course, this is only a big feature if your household already has Chromecasts or Android TVs — which isn’t necessarily a given — but the potential is there in ways that Amazon can’t yet offer.
Your move, Amazon
With these fresh Google Home features, the ball is back in Amazon’s court to step up and match what Google Home is now capable of. Amazon may have a larger, longer-standing install base of Echo devices, with new hardware coming, but Google’s superiority in software and platforms is winning right now.
AI-powered Google for Jobs has work for everybody
While the technology industry is a goldmine of employment, finding a job can be tough for anyone not developing an app or working on AI. This is especially true for folks looking for entry-level positions. Craigslist decimated the classified sections of newspapers, and while sites like Monster, LinkedIn, and others are helpful if you have an established career, it’s tough to find entry-level work. According to Google, it’s also hard for employers to find people to fill those positions.
So, in partnership with LinkedIn, Monster, CareerBuilder, Glassdoor, and — surprisingly — Facebook, the tech giant will be launching Google for Jobs, an AI-powered search engine that combines Google search with machine learning to delve into career sites, job boards, staffing agencies, and applicant tracking systems and help you find work in your area. You can even set an alert for your search. That means if the barista position you’re looking for isn’t available today, Google will notify you when it surfaces.
That’s outstanding if you’re one of the 4.4 percent of the population that’s unemployed. It was also good to see Google CEO Sundar Pichai demo the job search with retail jobs instead of developer jobs. There are plenty of ways for a computer science major to find a gig. Finding a spot as a cashier in a store can be a bit trickier because of inconsistent job titles and some companies sticking to the same job-posting plan year after year, even though better solutions are out there. Google takes all the information from various sources, throws its AI at it, and spits out an easy-to-read list that’s beneficial to everyone involved.
This announcement is also a timely reminder of the importance of smartphones for those struggling to make a living. The ubiquitous device has become an important tool for finding work, as for many it’s their primary means of getting online. With Google’s announcement, that task is now just a bit easier.
Google’s AI initiative will power some pretty cool technology like Google Lens and Assistant, but it’s good to know the company is also making sure that artificial intelligence is helping folks that need a paycheck more than they need to know what kind of flower they’re looking at.
Watch Google’s I/O 2017 keynote in under 16 minutes
If you missed out on Google’s I/O 2017 keynote earlier today, don’t fret. We’ve cut down all of the noteworthy news on Google Lens, AI, Google Assistant, Google Home, Daydream, Android O and more into a quick clip that runs under 16 minutes. Just sit back, relax and catch up on all of the news in way less time than we spent taking in the two-hour presentation this afternoon.
Google Assistant on the iPhone is better than Siri, but not much
Google’s Assistant is finally ready to take on Siri on Apple’s own turf: the iPhone. Yes, you could already play around with the AI-powered chatbot if you downloaded Allo — Google’s mobile-only messenger app — but its functionality was limited. Today, that changes thanks to a new standalone Google Assistant app available on Apple’s App Store (though it’s US-only for now). Eager to check it out, we downloaded it right away and spent some time commanding our Google-branded phone butler around. After a few hours, I’ll say that while I find Google Assistant a lot friendlier and smarter than Siri, it doesn’t quite replace it. At least, not yet.
The first obvious barrier is that while Siri is baked right into iOS, you’ll need to download Google Assistant as a separate app. Plus, accessing Siri is as easy as holding down the iPhone’s home button — with Google Assistant (as with Cortana, Alexa and all other third-party assistants), you’ll need to take the extra step of launching an app. If you have an Android phone, Google Assistant is ready to go without having to download anything at all.
As you might expect, when you first launch Google Assistant on the iPhone, it asks you to log in with your Google account. After you do, it introduces itself to you and invites you to ask it anything you wish. Press the microphone icon at the center to offer a voice command, or if you’d rather not disturb the people around you, you can hit the keyboard icon to type your query.
The first thing you might wonder is if you can make a call or send a message on the iPhone with Google Assistant. The answer is: You can, but it’s not any easier than it would be with Siri. When I say, “Call Mom,” for example, it brings up her name and triggers a phone call, which you can then cancel or confirm. When I say, “Text Mom,” it asks me for my message and then kicks me over to the Messages app on my phone, where I can choose to send it off or not. At least Siri can send messages without me having to open the app.

I also tried to play music on Google Assistant to see how the experience compares to Siri. It was a little, well, uneven. When you first tell Google Assistant to play music, it’ll ask you to choose between Apple Music and YouTube as your default. I chose YouTube and then said, “Play LCD Soundsystem.” It kicked me over to the YouTube app, where it played a random song from the band. Then I went back and said “Play Radiohead,” and it just gave me a list of albums. I then tried to switch the default to Apple Music, which I was somehow able to do by saying “Play on Apple Music.” From then on, whenever I said “Play [name of song],” it would play the song on Apple Music. Unfortunately, it doesn’t appear that I can switch back to YouTube as the default, despite multiple attempts. Sometimes it says it’s playing a song, but nothing happens. Clearly, this feature is still pretty buggy.
As you might expect, Assistant plays particularly well with Google’s own apps. So sending email through Gmail is a snap — say who you want to send the email to, and it’ll kick you over to the Gmail app to follow through. Similarly, it’ll offer directions with Google Maps rather than Apple’s own.
What I found particularly intriguing about the Google Assistant app on iOS is that there’s a whole Explore page full of suggestions on what you can do with it. There’s a list of the usual suggestions, like “How many pounds in a kilogram?” or “What sound does a dog make?”
But interestingly, there’s also a slew of third-party chatbots you can try out. Examples include Genius, a bot that’ll guess the name of a song based on a lyric snippet, and the Magic 8 Ball, which will offer pithy responses to yes-or-no questions. Google Home users likely already know about some of these third-party chatbots, but to mobile users, this is new.

Aside from Explore, there’s also a Your Stuff tab that lists your Reminders, Agenda, Shopping List and quick Shortcuts that you can add to customize Assistant. So, for example, you can say “Late again” to trigger an automatic text to your best friend that you’re running five minutes late. “Cheer me up” will automatically bring up a list of kitten videos on YouTube.
I then tried to do a number of things on both Google Assistant and Siri to compare the two. I discovered that due to iOS restrictions, Google Assistant isn’t able to set alarms, take selfies, launch apps, post to Twitter or Facebook, call Ubers or Lyfts, or use third-party apps like Whatsapp for sending messages. Siri, however, was able to do all of these tasks without issue.
At the same time, Google Assistant was vastly superior when it came to translating languages (Siri often faltered) and remembering context clues. For example, when I asked, “Who’s the president of the United States?” and followed it up immediately with “How tall is he?” Google Assistant immediately responded with “Donald J Trump” and “6-feet 2-inches tall.” Siri, on the other hand, could answer the first question but not the second (it responded with “I don’t know”). Google Assistant was also smart enough to respond to set-a-reminder requests with the place and time in which I wanted to be reminded — Siri just placed them on a Reminders list. Siri was also sometimes just plain wrong — it erroneously said the population of Egypt was 85,800 (it’s actually 91.51 million).
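As a rough illustration of the kind of context carry-over being described, an assistant can remember the entity behind its last answer and substitute it for pronouns in the follow-up question. The toy Kotlin sketch below does exactly that with hard-coded answers; every class and lookup here is hypothetical and vastly simpler than whatever Google actually runs.

```kotlin
// Toy sketch of follow-up handling: remember the entity behind the last
// answer and substitute it for pronouns in the next query. Purely
// illustrative; a real assistant's pipeline is far more involved.
class ConversationContext {
    private var lastEntity: String? = null

    fun answer(query: String): String {
        // Replace a lone pronoun with whatever entity the previous answer was about.
        val resolved = lastEntity
            ?.let { query.replace(Regex("\\b(he|she|it|they)\\b", RegexOption.IGNORE_CASE), it) }
            ?: query
        val (entity, reply) = lookUp(resolved) // hypothetical knowledge lookup
        if (entity.isNotEmpty()) lastEntity = entity
        return reply
    }

    // Hard-coded stand-in for a knowledge graph query.
    private fun lookUp(question: String): Pair<String, String> = when {
        "president of the United States" in question ->
            "Donald J. Trump" to "Donald J. Trump"
        "How tall is Donald J. Trump" in question ->
            "Donald J. Trump" to "6 feet 2 inches"
        else -> "" to "I don't know"
    }
}

fun main() {
    val ctx = ConversationContext()
    println(ctx.answer("Who's the president of the United States?")) // Donald J. Trump
    println(ctx.answer("How tall is he?"))                           // 6 feet 2 inches
}
```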
In many ways, Siri pales in comparison to Google Assistant. It can’t understand voice commands as well as Google, and it doesn’t remember your preferences like Google can. Siri makes so many errors that there’s even a Reddit group called “Siri fails” that documents its many mistakes. But as long as it comes preinstalled in every iPhone out there and does a good-enough job, Google Assistant — and all other rivals — will have a hard time replacing it.
Android O has emoji you’ll actually recognize
Ever since KitKat, Android’s standard emoji have used minimalist blobs to represent people. They’re unusual and cute, but that gumdrop look isn’t usually what you associate with emoji — just about everybody else uses circular shapes. And that can create real problems if you send an emoji that doesn’t convey the same meaning on your friend’s phone. Thankfully, Google has seen the light. Android O will include more conventional (not to mention more recognizable) emoji, complete with gradients and a wider range of colors. They’re not as distinctive, but they make considerably more sense.
The O update will also include the new emoji characters due to arrive this summer, such as a vomiting face, an orange heart and critters like dinosaurs and a giraffe. More importantly, there’s a better chance that you’ll see those characters elsewhere: Google is promising an upgrade that will keep you up to date with emoji on older Android releases. You won’t have to replace a years-old phone just to see everything your friends are saying.
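Google hasn’t spelled out the mechanism in this announcement, but the downloadable-fonts EmojiCompat support library it introduced alongside O is the likely route for keeping older devices current. The Kotlin sketch below shows the usual setup; the font provider, query string, and certificate resource are assumptions drawn from the library’s standard configuration, not details from this article.

```kotlin
// Hedged sketch: backporting current emoji with the EmojiCompat support
// library and downloadable fonts. The provider, query, and certificate values
// below are the library's commonly documented Google Play services settings
// and should be treated as assumptions rather than details from this article.
import android.app.Application
import android.support.text.emoji.EmojiCompat
import android.support.text.emoji.FontRequestEmojiCompatConfig
import android.support.v4.provider.FontRequest

class EmojiApp : Application() {
    override fun onCreate() {
        super.onCreate()
        val fontRequest = FontRequest(
            "com.google.android.gms.fonts",             // font provider authority (assumed)
            "com.google.android.gms",                   // provider package (assumed)
            "Noto Color Emoji Compat",                  // emoji font query (assumed)
            R.array.com_google_android_gms_fonts_certs  // provider signing certs (assumed)
        )
        EmojiCompat.init(
            FontRequestEmojiCompatConfig(this, fontRequest)
                .setReplaceAll(true) // swap in the downloaded glyphs for every emoji
        )
    }
}

// Later, anywhere text is rendered:
// textView.text = EmojiCompat.get().process("New this summer: a giraffe, dinosaurs, and a vomiting face")
```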
It’s a largely cosmetic change, and you might not even notice it if your phone manufacturer customizes its icon set (LG and Samsung tend to do this, for instance). You may well miss the blobs, for that matter. Even so, the shift is significant. Google is accepting that it doesn’t control the common visual language of emoji, and that it may be wiser to use relatively humdrum emoji than to risk annoying users.
Source: Emojipedia
Cabs in Washington, DC are replacing meters with Square readers
If nothing else, Uber has permanently disrupted the ride-for-hire system that has traditionally been served by taxis. Grabbing a ride has never been easier (at least where services like Lyft and Uber are allowed to operate), and paying with a credit card number stored in an app ensures that none of the drivers or riders need to worry about cash. Taxi companies have been trying to push back, however. Square is helping the fight, too, with a partnership to process payments for cab drivers in Washington, DC.
Square is one of the original mobile payment services, with an app and plug-in dongles that let anyone take credit cards with a mobile device. The company isn’t getting any money from Washington, DC for this partnership, according to Bloomberg, and it’s even charging a reduced transaction rate. Taxi drivers will need to download a meter app approved by the Department of For-Hire Vehicles, which will let riders swipe, dip or tap their payment at the end of the ride. Tips will happen via the app as well, and electronic receipts can be sent via email or text message, just like the standard Square app. Drivers will need to move to the new platform by August 31, 2017.
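To make the driver-side flow concrete, here is a hypothetical Kotlin sketch of the steps described above: charge the fare by swipe, dip, or tap, add an in-app tip, and send a receipt by email or text. None of these types come from Square’s actual SDK; they simply model the workflow.

```kotlin
// Hypothetical model of the driver-side flow described above: charge a card
// (swipe, dip, or tap), add an in-app tip, and send an electronic receipt by
// email or text. None of these types come from Square's actual SDK.
import java.math.BigDecimal

enum class CardEntry { SWIPE, DIP, TAP }
enum class ReceiptChannel { EMAIL, TEXT }

data class Fare(val base: BigDecimal, val tip: BigDecimal) {
    val total: BigDecimal get() = base + tip
}

// Stands in for whatever card-reader integration the approved meter app uses.
interface PaymentProcessor {
    fun charge(amount: BigDecimal, entry: CardEntry): String // returns a transaction ID
}

class TripCheckout(private val processor: PaymentProcessor) {
    fun settle(fare: Fare, entry: CardEntry, receiptVia: ReceiptChannel, contact: String): String {
        val txId = processor.charge(fare.total, entry)
        sendReceipt(receiptVia, contact, fare, txId)
        return txId
    }

    private fun sendReceipt(channel: ReceiptChannel, contact: String, fare: Fare, txId: String) {
        // A real app would call an email/SMS service; here we just log the receipt.
        println("Receipt for ${fare.total} (tx $txId) sent via $channel to $contact")
    }
}
```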
DC taxis already have their own app, so this new partnership is yet another way to stay competitive with ride-share services like Lyft and Uber.
Via: Bloomberg
Source: Department of For-Hire Vehicles
Nintendo’s Switch continues to outsell the competition
Despite its launch issues and rollercoaster early sales projections, Nintendo’s portable console can safely be declared a legitimate hit for the company at this point. In March, the Switch became the best-selling piece of game hardware, outselling Nintendo’s own projections and selling more copies of Breath of the Wild than there were consoles to play them on. Although sales slowed in the month after launch — dropping from 906,000 units in March to 280,000 units in April — the Switch still continued to top the charts.
Before launch, outside analysts estimated Nintendo would move about five million devices in the first year, so the roughly 1.2 million it has sold so far put it right on track. On the other hand, the Wall Street Journal reported in March that the company ramped up production from 8 million to 16 million units for 2017 after the console started flying off of shelves, so Nintendo might be hoping the Switch shows up on a lot of holiday wish lists.
In the meantime, new game releases should help Nintendo sell a few more consoles, especially if the not-so-surprising success of Mario Kart 8 Deluxe is any indicator. In only two days, the latest entry into the beloved franchise became the top-selling video game for the entire month, moving over 550,000 physical and digital copies. If Nintendo’s own early advertisements are to be believed, every backyard barbecue will be gathered around a game of Mario Kart or Splatoon 2 this summer.
Via: VentureBeat
Source: Businesswire



