Google opens Maps to bring the real world into games
Pokémon Go and other games that use real-world maps are all the rage, but there’s a catch: they typically depend on semi-closed map frameworks that weren’t intended for gaming, forcing developers to jump through hoops to use that mapping info. Google doesn’t want that to be an issue going forward. The search firm is both opening its Maps platform’s real-time data and offering new software toolkits that will help developers build games based on that data.
The software includes both a kit to translate map info into the Unity game engine and another to help build games around that location data. The combination turns buildings and other landmarks into customizable 3D objects, and lets you manipulate those objects to fit your game world. It can turn every real hotel into an adventurer’s inn, for instance, or add arbitrary points of interest to serve as checkpoints.
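To make the idea concrete, here is a minimal sketch of the kind of remapping a developer might set up once the toolkit hands over labeled landmarks. The place types and asset names below are invented for illustration; they are not part of Google’s actual SDK:

```python
# Hypothetical remapping of real-world place types to game assets.
# None of these names come from Google's SDK; they only illustrate
# the "hotel becomes adventurer's inn" idea described above.

REAL_TO_GAME = {
    "hotel": "adventurers_inn",
    "cafe": "healing_spring",
    "bank": "treasure_vault",
}

def to_game_object(place_type: str) -> str:
    """Pick a game asset for a real-world place type, falling back
    to a generic landmark for anything unmapped."""
    return REAL_TO_GAME.get(place_type, "generic_landmark")
```

In a real project, a table like this would decide which 3D prefab gets instantiated for each map feature the SDK streams in.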
It’s going to be a while before you see games based on these frameworks. As with augmented reality kits like ARKit and ARCore, however, they could lead to a surge of titles from developers who previously would have had to write a lot of this code themselves. Now, they can focus less on the nuts-and-bolts aspects and more on the actual gameplay.
Source: George Portillo (YouTube), The Verge
I took a break at SXSW to listen to an ‘audio-based movie’
I stopped short when I came upon Audiojack’s booth at the SXSW Wellness Expo. Four blindfolded men sat in a huddle with headphones on, looking deeply engrossed. The company’s affable representative David Tobin was a few steps away, explaining his app to a group of curious onlookers. It turns out the listeners were immersed in “audio movies” — three- to 10-minute sound stories that tell tales without visuals or words.
Audiojack’s app has been available for years, but Tobin told Engadget he’s making a concerted effort to launch the service this SXSW. He’s been working with hospitals, research institutions and charitable organizations to bring sound movies to kids and the elderly and investigate the benefits they bring. Audiojack conducted a study at senior living facility Belmont Village’s “memory loss and dementia group” to test ease of recall, and found that “every participant recalled their past and were 100-percent lucid and engaged.”
Apprehensive but intrigued, I dropped my backpack to the floor, strapped on the blindfold and put the headphones on. Tobin started a “science fiction adventure” and after a couple of false starts, the “movie” began. I listened to a person’s footsteps in what sounded like a forest, drawing up lush greenery in my head as I envisioned a trekker walking. As vehicles drove by, I started to see a small road in my head, and my mental imagery grew more and more detailed as the story progressed. Eventually, as the plot twisted and turned, I had a hard time following along in my head, but that was in part due to the distractingly loud sounds coming from adjacent booths.

After the four-minute session, I didn’t feel much more relaxed than I did before. But I can definitely see Audiojack’s appeal. Besides the touted benefits for memory and encouraging imagination, the idea of ambient sound that isn’t simply repetitive white noise is something I appreciate. I can even imagine playing this on my smart speaker to lull me to sleep or fill my apartment with noise as I go about my errands.
Despite my skepticism about the potential benefits, I’m still intrigued by the promise. The app also offers exercises to foster imagination — activities like asking the user to draw what they heard and fill in specific details. According to the company, the Cancer Wellness Center of Chicago has used the app “for those going through chemotherapy and other treatments,” while fitness programs use it to help with motivation and endurance training. Audiojack is also used in writing, art and history classes.
Given all that our senses are bombarded with today, the idea of a sound-only movie seems not just like a great way to relax, but also a chance to hone typically unflexed muscles like focus and imagination. The app is available for free on Android and iOS, but you’ll need to pay $2.99 a month (or $14.99 a year) to unlock more episodes.
Catch up on the latest news from SXSW 2018 right here.
Set location-based reminders with your voice on Google Home
Google added voice-controlled Reminders to its Home devices last September, allowing it to catch up with Alexa and Siri. You could set one-off and recurring reminders on a daily or weekly basis, as well as contextual reminders that would propagate to your Android phone. Now you can set location-based reminders with your Google Home device and get reminded on your phone.
Yup, we’ve all been there. Try saying, “Hey Google, set a reminder to pick up more coffee at the grocery store” and your Google Assistant will remind you on your phone. pic.twitter.com/IkLjV4I2zd
— Made by Google (@madebygoogle) March 14, 2018
Your reminder will appear on your phone when you arrive at the given location. If you say, “remind me to grab some cereal from the grocery store,” then you’ll be notified when you get to a grocery store. We confirmed the feature by asking Home to remind us to get coffee beans the next time we were at Starbucks (don’t judge). Home replied, “Sure, I’ll remind you on your phone the next time you’re at a Starbucks,” which sounds like Assistant will remind us at any Starbucks we end up at. That’s in contrast to Siri, which asks which Starbucks location you want to be reminded at. We’ve reached out to Google for more details and will update this post if we hear back.
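The “any Starbucks” behavior suggests the reminder stores a place category rather than a single address. A rough sketch of that matching logic (illustrative only; these names are not Google’s API) could look like this:

```python
# Sketch of category-based reminder matching: a reminder fires at any
# place of the stored category, not one specific address. Illustrative
# names only; this is not Google's implementation.

def due_reminders(reminders, current_place_category):
    """Return the text of every reminder whose category matches the
    category of the place the user just arrived at."""
    return [r["text"] for r in reminders
            if r["category"] == current_place_category]
```

A reminder saved as `{"text": "get coffee beans", "category": "starbucks"}` would then fire at any Starbucks, which matches Home’s reply above.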
Via: Android Police
Source: Google
Microsoft’s inclusive Xbox avatars could arrive this spring
Microsoft announced last year that it was overhauling its avatar system and while the more diverse and customizable Xbox Live avatars were initially due out last fall, the company has kept us waiting. But a source familiar with Microsoft’s Xbox plans has told The Verge that the new system will be available to Xbox Insiders for preview this month and is set for a wider rollout in April.
Through the redesigned setup, users will have far more options when it comes to designing their avatars. There will be more props, more clothing options and more body types to choose from, and, as last year’s trailer shows, users can even incorporate prosthetic limbs and pregnancy. And all of the props and accessories are gender neutral. “If you can see it in the store, you can wear it,” Kathryn Storm, an interaction designer at Xbox, said last year at E3. “We’re not holding you to any type of checkboxes.”
According to The Verge, the new avatar system will be integrated in the Xbox One dashboard and Microsoft plans to open a new avatar store in May. However, Mike Ybarra, Microsoft’s corporate VP of gaming, said on Twitter that the company has no firm release date yet and shipment will depend on what kind of feedback it gets. So while it appears the more inclusive avatars may be on the way sometime soon, an exact release date still seems to be up in the air.
They’ll ship when they’re ready. We have no firm date and ship date will be based on feedback (like most features). https://t.co/yZuxjdFblD
— Mike Ybarra (@XboxQwik) March 14, 2018
Via: The Verge
A new test could tell us whether an AI has common sense
Virtual assistants and chatbots don’t have a lot of common sense. That’s because these machine learning systems rely on specific situations they have encountered before, rather than drawing on broader knowledge to answer a question. However, researchers at the Allen Institute for AI (Ai2) have devised a new test, the AI2 Reasoning Challenge (ARC), that probes an artificial intelligence’s understanding of the way our world operates.
Humans use common sense to fill in the gaps of any question they are posed, delivering answers within an understood but non-explicit context. Peter Clark, the lead researcher on ARC, explained in a statement, “Machines do not have this common sense, and thus only see what is explicitly written, and miss the many implications and assumptions that underlie a piece of text.”
The test asks basic multiple-choice questions that draw from general knowledge. For example, one ARC question is: “Which item below is not made from a material grown in nature?” The possible answers are a cotton shirt, a wooden chair, a plastic spoon and a grass basket.
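For a sense of how such a test gets scored, here is a hedged sketch of accuracy scoring over ARC-style items. The data layout and scoring loop are illustrative, not Ai2’s evaluation code:

```python
# Illustrative ARC-style scoring: a model is a function from
# (question, choices) to a choice index, and its score is plain
# accuracy. This mirrors the format, not Ai2's actual harness.

QUESTIONS = [
    {
        "question": ("Which item below is not made from "
                     "a material grown in nature?"),
        "choices": ["a cotton shirt", "a wooden chair",
                    "a plastic spoon", "a grass basket"],
        "answer": 2,  # the plastic spoon
    },
]

def score(model, questions):
    """Fraction of questions the model answers correctly."""
    correct = sum(model(q["question"], q["choices"]) == q["answer"]
                  for q in questions)
    return correct / len(questions)
```

A baseline that always guesses the same choice hovers around chance on balanced four-way questions; common-sense knowledge is what’s needed to do much better.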
If a machine learning system can pass the AI2 Reasoning Challenge, it would mean the system has a grasp of common sense that no AI currently possesses. That would be a huge step forward, leading to smarter artificial intelligence and moving these systems closer to one day taking over the world.
Source: MIT Technology Review
Former Apple Employees Reflect on Siri’s ‘Squandered Lead’ Over Amazon Alexa and Google Assistant
The Information has published an in-depth look at how Siri has transitioned from one of Apple’s most promising technologies into a “major problem” for the company. The article includes interviews with a dozen former Apple employees who worked on the various teams responsible for the virtual assistant.
The report claims that many of the employees acknowledged for the first time that Apple rushed Siri to be included in the iPhone 4s before the technology was fully ready, resulting in several internal debates over whether to continue patching up the half-baked product or start from scratch.
Siri’s various teams morphed into an unwieldy apparatus that engaged in petty turf battles and heated arguments over what an ideal version of Siri should be—a quick and accurate information fetcher or a conversant and intuitive assistant capable of complex tasks.
The team working on Siri was overseen by Apple’s then iOS chief Scott Forstall, but his attention was reportedly divided by other major projects, including the upcoming launch of Apple Maps. As a result, Forstall enlisted Richard Williamson, who was also managing the Apple Maps project, to head up the Siri team.
According to the report, several former employees said Williamson made a number of decisions that the rest of the Siri team disagreed with, including a plan to improve the assistant’s capabilities only once a year.
Williamson, in an emailed response to the report, wrote that it’s “completely untrue” that he decided Siri shouldn’t be improved continuously.
He said decisions concerning “technical leadership of the software and server infrastructure” were made by employees below his level, while he was responsible for getting the team on track.
“After launch, Siri was a disaster,” Mr. Williamson wrote. “It was slow, when it worked at all. The software was riddled with serious bugs. Those problems lie entirely with the original Siri team, certainly not me.”
Forstall and Williamson were both fired by Apple in 2012 following the botched launch of Apple Maps on iOS 6. The former employees interviewed said they lamented losing Forstall, who “believed in what they were doing.”
Another interesting tidbit is that the Siri team apparently didn’t even learn about the HomePod until 2015. Last year, Bloomberg News reported that Apple had developed several speaker prototypes dating back to 2012, but the Siri team presumably didn’t know due to Apple’s culture of secrecy.
In a sign of how unprepared Apple was to deal with a rivalry, two Siri team members told The Information that their team didn’t even learn about Apple’s HomePod project until 2015—after Amazon unveiled the Echo in late 2014. One of Apple’s original plans was to launch its speaker without Siri included, according to a source.
The report says that Siri is the main reason the HomePod has “underperformed,” and said Siri’s capabilities “remain limited compared to the competition,” including Amazon Alexa and Google Assistant.
The most notable failure in Siri’s evolution is that it still lacks the third-party developer ecosystem considered the key element of the original Siri vision. Apple finally launched SiriKit in 2016 after years of setting aside the project and shifting resources away to other areas. […]
But SiriKit has yet to fulfill its promise. So far it includes just 10 activities—Apple calls them “intent domains”—such as payments, booking rides, setting up to-do lists and looking at photos. Several senior engineers who worked on SiriKit have left Apple or moved off the project.
Some former employees interviewed noted that “while Apple has tried to remake itself as a services company, its core is still product design.”
Apple responded to today’s report with a statement noting Siri is “the world’s most popular voice assistant” and touted “significant advances” to the assistant’s performance, scalability, and reliability.
“We have made significant advances in Siri performance, scalability and reliability and have applied the latest machine learning techniques to create a more natural voice and more proactive features,” Apple wrote in its statement. “We continue to invest deeply in machine learning and artificial intelligence to continually improve the quality of answers Siri provides and the breadth of questions Siri can respond to.”
The full-length article is a worthwhile read for those interested in learning more about Siri’s internal struggles and shortcomings.
The Information: The Seven-Year Itch: How Apple’s Marriage to Siri Turned Sour
Tags: Siri, theinformation.com
Discuss this article in our forums
Twitter Experimenting With Curated News Timelines, Planning Camera-First Feature
Twitter is working on several new features for its social networking platform, according to reports shared today by CNBC and BuzzFeed.
Twitter is experimenting with algorithmically curated timelines for major news events that will be shown to Twitter users at the top of their main Twitter feeds, a Twitter spokesperson told BuzzFeed this morning. The feature is an extension of “Happening Now,” which has previously only highlighted sports-related tweets.
Image via BuzzFeed
On Wednesday, Twitter began surfacing curated tweets surrounding major news events, including the congressional special election in Pennsylvania and the death of Stephen Hawking. Tweets included those from both news organizations and people who are not news professionals.
“People come to Twitter to see and talk about what’s happening. We’re working on ways to make it easier for everyone to find relevant news and the surrounding conversation so they can stay informed about what matters to them,” Twitter product VP Keith Coleman told BuzzFeed News in a statement.
Twitter plans to promote curated timelines at the top of the main feed using a banner, and when the banner is tapped, it will bring up Twitter’s curated timeline of the event. Currently, a small number of iOS and Android users are able to see the new timeline feature.
Separately, CNBC says Twitter is working on a “camera-first” feature that’s designed to put more emphasis on video and images. The new functionality combines location-based photos and videos with Twitter Moments around notable events, with companies able to sponsor events or put ads between tweets.
CNBC says the feature is similar to aggregated location-based story snaps in Snapchat, which are displayed in the Discover tab.
It’s not clear when Twitter plans to launch its camera-first feature, and it’s still possible that it could be changed drastically or abandoned entirely. Sources who spoke to CNBC believed the feature was in the early stages of development, and Twitter declined to comment.
Tag: Twitter
Grab the Amazon Prime exclusive 64GB Moto G5 Plus for $210
A stellar deal.
Amazon no longer loads its Prime-exclusive phones with ads, but that doesn’t mean the discounts are gone. The 64GB Moto G5 Plus has dropped to its lowest price yet, and it’s hard to find a better value than this deal. It has a 5.2-inch display, 64GB of internal storage, and comes unlocked.

You can use this on any major U.S. carrier without any issue, so be sure to grab one before the price jumps back up. Looking for something different? Be sure to check out all of the Prime exclusive devices now.
See at Amazon
Samsung Galaxy S9+ vs. Google Pixel 2 XL: Which should you buy?
Samsung and Google’s true flagships go head to head.
We’ve already put the smaller versions of these phones, the Galaxy S9 and Pixel 2, head to head — but now we look at the big ones. In many ways, comparing the Galaxy S9+ to the Google Pixel 2 XL is far more interesting, as the two match more closely in size, specs, capabilities and price. Because they’re bigger and more expensive, expectations run higher: these are the true flagships from Samsung and Google. So which one is best for you? We have all of the information you need to decide.
What’s the same
In many ways, Samsung and Google have made very similar flagships. Size-wise, the phones are nearly identical. The Galaxy S9+ has slightly smaller bezels, and its curved display makes it a tad narrower, but the Pixel 2 XL’s slightly smaller screen averages things out, so both phones have a very similar footprint. They’re big and give you lots of screen to use, but they aren’t particularly unwieldy — and in most cases, you can get things done with one hand.
In size and feel, Samsung and Google have made very similar flagships.
They’re both curvy phones, limiting the number of times your hand finds a sharp edge in use. I’ll discuss the merits of glass versus metal below, but both are constructed extremely well and give you the feeling that you got your money’s worth spending $850.
Samsung Galaxy S9 vs. Google Pixel 2: Which should you buy?
Both phones are water-resistant and have dual speakers that provide stereo separation and sound pretty good. Inside, the Galaxy S9+ has a slight spec advantage with its newer Snapdragon 845 processor and 6GB of RAM, but I’ve put this in the “same” category because the Pixel 2 XL simply feels just as fast as, or faster than, the Galaxy S9+ in daily use. Chasing specs isn’t a great idea in this case, even though I’ll concede that a couple of years from now the extra processing power and RAM could make a difference.
Outside of that processor and RAM, things are about the same. Both phones have high-resolution displays, along with all of the supporting radios and little specs you expect on a high-end phone. Their batteries are almost the same size, and in my experience they offer about the same battery life from day to day. The only real difference is that the Galaxy S9+ can’t settle down and just sip power in standby: sitting on a table, ostensibly not doing much, it drains at a faster rate than the Pixel 2 XL.
What’s different
Let’s continue the hardware discussion with the parts that differentiate these phones. Whether you like a glass-backed or metal-backed phone is mostly personal choice — and in this case, even the Pixel has a pretty large pane of glass on its back. The Galaxy S9+’s glass enables wireless charging, and it sure looks stunning out of the box. But the glass isn’t nearly as durable as the Pixel 2 XL’s metal, and the number of fingerprints and scratches the Galaxy S9+ picks up over time can be a major disappointment.
Samsung continues to offer more hardware for the money, but the same approach in software isn’t always great.
In exchange for those potential durability concerns, the Galaxy S9+ gives you hardware benefits. You get a headphone jack (and headphones in the box as a bonus), as well as an SD card slot for adding up to 400GB of storage at a whim. The latter may not be a huge deal, but I’ll still argue the headphone jack is an incredibly useful port to have. The Galaxy S9+’s display is also a big step up in overall quality from the Pixel 2 XL’s, with better brightness, colors and off-axis viewing — Samsung still wins the screen fight, hands-down.
Where things swing back in the Pixel 2 XL’s favor is in software. The Galaxy S9+’s sheer number of features could be perceived as an advantage, but I’ll argue all day that Google’s simplicity wins overall. You can always add capabilities and features through tweaking settings and installing apps, but you can never get away from Samsung’s duplicate apps and immense changes to Android that are regularly getting in your way. It’s never a good feeling to have to fight with the phone to get it working cleanly and efficiently.
Advanced users can manage, and normal people will be able to put up with the cruft, but they shouldn’t have to — and the Pixel 2 XL offers a more enjoyable daily software experience because of what it doesn’t have. Add in Google’s commitment to two years of major software updates and three years of monthly security patches, and the Pixel 2 XL offers a simpler overall package that doesn’t require so much maintenance or worry. You can just enjoy using the phone, with Google’s apps and services, plus the key apps you want to use, and you’ll quickly forget about the other fringe features you thought would be useful on the Samsung phone.
Both take wonderful photos; the question is how hard you want to work for it and how many tools you need.
The final important factor here is the cameras. I’ll say right from the start that I think both of these phones offer excellent overall camera experiences, and both take great photos every time you press the shutter button. Everything else aside, anyone would be happy with either one. But there’s some nuance to how they get there.
The Pixel 2 XL takes all of the work out of taking photos. Its extremely simple camera interface just lets you press the shutter and watch magic happen, as HDR+ processes your photo and gives you a wonderful recreation of the scene. It gives you bursts of color and crazy-wide dynamic range, and that’s super-appealing to the eye. The Galaxy S9+ makes you work for it a little more — it’ll take a fundamentally great photo with fine details and low noise even in a dark room, but you may have to adjust exposure or tap to focus to get just the right shot sometimes. The GS9+ then goes above and beyond with a great Pro camera mode, 960 fps slow-motion and lots of little tweaks that just give you more options for taking a variety of photos with the camera.
Again, both phones can do wonderful things with their rear cameras. Whether you want the super-simple route or one that gives you more tools and a little more work to do is up to you.
Bottom line: Which should you buy?

This is the big question. At the highest level, which phone is “best” for you lands primarily on your feelings about the software. Do you want Google’s clean, sleek experience that is lacking a bit in terms of raw features? Or do you want the power, options and customization potential of Samsung’s software, at the cost of usability and some added frustration? Both are valid choices, and the average customer probably won’t be as upset with Samsung’s software as I may be, but I still feel Google is doing things the right way with its Pixel software — now, and two years on when it’s still getting updates.
It comes down to which software experience aligns with your needs.
If you’re indifferent on the software front, Samsung offers a compelling total package with the Galaxy S9+. It straight-up offers a better display, newer internal specs, more hardware features and some nice value-adds — and aside from its glass back being a bit more fragile, it doesn’t do anything worse than the Pixel 2 XL. The Pixel 2 XL’s hardware is a bit more robust and has a nice understated design, but in most people’s eyes, this hardware isn’t as enticing as what Samsung has.
Some will hang onto the differences in camera quality to make a decision, but once again I feel the software experience is on a higher priority level. Both phones take great photos, but simply do things a little differently. If you’re fine with Samsung’s approach to software, you’ll be plenty happy with how it handles photography — likewise for Google and the Pixel 2 XL. So pick which software you like, come to terms with the hardware differences, and when you pick you’ll know that either phone will take great photos.
Samsung Galaxy S9 and S9+
- Galaxy S9 review: A great phone for the masses
- Galaxy S9 and S9+: Everything you need to know!
- Complete Galaxy S9 and S9+ specs
- Galaxy S9 vs. Google Pixel 2: Which should you buy?
- Galaxy S9 vs. Galaxy S8: Should you upgrade?
- Join our Galaxy S9 forums
Here’s what goes on behind the scenes with Motion Photos on the Pixel 2
More than meets the eye.
The Pixel 2’s camera continues to be in a league of its own, and not a day goes by when it fails to impress me. I still haven’t messed around much with its Motion Photos feature, but after reading Google’s behind-the-scenes look at the technology used to pull it off, that may begin to change.

When Motion Photos was announced, I personally saw it as Google playing catch-up with Apple’s “Live Photos” on iOS. Capturing a couple of extra seconds of footage along with a still image is a neat idea, but Google is actually doing a lot more than simply recording a scene before you hit the shutter button.
With Motion Photos enabled on the Pixel 2, taking a picture also records motion metadata generated from the phone’s gyroscope and the optical image stabilization (OIS) system in its camera. Software then combines these two signals, and by pairing hardware- and software-based stabilization, Google can greatly reduce the camera shake found in these short clips.

Before (left) and after (right) Motion Photos’ stabilization
Per Google’s Research Blog:
For motion photos on Pixel 2 we improved this classification by using the motion metadata derived from the gyroscope and the OIS. This accurately captures the camera motion with respect to the scene at infinity, which one can think of as the background in the distance. However, for pictures taken at closer range, parallax is introduced for scene elements at different depth layers, which is not accounted for by the gyroscope and OIS.
Once this system determines how much background movement there is in a Motion Photo:
We determine an optimally stable camera path to align the background using linear programming techniques outlined in our earlier posts. Further, we automatically trim the video to remove any accidental motion caused by putting the phone away. All of this processing happens on your phone and produces a small amount of metadata per frame that is used to render the stabilized video in real-time using a GPU shader when you tap the Motion button in Google Photos.

Before (left) and after (right) Motion Photos’ stabilization
As you can see from the GIFs above, the end result of this process is pretty darn incredible — and all of it happens in the background, powered by software.
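The underlying stabilization recipe (estimate the camera path, smooth it, then offset each frame toward the smooth path) can be sketched in a few lines. Google describes solving for an optimal path with linear programming; the moving average below is a simplified stand-in for that step:

```python
# Conceptual video stabilization: smooth a 1-D camera path (say,
# horizontal offsets per frame), then compute per-frame corrections
# that pull each frame onto the smoothed path. A moving average
# stands in for Google's linear-programming solver.

def smooth_path(path, window=3):
    """Moving-average smoothing of a per-frame camera path."""
    half = window // 2
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        out.append(sum(path[lo:hi]) / (hi - lo))
    return out

def corrections(path, window=3):
    """Offset to apply to each frame so it follows the smooth path."""
    return [s - p for p, s in zip(path, smooth_path(path, window))]
```

A single-frame jolt like `[0, 0, 3, 0, 0]` gets spread out by smoothing, and the per-frame corrections cancel most of the spike.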
Motion Photos are turned on by default on the Pixel 2, and you can share them as video clips and high-resolution GIFs right within the Google Photos app.
Google Pixel 2 and Pixel 2 XL
- Pixel 2 FAQ: Everything you need to know!
- Google Pixel 2 and 2 XL review: The new standard
- Google Pixel 2 specs
- Google Pixel 2 vs. Pixel 2 XL: What’s the difference?
- Join our Pixel 2 forums