
Posts tagged ‘Google’


AI was everywhere in 2016

At the Four Seasons hotel in South Korea, AlphaGo stunned grandmaster Lee Sedol at the complex and highly intuitive game of Go. Google’s artificially intelligent system defeated the 18-time world champion in a string of games earlier this year. Backed by the company’s superior machine-learning techniques, AlphaGo had processed thousands upon thousands of Go moves from previous human-to-human games to develop its own ability to think strategically.

The AlphaGo games, watched by millions of viewers on YouTube, revealed the ever-increasing power and progress of AI. This contest between man and machine was not the first of its kind. But this time it was more than just a computer beating a human at a game. AlphaGo not only conquered the complexities of the game but seemed to outthink the grandmaster across the board. The unpredictable moves that shocked Sedol (and the world) revealed AlphaGo’s ability to think and respond creatively. It is the kind of intelligence that has long been an asset for Hollywood’s all-powerful versions of AI, but one that had been unattainable for computers in reality.

That victory marked a shift in the trajectory of AI this year. The technology that has long been aimed at replicating human intelligence now seems to be paying attention to human patterns and behaviors. Recent advances in deep learning have enabled that kind of insight, but it’s not limited to beating humans at games. In 2016, AI broke out of the confines of research labs to transform the way we live, communicate and even conserve the planet. Chatbots popped up in group texts. Personal assistants invaded our homes. Cognitive systems are detecting cancer. Bots are writing movie scripts. And car makers are gearing up to unleash a bevy of autonomous vehicles onto public roads.


Grandmaster Lee Sedol looks on during a match with AlphaGo. Credit: Google via Getty Images.

For a few years now, a cohort of mobile assistants like Siri, Cortana and the new Google Assistant has been getting people in the habit of talking to their devices so they can spend less time swiping screens. But now, these personal assistants are swiftly moving past the basics of reminders and internet searches. They are invading our homes as efficient helpers.

One of the highlights in the talking-devices category this year was Google Home. The voice-activated speaker, designed for personal spaces, joined the ranks of the Amazon Echo. These at-home digital helpers carry the same promise of efficiency as their smartphone counterparts, but they seem to have a broader agenda: they not only want to understand human needs but to predict them, creating an environment of reliance and reciprocity.

That kind of environment, depicted in movies like Her and Iron Man, is essential to the next stage of human-machine interaction where assistants can turn off the lights in a room for you and also, one day, tell you when you’re out of diapers for your child. Mark Zuckerberg’s Jarvis, a Morgan Freeman-voiced AI helper he recently built for his own house, is a glimpse into the kind of personalized AI that will coexist in the connected homes of the future.

The ability to comprehend humans is integral to AI in all its forms, present and future. With the recent boost in speech-recognition and natural-language-processing techniques, machines are getting closer to understanding humans than ever before. With companies like Tesla, GM, BMW and Fiat Chrysler all rolling out autonomous vehicles, the ability to communicate with these moving machines will play a pivotal role in making the experience stress-free.

An interior shot of Tesla’s self-driving car.

Smart cars promise to bring down the frequency and casualties of road accidents. They are also expected to boost mobility for the elderly and people with disabilities. This summer, that promise came under scrutiny after a fatal Tesla crash. But a couple of months later, when a Missouri-based lawyer suffered a pulmonary embolism while driving on the freeway, the autopilot in his Tesla Model X reportedly drove him to the hospital and saved his life.

Just as the narrative started to shift back to the benefits of self-driving cars, Uber launched its semiautonomous fleet of Volvos in Pittsburgh. The ride-sharing company also rolled out its autonomous car service in San Francisco this month, but city officials swiftly cracked down on the company because it did not have the state permits required to operate the cars. This week, Uber pulled back from the city and is now looking to redeploy the vehicles in Arizona.

Uber’s antics aside, the enthusiasm around self-driving vehicles has been palpable. Whether AI will actually increase mobility for the people who need autonomous services the most remains to be seen. But one area where AI is already pitching in is medicine.

Image credit: DeepMind Health

With troves of raw medical data being gathered through computers and personal devices across the world, doctors are increasingly turning to algorithms and cognitive computing systems for help. Access to that data is transforming the way doctors diagnose diseases, but its sheer volume has made it virtually impossible for any human to process the information fast enough for a timely diagnosis.

It’s also increasingly hard for a doctor to match the information intake of a computer brain like IBM’s Watson that has the ability to absorb every medical journal that has ever been written. In addition to already deploying Watson in major hospitals, IBM recently partnered with more than a dozen cancer institutes to train its cognitive system. The exposure will enable Watson to find personalized treatments for patients who have already tried existing treatments with no success.

The diagnostic potential of AI also extended to the field of ophthalmology. According to a study, Google’s deep learning algorithm was able to detect diabetic retinopathy through photographs. The most common method for determining signs of diabetic eye disease, which reportedly affects about 415 million people across the world, is to have a doctor examine the images of the back of the eye for lesions. But a recent experiment revealed that Google’s algorithm was able to recognize the lesions as accurately as the doctors. While the company points out that a lot more work needs to be done in this area, the initial results reveal that AI assistance could speed up a doctor’s diagnoses drastically.

This year, AI also pitched in to save the planet. From water conservation in California to saving tuna in Palau, AI was deployed in environmental efforts across the world. OmniEarth, an environmental-analytics organization, used Watson to map and classify irrigated and nonirrigated areas through satellite images to improve water conservation in California. IBM’s AI was able to process the images 40 times faster than humans who were tasked with the job.

The Nature Conservancy, a global nonprofit, also turned to machine learning as it ramped up efforts in the Pacific region of Palau to monitor fishing activities. The organization already equipped a fleet of ships with cameras and GPS devices to hold fisheries accountable for their catch. But last month, it launched a competition to find an algorithm that can speed up the process of identifying sharks, tuna or turtles that might be brought on board the ships.

The presence of AI was felt and needed in both personal spaces and far-off reaches of the Earth. But it was not entirely unexpected. The preoccupation with making computers think like humans has been evident for decades, and 2016 was as much a culmination of those efforts as an indication of things to come.

Despite the constant debate around the dangers of AI, every new development makes machines more capable of human-like thought. And the concept of intelligence is no longer limited to personal assistants or medical diagnoses. For a technology that’s built on human culture, it’s bound to tackle avenues of creativity.

Benjamin, a self-improving neural network, wrote its own short sci-fi movie in June. The AI was fed human screenplays so it could learn to write a script of its own. The film, titled Sunspring, turned out to be an incoherent mess, but it reportedly picked up on the repetitions and patterns of human writing.

The scriptwriting AI might not be ready for the film circuit, but it seemed to follow in the footsteps of AlphaGo whose stunning victory in Korea already revealed AI’s capacity for creative intelligence. While none of the machines have made their mark as filmmakers or musicians yet, it’s not for lack of trying.

Check out all of Engadget’s year-in-review coverage right here.


In 2016, emoji kept it 💯

In addition to everything else that happened in tech this year, something small, cute and unassuming wormed its way into your smartphone, your social network and even your MacBook keyboard. While emoji have been around a while, this was the year these pictographs firmly lodged themselves into our lives. They’ve become less like immature shorthand and more like another language.

Apple and Google both showed they were taking the tiny icons seriously. The iPhone’s iOS 10 added search and predictive features for emoji to its keyboard, making it even easier to inject winks and explosions into everything you type. (Apple also added emoji functions to the OLED Touch Bar on its new MacBook Pro.)

Google took it even further, with its latest Android keyboard and Gboard on iOS both including predictive emoji. The company even baked them into its new AI assistant, Allo. The assistant can play emoji-based movie guessing games. In fact, the internet juggernaut has a real emoji crush: In early December, its main Twitter account even started offering local search results if you tweeted an emoji at it.

Granted, the results are … mixed. It won’t be replacing Yelp anytime soon, but it demonstrates how emoji are moving beyond their quick-and-dirty text-message roots.

Quicker access to emoji on your phone also comes at a time when most of our digital interactions (or at least mine) happen through smartphones. It’s become easier to use emoji, and new uses are introduced all the time. GoDaddy launched a service that allows you to create and register website addresses written purely in emoji. It could open a new wave of easily memorable sites — and there’s no shortage of emoji combinations available.
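Under the hood, emoji domains like GoDaddy's rely on the same mechanism as other internationalized domain names: Punycode (RFC 3492), which encodes non-ASCII characters into the ASCII labels DNS actually stores, marked with an "xn--" prefix. Here's a minimal sketch using Python's built-in punycode codec (Python 3.7+ for `str.isascii()`); the heart-emoji domain is just an illustrative example:

```python
# Emoji domains are stored in DNS as ASCII using Punycode (RFC 3492),
# with an "xn--" prefix marking each encoded label.
def to_ascii_label(label: str) -> str:
    """Encode a single (possibly emoji) domain label for DNS."""
    if label.isascii():
        return label
    return "xn--" + label.encode("punycode").decode("ascii")

def from_ascii_label(label: str) -> str:
    """Decode an "xn--" label back to its Unicode form."""
    if label.startswith("xn--"):
        return label[4:].encode("ascii").decode("punycode")
    return label

if __name__ == "__main__":
    domain = "❤.ws"  # example emoji domain
    encoded = ".".join(to_ascii_label(p) for p in domain.split("."))
    print(encoded)  # xn--qei.ws
    decoded = ".".join(from_ascii_label(p) for p in encoded.split("."))
    print(decoded == domain)  # True
```

This is why an "easily memorable" emoji address is less memorable to type in practice: the browser silently translates it to and from its xn-- form.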

Perhaps the biggest challenge in the use of emoji is how open to interpretation many of the pictograms are. More than the written or spoken word, emoji can be easily misunderstood — a fact compounded by the subtle visual differences between the same symbols in different emoji fonts. Send an iPhone emoji to someone using Google Hangouts on a PC, and they might not pick up the exact same meaning.

Credit: Grouplens

They can also take on entirely new uses, beyond the simple words they were once meant to represent. There’s a reason for the popularity of the eggplant emoji, and it has nothing to do with moussaka.

This vagueness and playfulness is part of their charm; some things are just funnier or easier to say in emoji. Occasionally, they can even be haunting.

It’s not all frivolity and euphemisms. Updates to the emoji series attempt to better represent modern culture and society. Unicode’s latest character set for 2016 had a strong focus on gender and jobs, offering dancing bunny-boys and female police officers in an effort to strike a better balance between the sexes. It even added a third, gender-neutral choice — although that’s apparently proved more difficult to express visually.

This year, Sony Pictures announced that it’s making a CGI feature film based entirely around emoji. It sounds like a terrible idea, but the studio believes it can make money from it. (There might even be more than one movie.)

The effect of emoji has even been noted by one of the world’s most prestigious design museums, with the Museum of Modern Art inducting emoji into its collection earlier this year. The debut set of symbols, designed for Japanese phone carrier Docomo back in 1999, is now filed under the same roof as the works of van Gogh and Dali. Used at the time to convey the weather and other messages (in a character-frugal way), the symbols were soon copied by other Japanese carriers, but it took another 11 years before they were incorporated into Unicode in 2010. Apple then brought an emoji keyboard to iPhones worldwide the following year.

So have we reached peak emoji? The initial set of low-pixel characters totaled 176. Now, at the end of 2016, there are over 1,300 of them — and no shortage of new suggestions.
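Part of what made this growth possible is that, since 2010, each emoji is just an ordinary Unicode character with a code point and a formal name, queryable like any letter. A quick sketch with Python's standard library (the three characters chosen are just examples):

```python
import unicodedata

# Every emoji is a regular Unicode character with a code point and a
# formal name, no different from a letter or a punctuation mark.
for char in "💯❤😀":
    print(f"U+{ord(char):04X}  {unicodedata.name(char)}")
# U+1F4AF  HUNDRED POINTS SYMBOL
# U+2764   HEAVY BLACK HEART
# U+1F600  GRINNING FACE
```

The character itself is standardized; how it's drawn is left to each vendor's font, which is exactly why the same emoji can look (and read) differently on an iPhone and an Android handset.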



‘Pokemon Go’ gives you more goodies for the holidays

One of the reasons Pokemon Go was such a success is that it got people out and about during the summer. Now that winter is settling in and the snow is starting to fall, however, the developers at Niantic Labs have to figure out a way to keep people playing. Beginning on Christmas Day morning, Pokestops will dole out a single-use incubator. More than that, until January 3rd you’ll have a better chance of hatching Togepi, Pichu and other Johto-region Pokemon from eggs.

To further celebrate the season, Pikachu wearing Santa hats are going to be a thing as well. An adorable thing that will stick around a bit longer. Need more reason to brave the unforgiving tundra your neighborhood has become? Come December 30th, finding the original starting Pokemon (Bulbasaur, Charmander, Squirtle and their subsequent evolutions) will be easier too, according to a press release from the company.

To give you a leg up in finding them, Lure Modules will double their lifespan and last for an hour. These bonuses only run through January 8th, though, so time is of the essence. But hey, now you have a valid excuse for taking a walk when your family members are on your last nerve.


The team behind ‘That Dragon, Cancer’ made a VR radio play

The folks behind the heartrending, award-winning autobiographical game That Dragon, Cancer are back with a new project. Don’t worry, though, because it sounds like the polar opposite of that tragic tale. In the episodic virtual reality game Untethered (told you it was different), you play a talk radio DJ. And as such, you can talk to other characters in the game by speaking aloud. What good would a DJ-starring game be without that ability, anyhow?

A press release describes it as such: “The player can record commercials, interview callers, and try to piece together what is happening outside of the radio station, all while getting to know their lonely producer.”

More than that, the development studio has worked with professional musicians Future Folk and Jill Sobule to populate the game with songs to find and broadcast to listeners in-game. Each episode has you playing a different protagonist, and from there your task is figuring out how everyone connects to each other. It all sounds a little like a quirky take on the indie-horror flick Pontypool, if you ask me.

The premiere installment is available today exclusively on Google’s Daydream View mobile headset for $4.99.

Source: Google Play


Android Wear 2.0 will launch on a pair of flagship smartwatches

The market for smartwatches is drying up, but Google seems intent on shaking it up. We already knew that Android Wear 2.0 would arrive in early 2017, but Android Wear product manager Jeff Chang recently confirmed to The Verge that the updated platform would launch on two new, flagship smartwatches. Make no mistake, though: these aren’t Google watches, strictly speaking. While the search giant will no doubt promote them like crazy, Chang noted in the interview that the watches will bear the brand of their manufacturer rather than Google.

In other words, Pixel watches these ain’t.

Chang apparently referred to the deal Google has with this mystery manufacturer as akin to the long-running Nexus program, in which Google provided the software for hardware makers who could build the kind of phone Google desired. Chang’s characterization is curious, though, because Google has more or less backed away from the Nexus program with the launch of its Pixel phones, devices that Google developed entirely on its own. To date, the company’s bet seems to have paid off: the Pixels have enjoyed strong critical acclaim, and not even Google could correctly gauge demand for the devices, leading to delayed delivery of pre-ordered phones. Hell, check out the Google Store right now: at time of writing, at least one version of the Pixel XL is still sold out.

It’s not hard to jump to the conclusion that a Google-branded smartwatch could have done well, if maybe not to the same extent. Rumors of in-house Google smartwatches have been swirling for months now, too, so what gives? While some (including your author) hoped Google would lead the Android Wear 2.0 charge with some watches of its own, it makes complete sense that Google decided not to.

For one, it minimizes the reputational risk that comes with a potential failure. Think about it: people don’t seem to want smartwatches as much as they used to, and a big whiff on Google’s part could shake perceptions that smartwatches are worth the hassle. That’s no bueno for a company with a vested interest in spreading Android Wear far and wide. With that said, it also makes sense that Google would, on some level, want to give the smartwatch’s actual manufacturer the spotlight. With any luck, that attention drives sales, which spurs competition, which ultimately makes Android Wear a more palatable platform for everyone involved. A pair of Pixel watches might still be in the offing, but the timing doesn’t seem right yet. Better that the rising tide of Android Wear 2.0 lift the sales of all manufacturers than directly benefit Google alone.

Oh, by the way: Chang let slip other juicy details, like that these two watches will be the first of many in 2017, that Android Pay will be enabled on all of them and that those mobile payments will work with iOS devices. That last bit could be big for Google as it continues trying to prove Android Wear’s value — smartwatches from Apple and Samsung already have elegant, built-in payment solutions. In other words, the two major companies still trying to make smartwatches a thing are enjoying a healthy lead, while Google works to enable support across the many watches to come.

Source: The Verge


The best gadgets of 2016

A year ago, virtual reality felt almost like a pipe dream. But during 2016, we saw the launches of the Oculus Rift, HTC Vive, PlayStation VR and Daydream, a new mobile platform from Google. VR is here, and it’s very much … well, real. We’re still waiting for more games to appear and for the price of truly immersive platforms to fall, but it’s an auspicious start for a category that’s sometimes felt overhyped.

Of course, there was even more great stuff this year beyond VR. We’ve seen the steady evolution of smartphones with Google’s Pixel devices, the iPhone 7 Plus and Samsung’s Galaxy S7 line (with the Note 7 being the obvious exception). Both Dell and HP delivered some of the most refined laptops we’ve ever seen (sorry, MacBook Pro). And we can think of a few more standouts too. Find all of our favorite gadgets of 2016 in the gallery below.



Barnes & Noble’s $50 Nook came pre-installed with spyware

Barnes & Noble introduced the $50 Nook just in time for the holiday shopping season, but it failed to mention one crucial bit of software pre-installed on its 7-inch e-reader: malware. Specifically, the new Nooks shipped with software from ADUPS that granted a third party full access to all of a device’s data plus complete control privileges. That means someone overseas had the ability to collect your personal information and wipe your Nook clean, if it had the ADUPS spyware installed.

This is the same malware that was recently discovered in 120,000 Blu unlocked smartphones.

Barnes & Noble told 9to5Google that a software update removing the exploit was deployed for all Nooks, and it’s working on an update that will remove ADUPS from the e-readers entirely. ADUPS additionally said it didn’t collect any “personally identifiable information or location data,” and it didn’t intend to.

Barnes & Noble tried something new with its latest Nook. Its existing partner, Samsung, doesn’t manufacture Android devices in the $50 range, so Barnes & Noble outsourced production to Shenzhen Jingwah Information Technology Co., Ltd, according to LinuxJournal. ADUPS is also a Chinese company, based out of Shanghai.

Source: LinuxJournal, 9to5Google


Honda is in talks to use Alphabet’s self-driving car tech

Mere days after Google spun out its self-driving car division as Waymo, the newly spawned Alphabet company is already in the midst of cutting big deals. Honda has revealed that it’s entering talks with Waymo on integrating autonomous hardware with its vehicles. It’s still extremely early, but Honda has proposed giving Waymo modified cars to help speed things along. This wouldn’t sidetrack Honda’s goal of getting its self-driving tech on highways by 2020, the company makes clear — it would just allow for a “different technological approach.”

There’s no guarantee that the talks will amount to a deal, so don’t expect to be riding in a Waymo-powered Civic in a few years. If they pan out, though, it’ll represent a big coup. Waymo’s only big ally to date has been Fiat Chrysler; with Honda signed on as a second major partner, Waymo would be closer to becoming a viable option for companies that either don’t have their own self-driving platform or want to complement it with technology they don’t already have. You may not have to wait while your favorite car badge develops autonomous machines, as it could just ask Waymo for a helping hand.

Source: Honda


Families of Pulse nightclub shooting sue Google, Facebook

Google, Facebook and Twitter are facing a lawsuit filed by the families of three victims killed by Pulse nightclub gunman Omar Mateen in Orlando. The plaintiffs are accusing the tech titans of providing “material support” to Mateen, who was known to have pledged allegiance to ISIS and its leader. According to their lawsuit, the families are suing the companies for allowing the terrorist group to create accounts to raise funds and to spread propaganda with the intention of attracting new recruits.

The material support these tech giants provide, the lawsuit says, “has been instrumental to the rise of ISIS and has enabled it to carry out or cause to be carried out, numerous terrorist attacks.” In addition, the plaintiffs are accusing the companies of profiting from ISIS-related posts by combining them with advertisements and of violating the Anti-Terrorism Act in the United States.

This is far from the first time a tech company has been sued for providing support to terrorist groups. Back in July, the families of five victims killed in Palestinian attacks in Tel Aviv sued Facebook for playing “an essential role in Hamas’s ability to carry out its terrorist activities.” Meanwhile, the wives of two American contractors killed in a shooting spree in Jordan sued Twitter for allowing ISIS activity to flourish on the microblogging site.

However, tech companies are pretty well-protected by the law, particularly by Section 230 of the federal Communications Decency Act. It says providers and website owners are not liable for information published by their users. That’s why the judge who presided over the American contractors’ case ended up tossing the lawsuit.

Source: Reuters


Employee sues Google for ‘illegal’ confidentiality policies

The Information has reported that a Google employee brought a lawsuit against his employer, accusing the company of internal confidentiality policies that allegedly breach California labor laws. One of the more egregious complaints is that Google apparently runs an internal “spying program” that encourages employees to snitch on one another if they think someone leaked information to the press. Further, Google apparently warns employees not to write about potentially illegal activities within the company, even to Google’s own attorneys. There’s even a provision that prohibits employees from writing “a novel about someone working at a tech company in Silicon Valley” without approval.

The employee, known only as “John Doe” in the suit, said that one of the reasons for this strict policy is that the company is very fearful of leaks to the press, so much so that anyone who’s guilty of it could be fired. In fact, the employee in question was apparently falsely accused of doing just that. “Confidential information” is defined as “everything at Google,” and can’t be shared with “press, members of the investment community, partners, or anyone else outside of Google.” Essentially, the lawsuit alleges that employees are barred from discussing anything about Google anywhere.

According to the lawsuit, current labor laws state that employees should be able to discuss workplace conditions and potential violations inside the company without fear of retribution. The suit also argues that Google should relax its policies so that employees are allowed to speak about the company to outsiders under certain circumstances.

The lawsuit was filed in the California Superior Court in San Francisco under California’s Private Attorneys General Act. If successful, the state would collect 75 percent of the penalty, while the rest would be distributed among the company’s 65,000 employees. Since there are 12 alleged violations in the suit, the maximum fine could amount to $3.8 billion, with each employee getting about $14,600.
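Those figures hold up to quick arithmetic, assuming the $3.8 billion maximum and PAGA's standard 75/25 split between the state and affected employees:

```python
# Back-of-the-envelope check of the suit's numbers. Figures come from
# the article; the 75/25 split is how PAGA penalties are divided.
MAX_PENALTY = 3.8e9   # maximum fine across the 12 alleged violations
EMPLOYEES = 65_000

state_share = 0.75 * MAX_PENALTY            # 75% goes to the state
employee_pool = MAX_PENALTY - state_share   # remaining 25% to employees
per_employee = employee_pool / EMPLOYEES
print(f"${per_employee:,.0f} per employee")  # $14,615, i.e. "about $14,600"
```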

“Google’s motto is ‘don’t be evil.’ Google’s illegal confidentiality agreements and policies fail this test,” the lawsuit said.

Source: The Information
