

23 Jun

‘Overwatch’ loot boxes will have fewer duplicates


Players of Overwatch and Hearthstone should pay attention to Blizzard. The company has made two separate announcements that will significantly affect the loot systems in both games.

Beginning with Hearthstone’s next expansion pack, you will no longer receive duplicates of cards until you have every single Legendary card from that specific set. This applies whether you have the golden or non-golden version of a card. What’s more, you will no longer receive more duplicate cards than you can use in a deck. When you open a new set, you will also receive a Legendary within your first ten packs — guaranteed. The result of all these changes? Players will likely have more Legendary cards than they have now.

Additionally, Overwatch’s game director Jeff Kaplan made some interesting announcements in the studio’s latest developer video update. If you’ve been frustrated by receiving duplicate cosmetic items in loot boxes, that will no longer be a problem. Kaplan says, “One of the things that we’re going to do is drastically reduce the rate of duplicates that you’ll get out of any loot box.” It’s not eliminating the issue entirely, but the team hopes that the change will be “immediately evident,” and that as a result players will buy many, many more loot boxes.

Via: Gamasutra

Source: Overwatch, Hearthstone


The next video game controller is your voice


For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.

In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.

At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”

I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game, or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.

It’s awkward. My immersion in the game all but breaks down when my conversational partner does not reciprocate. It’s a two-way street: if I’m going to dissect the game’s dialogue closely enough to craft an interesting reply, the game has to keep up with my side of the conversation too.

The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.

Yet there is potential for a natural back and forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, which is designing the game. The system is powered by Microsoft’s Custom Speech Service (similar technology to Cortana), which sends players’ voice input to the cloud, parses it for true intent, and gets a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.

Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might provide. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets since voice control, after all, is not ideal for navigating physical space. But, what this game offers instead is conversational exploration.

Video games have always been concerned with blurring the lines between art and real life.

Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: The implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/button and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls, from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons, still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.

While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.

That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, listed natural language understanding among “some of the technologies that really excite us in the lab right now.”

More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers who produce the best-performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”

“It seems like the invention of every new technology comes along with games.” – Paul Cutsinger, Amazon

Simply: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?

Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized voice acted fiction of the early 20th century — including RuneScape whodunnit One Piercing Note, Batman mystery game The Wayne Investigation and Sherlock Holmes adventure Baker Street Experience.

Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both their own games and the tools that enable others to do the same.

For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade.”

Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
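The branching structure described above is essentially a small tree of story nodes, each pairing a chunk of narration with the verbal prompts that lead onward. A minimal sketch, with an invented story and prompts:

```python
# Minimal choose-your-own-adventure tree: each node holds narration plus
# a map from accepted verbal replies to the next node. The story text
# and prompts are invented for illustration.
STORY = {
    "start": ("A voice crackles: take the cargo, or refuse?",
              {"take it": "accept", "refuse": "decline"}),
    "accept": ("You load the crates. The plot thickens...", {}),
    "decline": ("The contact signs off. The story ends here.", {}),
}

def step(node: str, reply: str) -> str:
    """Advance to the next node if the reply matches a prompt; otherwise stay put."""
    _, choices = STORY[node]
    return choices.get(reply.lower().strip(), node)

print(step("start", "Take it"))  # accept
print(step("start", "ramble"))   # start  (unrecognized reply, story waits)
```

Note how an unrecognized reply leaves the player parked at the same waypoint, which is exactly the “story chugs along at its own pace” feeling described above.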

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”

Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.

Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”

“My primary goal is to entertain the audience … There are lots of ways to do that that don’t involve immersing them in anything.”

In Earplay’s games, the “possibility space” — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple-choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games seems to be more about listening than speaking.

Image: Human Interact

Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.

Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of being able to look around a 360-degree environment and speak to it naturally brings games one step closer to dissolving the fourth wall.

In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voice. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: your producer gets you to record a radio commercial, and you have to mediate an argument between husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.

The game runs on Google’s Cloud Speech platform, which recognizes voice input and may return 15 or 20 lines responding to whatever you might say, says Green. While those lines may steer the story in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are.” -Alexander Mejia, Human Interact

This is a similar design to Starship Commander: anticipating anything the player might say, so as to record a pre-written, voice-acted response.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99% of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”

“The script is more like a funnel, where people all want to end up in about the same place,” he adds.

Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”

In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.

An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.

“I would describe this as characters being able to improvise based on the thing they know about their knowledge of the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.
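The improvisation Khandaker describes, where a character assembles a line from what it knows rather than reciting a script, can be illustrated with a toy template system. This is emphatically not Spirit AI’s Character Engine; the knowledge base, moods, and sentence templates below are invented for the example.

```python
# Toy "improvised" dialogue: a line is strung together from a character's
# knowledge base and current mood rather than pulled from a pre-written
# script. Knowledge, moods, and templates are invented for illustration.
KNOWLEDGE = {"place": "the old docks", "event": "the blackout", "person": "Mara"}
TEMPLATES = {
    "anxious": "I keep thinking about {event}. {person} was near {place} that night.",
    "calm": "Ask {person} about {place}; she saw {event} up close.",
}

def improvise(mood: str) -> str:
    """Fill the mood's sentence template from the character's knowledge base."""
    return TEMPLATES[mood].format(**KNOWLEDGE)

print(improvise("calm"))
# Ask Mara about the old docks; she saw the blackout up close.
```

The point of the sketch: change the character’s knowledge or emotional state and every line it speaks changes with it, with no dialogue tree to rewrite.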

‘Untethered,’ a virtual reality title from Numinous Games.

Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?

With the rise of vocal technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.

“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.’”

In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, which don’t always appear in real life. Part of the appeal of games is that they simplify and structure the complexity of daily living.

“Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.” – Mitu Khandaker, Spirit AI

As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And the voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the AbleGamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”

Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.

I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.

“Take me to the Delta outpost and I’ll let you live,” he says.

“Sure, I’ll take you,” I say.

This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.

A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He had watched the computer convert my speech to text.

“It mistook ‘I’ll take you’ for ‘fuck you,’” he says. “That’s a really common response, actually.”


Symantec refuses Russia request for source code access


Security firm Symantec will no longer allow Russian authorities to inspect its source code, according to Reuters. “It poses a risk to the integrity of our products that we are not willing to accept,” the company’s Kristen Batch said. The worry is that allowing the supposedly independent Federal Security Service (FSB) to examine source code would give Russia an inside view of potential software vulnerabilities and exploits.

Other companies allow this sort of thing so that they can take advantage of the country’s projected $18.4 billion IT industry. While none of Reuters’ sources could cite an explicit example of security breaches that have resulted from the practice, there was a strong sense of unease. “It’s something we have a real concern about,” a former Commerce Department official said. “You have to ask yourself what it is they are trying to do, and clearly they are trying to look for information they can use to their advantage to exploit, and that’s obviously a real problem.”

The US has previously accused the FSB of 2014’s massive Yahoo email hack and cyber attacks that targeted Hillary Clinton during her 2016 presidential campaign.

Russia isn’t the only country that makes these sorts of requests, however. China, for example, has a long history of such demands, recently taking two years to scour a version of Windows 10 that Microsoft made for the country’s government before finally approving it in May.

Reuters writes that since Russia’s annexation of Crimea in 2014, these requests have “mushroomed in scope” as the relationship between the countries has soured. Between 1996 and 2013, some 13 products were requested for security review. In the past three years there have been 28.

IBM, Cisco, Hewlett Packard Enterprise and McAfee have given Russia access to their respective source codes.

Source: Reuters


The best wireless outdoor home security camera


By Rachel Cericola

This post was done in partnership with The Wirecutter, a buyer’s guide to the best technology. When readers choose to buy The Wirecutter’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.

After spending almost three months looking, listening, adjusting angles, and deleting over 10,000 push notifications and emails, we’ve decided that the Netgear Arlo Pro is the best DIY outdoor Wi-Fi home security camera you can get. Like the other eight units we tested, the Arlo Pro lets you keep an eye on your property and provides smartphone alerts whenever there’s motion. However, it’s one of the few options with built-in rechargeable batteries to make it completely wireless, so it’s easy to place and move. It also delivers an excellent image, clear two-way audio, practical smart-home integration, and seven days of free cloud storage.

Who should get this

A Wi-Fi surveillance camera on your front porch, over your garage, or attached to your back deck can provide a peek at what really goes bump in the night, whether that’s someone stealing packages off your steps or raccoons going through garbage cans. It can alert you to dangers and can create a record of events. It should also help you identify someone — and whether they’re a welcome or unwelcome guest — or just let you monitor pets or kids when you’re not out there with them.

How we picked and tested

Photo: Rachel Cericola

During initial research, we compiled a huge list of outdoor security cameras recommended by professional review sites like PCMag, Safewise, and Safety.com, as well as those available on popular online retailers. We then narrowed this list by considering only Wi-Fi–enabled cameras that will alert your smartphone or tablet whenever motion is detected. We also cut all devices that required a networked video recorder (NVR) to capture video, focusing only on products that could stand alone.

Once we had a list of about 27 cameras, we went through Amazon and Google to see what kind of feedback was available. We ultimately decided on a test group based on price, features, and availability.

We mounted our test group to a board outside of our New England house, pointed them at the same spot, and exposed them all to the same lighting conditions and weather. The two exceptions were cameras integrated into outdoor lighting fixtures, both of which were installed on the porch by my husband, a licensed electrician. All nine cameras were connected to the same Verizon FiOS network via a Wi-Fi router indoors.

Besides good Wi-Fi, you may also need a nearby outlet. Only three of the cameras we tested offered the option to use battery power. Most others required an AC connection, which means you won’t be able to place them just anywhere.

We downloaded each camera’s app to an iPhone 5, an iPad, and a Samsung Galaxy S6. The cameras spent weeks guarding our front door, alerting us to friends, family members, packages, and the milkman. Once we got a good enough look at those friendly faces, we tilted the entire collection outward to see what sort of results we got facing the house across the street, which is approximately 50 feet away. To learn more about how we picked and tested, please see our full guide.

Our pick

The Arlo Pro can handle snow, rain, and everything else, and runs for months on a battery charge. Photo: Rachel Cericola

The Arlo Pro is a reliable outdoor Wi-Fi camera that’s compact and completely wireless, thanks to a removable, rechargeable battery that, based on our testing, should provide at least a couple of months of operation on a charge. It’s also the only device on our list that offers seven days of free cloud storage, and it saves motion- and audio-triggered recordings for whenever you get around to reviewing them.

The Arlo Pro requires a bridge unit, known as the Base Station, which needs to be powered and connected to your router. The Base Station is the brains behind the system, but also includes a piercing 100-plus–decibel siren, which can be triggered manually through the app or automatically by motion and/or audio.

With a 130-degree viewing angle and 720p resolution, the Arlo Pro provided clear video footage during both day and night, and the two-way audio was easy to understand on both ends. The system also features the ability to set rules, which can trigger alerts for motion and audio. You can adjust the level of sensitivity so that you don’t get an alert or record a video clip every time a car drives by. You can also set up alerts based on a schedule or geofencing using your mobile device, but you can’t define custom zones for monitoring. All of those controls are easy to find in the Arlo app, which is available for iOS and Android devices.

If you’re looking to add the Arlo Pro to a smart-home system, the camera currently works with Stringify, Wink, and IFTTT (“If This Then That”). SmartThings certification was approved and will be included in a future app update. The Arlo Pro is also compatible with ADT Canopy for a fee.

Runner-up

The Nest Cam Outdoor records continuously and produces better images than most of the competition, but be prepared to pay extra for features other cameras include for free. Photo: Rachel Cericola

The Nest Cam Outdoor is a strong runner-up. It records continuous 1080p video, captures to the cloud 24/7, and can actually distinguish between people and other types of motion. Like the Nest thermostat, the Outdoor Cam is part of the Works With Nest program, which means it can integrate with hundreds of smart-home products. It’s also the only model we tested that has a truly weatherproof cord. However, that cord and the ongoing subscription cost, which runs $100 to $300 per year for the Nest Aware service, are what kept the Nest Cam Outdoor from taking the top spot.

Like our top pick, the Nest Cam Outdoor doesn’t have an integrated mount. Instead, the separate mount is magnetic, so you can attach and position the camera easily. Although it has a lot of flexibility in movement, it needs to be placed within reach of an outlet, which can be a problem outside the house. That said, the power cord is quite lengthy. The camera has a 10-foot USB cable attached, but you can get another 15 feet from the included adapter/power cable.

The Nest Cam Outdoor’s 1080p images and sound were extremely impressive, both during the day and at night. In fact, this camera delivered some of the clearest, most detailed images during our testing, with a wide 130-degree field of view and an 8x digital zoom.

The Nest app is easy to use and can integrate with other Nest products, such as indoor and outdoor cameras, the Nest thermostat, and the Nest Protect Smoke + CO detector. You can set the camera to turn on and off at set times of day, go into away mode based on your mobile device’s location, and more.

This guide may have been updated by The Wirecutter. To see the current recommendation, please go here.

Note from The Wirecutter: When readers choose to buy our independently chosen editorial picks, we may earn affiliate commissions that support our work.


Twitch’s latest marathon is a six-day ‘MST3K’ binge


B-movie lovers rejoice, for social video platform Twitch is set to air a Mystery Science Theater 3000 marathon lasting a mind-numbing six days. The stream, which features 38 classic episodes, will air on Shout! Factory TV’s new Twitch channel from June 26 to July 3.

The comic sci-fi show, which emerged as a cult favorite despite two network cancellations, follows hapless host Joel Robinson as he’s trapped by mad scientists in space and forced to watch some of the worst B-movies ever produced. Viewers are not only treated (subjected?) to the appalling films in their entireties, but also to the running commentary of our sorry protagonist and the two robot sidekicks he’s built to keep him sane (which, in the face of absolute fiascos such as Manos: The Hands of Fate, is no mean feat).

Starring Joel Hodgson, Michael J. Nelson, Trace Beaulieu, Kevin Murphy and Bill Corbett, among others, the show has grown in popularity thanks to fanfare from hardcore MSTies, culminating in a new, crowdfunded season that hit screens in April.

Netflix dropped 20 classic episodes earlier this year as a teaser for the new season, but Twitch is a particularly apt platform for the show, allowing the audience to access both live and on-demand content accompanied by real-time chat from other viewers. So if you’re into watching someone watch a movie and sharing your thoughts about their thoughts with others watching the same thing — a kind of TV-ception, if you will — then this is for you.


Some Uber employees want Kalanick back in the CEO chair


Travis Kalanick stepped down as Uber’s CEO earlier this week, leaving the CEO, COO, CFO and general counsel positions open. But while he’s no longer head of the company, Kalanick is far from gone. He’s still a member of the board and controls the majority of voting shares, meaning he’ll play a not insignificant role in selecting his successor. And it’s not exactly surprising that he would want to. He is the founder after all.

But some employees at Uber seem to want more than that. According to an email obtained by Recode, at least one person at the company wants him to be reinstated in an operational role and has been circulating an email to gain support in that venture.

Part of the email reads, “Nobody is perfect, but I fundamentally believe he can evolve into the leader Uber needs today and that he’s critical to its future success. I want the Board to hear from Uber employees that it’s made the wrong decision in pressuring Travis to leave and that he should be reinstated in an operational role.” It then goes on to ask for colleagues’ support, including a link where they can back the effort.

Whether this push to bring Kalanick back in some capacity beyond board member will have any sway remains to be seen. And some suspect that with his aggressive, hands-on style, it will be hard for Kalanick to distance himself regardless of his position in the company. But with all of the company’s many scandals, fresh leadership is probably not a bad thing.

Source: Recode


Watch SpaceX launch and land a reused Falcon 9 rocket


Today, SpaceX will hopefully launch and land a Falcon 9 rocket that it’s already flown to space. The launch window opens at 2:10 PM ET and lasts for two hours; liftoff is currently scheduled for 3:10 PM ET. You can livestream the launch, with commentary, at SpaceX’s website.

This mission, called BulgariaSat-1, will carry Bulgaria’s first geostationary communications satellite into a high orbit around the Earth. It’s launching from Kennedy Space Center in Florida, and a drone ship called “Of Course I Still Love You” will be waiting in the Atlantic Ocean for the Falcon 9’s first-stage landing. If things don’t go as planned, there’s another launch window tomorrow at 2:10 PM ET.

This isn’t the only SpaceX launch that’s happening this weekend, though. On Sunday, a Falcon 9 will lift off from Vandenberg Air Force Base in California carrying 10 satellites for the company Iridium. Liftoff is set for 1:25 PM PT, and the company will once again attempt to land the first stage of the rocket.

It’s not the first time SpaceX has flown and landed a flight-proven rocket; that happened on March 30. But rockets are the most expensive part of spaceflight; by reusing Falcon 9 first stages, SpaceX is cutting costs considerably. Reusable components are crucial to the future of human spaceflight, and SpaceX is certainly making steady and significant improvements in that regard.

Update, 6/23/2017, 1:05 PM: SpaceX tweeted that the ground team is taking additional time for checks. New liftoff time is 3:10 PM ET.

Via: The Verge

Source: SpaceX

23
Jun

Google’s neural network is a multi-tasking pro


Neural networks have been trained to complete a number of different tasks, including generating pickup lines, adding animation to video games, and guiding robots to grab objects. But for the most part, these systems are limited to doing one task really well; trying to train a neural network on an additional task usually makes it much worse at its first.

However, Google has now created a system that tackles eight tasks at once and manages to do all of them pretty well. The company's multi-tasking machine learning system, called MultiModel, learned to detect objects in images, provide captions, recognize speech, translate between four pairs of languages, and parse grammar and syntax. And it did all of that simultaneously.

The system's design takes loose inspiration from the human brain. Different components of a situation, such as visual and sound input, are processed in different areas of the brain, but all of that information comes together so a person can comprehend it in its entirety and respond in whatever way is necessary. Similarly, MultiModel has small sub-networks for audio, images and text that are all connected to a central network.
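The layout described above (modality-specific sub-networks feeding one shared central body, with separate output heads per task) can be sketched in miniature. This is a toy illustration under stated assumptions, not Google's actual implementation: the real system uses convolutional, attention and mixture-of-experts blocks, and every dimension, layer size and task name below is an invented placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Random weight matrix standing in for a trained layer.
    return rng.standard_normal((in_dim, out_dim)) * 0.1

# Modality-specific "sub-networks": project each input type
# into a shared 64-dimensional representation space.
MODALITY_ENCODERS = {
    "text":  linear(300, 64),   # e.g. word-embedding input
    "image": linear(1024, 64),  # e.g. flattened image features
    "audio": linear(128, 64),   # e.g. spectrogram features
}

# One shared "central network" body used by every task.
SHARED_BODY = linear(64, 64)

# Task-specific output heads reading from the shared body.
TASK_HEADS = {
    "caption":   linear(64, 10000),  # vocabulary logits
    "classify":  linear(64, 1000),   # object-class logits
    "translate": linear(64, 10000),  # target-language logits
}

def forward(x, modality, task):
    """Route any input modality through the shared body to any task head."""
    h = np.tanh(x @ MODALITY_ENCODERS[modality])  # modality-specific encoder
    h = np.tanh(h @ SHARED_BODY)                  # shared central representation
    return h @ TASK_HEADS[task]                   # task-specific logits

# The same shared parameters serve image classification and text translation.
image_logits = forward(rng.standard_normal((1, 1024)), "image", "classify")
text_logits  = forward(rng.standard_normal((1, 300)),  "text",  "translate")
print(image_logits.shape, text_logits.shape)  # (1, 1000) (1, 10000)
```

Because every task reads from the same central body, training updates from one task (say, captioning) also shape the representation used by the others; that shared substrate is what makes cross-task transfer possible at all.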

The network’s performance wasn’t perfect and isn’t yet on par with those of networks that manage just one of these tasks alone. But there were some interesting outcomes. The separate tasks didn’t hinder the performance of each other and in some cases they actually improved it. In a blog post the company said, “It is not only possible to achieve good performance while training jointly on multiple tasks, but on tasks with limited quantities of data, the performance actually improves. To our surprise, this happens even if the tasks come from different domains that would appear to have little in common, e.g., an image recognition task can improve performance on a language task.”

MultiModel is still under development, but Google has open-sourced it as part of its Tensor2Tensor library.

Via: New Scientist

Source: Arxiv, Google

23
Jun

Proposed Law Against Apple’s ‘Walled Garden’ Software Approach Sparks Fears of iPhone Ban in Italy


Italian newspaper Corriere della Sera [Google Translate] published a headline today that translates to “the bill that could ban the iPhone in Italy.”

The bill in question, Senate Act 2484, is aimed at ensuring Italians have open access to software, content, and services. The portion of the bill potentially relevant to Apple essentially says that users should have the right to download any software, whether proprietary or open source, on any platform.

An excerpt from Article Four of the loosely translated bill:

Users have the right to, in an appropriate format to the required technology platform […] use fair and non-discriminatory software, proprietary or open source […] content and services of their choice.

It’s well known that iOS is a walled garden, in which apps can only be distributed through the App Store, and only if developers adhere to Apple’s guidelines. The only way to download apps outside of Apple’s parameters is by jailbreaking, which is in violation of Apple’s end-user agreement.

Naturally, there are some concerns about how the iPhone and other devices could be affected if the bill is approved, although the prospect of any Apple product being outright banned in Italy seems highly unlikely.

The bill was introduced last year by Stefano Quintarelli, an Italian entrepreneur and member of the Scelta Civica political party. It was approved by the Chamber of Deputies in July 2016, and it must now be approved by the Senate of the Republic, the upper house of Italy's parliament.

(Thanks, Macitynet and iSpazio!)



23
Jun

New iPhone 8 Glimpse Combines Leaked Parts to Show Off What Device Might Look Like at Launch


Leaker Benjamin Geskin has posted a few new images and a video of what Apple's upcoming iPhone 8 might look like once it's in the hands of users later this year. Using a leaked dummy model, a screen protector, and a printed picture of an iOS wallpaper, Geskin has assembled a rough approximation of what current rumors suggest the iPhone 8 will look like when it's announced in the fall.

Geskin’s images depict an iPhone 8 dummy model as we’ve seen previously, with a suggested 5.8-inch display area, minimal bezels, front-facing camera and sensor dip, but with an all-black frame instead of models that have previously depicted a rumored stainless steel frame. To give users a glimpse as to what the iPhone 8 display might look like when activated, Geskin then attached a picture of an iOS wallpaper to the dummy, and applied a screen protector on top.

This most likely how #iPhone8 will look like.

(Dummy + Printed Picture +Screen Protector) pic.twitter.com/G9SrlSaS9L

— Benjamin Geskin (@VenyaGeskin1) June 23, 2017

The wallpaper Geskin used is part of a beach-themed collection of images that first appeared in the iOS 10.3.3 beta, initially only on the 12.9-inch iPad Pro; the images have since shown up on the 10.5-inch iPad Pro and the new 12.9-inch iPad Pro running iOS 10.3.2.

A video shared on Geskin's Twitter account provides a further glimpse of the iPhone 8 dummy in motion.

#iPhone8 Hands-on Video
(sort of 😁)

(Dummy + Printed Picture + Screen Protector) pic.twitter.com/gkKjWH0tLe

— Benjamin Geskin (@VenyaGeskin1) June 23, 2017

The iPhone 8 is predicted to be the first major iPhone redesign since the iPhone 6 in 2014, with additional features such as wireless charging and improved waterproofing to further bolster the smartphone's position as a premium device. Alongside the iPhone 8, the "iPhone 7s" and "iPhone 7s Plus" are expected to keep the current iPhone 7 design while offering the usual iterative spec bumps, such as improved battery life and snappier performance.

