23
Jun

OnePlus 5 tips and tricks: The essential guide to the flagship killer


The OnePlus 5 has just been launched, and while it’s pricier than its predecessors, it’s still a lot of phone for the £499 price tag. It’s comfortably cheaper than the likes of the Pixel XL, iPhone 7 Plus and Galaxy S8 Plus, and offers a true high-end experience.

  • OnePlus 5 review: The flagship-killer’s coming of age
  • OnePlus 5: Release date, hardware specs and everything else you need to know

If you’ve bought one recently, or plan to, let us guide you through some of the software features you’re going to want to learn. OnePlus’ OxygenOS system is full of tricks, even though it looks from the outside like it’s just a pure version of Android.

OnePlus 5 home screen tips

Open app shortcuts: Some apps have a list of shortcuts that pop up on the home screen. Press and hold an app icon, and you’ll get those pop-up options above the icon. These only appear on compatible apps. Some won’t respond this way. 

Pin app shortcuts to home screen: When you have the app shortcuts showing, press and hold one that you want to pin to your screen, and drag it where you want it. Now you’ll be able to perform that action just by tapping the shortcut that’s permanently on your home screen. 

Disable app shortcuts: Press and hold the home screen wallpaper, hit “Settings” then toggle “App Shortcuts” to the off position. 


Add widgets to Shelf: Shelf is a custom screen that sits to the left of your main home screen. By default it has your weather, most-used apps and contacts, but you can add practically any other widget you want to it by tapping the floating action button in the bottom right corner, then choosing your desired widget.

Customise Shelf widgets: In the Shelf, you can press and hold the top widget to change the text displayed, or choose to disable the weather information. Press and hold any of the other widgets, then drag them to reorganise, or press the red “X” to delete that particular card.

Disable Shelf: For whatever reason, you might just decide you don’t want the Shelf. To disable it, head to your regular home screen, then tap and hold the wallpaper. Choose “settings” then switch the “Shelf” toggle to the off position.

Swipe down for notifications: You can access your drop down notifications by swiping downwards anywhere on the home screen. This gesture is enabled by default. To disable, press and hold the home screen wallpaper, choose “Settings” and toggle the “Swipe Down” off. 

Change app icon shape: Press and hold the home screen wallpaper and tap “Settings” then “Icon pack”. This lets you choose between three default options: OnePlus, Round and Square. 

Change icon/content size: Head to Settings > Display > Display size, then move the slider along the bottom of the screen until the icons and text are the size that you want them to be. 

Change battery icon: With OxygenOS you can choose what battery information you want to see in the status bar. Go to Settings > Status bar, then tap “Battery style” and choose which style battery icon you would like.

Show battery percentage: Below this option on the same screen, you can also toggle the battery percentage on or off.

OnePlus 5 buttons tips

Capacitive or virtual buttons: As has been the case with every OnePlus phone since the beginning, you can choose between onscreen software buttons and the built-in capacitive buttons. By default, the capacitive buttons are in use, but if you want software buttons, simply head to Settings > Buttons and toggle “on-screen navigation bar”.

Custom actions: In OxygenOS you can assign secondary functions to all three of the capacitive keys on the OnePlus 5. Each button can have two secondary functions, launched by either a long-press or a quick double-tap. There are seven options in total, including opening recent apps, launching the search assistant, turning off the screen, opening the camera, voice search, opening the last used app and opening Shelf. You’ll find the options in the same settings menu under the Buttons category.

Backlight on/off: Both capacitive buttons have backlights, which light up when any of the buttons (including the home key) are touched. If you don’t want the backlight, you can switch it off by tapping the Backlight toggle in Settings > Buttons.

Swap recent/back order: By default, the left capacitive button is the back button, and the right button is the recent apps button. If you’re more accustomed to having them the other way around, you can switch those. Just toggle the “Swap buttons” option in the same settings Buttons menu.

Alert slider: The one other button on the OnePlus 5 is the three-position alert slider on the left edge. The bottom position is the regular mode, which lets all notifications through. The middle position is priority mode, which restricts most apps from sending you notifications. The top position is total silence, which mutes practically everything.

You can customise the Do Not Disturb option by heading to Settings > Alert slider. Here you can allow alerts from certain contacts to get through.

OnePlus 5 display tips

Enable Reading Mode: Swipe down the quick settings shade and you should see a tile to enable Reading Mode. This turns the screen greyscale, slightly increases contrast and kills the blue light to mimic an e-reader type experience.

Launch Reading Mode automatically: If you don’t want to manually activate Reading Mode every time, you can choose to have specific apps launch it automatically. Go to Settings > Display > Reading Mode then add the apps you want to have in this mode every time you open them. 


Adjust colour temperature: How good the colours on screen look to an individual can often be a point of debate. A perfect balance to some is too cool (blue) or too warm (yellow) for others. Thankfully, OnePlus includes the option to manually adjust the colour temperature. Head to Settings > Display > Screen Calibration and you’ll find a colour balance slider if you select the “Custom color” option. Sliding right makes the screen warmer, sliding left makes it cooler.

Choose sRGB or DCI-P3: In the same settings menu as the colour temperature customisation, you’ll find two additional colour profiles: sRGB and DCI-P3, each offering a different take on colour accuracy for those who prefer it.

Lift to wake: With the OnePlus 5 you can have the screen wake up just by lifting the phone. To activate this feature, simply tap the “Lift up display” toggle in the display settings.

Ambient display: You can also set your OnePlus to wake up whenever you receive a notification. Activating it is very simple: like the lift to wake option, just switch on the toggle in the display settings. Rather than a fully active screen, Ambient display mode shows a black screen with white text/notifications.

Night mode: As with most phones with the feature, night mode strips the blue tint from the screen, making it warmer, more yellow and easier on your eyes at night time. Just like the colour balance option, there’s a slider to adjust how deep you want the yellow tint to be.

Change font size: Halfway down the list of main display settings is the option to change the font size. Here you can choose between small, default, large and largest.

OnePlus 5 camera tips

Double tap power button to launch: By default, the OnePlus 5 camera can be launched by quickly double-tapping the power button on the right edge. If yours doesn’t have the feature switched on, or you want to switch it off, head to Settings > Buttons and then hit the toggle next to “Press power button twice for camera”.

Quick capture: You can also choose to have the camera take a photo as soon as you double-press the power button. Open the camera app, then open the sidebar menu and hit the settings cog in the corner. Here you’ll find a toggle that enables quick capture.


“Shot on OnePlus” watermark: In the same settings menu tap “Shot on OnePlus Watermark”. Here you can enable a feature that automatically applies the watermark, which you can also customise to include your name. The end result is a photo which has a “Shot on OnePlus by [Your name/handle]” in the bottom corner. 

Pro mode: Go into the main sidebar menu in the camera app to see the list of shooting options. Pro mode is in that list, and selecting it lets you manually control a number of important settings. Tapping ISO lets you change the brightness/gain, while the next option along lets you set the white balance to counteract any artificial (or natural) lighting. You can also manually set the shutter speed to take long exposures of up to 30 seconds, and focus manually.

Adjusting each of these settings is pretty easy. Once the manual mode has been selected, you just need to press on whichever setting you’d like to change, then you get a semicircle control on-screen. Adjust the ISO, shutter speed, or focus by rotating this onscreen “wheel” clockwise or anticlockwise.

Remove histogram: By default, Pro mode shows a histogram on screen. If you don’t want it there because it’s blocking your view, go back into the camera settings and switch the Histogram toggle off. You can do the same for the horizontal reference line.

Shoot straight photos: When you launch Pro mode, you’ll see a line in the middle of the screen, which turns green when your phone is level.

Immersive mode: In Pro mode, by default there’s a lot of information and many control options on screen. You can clear it all away by activating Immersive mode. Go to the camera settings as before, toggle the “Immersive mode” option, then whenever you’re in Pro mode, you can clear everything from the screen by swiping upwards. 

Save custom preset: Still in Pro mode, you can save a specific preset by tapping the “C” in the toolbar, then adjust your settings and tap “save C1”. If you need to create a second, you can follow the same process, and hit “save C2”. It allows two presets in total. 

Portrait mode: To switch to the depth effect mode, swipe from right to left in the regular camera view. This launches Portrait mode and uses both cameras to create a bokeh/background blur effect behind your subject. 

Save regular photo in Portrait mode: If you want a copy of your Portrait mode photo without the depth effect added, head back into the camera settings and scroll down to the Portrait section. Toggle “save normal photo”, and each time you take a depth effect shot, it’ll save a normal version as well.

Add grid lines: Back in the camera settings, select “Grid” and then choose between a 3×3, 4×4 or Golden Ratio grid.

Change image ratio: In the main camera view, in the toolbar, you’ll see a “4:3” icon. Tap this and you can change the shooting ratio to either a 1:1 square or 16:9 widescreen shot. 

Save RAW photos: Launch Pro mode, then tap the little “RAW” icon in the main toolbar. 

OnePlus 5 other tips

Dual SIM options: Just like the OnePlus 2, 3 and 3T, there’s a dual SIM tray which means you can have two SIM cards in the phone at once. If you have a work and personal line, or have a SIM for two different carriers, it can be an invaluable feature, especially if you know one network in your area is better for data speeds than another.

Heading into Settings then “SIM & network settings”, you can choose which SIM is the preferred option for mobile data, calls or text messages. So if one SIM has a higher data allowance, you could set that as your main data SIM.

Reorder quick tiles: In Android N, Google introduced the ability to rearrange the quick settings tiles in the drop-down panel, but OnePlus has had that feature in OxygenOS for a while now. Drop down the panel as usual, then tap the little pencil in the top right corner. You can then reorder the tiles on the screen to suit your preferences.

OnePlus 5 gestures: Like many modern Android phones, you can enable a number of gestures for launching apps or functions. Go to Settings > Gestures, where you can activate the ability to flip the phone over on its face to mute a call, swipe with three fingers to take a screenshot, double tap to wake the phone, or draw II on the lock screen with two fingers to play or pause music.

As well as that you can choose apps to launch by drawing an O, V, S, M or W on the lock screen. 

Take a long screenshot: When you’re on a long page, you can capture all of it by taking a long screenshot. Press the volume down and power buttons together as usual, and you’ll see a row of four actions at the bottom of the screen. Press the rectangle icon and it’ll start taking a long screenshot of the page you’re on.

Markup screenshots: Once you’ve taken a screenshot, choose the pencil from the row of four actions. This takes you to an editing screen where you can adjust image properties, as well as draw or write on the image.

Easter egg: Last but not least, head to the preinstalled calculator and type in “1+=”. See what happens.

23
Jun

What is Snap Map? Snapchat’s new feature explained


Another day, another Snapchat update.

There was once a time when people complained about continual UI updates from Facebook, but with apps like Snapchat adding new features on what seems like a weekly basis, we’ve come to accept – and maybe even appreciate – that the services we use regularly are constantly being refreshed with new tricks to keep us interested.

Snapchat, for instance, has announced that it “built a whole new way to explore the world”. It’s basically a location-sharing feature. The idea is, with Snapchat, you can easily meet up with friends in real life. It lets you share your current location, which then appears to friends on a map and updates when you open Snapchat.

But if you decide you want to keep your location to yourself, you can, with Ghost Mode. Confused? No worries. Here’s everything you need to know about Snapchat’s latest feature, called Snap Map.

  • Snap Spectacles now available in UK, take videos with your eyes
  • 33 of the best Snapchat fails and comedy snaps around
  • 28 Snapchatters to follow for their awesome Snapchat stories

What is Snapchat?

You can read all about Snapchat from Pocket-lint’s guide here:

  • What’s the point of Snapchat and how does it work?

What is Snap Map?

Here’s how Snap has described Snapchat’s latest feature:

“With the Snap Map, you can view Snaps of sporting events, celebrations, breaking news, and more from all across the world. If you and a friend follow one another, you can share your locations with each other so you can see where they’re at and what’s going on around them! Plus, meeting up can be a cinch.”

In other words, Snap Map essentially lets you share your location with all your friends or a few friends, and it also allows you to scroll around an actual map to see where your friends are located. When you open the updated Snapchat app, you will have access to the feature and can choose whether to share your location.

The goal, supposedly, is to get users to engage with their friends in real life instead of just watching their activities via Snapchat. But we doubt Snapchat would ever really encourage its users not to use the app, and to be honest, by getting users to open Snapchat to see where friends are, it is ironically making them use Snapchat even more.


How does Snap Map work?

Open Snap Map

Make sure you’re running the latest version of Snapchat. Then, go to your Camera screen, then pinch your fingers on the screen like you’re zooming out from a photo. The Snap Map should then appear.

Actionmoji

Snap Map shows your friends and their location as illustrations – so-called Actionmoji – in real time. Actionmojis are created by Snapchat. They’re a new form of Bitmoji. Remember, Snapchat bought Bitmoji, a free app that allows you to create personalised avatars of yourself. Anyway, Actionmoji are based on your actions.

So, if you’re listening to music, Snapchat will know that and show your Actionmoji with a pair of headphones. Snapchat may look at things like your location, time of day, or speed of travel to come up with your Actionmoji. Examples of Actionmoji also include things like: at the beach, at the airport, sitting, and more.

Location settings

When you open the Snap Map for the first time, you’ll get a prompt to choose who to share your location with. If you choose to do this later, just tap the Settings button in the corner of the map screen. You can change who can see your location (all friends or select friends), or you can hide your location entirely with Ghost Mode.

Note: Your location is only updated when you’re using Snapchat.

Find friends

If your friends share their location with you, then you can easily see them on Snap Map – just look for their Actionmoji. If they don’t have their Bitmoji account linked, then you’ll see them as a blank Bitmoji outline. To zoom back to your current location, just tap the current location button in the bottom corner of the Map screen.

You can tap on any friend on the map to start a chat or see when their location was last updated. To search for a friend, tap Search at the top of the screen and type in their name. Easy.

Stories

With the Snap Map, you can also view stories from all across the world. (See this Snap FAQ guide on how to post a story on Snapchat.) Snaps that were submitted to ‘Our Story’ will be visible on the map. You can view them by tapping their circular thumbnail. They show up at special locations, such as a museum. You can also follow the heatmap: blue = a few snaps taken; red = tonnes of snaps.

Just remember that Our Stories are collections of snaps submitted from different Snapchatters throughout the world. They’re curated by Snapchat to capture a place or event from different points of view.

Want to know more?

Check out our Snapchat tips and tricks guide.

23
Jun

Join Pluralsight for a Free Gift


Pluralsight has teamed up with Pocket-lint to give you, for a limited time, the chance to get a $10 Visa gift card when you sign up for a monthly Pluralsight subscription or a $30 Visa gift card when you sign up for an annual subscription.

Someone once said that a day without a mistake is a day wasted, for you have learnt nothing. It’s true, isn’t it? We are all looking to learn more, to improve, to be better at what we want to do.

Pluralsight, an online education resource, lets you develop new skills with expert-led content and the latest technologies all from the comfort of your own home or office. The company, founded in 2004, knows what and where to start your learning, and lets you rate your skills and uncover knowledge gaps to close them fast.

The site offers over 5,000 courses in areas such as software development, IT ops and creative professions, with varying levels to suit your needs, from beginner to advanced. You can even jump past the bits you already know, or revisit the bits you need more practice with.

To claim your free gift, you need to sign up by 30 June 2017.

23
Jun

‘Overwatch’ loot boxes will have fewer duplicates


Players of Overwatch and Hearthstone should pay attention to Blizzard. The company has made two separate announcements that will significantly affect the loot systems in both games.

Beginning with Hearthstone’s next expansion pack, you will no longer receive duplicate Legendary cards until you have every single Legendary card from that specific set. This applies whether you have the golden or non-golden version of a card. What’s more, you will no longer receive more duplicate cards than you can use in a deck. When you open a new set, you will also receive a Legendary within your first ten packs — guaranteed. The result of all these changes? Players will likely end up with more Legendary cards than they do now.

Additionally, Overwatch’s game director Jeff Kaplan made some interesting announcements in the latest developer video update. If you’ve been frustrated by receiving duplicate cosmetic items in loot boxes, that should soon be much less of a problem. Kaplan says, “One of the things that we’re going to do is drastically reduce the rate of duplicates that you’ll get out of any loot box.” It’s not eliminating the issue entirely, but the team hopes that the change will be “immediately evident,” and that as a result players will buy many, many more loot boxes.

Via: Gamasutra

Source: Overwatch, Hearthstone

23
Jun

The next video game controller is your voice


For all of modern gaming’s advances, conversation is still a fairly unsophisticated affair. Starship Commander, an upcoming virtual reality game on Oculus and SteamVR, illustrates both the promise and challenge of a new paradigm seeking to remedy that: using your voice.

In an early demo, I control a starship delivering classified goods across treacherous space. Everything is controlled by my voice: flying the ship is as simple as saying “computer, use the autopilot,” while my sergeant pops up in live action video to answer questions.

At one point, my ship is intercepted and disabled by a villain, who pops onto my screen and starts grilling me. After a little back and forth, it turns out he wants a deal: “Tell you what, you take me to the Delta outpost and I’ll let you live.”

I try to shift into character. “What if I attack you?” I say. No response, just an impassive yet expectant stare. “What if I say no?” I add. I try half a dozen responses, but — perhaps because I’m playing an early build of the game or maybe it just can’t decipher my voice — I can’t seem to find the right phrase to unlock the next stage of play.

It’s awkward. My immersion in the game all but breaks down when my conversational partner does not reciprocate. It’s a two-way street: if I’m going to dissect the game’s dialogue closely enough to craft an interesting reply, it has to keep up with my side of the conversation too.

The situation deteriorates. The villain eventually gets fed up with my inability to carry the conversation. He blows up my ship, ending the game.

Yet there is potential for a natural back and forth conversation with characters. There are over 50 possible responses to one simple question from the sergeant — “Is there anything you’d like to know before we start the mission?” — says Alexander Mejia, the founder and creative director at Human Interact, which is designing the game. The system is powered by Microsoft’s Custom Speech Service (similar technology to Cortana), which sends players’ voice input to the cloud, parses it for true intent, and gets a response in milliseconds. Smooth voice control coupled with virtual reality means a completely hands-free, lifelike interface with almost no learning curve for someone who’s never picked up a gamepad.
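To make that pattern concrete, here is a minimal, purely illustrative sketch of the loop described above: spoken audio is transcribed by a cloud speech service, the transcript is matched to an intent, and the game plays back a pre-recorded, voice-acted line. The function names, file names and keyword-based intent table are assumptions for illustration only; this is not Human Interact’s code or Microsoft’s actual Custom Speech Service API.

```python
from typing import Optional

# Invented intent table: phrases a player might say, grouped by intent.
INTENTS = {
    "autopilot": ["use the autopilot", "engage autopilot", "fly the ship"],
    "mission_briefing": ["anything i should know", "tell me about the mission"],
    "refuse": ["what if i say no", "i refuse", "no deal"],
}

# Hypothetical mapping from intent to a pre-recorded, voice-acted response clip.
RESPONSES = {
    "autopilot": "clips/autopilot_engaged.wav",
    "mission_briefing": "clips/sergeant_briefing.wav",
    "refuse": "clips/villain_threat.wav",
}

def classify_intent(transcript: str) -> Optional[str]:
    """Crude keyword matching standing in for a real intent parser."""
    text = transcript.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None

def respond_to_player(transcript: str) -> str:
    """Return the clip to play; an unrecognised line gets the blank stare."""
    intent = classify_intent(transcript)
    return RESPONSES.get(intent, "clips/impassive_stare.wav")

print(respond_to_player("Computer, use the autopilot"))  # clips/autopilot_engaged.wav
print(respond_to_player("What if I attack you?"))        # clips/impassive_stare.wav
```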

Speaking certainly feels more natural than selecting one of four dialogue options from a menu, as a traditional roleplaying game might provide. It makes me more attentive in conversation — I have to pay attention to characters’ monologues, picking up on details and inconsistencies while coming up with insightful questions that might take me down a serendipitous narrative route (much like real life). No, I don’t get to precisely steer a ship to uncharted planets since voice control, after all, is not ideal for navigating physical space. But, what this game offers instead is conversational exploration.

Video games have always been concerned with blurring the lines between art and real life.

Photorealistic 4K graphics, the disintegration of levels into vast open worlds, virtual reality placing players inside the skull of another person: the implicit end goal of every gaming advance seems to be to create an artificial reality indistinguishable from our own. Yet we communicate with these increasingly intelligent games using blunt tools. The joystick/button and keyboard/mouse combinations we use to speak to games do little to resemble the actions they represent. Even games that use lifelike controls, from the blocky plastic Time Crisis guns to Nintendo Switch Joy-Cons, still involve scrolling through menus and clicking on dialogue options. The next step is for us to talk to games.

While games that use the voice have cropped up over the years — Seaman on Sega’s Dreamcast, Lifeline on the PlayStation 2, Mass Effect 3 on the Xbox 360’s Kinect — their commands were often frustratingly clunky and audio input never seemed more than a novelty.

That may be coming to an end. Well-rated audio games such as Papa Sangre and Zombies, Run! have appeared on the iPhone. At E3 this month, Dominic Mallinson, a Sony senior vice president for research and development, referred to natural language understanding as among “some of the technologies that really excite us in the lab right now.”

More than anything, the rush by Microsoft, Google, Amazon and Apple to dominate digital assistants is pushing the entire voice computing field forward. In March, The Information reported that Amazon CEO Jeff Bezos wants gaming to be a “killer app” for Alexa, and the company has paid developers that produce the best performing skills. Games are now the top category for Alexa, and the number of customers playing games on Echo devices has increased tenfold in the last year, according to an Amazon spokeswoman. “If I think back on the history of the world, there’s always been games,” says Paul Cutsinger, Amazon’s head of Alexa voice design education. “And it seems like the invention of every new technology comes along with games.”

“It seems like the invention of every new technology comes along with games.” – Paul Cutsinger, Amazon

Simply: If voice assistants become the next major computing platform, it’s logical that they will have their own games. “On most new platforms, games are one of the first things that people try,” says Aaron Batalion, a partner focused on consumer technology at venture capital firm Lightspeed Venture Partners. “It’s fun, engaging and, depending on the game mechanics, it’s often viral.” According to eMarketer, 35.6 million Americans will use a voice assistant device like Echo at least once a month this year, while 60.5 million Americans will use some kind of virtual voice assistant like Siri. The question is, what form will these new games take?

Gaming skills on Alexa today predominantly trace their lineage to radio drama — the serialized voice acted fiction of the early 20th century — including RuneScape whodunnit One Piercing Note, Batman mystery game The Wayne Investigation and Sherlock Holmes adventure Baker Street Experience.

Earplay, meanwhile, has emerged as a leading publisher of audio games, receiving over $10,000 from Amazon since May, according to Jon Myers, who co-founded the company in 2013. Myers describes their work as “stories you play with your voice,” and the company crafts both their own games and the tools that enable others to do the same.

For instance, in Codename Cygnus, you play a James Bond-esque spy navigating foreign locales and villains with contrived European accents, receiving instructions via an earpiece. Meanwhile, in Half, you navigate a surreal Groundhog Day scenario, picking up clues on each playthrough to escape the infinitely repeating sequence of events.

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade.”

Like a choose-your-own-adventure novel, these experiences intersperse chunks of narrative with pivotal moments where the player gets to make a decision, replying with verbal prompts. Plot the right course through an elaborate dialogue tree and you reach the end. The audio storytelling activates your imagination, yet there is little agency as a player: The story chugs along at its own pace until you reach each waypoint. You are not so much inhabiting a character or world as co-authoring a story with a narrator.
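As a rough illustration of that structure, here is a toy dialogue tree of the choose-your-own-adventure kind described above. It is a sketch under assumed conventions, not Earplay’s actual data format: each node carries a chunk of narration plus the verbal prompts that branch to the next node.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Node:
    narration: str                                          # the chunk of story read aloud
    choices: Dict[str, str] = field(default_factory=dict)   # spoken prompt -> next node id

# A tiny, invented story graph; a real game would have far deeper trees.
STORY = {
    "start":  Node("A stranger offers you a sealed envelope.",
                   {"take it": "accept", "walk away": "refuse"}),
    "accept": Node("Inside is a key and an address. To be continued."),
    "refuse": Node("You never learn what was inside. The end."),
}

def advance(node_id: str, spoken: str) -> str:
    """Move to the next node if the utterance matches a prompt; otherwise stay put."""
    node = STORY[node_id]
    return node.choices.get(spoken.lower().strip(), node_id)

node = advance("start", "Take it")
print(STORY[node].narration)  # Inside is a key and an address. To be continued.
```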

“What you see with the current offerings from Earplay springs a lot out of what we did at Telltale Games over the last decade,” says Dave Grossman, Earplay’s chief creative officer. “I almost don’t even want to call them games. They’re sort of interactive narrative experiences, or narrative games.”

Grossman has had a long career considering storytelling in games. He is widely credited with creating the first game with voice acting all the way through — 1993’s Day of the Tentacle — and also worked on the Monkey Island series. Before arriving at Earplay, he spent a decade with Telltale Games, makers of The Wolf Among Us and The Walking Dead.

Earplay continues this genre’s bloodline: The goal is not immersion but storytelling. “I think [immersion] is an excellent thing for getting the audience involved in what you want, in making them care about it, but I don’t think it’s the be-all-end-all goal of all gaming,” says Grossman. “My primary goal is to entertain the audience. That’s what I care most about, and there are lots of ways to do that that don’t involve immersing them in anything.”

“My primary goal is to entertain the audience … There are lots of ways to do that that don’t involve immersing them in anything.”

In Earplay’s games, the "possibility space" — the degree to which the user can control the world — is kept deliberately narrow. This reflects Earplay’s philosophy. But it also reflects the current limitations of audio games. It’s hard to explore physical environments in detail because you can’t see them. Because Alexa cannot talk and listen at the same time, there can be no exchange of witticisms between player and computer, only each side talking at pre-approved moments. Voice seems like a natural interface, but it’s still essentially making selections from a multiple choice menu. Radio drama may be an obvious inspiration for this new form; its overacted tropes and narrative conventions are also well-established for audiences. But right now, like radio narratives, the experience of these games seems to still be more about listening than speaking.


Untethered, too, is inspired by radio drama. Created by Numinous Games, which previously made That Dragon, Cancer, it runs on Google’s Daydream virtual reality platform, combining visuals with voice and a hand controller.

Virtual reality and voice control seem to be an ideal fit. On a practical level, speech obviates the need for novice gamers to figure out complicated button placements on a handheld controller they can’t see. On an experiential level, the combination of being able to look around a 360 degree environment and speaking to it naturally brings games one step closer to dissolving the fourth wall.

In the first two episodes, Untethered drops you first into a radio station in the Pacific Northwest and then into a driver’s seat, where you encounter characters whose faces you never see. Their stories slowly intertwine, but you only get to know them through their voices. Physically, you’re mostly rooted to one spot, though you can use the Daydream controller to put on records and answer calls. When given the cue, you speak: your producer gets you to record a radio commercial, and you have to mediate an argument between a husband and wife in your back seat. “It’s somewhere maybe between a book and a movie because you’re not imagining every detail,” says head writer Amy Green.

The game runs off Google’s Cloud Speech platform, which recognizes voice input and may return 15 or 20 lines responding to whatever you might say, says Green. While those lines may steer the story in different directions, the outcome of the game is always the same. “If you never speak a word, you’re still gonna have a really good experience,” she says.
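A minimal sketch of that fixed-outcome design, under stated assumptions (the reaction lines, keywords and scene names are invented; this is not Numinous Games’ code): whatever the recogniser hears picks one of a handful of pre-written reactions, but the scene always ends in the same place.

```python
# Invented reaction lines keyed on words the recogniser might hear; purely illustrative.
REACTIONS = {
    "music":    "The producer hums along with the record you put on.",
    "argument": "The couple in the back seat fall silent, a little embarrassed.",
    "radio":    "Your commercial read goes out over the air, crackling slightly.",
}

def react(transcript: str) -> str:
    """Pick a canned reaction that loosely matches what the player said."""
    text = transcript.lower()
    for keyword, line in REACTIONS.items():
        if keyword in text:
            return line
    return "The scene carries on without comment."  # saying nothing still works

def run_scene(transcripts) -> str:
    for heard in transcripts:
        print(react(heard))
    return "episode_two"  # the destination is fixed no matter what was said

run_scene(["Can I put some music on?", "Could you two stop that argument?"])
```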

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are.” -Alexander Mejia, Human Interact

This is a similar design to Starship Commander: anticipating anything the player might say, so as to record a pre-written, voice-acted response.

“It sounds like a daunting task, but you’d be surprised at how limited the types of questions that people ask are,” says Mejia of Human Interact. “What we found out is that 99% of people, when they get in VR, and you put them in the commander’s chair and you say, ‘You have a spaceship. Why don’t you go out and do something with it?’ People don’t try to go to the fast food joint or ask what the weather’s like outside. They get into the character.”

“The script is more like a funnel, where people all want to end up in about the same place,” he adds.

Yet for voice games to be fully responsive to anything a user might say, traditional scripts may not even be useful. The ideal system would use “full stack AI, not just the AI determining what you’re saying and then playing back voice lines, but the AI that you can actually have a conversation with,” says Mejia. “It passes the Turing test with flying colors; you have no idea if it’s a person.”

In this world, there are no script trees, only a soup of knowledge and events that an artificial intelligence picks and prunes from, reacting spontaneously to what the player says. Instead of a tightly scripted route with little room for expression, an ideal conversation could be fluid, veering off subject and back. Right now, instead of voice games being a freeing experience, it’s easy to feel hemmed in, trapped in the worst kind of conversation — overly structured with everyone just waiting their turn to talk.

An example of procedurally generated conversation can be found in Spirit AI’s Character Engine. The system creates characters with their own motivations and changing emotional states. The dialogue is not fully pre-written, but draws on a database of information — people, places, event timeline — to string whole sentences together itself.

“I would describe this as characters being able to improvise based on the thing they know about their knowledge of the world and the types of things they’ve been taught how to say,” says Mitu Khandaker, chief creative officer at Spirit AI and an assistant professor at New York University’s Game Center. Projects using the technology are already going into production, and should appear within two years, she says. If games like Codename Cygnus and Baker Street Experience represent a more structured side of voice gaming, Spirit AI’s engine reflects its freeform opposite.
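The following toy sketch gestures at what that kind of improvisation could look like in code. It is an assumption-laden illustration only, not Spirit AI’s Character Engine: the character fills a mood-appropriate template with facts from a small knowledge base instead of reading a fully pre-written line.

```python
# Invented character state: an emotional state plus a tiny knowledge base of
# people, places and events the character can draw on when speaking.
CHARACTER = {
    "mood": "wary",
    "knows": {
        "person": "Marlowe",
        "place": "the docks",
        "event": "the warehouse fire",
    },
}

# Hypothetical sentence templates, one per emotional state.
TEMPLATES = {
    "wary":    "I heard {person} was near {place} the night of {event}. Why do you ask?",
    "relaxed": "{person}? Sure, I saw them at {place} during {event}. No big mystery.",
}

def improvise_line(character: dict) -> str:
    """String a sentence together from what the character knows and how they feel."""
    template = TEMPLATES[character["mood"]]
    return template.format(**character["knows"])

print(improvise_line(CHARACTER))
# I heard Marlowe was near the docks the night of the warehouse fire. Why do you ask?
```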


‘Untethered,’ a virtual reality title from Numinous Games.

Every game creator deals with a set of classic storytelling questions: Do they prefer to give their users liberty or control? Immersion or a well-told narrative? An experience led by the player or developer? Free will or meaning?

With the rise of vocal technology that allows us to communicate more and more seamlessly with games, these questions will become even more relevant.

“It’s nice to have this idea that there is an author, or a God, or someone who is giving meaning to things, and that the things over which I have no control are happening for a reason,” says Grossman. “There’s something sort of comforting about that: ‘You’re in good hands now. We’re telling a story, and I’m going to handle all this stuff, and you’re going to enjoy it. Just relax and enjoy that.’”

In Untethered, there were moments when I had no idea if my spoken commands meaningfully impacted the story at all. Part of me appreciated that this mimics how life actually works. “You just live your life and whatever happened that day was what was always going to happen that day,” Green says. But another part of me missed the clearly telegraphed forks in the road that indicated I was about to make a major decision. They are a kind of fantasy of perfect knowledge, of cause and effect, which don’t always appear in real life. Part of the appeal of games is that they simplify and structure the complexity of daily living.

“Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.” – Mitu Khandaker, Spirit AI

As developers wrestle with this balance, they will create a whole new form of game: one that’s centered on complex characters over physical environments; conversation and negotiation over action and traditional gameplay. The idea of what makes a game a game will expand even further. And the voice can reduce gaming’s barrier to entry for a general audience, not to mention the visually and physically impaired (the Able Gamers Foundation estimates 33 million gamers in the US have a disability of some kind). “Making games which are more about characters means that more people can engage with them,” says Khandaker. “Not everybody is necessarily into games which are about violence or shooting but everyone understands what it is to talk to people. Everybody knows what it is to have a human engagement of some kind.”

Still, voice gaming’s ability to bring a naturalistic interface to games matters little if it doesn’t work seamlessly, and that remains the industry’s biggest point to prove. A responsive if abstract gamepad is always preferable to unreliable voice control. An elaborate dialogue tree that obfuscates a lack of true intelligence beats a fledgling AI which can’t understand basic commands.

I’m reminded of this the second time I play the Starship Commander demo. Anticipating the villain’s surprise attack and ultimatum, I’m already resigned to the only option I know will advance the story: agree to his request.

“Take me to the Delta outpost and I’ll let you live,” he says.

“Sure, I’ll take you,” I say.

This time he doesn’t stare blankly at me. “Fire on the ship,” he replies, to my surprise.

A volley of missiles and my game is over, again. I take off my headset to find David Kuelz, a writer on the game who set up the demo, laughing. He had been watching the computer convert my speech to text.

“It mistook ‘I’ll take you’ for ‘fuck you,’” he says. “That’s a really common response, actually.”

23
Jun

Symantec refuses Russia request for source code access


Security firm Symantec will no longer allow Russian authorities to inspect its source code, according to Reuters. “It poses a risk to the integrity of our products that we are not willing to accept,” the company’s Kristen Batch said. The worry is that by allowing the supposedly independent Federal Security Service (FSB) to examine source code, it would give Russia an inside view of potential software vulnerabilities and exploits.

Other companies allow this sort of thing so that they can take advantage of the country’s projected $18.4 billion IT industry. While none of Reuters’ sources could cite an explicit example of security breaches that have resulted from the practice, there was a strong sense of unease. “It’s something we have a real concern about,” a former Commerce Department official said. “You have to ask yourself what it is they are trying to do, and clearly they are trying to look for information they can use to their advantage to exploit, and that’s obviously a real problem.”

The US has previously accused the FSB of being behind 2014’s massive Yahoo email hack and cyberattacks that targeted Hillary Clinton during her 2016 presidential campaign.

Russia isn’t the only country that makes these sorts of requests, however. China, for example, has a long history of such demands, recently taking two years to scour a version of Windows 10 that Microsoft made for the country’s government before finally approving it in May.

Reuters writes that these requests have “mushroomed in scope” since Russia’s annexation of Crimea in 2014 soured the relationship between the two countries. Between 1996 and 2013, some 13 products had been requested for security review. In the past three years there have been 28.

IBM, Cisco, Hewlett Packard Enterprise and McAfee have given Russia access to their respective source codes.

Source: Reuters

23
Jun

The best wireless outdoor home security camera


By Rachel Cericola

This post was done in partnership with The Wirecutter, a buyer’s guide to the best technology. When readers choose to buy The Wirecutter’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.

After spending almost three months looking, listening, adjusting angles, and deleting over 10,000 push notifications and emails, we’ve decided that the Netgear Arlo Pro is the best DIY outdoor Wi-Fi home security camera you can get. Like the other eight units we tested, the Arlo Pro lets you keep an eye on your property and provides smartphone alerts whenever there’s motion. However, it’s one of the few options with built-in rechargeable batteries to make it completely wireless, so it’s easy to place and move. It also delivers an excellent image, clear two-way audio, practical smart-home integration, and seven days of free cloud storage.

Who should get this

A Wi-Fi surveillance camera on your front porch, over your garage, or attached to your back deck can provide a peek at what really goes bump in the night, whether that’s someone stealing packages off your steps or raccoons going through garbage cans. It can alert you to dangers and can create a record of events. It should also help you identify someone, and whether they’re a welcome or unwelcome guest, or just let you monitor pets or kids when you’re not out there with them.

How we picked and tested


During initial research, we compiled a huge list of outdoor security cameras recommended by professional review sites like PCMag, Safewise, and Safety.com, as well as those available on popular online retailers. We then narrowed this list by considering only Wi-Fi–enabled cameras that will alert your smartphone or tablet whenever motion is detected. We also clipped out all devices that required a networked video recorder (NVR) to capture video, focusing only on products that could stand alone.

Once we had a list of about 27 cameras, we went through Amazon and Google to see what kind of feedback was available. We ultimately decided on a test group based on price, features, and availability.

We mounted our test group to a board outside of our New England house, pointed them at the same spot, and exposed them all to the same lighting conditions and weather. The two exceptions were cameras integrated into outdoor lighting fixtures, both of which were installed on the porch by my husband, a licensed electrician. All nine cameras were connected to the same Verizon FiOS network via a Wi-Fi router indoors.

Besides good Wi-Fi, you may also need a nearby outlet. Only three of the cameras we tested offered the option to use battery power. Most others required an AC connection, which means you won’t be able to place them just anywhere.

We downloaded each camera’s app to an iPhone 5, an iPad, and a Samsung Galaxy S6. The cameras spent weeks guarding our front door, alerting us to friends, family members, packages, and the milkman. Once we got a good enough look at those friendly faces, we tilted the entire collection outward to see what sort of results we got facing the house across the street, which is approximately 50 feet away. To learn more about how we picked and tested, please see our full guide.

Our pick

The Arlo Pro can handle snow, rain, and everything else, and runs for months on a battery charge. Photo: Rachel Cericola

The Arlo Pro is a reliable outdoor Wi-Fi camera that’s compact and completely wireless, thanks to a removable, rechargeable battery that, based on our testing, should provide at least a couple of months of operation on a charge. It’s also the only device on our list that offers seven days of free cloud storage, and packs in motion- and audio-triggered recordings for whenever you get around to reviewing them.

The Arlo Pro requires a bridge unit, known as the Base Station, which needs to be powered and connected to your router. The Base Station is the brains behind the system, but also includes a piercing 100-plus–decibel siren, which can be triggered manually through the app or automatically by motion and/or audio.

With a 130-degree viewing angle and 720p resolution, the Arlo Pro provided clear video footage during both day and night, and the two-way audio was easy to understand on both ends. The system also features the ability to set rules, which can trigger alerts for motion and audio. You can adjust the level of sensitivity so that you don’t get an alert or record a video clip every time a car drives by. You can also set up alerts based on a schedule or geofencing using your mobile device, but you can’t define custom zones for monitoring. All of those controls are easy to find in the Arlo app, which is available for iOS and Android devices.

If you’re looking to add the Arlo Pro to a smart-home system, the camera currently works with Stringify, Wink, and IFTTT (“If This Then That”). SmartThings certification was approved and will be included in a future app update. The Arlo Pro is also compatible with ADT Canopy for a fee.

Runner-up

The Nest Cam Outdoor records continuously and produces better images than most of the competition, but be prepared to pay extra for features other cameras include for free. Photo: Rachel Cericola

The Nest Cam Outdoor is a strong runner-up. It records continuous 1080p video, captures to the cloud 24/7, and can actually distinguish between people and other types of motion. Like the Nest thermostat, the Outdoor Cam is part of the Works With Nest program, which means it can integrate with hundreds of smart-home products. It’s also the only model we tested that has a truly weatherproof cord. However, that cord and the ongoing subscription cost, which runs $100 to $300 per year for the Nest Aware service, is what kept the Nest Cam Outdoor from taking the top spot.

Like our top pick, the Nest Cam Outdoor doesn’t have an integrated mount. Instead, the separate mount is magnetic, so you can attach and position the camera easily. Although it has a lot of flexibility in movement, it needs to be placed within reach of an outlet, which can be a problem outside the house. That said, the power cord is quite lengthy. The camera has a 10-foot USB cable attached, but you can get another 15 feet from the included adapter/power cable.

The Nest Cam Outdoor’s 1080p images and sound were extremely impressive, both during the day and at night. In fact, this camera delivered some of the clearest, most detailed images during our testing, with a wide 130-degree field of view and an 8x digital zoom.

The Nest app is easy to use and can integrate with other Nest products, such as indoor and outdoor cameras, the Nest thermostat, and the Nest Protect Smoke + CO detector. You can set the camera to turn on and off at set times of day, go into away mode based on your mobile device’s location, and more.

This guide may have been updated by The Wirecutter. To see the current recommendation, please go here.

Note from The Wirecutter: When readers choose to buy our independently chosen editorial picks, we may earn affiliate commissions that support our work.

23
Jun

Twitch’s latest marathon is a six-day ‘MST3K’ binge


B-movie lovers rejoice, for social video platform Twitch is set to air a Mystery Science Theater 3000 marathon lasting a mind-numbing six days. The stream, which features 38 classic episodes, will air on Shout! Factory TV’s new Twitch channel from June 26 to July 3.

The comic sci-fi show, which emerged as a cult favorite despite two network cancellations, follows hapless host Joel Robinson as he’s trapped by mad scientists in space and forced to watch some of the worst B-movies ever produced. Viewers are not only treated (subjected?) to the appalling films in their entirety, but also to the running commentary of our sorry protagonist and the two robot sidekicks he’s built to keep him sane (which, in the face of absolute fiascos such as Manos: The Hands of Fate, is no mean feat).

Starring Joel Hodgson, Michael J. Nelson, Trace Beaulieu, Kevin Murphy and Bill Corbett, among others, the show has grown in popularity thanks to fanfare from hardcore MSTies, whose support saw a new, crowdfunded season hit screens in April.

Netflix dropped 20 classic episodes earlier this year as a teaser for the new season, but Twitch is a particularly apt platform for the show, allowing the audience to access both live and on-demand content accompanied by real-time chat from other viewers. So if you’re into watching someone watch a movie and sharing your thoughts about their thoughts with others watching the same thing — a kind of TV-ception, if you will — then this is for you.

23
Jun

Some Uber employees want Kalanick back in the CEO chair


Travis Kalanick stepped down as Uber’s CEO earlier this week, leaving the CEO, COO, CFO and general counsel positions open. But while he’s no longer head of the company, Kalanick is far from gone. He’s still a member of the board and controls the majority of voting shares, meaning he’ll play a not insignificant role in selecting his successor. And it’s not exactly surprising that he would want to. He is the founder after all.

But some employees at Uber seem to want more than that. According to an email obtained by Recode, at least one person at the company wants him to be reinstated in an operational role and has been circulating an email to gain support in that venture.

Part of the email reads, “Nobody is perfect, but I fundamentally believe he can evolve into the leader Uber needs today and that he’s critical to its future success. I want the Board to hear from Uber employees that it’s made the wrong decision in pressuring Travis to leave and that he should be reinstated in an operational role.” It then goes on to ask for colleagues’ support, including a link where they can back the effort.

Whether this push to bring Kalanick back in some capacity beyond board member will have any sway remains to be seen. And some suspect that with his aggressive, hands-on style, it will be hard for Kalanick to distance himself regardless of his position in the company. But with all of the company’s many scandals, fresh leadership is probably not a bad thing.

Source: Recode

23
Jun

Watch SpaceX launch and land a reused Falcon 9 rocket


Today, SpaceX will hopefully launch and land a Falcon 9 rocket that it has already flown to space. The launch window opens at 2:10 PM ET and lasts for two hours; launch time is currently scheduled for 3:10 PM ET. You can livestream the launch, with commentary, at SpaceX’s website.

This mission is called BulgariaSat-1 and will carry Bulgaria’s first geostationary communications satellite into a high orbit around the Earth. It’s launching from Kennedy Space Center in Florida, and a drone ship called “Of Course I Still Love You” will be waiting in the Atlantic Ocean for the Falcon 9’s first-stage landing. If things don’t go as planned, there’s another launch window tomorrow at 2:10 PM ET.

This isn’t the only SpaceX launch that’s happening this weekend, though. On Sunday, a Falcon 9 will lift off from Vandenberg Air Force Base in California carrying 10 satellites for the company Iridium. Liftoff is set for 1:25 PM PT, and the company will once again attempt to land the first stage of the rocket.

It’s not the first time SpaceX has flown and landed a flight-proven rocket; that happened on March 30. But rockets are the most expensive part of spaceflight; by reusing Falcon 9 first stages, SpaceX is cutting costs considerably. Reusable components are crucial to the future of human spaceflight, and SpaceX is certainly making steady and significant improvements in that regard.

Update, 6/23/2017, 1:05 PM: SpaceX tweeted that the ground team is taking additional time for checks. New liftoff time is 3:10 PM ET.

Via: The Verge

Source: SpaceX