Seagate and DJI have formed a strategic partnership, and the first data solution to come out of it is the Seagate DJI Fly Drive. The Fly Drive is an external, portable hard drive that lets drone pilots back up photos and videos as soon as they’re taken on location.
- Best drones to buy in 2017, whatever your budget
The Fly Drive has a storage capacity of up to 2TB, which is enough for more than 60 hours of 4K video footage at 30fps, and has a built-in microSD card slot so you can instantly transfer data.
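As a rough sanity check on that claim (illustrative arithmetic only, not a Seagate specification), dividing 2TB across 60 hours implies an average bitrate of about 74Mbps, which is in the right ballpark for high-bitrate 4K drone footage:

```python
# Back-of-the-envelope check: what average bitrate does
# "2 TB = 60+ hours of 4K/30fps" imply?

capacity_bytes = 2 * 10**12        # 2 TB, decimal, as drive makers count it
hours = 60
seconds = hours * 3600

bits = capacity_bytes * 8
avg_bitrate_mbps = bits / seconds / 10**6

print(round(avg_bitrate_mbps, 1))  # ~74.1 Mbps
```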
Once your photos and video footage are stored on the Fly Drive, you can transfer them quickly and easily to your Mac or Windows PC with USB 3.1 or Thunderbolt 3.
Seagate has also equipped the Fly Drive with a durable outer shell to help it defend itself against knocks and bumps while out filming in various locations or being thrown around in your bag.
Seagate and DJI also know that you’re probably going to want to edit your footage, so a two-month subscription to Adobe Premiere Pro CC will be included with every Fly Drive purchased.
- DJI Mavic Pro review: One insanely powerful, portable drone
- DJI Phantom 4 Advanced replaces the Phantom 4, almost as good as the Phantom 4 Pro
While the Fly Drive is targeted at DJI drone owners, it could of course be used to backup any data from a microSD card, whether it be from a smartphone, action camera or other drone.
UK pricing for the Seagate DJI Fly Drive has yet to be confirmed, but you’ll be able to pick one up in the US for $119.99 for the 2TB model.
Snapchat is one of the most popular apps around.
It lets you share photos and videos which self-destruct a few seconds after the recipient views them, making for a simple, quick and, sometimes, funny way to communicate. And that means you can post embarrassing moments without fear of them spreading around the internet, right?
Ahem. Think again.
Thanks to recipients’ screengrabs, many a funny foible has been shared regardless. So follow us down the road of the best Snapchat fails from around the web. There are some real corkers in here!
Delicious chocolate cake
They say the first bite is with the eye. This Snapchatter certainly regretted taking the second bite when they ended up washing their mouth out instead. At least when they swore, the punishment had already been served.
The bird selfie
Never work with children or animals. There’s a reason why these old clichés hold true. This Snapchatter felt that misery when he got a short sharp peck to the eye from his feathered friend.
The robot uprising
Automated robot vacuum cleaners are great in theory. Never lift a finger to clean your floors again – that is until man’s best friend decides to leave a present on the rug and the automated cleaning cycle kicks in shortly afterwards. There’s no poop-avoidance technology built into these mechanical cleaners. A real first-world problem if ever we’ve seen one, but it certainly leads to some hilarious stories.
Awkwardly spelt names
Coffee shops have a bit of a reputation on the internet for poor spelling of awkward names, but this one tickled our funny bone.
Hotdog legs
Hotdog legs was a bit of an internet sensation for a while – photographs of legs at the beach that could just as easily be a pair of hotdogs. This Snapchatter has taken it to a new level, not only by not using their legs but also by not even being on holiday.
Game vs girl
They say the way to a man’s heart is through his stomach. But whoever said that had never heard of FIFA or the overpowering lure of videogames. Here we can have a good chortle at an oblivious chap focussed on his game rather than on the girl.
Two wrongs don’t make a right
Sometimes, when you think things couldn’t possibly get any worse, they do. This poor Snapchatter tried dislodging their stuck sandwich by inserting more money into the vending machine to buy a drink. It didn’t end well.
Washing machine novice
Quite how you can manage to set fire to a washing machine while doing laundry is beyond us. Perhaps this person didn’t read the instruction labels properly or used petrol instead of washing powder; whatever happened, it’s certainly not great.
Don’t try this at home and please, if there’s a fire, put it out first, Snapchat later.
That is not how you do laundry
Anyone would think that Snapchatters were incapable of doing their washing without causing chaos, or at least that they’re always around when the carnage ensues. This unlucky homeowner will at least have clean carpets if nothing else.
Eggstraordinary workday disaster
We’re not eggsaggerating when we say this looks like a clucking bad day at the office. They say you can’t make an omelette without breaking a few eggs; these chaps must be trying to set a record for the world’s biggest omelette. Whatever they’re up to, it’s not going to be fun to clean up. At least things can only get better.
You had one job
A classic case of taking things too literally, this bar of soap is now no use to anyone – except perhaps to give us some chortles on the internet.
Don’t Snapchat and drive
Using your phone while driving is irresponsible and dangerous; you never know what you might hit while driving and not paying attention.
*No Unicorns were killed in the making of this Snapchat (we assume).
When life gives you lemons
Sometimes, life just really doesn’t seem like it’s going your way. All you want is a burger, but the system has other ideas.
Gravity and ice cream don’t mix
If you thought missing out on a burger was bad, just look at the size of this ice cream and the mess that followed shortly after it was purchased.
Is there any worse pain on a hot day than an ice cream gone but not forgotten?
Careful driving is important
Driving carefully helps prevent accidents, but properly securing paint tins also stops unnecessary painting of vehicle interiors. Let us hope this driver managed to make it out unscathed.
Gordon Ramsay would not be pleased
We’ve all seen some cooking disasters in our time, but this one looks more like a chemistry experiment gone wrong than an explosive chilli on the simmer.
How did that pan lid embed itself in the ceiling? Some mysteries we’ll never know the answer to. Good luck explaining that one to the landlord.
A lesson in using the proper equipment
In this photo, we see how one Snapchatter learnt a valuable lesson in why it’s important to use the right equipment for the job. This shelf above the fireplace clearly wasn’t strong enough to hold this rather large and weighty television.
Probably time for an upgrade to 4K anyway.
An inexplicable accident
This Snapchat inexplicably catches a driver somehow crashed into a lamp post in an otherwise empty car park.
An impressive display of driving incompetence if ever there was one.
When advertising goes wrong
If your business is predicting the future and knowing the unknown, then it’s no doubt fairly embarrassing when your sign collapses onto your car. Not a great advert for your business either.
Rise of machines
Another work-based disaster caught on camera. This time the printer finally had enough and fought back. From the photo, it looks like some poor person was caught in the toner explosion, as you can see two footprints in the black cloud on the floor. Must have been a Monday.
Man’s best friend?
You think you’re having a bad day? Imagine relaxing in the sunshine and then having your peace disturbed by rain, only it’s not a summer shower, it’s something far worse.
A mess in the bathroom
There is a certain established etiquette to toothpaste use and whether you believe you should squeeze from the bottom or the middle, you can certainly agree that this mess is not right.
Like hot butter through a knife
Another case of the wrong tool for the job or perhaps a classic case of buy cheap, buy twice, this Snapchat catches a bread knife being broken by butter. An impressive feat indeed.
Signs that haven’t been thought through
Some signs need to be checked for their logic. This one clearly wasn’t thought through properly. But maybe we should have a group discussion about it.
Snaps that require further explanation
Some photos require further explanation, some are better left as a mystery. What happened here? Did this woman take a sausage to the eye? Was she attacked by a hungry bear? We’ll never know.
The ignorant tourist
We adore the selfie generation and there’s nothing better than an ignorant tourist blissfully unaware of what they’re snapping.
Puns, you either love them or hate them. This one seems like a lot of mess to just create a mildly funny Snapchat message.
Making a ham of face swaps
The face swapping feature of Snapchat probably warrants an article all of its own. There are plenty of great face swap fails out there, but this one is something pretty special.
Pretty sure that’s not what he meant when he said “send nudes” and again, let us remind you not to Snapchat and drive.
When you’re hungry, you’re hungry
If you’re out late and feeling peckish, there’s always the McDonald’s drive-through. You’ll certainly see some sights, but not usually a coffin hanging out of the car in front.
Unlucky in love
There’s not much in life that pizza can’t fix, perhaps with the exception of obesity. Our cheesy friend is certainly a comfort for a loveless life, but you’d hope that your furry friend would be there for you too. Remember that sharing is caring.
Not your everyday traffic jam
We all hate getting stuck in traffic, stuck behind a slow driver stopping us from getting from A to B. But when the cause of the jam is a long-legged creature, what can you do but sit and wait patiently?
The spoilt cat
Cats have a bad reputation for being pretentious and selfish creatures, and some owners really don’t help break these stereotypes. This cat is particularly well catered to, with an iPad for gaming, a tent for privacy and no doubt a spot of the finest tuna for dinner.
After the recent WhatsApp update for iOS the messaging app now works with Siri. You can get Apple’s voice assistant to read out your messages, reply to them and even send new ones to people in your WhatsApp contact lists. It can also be used to call people through WhatsApp using a data connection.
Setting it up is fairly uncomplicated, but there are caveats to the way it works. Here then is how to get Siri to read your WhatsApp messages and a few of the things it can and can’t do.
What do I need to use Siri with WhatsApp?
Siri will work with WhatsApp as long as you have at least iOS 10.3 and WhatsApp Messenger version 2.17.20, which was updated on Friday 21 April, installed on your iPhone.
You then need to allow Siri to access your WhatsApp data. This can be okayed by voice, by just saying “Hey Siri, read my last WhatsApp message”.
The Siri page will return the following:
“I’ll need to access your WhatsApp data to do this. WhatsApp says: ‘This lets you use Siri to quickly send and read messages and make calls. Some of your WhatsApp data will be sent to Apple to process your requests. Is that OK?’”
Say or tap “Yes” to confirm.
- 13 Secret WhatsApp tricks
How do I get Siri to read my messages?
The next time you say “Hey Siri, read my last WhatsApp message”, it will read it aloud. In fact, it will read any messages you haven’t already seen in the WhatsApp Messenger application itself.
After a message has been read, Siri will ask you if you want to reply, which you can by voice. You can also ask Siri to send a message to a WhatsApp contact or call them through the app.
What can’t Siri do with WhatsApp?
There are a couple of caveats to Siri integration. For example, Siri cannot find messages you’ve already read in the app itself.
Also, if you get Siri to read a message to you it doesn’t flag to the sender that it has been read. When you read a message in the app the sender gets two blue ticks to show you’ve seen it. They don’t get that if Siri reads them aloud.
Still, it’s a great way to keep up with WhatsApp when you don’t have your hands free, such as when driving.
The artificial intelligence that we hope will exist in our lifetimes is a world away from what’s available right now. A thinking computer that knows us better than we know ourselves, and can make us better than we are, is still the stuff of fantasy. But if our goals are simple and easy to understand, does an AI really need to be that smart to get the job done? For instance, can a pair of swanky headphones with an AI personal trainer make me a better runner?
It’s a question that I’ve been musing about ever since I started testing Vi, which its creators call the “first true AI personal trainer.” It combines a pair of bio-sensing headphones and an app from Lifebeam, a military biosensor firm founded by a pair of former Israeli air force pilots. Lifebeam’s side hustle is to take those same sensors and bake them into consumer products like cycling helmets and baseball caps.
Here, the company has added that technology to a pair of Bluetooth earphones, along with a raft of other fitness tracking equipment. Buried inside the “halo” that sits around your neck is a six-axis gyroscope, barometer and accelerometer. In addition, the $249 device will work as a regular set of headphones, packing a microphone and Harman/Kardon-branded sound.
The device was a smash hit when it launched on Kickstarter last year, earning nearly $1.7 million, far beyond its initial goal of $100,000. It’s worth saying that the hardware concept isn’t new, since Intel and SMS did the same job with 2015’s BioSport. Similarly, Jabra came out with its Sport Coach headphones, although those didn’t have the optical heart rate sensor. Lifebeam is hoping that Vi, the AI personal trainer with voice coaching that ships with the product, will be the big draw for its headphones.
Once you’ve punched in your vital statistics and connected Vi to Healthkit, Google Fit or Strava, you’re ready to run. Vi needs 120 minutes of data before it truly begins to work, so your first few runs are more about training the app than training your own body. Despite suffering a damaged right knee that was close to giving out, I braved the indifferent English weather and got testing.
There are four options from which you can choose: Distance Run, Time Run, Free Run and Cycling, with the app also letting you pick how chatty you want Vi to be. Selecting Free Run and asking Vi to “lead the way,” I began to run through the app’s tutorial level. It begins well, with the voice actress — there is no way to change Vi’s gender — speaking in a faux robot voice before admitting that it was a joke and chatting more naturally.
During these initial runs, Vi will run through a pre-recorded spiel about how the system will work to coach you through your journey. You can tap the right earbud to activate the microphone and bark vocal commands at the platform. So if you ask “How am I doing?” you’ll be told your heart rate, speed, pace and step rate. Simply asking for a specific statistic, such as heart rate, will get Vi to offer that statistic on its own.
You can also set audio beacons at specific distances during your run that gently ping when you approach them. The closer you get, the louder the ping, and I set my beacons to launch every kilometer so that I had a sense of achieving a goal. It’s a neat effect, and one that helps you perk up when your stamina might otherwise begin to flag.
During my runs, Vi told me that my stride length was too long and my feet weren’t hitting the ground often enough, risking fatigue and injury. As such, it offered up a feature called “Step to the Beat,” in which the background music is silenced in favor of a drum track. You’re then asked to make sure your foot is striking the ground in time with each drum hit, something that I found quite enjoyable.
Some of the things that were promised in Vi’s initial Kickstarter pitch didn’t make themselves apparent during my short testing period. For instance, on a day with bad weather, Vi was meant to suggest that I work out indoors instead of braving the elements. That didn’t happen, although the app offers no options to exercise beyond running or cycling, so perhaps the feature was quietly dropped.
Overall, the experience of running with Vi was pleasant, and I can only attribute that to the power of motivation that it offers. Even though it’s little more than a series of pre-recorded statements being played at random, there is something about being cheered on — a sense of support — that’s useful. It would probably wear out its welcome after long enough, but I suspect judicious use of the system could be somewhat useful for fair-weather runners.
Unfortunately, some elements of the Vi experience weren’t quite as smooth or enjoyable. The microphone is woeful at picking up voice commands when you’re running in an area with plenty of ambient sound. Running at full pelt down a busy road, all Vi would do was apologize for not quite picking up the words I was saying.
I got plenty of funny looks when I had to slow down, and then stop, trying to speak clearly enough to be understood. I’m sure the patrons of a local supermarket thought I was ill after seeing me bellow “Step to the beat!” into my own neckband. Unfortunately, after a while I gave up and intentionally sped up my run to trigger the feature on its own.
The app itself is beautiful and minimalist and offers plenty of ways to dive deeper into your data at the end of a run. However, there’s still plenty of work to be done with the music controls, which barely work at all. Playing library music was an utter chore, as the carousel bounces around when you’re trying to skip a spoken-word track in the playlist. Perhaps the designers paid more attention to the Spotify Premium integration than making sure it worked with your own music.
Then there’s the fact that the AI that Vi boasts simply isn’t that intelligent, and the system is doomed never to live up to its promises. After all, it’s simply a slightly more sophisticated pattern recognition machine with pre-recorded audio prompts. You could just as easily listen to a specific coaching podcast and get a similar experience, albeit with a little more customization.
Admittedly, you won’t get the learning experience that Vi offers, and it will tweak its coaching as it learns more about your running. If you set fat-burning as a goal, and you’re getting closer to your fat-burning zone, Vi will offer some judicious encouragement to get you there. Alternatively, if it knows that you’re close to flaking out and slowing down, the system will help you maintain a steadier, more consistent pace. The more data you give it, the better the insights you get back.
Overall, I enjoyed using Vi, because it’s a perfectly elegant way to run, and the voice coaching is friendly and pleasant enough. I’d like to think that I’d continue to use it, in the hope of getting ever smarter recommendations and using it to become a better runner. But one thing, above all of my other gripes, stops me from recommending this to anyone: the price. After all, $249 is a lot of money to spend on a device that does something you can achieve for a hell of a lot less.
If you really want a pair of Bluetooth halo earbuds with Harman/Kardon sound, you can pick up LG’s Tone for a lot less. There are plenty of apps that offer voice coaching, as well as podcasts that’ll do a similar job — sometimes for free. Since you have to bring your phone along with you, it’s not as if you need the motion-tracking abilities of the Vi headset anyway.
If you’re a hardcore fitness enthusiast who wants a premium product, cash be damned, then sure. Vi is a beautiful piece of equipment that offers good sound and doubles as a decent pair of running ‘phones. But for everyone else, it’s probably not worth the extraordinary premium.
Does an artificial intelligence need to be smart to make you a better runner? No, but then, it’s not really an artificial intelligence, is it?
Google’s recognition of context goes beyond conversing with Assistant, it would seem. The search juggernaut is working on a feature that “thinks” of what you were looking at in Chrome and makes it available in other apps. It’s called “Copyless Paste” for now, and a glimpse at the code documentation gives a few clues as to how it works:
“Provide suggestions for text input, based on your recent context. For example, if you looked at a restaurant website and switched to the Maps app, the keyboard would offer the name of that restaurant as a suggestion to enter into the search bar. The data is indexed locally and never sent to the server. It’s disabled in incognito mode.”
So, it isn’t the same as copying a string of text from a recipe or article and then Android automatically dropping it in an email or a text message, but it still sounds pretty useful. The info won’t leave your phone, either. VentureBeat writes that you can activate the feature if you’re running Chrome Canary on your device, but that you might not notice any changes.
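The behaviour described above can be modelled as a tiny, on-device-only index of recent context: page titles are recorded locally (never in incognito), and the most recent one is surfaced as a keyboard suggestion. This is a purely illustrative sketch; the class and method names below are invented, not Chrome’s actual code:

```python
from collections import deque

class CopylessPasteIndex:
    """Illustrative model of a local recent-context index.
    Nothing leaves the device; incognito browsing is never recorded."""

    def __init__(self, max_entries=50):
        # Bounded store: oldest context falls off automatically.
        self._recent = deque(maxlen=max_entries)

    def record_page(self, title, incognito=False):
        # Mirror the documented behaviour: disabled in incognito mode.
        if not incognito:
            self._recent.appendleft(title)

    def suggest(self):
        # Offer the most recently viewed context, if any.
        return self._recent[0] if self._recent else None

index = CopylessPasteIndex()
index.record_page("Luigi's Trattoria - Menu & Reservations")
index.record_page("secret gift ideas", incognito=True)  # not stored
print(index.suggest())  # "Luigi's Trattoria - Menu & Reservations"
```

Switching to a maps app would then query `suggest()` and offer the restaurant name in the search bar, per the documentation’s example.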
What’s more, the code review suggests that this won’t be available on low-end devices. With I/O coming up next month, it’s pretty likely we’ll hear more about this feature soon, and what it’ll mean in the context of Android O.
Source: Chromium (1), (2), (3)
When someone mentions “VR filmmaking,” they’re usually referring to 360-degree video, or some kind of video game environment where the story unfolds around you. One developer in Japan, however, has taken the concept in a vastly different direction. ‘Make it Film’ is an experimental project by ‘MuRo’ that lets you operate a camera inside a VR environment. Like a film director, you can frame up the shot and then hit record as characters converse or take part in an action scene. It was built on top of Unity3D and currently works with the Oculus Rift and Touch controllers.
The idea is certainly novel. For aspiring filmmakers, it can be difficult to obtain expensive gear, find talented (but cheap) actors and travel to exotic locations. Inside VR, however, that’s less of a problem. Provided you have the digital assets, you can go anywhere and shoot for as long as you like. You can also stop filming whenever you need a break and choose particular lighting or weather conditions on a dime. Mess something up? Simply reset the scene and start again. If you’re a student, or someone that’s interested in the art of cinematography, a tool like this could be invaluable.
MuRo’s example film (above) uses ‘Adam,’ a collection of 3D, high-resolution characters and environments made by Unity. He says the tool could, in theory, work with other scenes and characters too. So if you have the necessary models, you could make home movies or recreate scenes with any of your favorite heroes and actors. The real genius, though, is using the Touch controllers to manually move the camera rig around. We just hope future versions will have virtual cranes, sliders and drones too.
Via: Prosthetic Knowledge
Source: MuRo (YouTube)
Two years ago, Google introduced Jump, a VR platform that uses cloud-based software and smart stitching algorithms to make 360-video creation easier than ever before. It also partnered with GoPro to make the Odyssey, a 16-camera rig that was the first-ever camera to have the Jump software built right in. Now, Google is ready for the next generation of Jump, and for that, it’s partnered with a new company and made a brand new camera. The device is called the Yi Halo, it’ll retail for $16,999 and Google is touting it as the “next generation Jump camera.”
The Yi Halo was built in collaboration with Yi Technology, a Xiaomi-backed GoPro competitor mostly known for its action cameras. Indeed, the Halo is comprised of 17 different Yi 4K cams — 16 along the circumference and one placed in the middle facing up, which should result in seamless stitching of upward-facing views. The camera can generate 8K x 8K stereoscopic VR content at 30fps, as well as 6K x 6K content at 60fps. Yi’s CEO Sean Da says that the camera gives creators full control of a myriad of adjustments like ISO, white balance and flat color mode.
What’s more, the entire rig is only 7.5 pounds. I lifted and carried one around at an event in Google’s San Francisco office, and was surprised at how light it was. This portability is a unique thing in the VR world, where professional 360 cameras are often heavy and cumbersome. There’s also a built-in battery that promises around 100 hours of continuous footage, or you can plug it into the wall with an AC adapter. Plus, Da says that the Yi Halo can accommodate batteries from third parties, which would allow for greater flexibility when you’re in the field.
On the base of the Halo is a small touch screen panel where you can monitor the rig’s various components. At a single glance, you’ll be able to know the battery life and SD card status of each camera. What’s more, there’s even a companion Android app that works as a remote control and a live preview tool. And as for upgrading the firmware, that can be done either via the aforementioned panel or through the app.
“The camera was designed with the software in mind,” says Emily Price, a Jump product manager. An example of this is the position of that upwards-facing camera. It’s not on top of the rig; it’s actually located in a slightly sunken position in the middle. “We made this geometric decision because it leads to much better results in automatic stitching,” says Price.
“The upward-facing camera’s view of the world is pretty different from the cameras on the main ring,” adds Price. “If it were on top, it wouldn’t stitch as smoothly as you’d like.” The solution was to drop the camera as low as possible in the center without the view being occluded by the rig’s construction. I had a look at some video shot with the upwards-facing camera located in those two positions — on top and in the middle — and the difference between the two is pretty stark. With the former, the ceiling looked a little warped, while on the latter, you couldn’t see much wrong at all.
Which brings us to the Jump software, which is really what makes the Halo special. The Jump Assembler combines Google’s computer vision technology and cloud infrastructure to take the inputs from the 17 distinct sensors and return one seamless 360-degree video. Thanks to Google’s algorithms, the stitching is completely automated. It’s a job that used to take weeks, but can now be done in just a few hours.
“We’re investing in tools and onset workflow as well,” adds Price. For example, the team has come up with something called a “rough stitch,” which allows you to create a preview stitch locally on your laptop in mere minutes. This way you can check out what you’ve shot while you’re still on set and know that you’ve got the shot before you move on to the next scene. Price says the software also does exposure correction and tone mapping, which helps adjust for the different lighting conditions on each side of the Halo’s 17 different cameras.
And that’s not all. Other features of the Halo include a built-in timelapse mode, automatic depth map generation and crisp focus even when the subject is close to the camera. Armando Kirwin, a co-founder of Milk VR and a VR director who’s tried the Halo for a few months, also had high praise for the device’s “dynamic range” and “color reproduction,” especially when compared to the previous Jump camera. It also performed great in low light, he said. “In our internal technical evaluations, this camera is just better across the board,” he said.
Julina Tatlock, also a VR director and the founder of 30 Ninjas, a VR content creation company, says that she appreciates the depth map data provided by the Halo. “You can see where you can move your head,” she says. “It’s far more immersive.” It’s also far easier to add CG or other kinds of motion graphics, says Tatlock.
The factor that sets Google and the Yi Halo apart from other VR camera companies, Kirwin says, is that they actually worked with film creators from the get-go. “Simple things like exposure control — it took us like two years to get that,” he says, referring to early 360 cameras that didn’t have that function. “But if they bothered asking a film student, they’d have known about it immediately.”
The Yi Halo is just the latest 3D 360 camera to make the news recently. Just last week, Facebook released its own additions to the Surround 360 family with the x24 and the x6. Both Facebook cameras have the ability to shoot in six degrees of freedom, which allows viewers to move freely in live action scenes as if they were in VR. But Kirwin, who was one of the first directors to use the original Surround 360 camera, says editing this sort of video requires hiring specialized editors, which costs a lot of money. “It’s a totally different approach” to what Google is doing, he says.
Indeed, while Facebook and others like Nokia and Lytro aim for the high-end, Google wants its Jump cameras in the hands of the average content creator. “The Jump program is really designed around enabling a large number of creators today,” says Jake Mintz, a VR video product manager at Google. “We’ve made very specific design choices to make this technology accessible and scalable to a wide number of people.”
That’s why Google is also announcing a new program called Jump Start, which aims to give over 100 filmmakers free access to a Jump camera (at least until their film is made) and unlimited use of the Jump Assembler over the next year. So if you don’t fancy forking over that $17,000 just to get your hands on the Halo, you can send in an application to the program starting today. The deadline to apply is May 22nd.
“We want to reduce the complexity and cost in creating this kind of content,” says Price. “By offering seamless stitching and saving the costly manual work of post-production, we can enable creators to think of the world in 360, rather than in slices.”
Source: Yi Technology
Tesla is working hard to make it easier for its customers to charge their electric cars on the go. Back in February, the company revealed that it’ll be doubling its Supercharger network in 2017. It doubled down on that claim this morning, with the announcement that it’ll have 10,000 Superchargers available by the end of 2017. Additionally, Tesla is planning to build larger Supercharger sites to fit “several dozen” cars, and it’s increasing the amount of Destination Charging locations (for slower refueling) to 15,000 globally, up from 9,000 last year.
The Supercharger expansion is particularly impressive, since it took the company five years to build 5,400 chargers. It makes sense, though, as Tesla is gearing up for the launch of its mass market Model 3 vehicle, which is set to begin production in July. If the company wants EVs to be accepted by consumers, it needs to make it easy for them to refuel. It’ll still take you around 40 minutes to get an 80 percent charge on the current Superchargers, but the company has hinted that next-generation units will be significantly faster.
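For a sense of what “80 percent in around 40 minutes” implies, consider a hypothetical 75kWh pack (an assumed figure for illustration; the article doesn’t state pack sizes): the average charging power works out to about 90kW.

```python
# Illustrative arithmetic only: average power implied by charging
# a hypothetical 75 kWh pack to 80 percent in 40 minutes.
pack_kwh = 75          # assumed pack size, not from the article
charge_fraction = 0.8
minutes = 40

energy_kwh = pack_kwh * charge_fraction
avg_power_kw = energy_kwh / (minutes / 60)
print(round(avg_power_kw, 1))  # 90.0
```

A meaningfully faster next-generation Supercharger would therefore need to sustain well above that average, which is why the hint at faster units matters.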
While the Destination Charger expansion is a bit less exciting, since they refuel much more slowly, it’s still very useful for overnight charging. And, in addition to larger Supercharger sites, Tesla claims it’s planning to position some of those away from highways so local drivers can refuel easily. Tesla owners still have to deal with a bit more inconvenience than gas drivers, but at least it looks like the company is working quickly to ease their pain.
Spotify, the most popular music streaming service, might be getting ready to jump into the hardware game — if a few job postings are to be believed. The company recently posted a handful of openings that make clear references to designing and selling hardware direct to Spotify users. A posting for a senior hardware product manager says that the eventual hire would work on an initiative to “deliver hardware directly from Spotify to existing and new customers.” It also indicates that the hardware would be “a category defining product akin to Pebble Watch, Amazon Echo, and Snap Spectacles.”
Spotify indicates that this would be a “fully-connected” hardware device; the senior product manager would define both the internet-connected hardware requirements as well as its software. The position is located at Spotify’s Stockholm, Sweden headquarters, but the listing doesn’t give specifics on what the hardware would do.
A few other listings give some clues, however. A product manager posting calls for someone specifically to work in the “voice” team. “Voice is quickly becoming a key interaction mechanism for control of digital devices and services,” the posting reads. “As a Product Owner for voice you will be responsible for the strategy and execution of Spotify’s voice efforts beyond our core apps.” This listing notes that this candidate will be working with the “major external platform providers within the voice space,” which would indicate this job involves making Spotify work better on platforms like Amazon’s Echo and Google Home.
But the job isn’t exclusive to working with external companies. “You will be working closely with our User Interaction team and Product Managers for applicable platforms to ensure they are voice enabled in a consistent way to the benefit of our end users,” the listing reads. While there’s no guarantee that this person would work on the hardware initiative referred to in the other job posting, someone in this role could certainly take part in the project.
Lastly, Spotify is also looking to hire a product director who would focus on “natural language understanding.” That candidate would work on building teams “dedicated to building the components of the Spotify conversational interface.” That’s certainly something that could be utilized in any voice-connected hardware that the company could be working on.
Piecing together a company’s future moves based on job postings is naturally fraught with peril, but it looks like Spotify is at least investigating what it can do in the hardware space and how it can improve voice interactions that work with its software. Whether that means we see an official Spotify connected speaker with voice commands remains to be seen, but it’s obviously an area of intense interest for many companies right now. It’s also one of intense competition, so it’s hard to say if it’ll be worth it for Spotify to jump in and compete with the likes of Amazon and Google.
Via: Zatz Not Funny
Source: Spotify (1), (2), (3)
Apple and Alphabet aren’t the only tech heavyweights branching out into self-driving vehicle technology. Amazon wants in on the action, too. Sources speaking to the Wall Street Journal say Amazon formed a small (roughly 12-person) team about a year ago to explore the possibilities of autonomous vehicles. This doesn’t guarantee you’ll see Amazon-made driverless vans roaming the streets, to be clear. Rather, the group is an “in-house think tank” looking at ways to take advantage of self-driving tech for the company’s online shopping business. With that said, smarter delivery vehicles are a real possibility.
It’s no secret that Amazon is taking increasing control of its delivery chain, whether it’s buying its own jets or managing ocean freight — anything to lower costs and ship your goods a little bit faster. Autonomous vehicles are a logical extension of that. If it reduced the need for human drivers, delivery trucks could operate almost non-stop. This would sadly eliminate jobs, but it could also mean getting that big Amazon Prime order in one day versus two.
And to some extent, research into autonomous tech might be necessary for Amazon’s drone delivery plans — that’s still the company’s big focus, according to the tipsters. While there aren’t specific details, Amazon may “coordinate” drones with ground-based self-driving vehicles (say, vans that double as launchpads) to help with deliveries. The driverless team may have a lot of work to do, but there could be a day when your Amazon purchases are handled almost exclusively by machines.
Source: Wall Street Journal