Sony Xperia X Compact breaks cover, and you didn’t expect that

There’s been a lot of talk about Sony Mobile recently. After the ditching of the Z series at MWC and the launch of a new X series, the question of where the next flagship will come from has been the subject of lots of speculation.

We’ve spent plenty of time pondering what the next Xperia flagship might look like, but now we have another phone in the mix and it’s an Xperia Compact model. 

Shared by @evleaks, all we have is an image of the handset and the name: Sony Xperia X Compact. We’ve heard nothing about this phone so far and have no details of the level at which it might be pitched, so all we can use is logic.

Sony has released a number of Compact editions in the past, the most recent being the Xperia Z5 Compact.

Sony Xperia X Compact

— Evan Blass (@evleaks) August 25, 2016

Typically, the Compact will offer specs that are close to those of the flagship phone that’s its namesake. The Z5 Compact, for example, has the same Snapdragon 810, camera and features as the Z5, the difference being the display size.

For the Xperia X Compact, then, we can only surmise that it will be pinned on the Xperia X. That would suggest it’s a compact mid-ranger, with a Snapdragon 650, 23MP rear and 13MP front camera. For the display, we can only guess, but 4.6-inches was the size of the last Compact Sony released, with a 720p resolution.

From this image you can’t deduce much: it looks entirely typical for a Sony handset, with a dedicated camera button and twin front-facing speakers. Judging by the lines in the photo, it might also pick up the slightly curved display edges that the Xperia X offers.

Admittedly, this is all speculation and there’s no concrete evidence that this device is even legitimate at this point in time – except for the fact that it’s being shared by a reliable source.


Amazon announces a special forum just for car enthusiasts

Amazon has launched Amazon Vehicles, a special “automotive community” where users can research their vehicles while shopping for cars, parts and accessories. The key word here is “shopping.” It’s essentially an online destination to chat about cars and buy new parts when you need or want them.

In addition to checking out parts and vehicle information (videos, reviews, images and specs), customers can chat with other members of the community and ask their burning vehicle questions. It’s meant as an “extension” of the pre-existing Amazon Automotive store, where you can already purchase vehicles, tires, parts and various other pieces of cars you might need. Plus, you can already add information about cars you own to the Amazon Garage.

Though Amazon Vehicles seems mainly focused on getting you to search for the components you need to build the best car possible, it could be a cool resource for anyone obsessed with cars and the culture surrounding them, as well as a jumping off point for Amazon’s Top Gear spinoff The Grand Tour. Either way, if you’re looking to spruce up your car, it looks like Amazon’s going to be a pretty decent place to start.

Source: Amazon


Uber is quietly rolling out flat rates for riders in select cities

Uber is testing out a new pricing plan that could make using the service as cheap as paying bus fare in a city like San Francisco.

Uber’s current pricing structure, like Lyft’s, depends on the distance you’re traveling from one point to another as well as surge pricing, which makes your ride more expensive if you hail one during a busier time than normal. This can cause prices to skyrocket even when you’re only traveling a short distance.

The company is opting to shy away from surge pricing by offering a new flat fee per ride for groups of riders this September, according to Business Insider, which first caught wind of the new program via a new beta invite.

Uber is offering packages of 20 trips for $20 or 40 trips for $30; each ride in a package then costs a flat $2 for an UberPool ride, where you share a car with others, or $7 for an UberX, a private car.

Broken down, the 20-ride UberPool package works out to about $3 per ride: $1 of upfront package cost plus the $2 flat fare. Uber has yet to comment on the test, but it is currently rolling out the structure to six cities: San Francisco, San Diego, Boston, Seattle, Miami and Washington, DC.
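The effective per-ride cost under the reported plan is simple arithmetic: spread the upfront package price across the rides it covers, then add the flat fare. A quick sketch using the figures above (the function name is ours, for illustration):

```python
def effective_fare(package_price, rides_in_package, flat_fare):
    """Per-ride cost: upfront package cost spread across rides, plus the flat fare."""
    return package_price / rides_in_package + flat_fare

# $20 for 20 UberPool rides, then a flat $2 per ride
pool_20 = effective_fare(20, 20, 2.0)   # $1.00 upfront share + $2 fare = $3.00
# The $30-for-40 package works out slightly cheaper per ride
pool_40 = effective_fare(30, 40, 2.0)   # $0.75 upfront share + $2 fare = $2.75
```

By the same math, the 40-ride package makes sense only for heavy users who will actually burn through the rides before they expire.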

It will be interesting to see how these pricing alterations work out for Uber in the future and if the test is rolled out to other cities going forward.

Via: Business Insider UK


Facebook opens its advanced AI vision tech to everyone

Over the past two years, Facebook’s artificial intelligence research team (also known as FAIR) has been hard at work figuring out how to make computer vision as good as human vision. The crew has made a lot of progress so far (Facebook has already incorporated some of that tech for the benefit of its blind users), but there’s still room for improvement. In a post published today, Facebook not only details its latest computer-vision findings but also announces that it’s open-sourcing them so that everyone can pitch in to develop the tech. And as FAIR tells us, improved computer vision will not only make image recognition easier but could also lead to applications in augmented reality.

There are essentially three sets of code that Facebook is putting on GitHub today. They’re called DeepMask, SharpMask and MultiPathNet: DeepMask figures out if there’s an object in the image, SharpMask delineates those objects and MultiPathNet attempts to identify what they are. Combined, they make up a visual-recognition system that Facebook says is able to understand images at the pixel level, a surprisingly complex task for machines.
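The division of labor among the three systems can be sketched as a toy pipeline. To be clear, this is not the released code (FAIR’s implementations are convolutional networks, and all function names below are hypothetical); it only illustrates how propose-refine-classify stages chain together:

```python
def propose_masks(image):
    """DeepMask's role: propose coarse object masks.
    Toy version: collect the nonzero pixels of a 2D grid."""
    pixels = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v > 0]
    return [{"mask": pixels}]

def refine_masks(proposals):
    """SharpMask's role: sharpen each mask's boundary.
    Toy version: pass proposals through unchanged."""
    return proposals

def classify_masks(proposals):
    """MultiPathNet's role: attach a label to each refined mask.
    Toy version: label any non-empty mask as an object."""
    for p in proposals:
        p["label"] = "object" if p["mask"] else "background"
    return proposals

image = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]
results = classify_masks(refine_masks(propose_masks(image)))
```

The point of the staged design is that each network can be trained and improved independently while the overall system still reasons about images at the pixel level.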

“There’s a view that a lot of computer vision has progressed and a lot of things are solved,” says Piotr Dollar, a research scientist at Facebook. “The reality is we’re just starting to scratch the surface.” For example, he says, computer vision can currently tell you if an image has a dog or a person. But a photo is more than just the objects that are in it. Is the person tall or short? Is it a man or a woman? Is the person happy or sad? What is the person doing with the dog? These are questions that machines have a lot of difficulty answering.

In the blog post, he describes a photo of a man next to an old-fashioned camera. He’s standing in a grassy field with buildings in the background. But a machine sees none of this; to a machine, it’s just a bunch of pixels. It’s up to computer-vision technology like the one developed at FAIR to segment each object out. Considering that real-world objects come in so many shapes and sizes as well as the fact that photos are subject to varying backgrounds and lighting conditions, it’s easy to see why visual recognition is so complex.

The answer, Dollar writes, lies in deep convolutional neural networks that are “trained rather than designed.” The networks essentially learn from millions of annotated examples over time to identify the objects. “The first stage would be to look at different parts of the image that could be interesting,” he says. “The second step is to then say, ‘OK, that’s a sheep,’ or ‘that’s a dog.’

“Our whole goal is to get at all the pixels, to get at all the information in the image,” he says. “It’s still sort of a first step in the grand scheme of computer vision and having a visual recognition system that’s on par with the human visual system. We’re starting to move in that direction.”

By open-sourcing the project on GitHub, he hopes that the community will start working together to solve any problems with the algorithm. It’s a step that Facebook has taken before with other AI projects, like fastText (AI language processing) and Big Sur (the hardware that runs its AI programs). “As a company, we care more about using AI than owning AI,” says Larry Zitnick, a research manager at FAIR. “The faster AI moves forward, the better it is for Facebook.”

One of the reasons Facebook is so excited about computer vision is that visual content has exploded on the site in the past few years. Photos and videos practically rule News Feed. In a statement, Facebook said that computer vision could be used for anything from searching for images with just a few keywords (think Google Photos) to helping those with vision loss understand what’s in a photo.

There are also some interesting augmented reality possibilities. Computer vision could identify how many calories are in a photo of a sandwich, for example, or it could see if a runner has the proper form. Now imagine if this kind of information was accessible on Facebook. It could bring a whole new level of interaction to the photos and videos you already have. Ads could let you arrange furniture in a room or try on virtual clothes. “It’s critical to understand not just what’s in the image, but where it is,” says Zitnick about what it would take for augmented reality applications to take off.

Dollar brought up Pokémon Go as an example. Right now the cartoon monsters are mostly just floating in the middle of the capture scene. “Imagine if the creature can interact with the environment,” he says. “If it could hide behind objects, or jump on top of them.”

The next step would be to bring this computer-vision research into the realm of video, which is especially challenging because the objects are always moving. FAIR says that some progress has already been made: It’s able to figure out certain items in a video, like cats or food. If this identification could happen in real time, then it could theoretically be that much easier to surface the Live videos that are the most relevant to your interests.

Still, with so many possibilities, Zitnick says FAIR’s focus right now is on the underlying tech. “The fundamental goal here is to create the technologies that enable these different potential applications,” he says. Making the code open-source is a start.


Google offers 360-degree tours of US National Parks

To celebrate the 100th anniversary of the US National Park Service, Google has put together a collection of virtual tours combining 360-degree video, panoramic photos and expert narration. It’s called “The Hidden Worlds of the National Parks” and is accessible right from the browser. You can choose from one of five different locales, including the Kenai Fjords in Alaska and Bryce Canyon in Utah, and get a guided “tour” from a local park ranger. Each one has a few virtual vistas to explore, with documentary-style voiceovers and extra media hidden behind clickable thumbnails.

There’s plenty to sit through, along with a larger exhibit put together by the Google Arts & Culture team. The site is essentially a hub for Google’s various exhibits — each section is like a miniature museum, containing high-res photographs of important places, documents and artefacts.

While both websites are suitable for the classroom, teachers in the US might want to try a “Hidden Worlds Expedition” instead. They’re designed for the Expeditions app — a two-pronged approach to virtual school trips. While the children look around with Google Cardboard headsets, the teacher can give a running commentary using a fact sheet accessible from their smartphone or tablet. Together, it’s hoped that these resources can get people interested in the great outdoors and help them appreciate what the National Park Service is fighting to preserve on a daily basis.

Via: Google (Blog Post)

Source: The Hidden Worlds of the National Parks


Uber has already lost more than $1.2 billion this year

Uber is a fairly well-established business at this point, so you’d be forgiven for thinking it turns a profit. But today Uber held a conference call with investors and revealed that it is losing a ton of money so far this year. According to a report from Bloomberg, Uber has lost $1.27 billion in just the first half of 2016. Head of finance Gautam Gupta reportedly said that most of those losses come from compensation for its drivers worldwide.

This is far from an isolated incident — the company is infamous for losing a lot of cash throughout its existence. In 2015, Uber lost more than $2 billion total and has lost over $4 billion in its seven-year history. Of course, it’s not uncommon for startups to bleed cash while they get their footing and figure out how to make a profit; investors expect it. But the amount of money that Uber has been losing — and the fact that those losses keep growing — has little precedent.

This comes after Uber claimed profitability in the US earlier this year — but the company has been losing serious money trying to gain a foothold in China, where it has lost about $2 billion over the last two years. Bloomberg also says that the company is doing its best to maintain its lead in the ridesharing space in the US — to that end, it has engaged in a price war with Lyft, which is contributing to its losses. The company specifically said this week that it was willing to spend money to maintain its lead in the US, but the larger question of how long Uber can continue to lose this much cash remains. The company’s strategy may be to totally dominate the space before turning to profitability, but these losses will have to stop sooner or later.

Source: Bloomberg


New tourism app has IBM’s Watson guide you around Orlando

There’s plenty to do in Orlando, Florida besides infect yourself with Zika — what with Universal Studios, Disney World, the Epcot Center and SeaWorld. And a new app, backed by the supercomputing power of IBM’s Watson, will tell you how to get the most out of every one of your minutes in the Sunshine State.

The Visit Orlando app is designed to help visitors figure out what they want to do while in the city. Users can ask the app virtually any question within reason and receive helpful travel and booking tips in reply. Want to eat somewhere with live music? Want to know where you can watch your kids torment a dude making minimum wage while dressed as a corporate mascot? All you have to do is ask the app. Users will also be able to order tickets to popular attractions, find interesting things to do in their immediate vicinity and play a variety of augmented reality games throughout the city.


Unicode’s next emoji update focuses on gender and jobs

The latest proposed updates to Unicode’s emoji rules add a handful of dual-gendered jobs and give basically every human emoji both male and female versions, Emojipedia reports. Those two ladies dancing in bunny ears? Now there’s a male version. The policeman’s face? Emoji 4.0 adds a female option. The beta of iOS 10 already showcases these changes, despite the fact that Emoji 4.0 is still in draft form for two more months, during which period the public can provide feedback to Unicode.

New emoji jobs include astronaut, cook, teacher, factory worker, firefighter, scientist, judge, pilot and artist, all with male and female options. Emoji 4.0 includes 16 new professions, which expands to 32 when accounting for both genders.
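Under the hood, these gendered professions are not new single code points. In the Unicode emoji model they are zero-width-joiner (ZWJ) sequences: a base person emoji, U+200D, and an object emoji, which supporting platforms render as one glyph. A minimal sketch (using the woman/man teacher sequences as the example):

```python
ZWJ = "\u200D"            # zero width joiner
WOMAN = "\U0001F469"      # woman emoji
MAN = "\U0001F468"        # man emoji
SCHOOL = "\U0001F3EB"     # school building emoji

# A profession emoji is a person glyph joined to an object glyph.
woman_teacher = WOMAN + ZWJ + SCHOOL
man_teacher = MAN + ZWJ + SCHOOL

# Each sequence is three code points; platforms without Emoji 4.0
# support fall back to showing the person and the object separately.
```

This is also why vendors can roll the new jobs out quickly: the building blocks already exist, and only the rendering of the joined sequence is new.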

Emoji 4.0 also allows vendors like Apple, Microsoft and Google to implement a third, gender-neutral option, though this might be more difficult than a male or female swap. Unicode president Mark Davis told Emojipedia about the challenges of creating a non-gendered emoji, saying, “One of the things that designers have struggled with is what makes a form look neutral. They have a lot of difficulty coming up with a form that looks neither male nor female.”

The Emoji 4.0 draft also includes a rainbow flag and a United Nations flag, and it recommends skin-tone support for a lineup of existing emoji, including the women with bunny ears, people holding hands, and the golfer and family icons.

Emoji 4.0 should go live for vendors in November and show up on public systems in late 2016 or early 2017, according to Emojipedia.

Source: Emojipedia


DJI Launches iPhone-Compatible Osmo+ Handheld Gimbal Camera With Up to 7x Zoom

Drone manufacturer DJI today launched the Osmo+ camera, the company’s first handheld gimbal with an integrated zoom camera, which it says delivers “unprecedented stability and image quality for handheld still photography and video creation.” DJI’s new gimbal system is compatible with iOS and the DJI GO app, and packs in technology that the company compares to its new Zenmuse Z3 zoom camera used on the Inspire 1 drone.

DJI’s Osmo+ has a 7x zoom, consisting of 3.5x optical and 2x lossless digital zoom when shooting at 1080p. These specs give the system a focal length range of 22mm to 77mm, all “without sacrificing HD quality,” and allow for motion timelapse shots, advanced stabilization, and capture of 4K video at 30fps or slow-motion 1080p at 100fps.
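The quoted zoom and focal-length figures are consistent with each other, as a quick back-of-the-envelope check shows (variable names are ours):

```python
optical_zoom = 3.5
digital_zoom = 2.0
total_zoom = optical_zoom * digital_zoom     # 3.5x optical * 2x digital = 7x total

wide_end_mm = 22.0
tele_end_mm = wide_end_mm * optical_zoom     # 22mm * 3.5 = 77mm optical reach
```

In other words, the 77mm long end comes purely from the optical zoom; the digital half of the 7x only crops into the 1080p frame.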

“The Osmo+ opens up entirely new capabilities for creators who love the Osmo’s ability to deliver crisp, sharp and detailed handheld imagery,” said Paul Pan, Senior Product Manager. “From action selfies to detailed panoramas to motion timelapses, the zoom features of the new Osmo+ once again expand the capabilities of handheld photography to push the limits of the imagination.”

Users can extend the arm of the Osmo+ to take advantage of a “moving selfies” feature, which takes crisp front-facing action shots after a quick triple tap of the trigger button. The camera also takes detailed panorama shots that blend nine separate photos into one, and long exposure photographs without the need for a tripod, thanks to its advanced stabilization ability.

DJI is selling the Osmo+ for $649.00 on its website, with a collection of accessories including a bike mount ($49.00) and universal mount ($25.00), to add extra equipment like a microphone or LED light to the camera.

The company also offers a warranty plan for the Osmo+, called Osmo Shield ($65), which doubles the device’s warranty to two years with “unlimited maintenance (conditions apply) and one-time only accidental hardware damage coverage, including water damage.” For more detailed specs of DJI’s new Osmo+ camera system, check out the company’s official website.



Apple Releases iOS 9.3.5 With Fix for Three Critical Vulnerabilities Exploited by Hacking Group

Apple today released an iOS 9.3.5 update for the iOS 9 operating system, almost a month after releasing iOS 9.3.4 and a few weeks before we expect to see the public release of iOS 10, currently in beta testing.

iOS 9.3.5 is available immediately to all devices running iOS 9 via an over-the-air update.

iOS 9.3.5 is likely to be the last update to the iOS 9 operating system, introducing final bug fixes, security improvements, and performance optimizations before iOS 9 is retired in favor of iOS 10. iOS 9.3.4, the update prior to iOS 9.3.5, included a critical security fix patching the Pangu iOS 9.3.3 jailbreak exploit. iOS 9.3.5 features major security fixes for three zero-day exploits and should be downloaded by all iOS users right away.

iOS 10, coming in September alongside new iOS devices, brings a slew of new features, including a revamped Lock screen experience, an overhauled Messages app with new functionality and its own App Store, a new Photos app with object and facial recognition, a redesigned Music app, a centralized HomeKit app, and a Siri SDK for developers.

Update: According to The New York Times, today’s iOS update patches three security vulnerabilities that may have been exploited by surveillance software created by NSO Group to do things like read text messages and emails and track calls and contacts.

