California will start charging electric vehicle fees in 2020
While some states are still offering incentives to electric vehicle buyers, California will soon become the biggest state to start charging fees for EV ownership. California is estimated to account for about half of the country’s EV sales, so the state is keen on recouping some of the money it won’t be making from gasoline taxes.
The fees will take effect starting with 2020 model year plug-in vehicles, Autoblog reports. Those vehicles will carry a one-time $100 registration fee upfront, followed by an annual registration fee that varies with the market value of the vehicle. On the low end, the fee is $25 for a vehicle valued at less than $5,000, but anyone with a $60,000-plus plug-in vehicle will be paying $175 per year to keep their tags up to date. On the other hand, California has the highest gas prices in the country, and even on the high end, those registration fees will end up costing less than three or four tanks of gas. Internal combustion fans won’t be getting a break either: California’s gas tax will hit 30 cents per gallon by November 2017. All told, the package of new fees and gas taxes is expected to generate $52 billion over 10 years, which will be put back into the state’s budget for infrastructure repairs.
Elsewhere in the US, EV fees have already caught on. According to the Sierra Club, 10 states plus Washington, DC already have similar fees while eight others are currently considering similar legislation.
Via: Autoblog
Source: California Senate
The promise of self-driving cars starts with better ‘eye-sight’
San Francisco’s Pier 35 usually hosts cruise ship guests boarding and disembarking from their giant floating hotels. It’s a cavernous building hundreds of meters long, which actually makes it the perfect indoor facility for demoing what 22-year-old Luminar CEO Austin Russell hopes is the future of LiDAR. The company has developed a higher-quality laser sensor that just might make it the darling of the autonomous car world.
Laser-based LiDAR — along with cameras and radar — is one of the main components on most semi-autonomous vehicles. It creates a real-time, three-dimensional map of the area it’s scanning and relays that information back to the car’s self-driving system. The technology is so important that Alphabet’s Waymo is suing Uber because it believes the ride-hailing company is using some of its circuit board designs. In short, LiDAR is a big deal for the future of driverless cars, and Luminar thinks it can do better than what’s already on the market.
Inside the giant building, Luminar CTO Jason Eichenholz demos on a screen what current LiDAR systems see. The walls and columns are visible, and when someone rides by on a bike, a few moving pixels track the movement. Then he turns on Luminar’s system, and the difference is impressive. But it’s not just the quality of the items being scanned that matters; it’s how far out the sensor sees. Luminar placed a black panel 200 meters away from the system, and it was clearly visible on the display. Typical systems see only about 30 meters ahead, and when a car is barreling down the highway at 70 miles per hour, the farther a sensor can see, the better.
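For a sense of what that extra range actually buys, here’s a quick back-of-the-envelope calculation. The 70 mph speed and the 30-meter and 200-meter ranges come from the demo above; the rest is simple kinematics, not anything Luminar disclosed.

```python
# How much warning does a LiDAR's range buy a car at highway speed?
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def warning_time(range_m: float, speed_mph: float) -> float:
    """Seconds between first detecting an object and reaching it."""
    return range_m / (speed_mph * MPH_TO_MS)

typical = warning_time(30, 70)   # roughly 1 second -- barely a human reaction time
luminar = warning_time(200, 70)  # roughly 6.4 seconds -- enough to brake smoothly
```

At 70 mph a 30-meter sensor gives the car less than a second before it reaches whatever it just spotted, while 200 meters buys more than six seconds, which is the whole argument for longer range.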

Luminar’s big jump in quality and distance took Russell five years and if you’re keeping score, that means he started the company at age 17. “I looked really deeply into the LiDAR space and saw there was a severe lack of innovation — for even the past decade — in terms of advancing the performance to any significant extent with any new architecture,” Russell said. So he built his own from the chip-level up.
Russell says his LiDAR has 50 times greater resolution and 10 times longer range than legacy systems, and he noted that in the process Luminar had to find “2,000 ways not to make this LiDAR.” After a closed demo in the giant building, Luminar drove me down San Francisco’s Embarcadero. The parade of cars, bikes and pedestrians was not just visible; the detail of those people and machines was higher than I’ve seen on competing systems. It was a rainbow-colored world of lines and shadows that, when translated by a computer, is the difference between an autonomous vehicle seeing a box and recognizing that mass of pixels in the distance as a small dog.

But is Luminar’s technology enough to overtake Velodyne, the established leader in the space? Unless an automaker has its own proprietary system, there’s a good chance the sensor on top of its cars is from Velodyne. Even Uber is using Velodyne while it builds its own LiDAR system (the one that triggered the Waymo lawsuit). But unlike the five years it took to get its LiDAR just right, Luminar is moving quickly to get the sensor onto the market.
While Luminar wouldn’t comment on the final price of its laser-based sensor, Russell did say that the company wants to create hardware that’s affordable for all makes of vehicles. He told Engadget: “We tried to be able to make this affordable long term for all types of cars, from the Honda Fit all the way up to the Bentley.” Meanwhile, the high-end Velodyne HDL-64E costs about $75,000, with the less expensive but also less powerful Puck clocking in at about $8,000.

Luminar says it’s been working with 100 partners in the autonomous driving space (but won’t name any of them), and those companies will be receiving units very soon. Those partners will essentially beta test the system and share data and thoughts with Luminar. The company says it will then build and ship an additional 10,000 units from its Orlando facility by the end of the year. That’s incredibly aggressive, but if it can pull it off, it’ll be almost as impressive as the hardware the company demoed on the streets of San Francisco.
HP introduces new Pavilion laptops at… Coachella
HP has picked an unlikely event for the launch of its new Pavilion laptops: Coachella. As wacky as it sounds, the company actually has a somewhat logical reason for the choice. It’s showing off the laptops’ new stylus input support, and is betting that this feature will appeal to the (presumably) expressive, artsy folks at the music festival. And it’s luring them in by setting up DIY bandana-designing stations at its air-conditioned spot in the festival’s Colorado Desert venue.
The Pavilion laptops come in convertible (x360) and non-convertible (Notebook) options. I tried designing my own bandana on one of the new HP Pavilion x360s last month, and found the stylus comfortable, responsive and capable of drawing smooth lines of varying thickness. The new pen supports Windows Ink, and will be included with the new convertible Pavilions. In the well-lit demo room, the Pavilion’s screen was colorful and easy to read, and I had no trouble seeing the intricate pattern I was sketching.
The Pavilion Notebooks have also been updated with stylus input support, as well as new Intel and AMD processors going up to Core i7 or A10 for high-end performance. You’ll also have the option of adding an infrared camera for Windows Hello facial-recognition logins. If not, the standard webcam should suffice for your Skype calls, and it offers an 88-degree field of view compared to the typical 77-degree angle on other systems.

HP paid particular attention to the design of the new Pavilions, which now come in several eye-catching colors. The x360 convertibles are available in blue, red, pink, gold and silver, while the clamshell Notebooks have gold and silver options. The company also slimmed down the bezel around the display, and you can pick a screen resolution of HD (720p) or full HD (1080p). The lack of a higher-res (2K or 4K) option is disappointing, though. Also, the metal keyboard deck features color-matched speaker grilles, giving the system a pleasing accent. I liked the sleek design and the attractive hues, although the preview units I checked out felt somewhat hefty.
Since they’re meant to be multi-purpose rigs, the Pavilions sport the standard ports you’d expect, including a media card reader and jacks for USB-C, USB-A and HDMI.
The Pavilion series is the most affordable of HP’s personal laptop brands, which include the Envy line and the high-end Spectres. The new Pavilion Notebooks are available in 14-, 15.6- and 17.3-inch versions, while the x360s come in 11.6-, 13- and 14-inch models.
They’re all highly configurable, and their prices differ depending on what you pick. A 15-inch Pavilion Notebook with an AMD chip, 8GB of RAM and an HD touch screen costs $600, while a 14-inch convertible with Intel’s Core i5, 8GB of RAM and a full HD IPS touch screen comes in at $720. You can also opt for more capacious HDDs (up to 1TB), speedier SSDs (up to 512GB) or dual storage (both HDD and SSD). For the truly hardcore, there’s also the option of adding NVIDIA discrete graphics chips for smoother gaming. Of course, all these extras come at a price, although HP didn’t provide specifics.
If you happen to be at Coachella this year, you can stop by the HP tent to check out the new laptops or, let’s be real, design and print your own free bandanas in cool, air-conditioned comfort.
Burger King TV Ad Highlights Voice Recognition Challenge For Smart Speakers
Burger King made headlines yesterday when it began running a 15-second television ad made to intentionally activate Google Home speakers and Android phones within earshot.
The simple commercial involves someone posing as a Burger King employee who leans into the camera to ask the question “OK Google, what is the Whopper burger?” – a request designed to prompt Google virtual assistants nearby to start reading the burger’s Wikipedia entry. To the relief of many, Google quickly moved to prevent its Home speakers from responding to the ad by registering the sound clip and disabling the trigger.
Voices on TV have been inadvertently triggering smart speakers for months now, but the ad represents the first attempt by a company to purposely hijack users’ devices for commercial gain. One likely reason Burger King chose to target Google Home rather than iPhones is that unlike Apple’s Siri, the virtual assistant cannot be trained to recognize a particular user’s voice, which highlights one of the main issues with connected smart speakers currently on the market.
As it stands, Google Home can only be used with a single Google account at a time, and lacks the ability to differentiate users by their voice patterns. Google has said its ultimate goal for Home is to be able to identify different people in the same room – and hints of multi-user functionality have briefly appeared in the Google Home app – suggesting some sort of voice identification feature is likely coming.
Likewise, Amazon is known to be working on a similar system that would allow its Echo range of smart speakers to distinguish between individual users based on the sound of their voices. According to sources, Amazon’s feature would work by matching the person speaking to a pre-recorded voice sample, or “voice print”, to verify the speaker’s identity. Currently, Echo users can set up multiple profiles and jump between them, but the user must say “switch accounts” or use the Alexa app to do so.
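Neither Amazon nor Google has published how such matching would work, but speaker verification is commonly sketched as comparing a fixed-length “embedding” of the live audio against a stored voice print. Everything below — the toy vectors, the embedding idea, the 0.75 threshold — is an illustrative assumption, not Alexa’s or Google Home’s actual pipeline.

```python
import math

# Toy speaker-verification sketch. A real system would derive these
# fixed-length embeddings from audio with a trained model; here the
# vectors and the threshold are made up purely for illustration.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches_voice_print(live_embedding, stored_print, threshold=0.75):
    """Accept the speaker only if the live audio resembles the enrolled print."""
    return cosine_similarity(live_embedding, stored_print) >= threshold

alice_print = [0.9, 0.1, 0.3]    # enrolled "voice print"
live_sample = [0.88, 0.12, 0.28] # same speaker, slight variation
stranger    = [0.1, 0.9, 0.2]    # a TV ad, say
```

The point of the sketch: a command from an enrolled voice clears the threshold, while a stranger’s (or a TV commercial’s) does not, which is exactly the check the Burger King ad exploited the absence of.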
Last May, it was rumored that Apple would launch an Echo-like speaker with Siri integration, enabling users to play music, get news headlines, and more, without needing to interact with their iPhone.
According to one source, Apple’s speaker could process many typical iPhone Siri commands. For example, users may be able to ask the device to read e-mails, send text messages and tweets, and stream content from Apple Music. Apple is even said to have considered integrating map information into the speaker, allowing the device to notify a user when it’s time to leave the house for an appointment.
However, all of these capabilities would require Siri to know exactly who it is interacting with – a feat which, in a communal setting, could pose significant technical challenges for a company that puts a premium on privacy. In this sense, the appearance of an Apple smart speaker may ride on Apple’s ability to make user voice recognition as secure as biometric authentication features like Touch ID.
On the other hand, Apple could bring in biometric features to augment the speaker’s user identification system. Indeed, some of Apple’s speaker prototypes in testing are said to include technology related to facial recognition, potentially aided by Apple’s acquisition of Faceshift and Emotient, which may help the device act based on who is in a room or a person’s emotional state. How secure such systems would be in these scenarios remains unclear, however. According to rumors, Apple’s smart speaker-esque device could release later this year.
Tags: Siri, Amazon Echo, Google Home
Discuss this article in our forums
First Course Launches to Develop Apps For Android in Apple’s Swift Language
An Italian school has launched the first Android-specific course in Apple’s increasingly popular open source Swift programming language.
The Swift University based in Reggio Emilia claims to be the first, globally, to offer the course for Android, and aims to show students how to use the programming language across both platforms while avoiding the limitations associated with cross-platform middleware such as Xamarin.
At the heart of the course is the use of a bespoke integrated development environment (IDE), rather than a converter, that allows coders to program in Swift instead of Java while using the normal classes of the Android SDK. The course summary, through Google Translate, is as follows:
By attending this course you will learn how to program apps for Android devices via the Android SDK but written in the Swift language. Thanks to this innovative course, students can easily port iOS projects to Android and/or develop a multi-platform app without using a middleware. This course is suitable for those who are already programmers in Swift, Java, C#, Objective-C and other programming languages. Topics are updated to the latest version of the Android SDK.
Swift was introduced by Apple in 2014, with the aim of replacing Objective-C as an easier-to-learn language, and garnered major support from IBM and a variety of apps like Lyft, Pixelmator, and Vimeo. Since then it has steadily risen to prominence among both emerging and established developers, and last month broke into the top 10 in the TIOBE Index, which ranks programming languages by popularity.
Apple has actively promoted Swift as ideal for children who are keen to code, demonstrating its gentle learning curve in Swift Playgrounds, an app that teaches children how to use the language. Apple has been updating and refining Swift since its debut, and unveiled Swift 3.1 on March 27.
(Thanks, Marcello!)
Tags: Swift, Android
Discuss this article in our forums
Apple said to be secretly developing a breakthrough device to help diabetics
Why it matters to you
The rewards would be great for any company that succeeds in developing an effective device capable of non-invasively monitoring blood sugar levels. And the world’s nearly half a billion diabetics would certainly welcome it, too.
Apple is reported to be conducting secret research that could ultimately lead to a major breakthrough in how diabetics test their blood sugar levels.
The work is geared toward creating a sensor capable of non-invasively monitoring blood sugar levels, three people with knowledge of the research told CNBC this week.
Aimed at diabetics who regularly have to go through the laborious and uncomfortable procedure of pricking their finger to test blood sugar levels — or have a glucose monitor embedded beneath the skin — a sensor that can perform the same function would be a significant step forward for the medical industry as well as hugely beneficial for diabetics themselves.
Such a device, which apparently involves optical sensors that shine light through the skin to measure blood sugar levels, would act as a constant monitor and flag when levels drop too low, a situation that can quickly turn extremely serious for a diabetic if not treated.
Small team
As of last year, Apple reportedly had around 30 individuals — including “a small team of biomedical engineers” — conducting the research at “a nondescript location in Palo Alto,” a few miles from the tech giant’s Cupertino headquarters, according to CNBC.
The research is reported to have started at least five years ago after the late Apple co-founder Steve Jobs expressed an interest in the idea.
If the sources’ accounts are accurate, it seems that Apple is making real progress toward its goal, with feasibility trials reportedly already taking place at clinical sites in and around San Francisco. Consultants have also been hired to examine regulatory issues related to the new technology.
Standalone device?
Apple CEO Tim Cook hinted in a 2015 interview that the company was working on some kind of new medical-related technology and since then Apple has posted several job ads for biomedical engineers and other similar positions. At the same time, Cook suggested that the new technology might not be incorporated into the Apple Watch because he didn’t want “to put the watch through the Food and Drug Administration (FDA) process,” suggesting any blood sugar monitor could land as a standalone device.
Other tech firms are known to have been carrying out similar research. Google, for example, said in 2014 that it was working on developing a smart contact lens capable of measuring blood sugar levels, and a year later it revealed it was also working with glucose monitoring company Dexcom to develop a wearable monitor.
To create such technology is clearly a monumental challenge, but if, as the sources suggest, Apple is already testing out its work, it may not be too long before the company reveals precisely what it’s been up to. Diabetics, for one, would certainly love to hear about it.
Yes, that’s a selfie drone flying around in your local Apple Store
Why it matters to you
Apple is so impressed with ZeroZero Robotics’ little selfie drone that it has become the device’s exclusive seller and is now demoing it in its retail stores.
If you visit an Apple Store starting this week and hear a loud buzzing noise in the vicinity, don’t worry — it’s not a giant mosquito flying toward your head at speed. Instead, it’ll most likely be the Hover Camera Passport … in which case you might still want to watch your head.
The diminutive selfie drone, which launched toward the end of last year, will go on sale shortly at Apple Stores around the world, though starting this week staff will be happy to offer interested visitors an in-store demonstration.
Apple inked an exclusivity deal with Beijing-based ZeroZero Robotics to become the sole seller of the Passport. The tech giant is offering a bundle (one drone, two batteries, a charger, an adapter, and an easy-carry bag) for $500 — that’s $100 cheaper than what it’s selling for on the Chinese firm’s own web store.
“We’re thrilled to bring autonomous flying photography into the hands of consumers who are excited by truly innovative technology that impact their everyday lives,” ZeroZero Robotics’ CEO Meng Qiu Wang said in a release. “By selling in Apple, we want more customers to capture their memories in a near-effortless way through breathtaking perspectives that can only be achieved through Hover Camera Passport.”
The Passport drone is marketed as a fun way to take photos and is “like having a 65-foot selfie stick,” according to Digital Trends’ own review of the machine. Built-in features include face and body tracking, orbit mode where it circles around you, and a 360-degree “Spin” for panoramic video shots. It’s lightweight and small, and has protected propellers, too, so you can just grab it and go without fear of having your fingers sliced off. With that in mind, Enrique Iglesias will probably want the Passport.
Newly released software updates to coincide with the Apple Store launch include an all-new user interface for the companion iOS/Android app designed to offer drone novices simple control, automated media editing, and compatibility with iMovie and Final Cut Pro X.
You’ll find ZeroZero Robotics’ quadcopter flying around Apple Stores in the U.S., Canada, the U.K., China, and Hong Kong starting this week, with Apple Stores in other countries also likely to have it buzzing about before too long.
Researchers are breeding fluorescent bacteria to uncover landmines
One of the many tragedies of war is the danger that persists long after a conflict formally ends — dangers like abandoned minefields peppered with active, deadly ordnance. Buried landmines threaten the lives of ordinary people near former battlefields all over the world, and disarming them has always been a dangerous effort. Now, researchers at the Hebrew University of Jerusalem are working on a way to make landmine identification easier and safer. The trick isn’t to build a better metal detector; it’s to cultivate bacteria that glow in the presence of deadly explosives.
A paper by the Hebrew University researchers describes a system that uses specially engineered bacteria that respond with a fluorescent signal when they detect the kind of explosive vapors that seep out of old landmines. This signal can be picked up by a laser scanning system that identifies the location of buried mines. Because the laser scanning portion can be operated remotely, the system could potentially remove the human element from a large part of the detection process.
If this method of detection can be proven to work consistently and be successfully deployed, it could help create a safer and more accurate method for clearing old minefields. The project also just sounds kind of cool — what’s better than using genetically engineered microorganisms and lasers to save lives?
Via: EurekAlert!
Uber’s ‘Hell’ program tracked and targeted Lyft drivers
In its quest to ensure Lyft remains in second place, Uber reportedly ran a program that exploited a vulnerability in its rival’s system. According to The Information, the ride-hailing company’s covert software-based program, called “Hell,” spied on its staunchest competitor’s drivers from 2014 to early 2016. It was called Hell because it served as the counterpart to “God View,” or “Heaven,” Uber’s in-company app that tracked its own drivers and passengers. But unlike God View, which was widely available to corporate employees, Hell was known only to top executives and select data scientists and personnel.
The program apparently started when Uber decided to create fake Lyft rider accounts and fool its rival’s system into thinking they were in various locations around a city. Those fake riders were positioned in a grid to give Uber a view of the entire city and all of Lyft’s drivers within it. As a result, the company could see info on up to eight of its competitor’s nearest drivers per fake rider.
While keeping an eye on its rival’s cars, though, Uber noticed that Lyft’s drivers were identified by special numbered IDs that, unlike Uber’s own tokens, never changed. That allowed the team running Hell to learn each driver’s habits, which, in turn, helped them figure out which drivers practiced “double-apping.” In other words, they used the data they gathered to pinpoint the Lyft drivers who also drove for Uber.
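The Information didn’t publish any of Uber’s code, but the underlying weakness is easy to sketch: when an ID never changes, observations collected at different times can be joined into a longitudinal profile of one driver. The data and field names below are entirely hypothetical.

```python
from collections import defaultdict

# Why persistent IDs matter: each fake rider's query returns nearby drivers.
# Because a Lyft driver's ID never changed, sightings taken days apart could
# be linked into a single profile. All data here is made up for illustration.

sightings = [
    {"driver_id": "L-4417", "day": "mon", "zone": "SoMa"},
    {"driver_id": "L-4417", "day": "tue", "zone": "SoMa"},
    {"driver_id": "L-9080", "day": "mon", "zone": "Mission"},
    {"driver_id": "L-4417", "day": "wed", "zone": "Marina"},
]

profiles = defaultdict(list)
for s in sightings:
    profiles[s["driver_id"]].append((s["day"], s["zone"]))

# Driver L-4417 now has a three-day movement history. Under a rotating-token
# scheme like Uber's own, the same driver would appear under a fresh ID each
# time, and no such profile could be assembled.
```

That join across days is all it takes to infer a driver’s schedule, and, cross-referenced against Uber’s own roster, to spot the double-appers.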
Travis Kalanick and his select employees then executed a plan meant to entice double-appers to drive exclusively for Uber. First, the Hell program would send more riders to double-appers than to those who drove solely for Uber. Then, the company would give them special bonuses for meeting a certain number of rides per week. Considering the program’s data revealed that 60 percent of Lyft’s drivers were double-apping, Uber ended up doling out tens of millions of dollars a week in bonuses. Clearly, loyalty didn’t pay for the drivers who stuck exclusively with Uber during those years.
The program eventually ended in early 2016, after Lyft raised $1 billion and started expanding to more cities, growth that would have sent the program’s bonus costs through the roof. Still, while Hell was active, Kalanick would apparently often praise those who ran it and comment on how perfectly it fit his company’s culture of “hustle.”
We’re still waiting for Uber’s response to our request for comment. As for Lyft, a spokesperson told The Information: “We are in a competitive industry. However, if true, these allegations are very concerning.” A couple of law firms that have worked with Uber in the past also told the publication that the company could face a number of allegations, including breach of contract, unfair business practices, misappropriation of trade secrets and violation of the federal Computer Fraud and Abuse Act.
Source: The Information
GM’s self-driving car operation in San Francisco will keep growing
Every carmaker is pushing to develop autonomous vehicles, and GM is no different. Despite having its technology ranked second by Navigant Research and announcing that a Super Cruise-equipped Cadillac is on the way, the company plans to do more. Bloomberg reporter Dana Hull tweeted a link to a California tax credit filing (one saving GM $8 million) showing that the company plans to grow its San Francisco operations from 485 employees last year to 1,648 by 2021. That office is home to Cruise Automation, a startup GM acquired last year for $1 billion that had previously built self-driving kits for the Audi S4 and A4.

In a statement provided to Axios, GM said its team and test fleet will grow so it can continue developing the safety-enhancing technology. Along with others like Uber and Waymo/Google, it appears GM will be testing autonomous cars on the West Coast over the next few years while it perfects the technology.
Source: Axios, Dana Hull (Twitter), CCTC Agreement (PDF)