16 Jun

Lyft relies on autonomous EVs to meet climate impact goals


While Uber has been engulfed in a hurricane of scandal, its ride-hailing competitor Lyft has published its climate impact goals. The company says that with the help of autonomous and electric vehicles it’ll be able to reduce CO2 emissions “by at least 5 million tons per year by 2025.”

It’s an impressive goal, and one that relies heavily on automakers to step up and actually build these vehicles. While Lyft recently partnered with nuTonomy to help bring autonomous vehicles to its network, it’s still the automakers that will have to deliver.

Fortunately, Lyft’s 2025 timeline to have “at least 1 billion rides per year using electric autonomous vehicles” is in line with what the automotive world is promising, at least for highly autonomous electric vehicles coming to market. It also helps that the company has a substantial investment from GM, which has been working to get its all-electric Chevy Bolt ready for an autonomous future.

Still, it’s good to know that Lyft is thinking about its impact on the environment, and I’m sure the timing has nothing to do with Uber’s internal shenanigans.

Source: Lyft

16 Jun

Next-gen supercomputers to get $258M in funding from Department of Energy


Why it matters to you

The Department of Energy is putting its money where its mouth is, in an attempt to put the U.S. back at the forefront of supercomputer development.

U.S. Secretary of Energy Rick Perry has detailed plans for $258 million in funding that is set to be distributed via the Department of Energy’s Exascale Computing Project. The PathForward program will issue the money to six leading technology firms to help further their research into exascale supercomputers.

AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia are the six companies chosen to receive financial support from the Department of Energy. The funding will be allocated to them over a three-year period, with the companies providing 40 percent of the overall cost themselves, bringing the total investment in the project to $430 million.

“Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” Perry said. “These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing — exascale-capable systems.”

The funding will finance research and development in three key areas: hardware technology, software technology, and application development. There are hopes that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.

The term exascale refers to a system that’s capable of one or more exaflops — in other words, a billion billion calculations per second. This is a significant milestone, as it’s widely believed to be equivalent to the processing power of the human brain at the neural level.
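To put that number in perspective, here’s a quick back-of-the-envelope comparison in Python. The laptop figure of roughly 100 gigaflops is an assumption chosen purely for illustration, not a benchmark.

```python
# Rough illustration of what "exascale" means. The laptop figure below is an
# assumption for the sake of comparison, not a measured benchmark.
EXAFLOP = 1e18          # one exaflop: a billion billion (10^18) calculations per second
LAPTOP_FLOPS = 100e9    # assume a typical laptop sustains ~100 gigaflops

seconds = EXAFLOP / LAPTOP_FLOPS
days = seconds / (60 * 60 * 24)
print(f"Work an exascale machine does in 1 second: {EXAFLOP:.0e} operations")
print(f"Time for a ~100-gigaflop laptop to do the same: {seconds:,.0f} s (about {days:.0f} days)")
```

In other words, a single second of exascale computing would keep a decent laptop busy for months.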

The PathForward program should help produce systems that are much more powerful than current standouts, with the broader goal of reasserting the U.S. as a leader in the field. In June 2016, the biannual Top500 list of the world’s most powerful supercomputers featured more systems from China than from the U.S. for the first time, with China’s Sunway TaihuLight claiming the top spot. In a few years’ time, we may well see a supercomputer spawned by this funding debut on the list.




16 Jun

AMD’s Ryzen Threadripper gets its first benchmark results — and it’s fast


Why it matters to you

According to some benchmark results, that AMD Ryzen Threadripper CPU you’re waiting for is one really fast chip.

As the CPU wars continue to heat up, both Intel and AMD have some crazy-fast processors coming soon. Intel will be shipping its Kaby Lake-X and Skylake-X processors starting this month, and AMD’s Ryzen Threadripper monster is coming in the summer in Dell’s Alienware Area-51 Threadripper Edition.

So far, while we have some of the specifications for the new chips, performance benchmarks have been lacking. That’s slowly changing, as it tends to do prior to a new component’s release, as people test the chips and those results accidentally get uploaded to various sites. That’s exactly what happened with AMD’s Ryzen Threadripper, which now appears to have a Geekbench test to look at, as Hexus.net reports.

Someone running a 16-core Ryzen Threadripper on an ASRock X399 motherboard tested the configuration using Geekbench 4.1.0. The results were uploaded and are quite fast indeed.

As Hexus.net mentions, these are likely unoptimized results, and while they compare well against other high-end processors today, there’s likely still plenty of room for improvement. By comparison, an AMD Ryzen 7 1800X at 3.6GHz with eight cores and 16 threads scored 4,208/23,188 (single-core/multi-core), and a quad-core, eight-thread Core i7-7700K at 4.2GHz scored 5,805/19,942.

We’ll get our first look at a shipping system equipped with the AMD Ryzen Threadripper in the Dell Alienware Area-51 Threadripper Edition that’s due this summer. That machine will offer up to triple-GPU options and up to 64GB of fast DDR4-2,933MHz RAM. We don’t know pricing yet for AMD’s highest-end processors, but the equivalent Intel Core X-Series CPUs cost as much as $1,000, so we’re likely looking at an expensive machine.

Even if you’re an Intel fan, you have to love the impending release of the AMD Ryzen Threadripper. Competition is a good thing, and whatever pushes Intel to release faster chips at reasonable prices does nothing but push the industry forward. Once AMD releases its upcoming Vega GPUs, the options for building a superfast gaming system will likely be better than they’ve ever been.




16 Jun

Marimba-playing robot uses deep-learning AI to compose and perform its own music


Why it matters to you

While the idea of a music-generating bot might sound of interest only to people studying music, the bigger questions it raises about computational creativity are only going to get more important as time goes on.

When the inevitable robot invasion happens, we now know what the accompanying soundtrack will be — and we have to admit that it’s way less epic than the Terminator 2: Judgment Day theme. Unless you’re a massive fan of the marimba, that is!

That assertion is based on research coming out of the Georgia Institute of Technology, where engineers have developed a marimba-playing robot with four arms and eight sticks that is able to write and perform its own musical compositions. To do this, it uses a dataset of 5,000 pieces of music, combined with the latest in deep learning neural network-based AI.

“This is the first example of a robot composing its own music using deep neural networks,” Ph.D. student Mason Bretan, who first began working on the so-called Shimon robot seven years ago, told Digital Trends. “Unlike some of the other recent advances in autonomous music generation from research being done in academia and places like Google, which is all simulation done in software, there is an extra layer of complexity when a robotic system that lives in real physical three-dimensional space generates music. It not only needs to understand music in general, but also to understand characteristics about its embodiment and how to bring its musical ‘ideas’ to fruition.”

Training Shimon to generate new pieces of music involves first coming up with a numerical representation of small chunks of music, such as a few beats or a single measure, and then learning how to sequence these chunks. Two separate neural networks are used for the work — with one being an “autoencoder” that comes up with a concise numerical representation, and the second being a long short-term memory (LSTM) network that models sequences from these chunks.

“These sequences come from what is seen in human compositions such as a Chopin concerto or Beatles’ piece,” Bretan continued. “The LSTM is tasked with predicting forward, which means given the first eight musical chunks, it must predict the ninth. If it is able to successfully do this, then we can provide the LSTM a starting seed and let it continue to predict and generate from there. When Shimon generates, it makes decisions that are not only based off this musical model, but also include information about its physical self so that its musical decisions are optimized for its specific physical constraints.”
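For readers curious what that two-network setup looks like in practice, here is a minimal Python sketch of the general autoencoder-plus-LSTM pattern Bretan describes, written with PyTorch. It is an illustrative assumption rather than Shimon’s actual code: the chunk size, embedding width, and random stand-in data are all placeholders.

```python
import torch
import torch.nn as nn

CHUNK_DIM = 64   # assumed size of one numerical "chunk" of music (say, one measure)
EMBED_DIM = 16   # concise representation produced by the autoencoder
SEQ_LEN = 8      # given eight chunks, predict the ninth

class ChunkAutoencoder(nn.Module):
    """Compresses a chunk into a compact embedding and reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(CHUNK_DIM, EMBED_DIM), nn.Tanh())
        self.decoder = nn.Linear(EMBED_DIM, CHUNK_DIM)

    def forward(self, chunks):
        z = self.encoder(chunks)
        return self.decoder(z), z

class NextChunkLSTM(nn.Module):
    """LSTM that predicts the embedding of the next chunk from the previous ones."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(EMBED_DIM, 32, batch_first=True)
        self.head = nn.Linear(32, EMBED_DIM)

    def forward(self, embeddings):            # embeddings: (batch, SEQ_LEN, EMBED_DIM)
        out, _ = self.lstm(embeddings)
        return self.head(out[:, -1])          # prediction for the next chunk's embedding

# Toy walk-through with random stand-in data: encode nine chunks, feed the
# first eight to the LSTM, and compare its prediction to the real ninth.
autoencoder, sequencer = ChunkAutoencoder(), NextChunkLSTM()
chunks = torch.randn(1, SEQ_LEN + 1, CHUNK_DIM)
_, z = autoencoder(chunks)                    # z: (1, 9, EMBED_DIM)
predicted_ninth = sequencer(z[:, :SEQ_LEN])   # train against the target z[:, SEQ_LEN]
```

In a real system, both networks would be trained on the 5,000-piece dataset, and generation would work the way Bretan describes: seed the LSTM with a few chunks, then keep feeding its own predictions back in.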

It’s pretty fascinating stuff. And while the idea of a music-generating bot might sound of interest only to people studying music, the bigger questions it raises about computational creativity are only going to get more important as time goes on.

“Though we are focusing on music, the more general questions and applications pertain to understanding the processes of human creativity and decision-making,” Bretan said. “If we are able to replicate these processes, then we are getting closer to having a robot successfully survive in the real world, in which creative decision-making is a must when encountering new scenarios and problems each day.”




16 Jun

Smartphone cameras will soon identify objects without an internet connection


Why it matters to you

Several apps use object-recognition technology, but Google’s new programming does it without requiring an internet connection.

Artificial intelligence is giving a simple photograph the power to recognize objects, faces, and landmarks — sometimes with more detail than a set of human eyes can assign. Now, more of those features will be coming to mobile devices, thanks to Google’s release of MobileNets software.

Google released MobileNets as open source software on Wednesday, opening up a family of computer-vision neural networks for other programmers to incorporate into their apps. The software is designed specifically to run on the limited hardware of mobile devices, overcoming some of the biggest obstacles to bringing computer vision to smartphones with a design that makes the most of mobile processors. The models don’t create new capabilities, but they squeeze computational imaging into a package small enough to run on a mobile device without sending data to the cloud, which means apps using them would not need an internet connection.

The software gives smartphones and tablets the ability to recognize objects and people, along with popular landmarks. Google even lists fine-grained classification — like determining what breed a particular dog is — among the possible uses.

For mobile users, the release means that third-party apps may soon be getting new or enhanced computational imaging features. By making the software open source, Google is opening it up for use in more than just Google-owned apps, and it can be adapted for a number of different uses, from reverse image searches to augmented reality.

The ability to recognize objects and faces in a photograph using a neural network is not new, but Google’s MobileNets are more efficient, creating smaller, faster models for using these features on mobile devices — even when an internet connection is not available.
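As a rough illustration of what on-device classification looks like from a developer’s point of view, here is a short Python sketch using the Keras build of MobileNet. It is one common way to try the architecture rather than the exact package Google released, and the photo path is a placeholder; once the pretrained weights have been downloaded, classification itself needs no network connection.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet import (MobileNet, preprocess_input,
                                                     decode_predictions)
from tensorflow.keras.preprocessing import image

# Load the pretrained MobileNet once; inference afterward runs entirely locally.
model = MobileNet(weights="imagenet")

# "photo.jpg" is a placeholder for any local picture.
img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")  # ImageNet labels include specific dog breeds,
                                    # the kind of fine-grained result described above
```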

“Deep learning has fueled tremendous progress in the field of computer vision in recent years, with neural networks repeatedly pushing the frontier of visual recognition technology,” wrote Andrew Howard and Menglong Zhu, both Google software engineers. “While many of those technologies such as object, landmark, logo and text recognition are provided for internet-connected devices through the Cloud Vision API, we believe that the ever-increasing computational power of mobile devices can enable the delivery of these technologies into the hands of our users, anytime, anywhere, regardless of internet connection.”




16 Jun

T-Mobile will begin prepping 5G as soon as this summer


Why it matters to you

Mobile device users interested in faster data-transfer speeds will be pleased to learn that the race for 5G is on.

T-Mobile is wasting no time in expanding high-speed cell coverage across the contiguous United States. On Thursday, June 15, just a day after the Federal Communications Commission (FCC) officially granted it the 600 MHz spectrum it nabbed in a broadcast incentive auction earlier this year, the self-styled “Un-carrier” began prepping deployments in select cities.

“[Verizon and AT&T] hope that fixed wireless will allow them to compete with big cable for your home broadband,” T-Mobile CEO John Legere said in a video earlier this year. “Of course, that should be really fun to watch, because if there’s anyone that consumers hate more than […] duopoly, it’s probably big cable.”

Subscribers will begin to see the first vestiges of coverage this summer, T-Mobile says, when it rolls out service on its 31 MHz of 600 MHz spectrum licenses. Thanks to support from the FCC and broadcasters, and an engineering timeline that’s “well ahead of expectations,” the carrier expects the network to be ready in time for 600 MHz smartphones from Samsung and other manufacturers arriving this summer.

“[We] expect more than 1 million square miles of 600 MHz spectrum the Un-carrier [sic] owns to be clear and ready for deployment,” T-Mobile said in a blog post.

Securing the spectrum wasn’t easy. In April, T-Mobile spent a massive $8 billion on blocks of wireless frequency owned by 175 TV stations. Shortly afterward, those TV stations began a 39-month transition period.

The June 15 announcement follows on the heels of T-Mobile’s earlier related pronouncements. In May, the Deutsche Telekom-owned operator pledged to launch a nationwide 5G network in three years, with the aim of wrapping up a rollout by 2020.

T-Mobile is taking a two-pronged approach to achieve that goal: It will deploy high-band, high-speed 5G in select areas, and low-frequency 600 MHz in other regions. That’s in contrast to competitors like Verizon and AT&T, both of which have tapped  “millimeter wave” technology that transmits over airwaves with narrower-than-average — and sometimes interference-prone — wavelengths.
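The practical difference between Verizon and AT&T’s millimeter-wave approach and T-Mobile’s low-band play comes down to physics: lower frequencies have longer wavelengths, which travel farther and penetrate buildings better, while millimeter-wave signals carry more data over shorter, easier-to-block distances. The quick calculation below illustrates the gap, using 28 GHz as a stand-in millimeter-wave frequency purely for illustration.

```python
# Wavelength = speed of light / frequency. Rough numbers showing why 600 MHz
# "low-band" spectrum behaves so differently from millimeter-wave spectrum.
C = 299_792_458  # speed of light in m/s

bands = {
    "600 MHz low-band (T-Mobile)": 600e6,
    "28 GHz millimeter wave (illustrative)": 28e9,
}
for name, freq_hz in bands.items():
    wavelength_cm = C / freq_hz * 100
    print(f"{name}: ~{wavelength_cm:.1f} cm wavelength")
```

The 600 MHz signal comes out to roughly half a meter per wave, versus about a centimeter for the millimeter-wave band, which is why T-Mobile pitches its spectrum as better suited for broad, nationwide coverage.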

But they have had some success. Verizon announced 5G trials in 11 U.S. markets this year, following a partnership with Samsung, Qualcomm, and others. And AT&T said that it will begin streaming DirecTV over 5G to some residential customers later this year, ahead of a “5G Evolution” program that will see high-speed wireless trials conducted in 20 major cities.

T-Mobile contends that it has a superior strategy, though.

“T-Mobile [is positioned] to deliver a 5G network that offers both breadth and depth nationwide,” T-Mobile chief technology officer Neville Ray said in a blog post. “We’re going to run at it and run hard. We’re saying that you’re going to see it at T-Mobile first.”




16 Jun

Can virtual reality make visiting the dentist bearable? Science says it can


Why it matters to you

Virtual reality may actually make the dentist bearable — if not a bit fun.

On the spectrum of fun things to do, going to the dentist is somewhere in between scrubbing the toilet and filing taxes. But growing up means doing responsible things even if you don’t want to, so the wise ones among us begrudgingly recline in an off-white chair once or twice a year and let a stranger stick their fingers into our mouths.

Visiting the dentist does not have to be so detestable, though. Researchers used virtual reality to calm patients in a new study out of the universities of Plymouth, Exeter, and Birmingham, and the patients reported promising results.

The study included three groups of participants: One that underwent a standard dental visit, one given a VR experience that simulated a walk through a city, and one given a VR experience that simulated a stroll around Wembury Beach in Devon, England. Those who strolled the virtual beach reported less pain and anxiety from the procedure than either of the other groups.

The findings suggest that VR can help but that the environment has a big impact on a patient’s stress reduction. In previous studies, the creator of Virtual Wembury, Bob Stone, and his team compared how natural and urban settings affected patients, including sounds from both environments.

“We found that when urban sound, such as moving traffic, was included with the virtual town scene, the ratings of anxiety increased, whilst those for relaxation dropped,” Stone told Digital Trends. “In contrast, with the sound of the coastal area, such as lapping waves and gentle wind effects, a reduction of anxiety and increased ratings of relaxation was revealed.”

Stone and his team have experimented with Virtual Wembury outside of the dentist’s office. He said it’s been used to assess the mental states of residents at a remote facility in the Arctic, and will soon be tested at a research habitat atop a mountain in Hawaii, where groups of volunteers undergo months of isolation to study how humans will fare on long space journeys.

“These ‘restorative environments’ are now recognized as powerful tools in the treatment of a range of psychological conditions and a number of hospital-based projects are being conducted to encourage engagement with the natural environment to promote both psychological well-being and physical recovery,” Stone said.




16 Jun

15 handy Amazon Fire tablet tips and tricks


Update: We’ve added tips for changing the wallpaper, blue light, private browsing, closing all tabs, and instant recommendations.

Amazon offers a range of tablets, from the entry-level Fire Tablet, which starts at $50, up to the Fire HD 10 for $230. They all run Amazon’s Fire Operating System, which is based on Android. If you’ve never used it before, then you might not be aware of the possibilities it offers. That’s why we’ve put together this roundup of tips and tricks. We’ve got simple tips for beginners and more advanced pointers for those looking to get a bit more out of their Amazon Fire tablet, whether it be the new Amazon Fire HD 8 or a dated Fire HD 10.

How to name your Fire tablet

If you use a number of different devices with your Amazon account, then things can quickly get confusing. Why not pick a descriptive name for your Fire tablet, rather than sticking with “Mr’s 3rd Fire”? All you have to do to change the name of your Fire tablet is pull down the notification shade from the top and tap Settings > Device Options > Change Your Device Name.

How to uninstall apps

You generally tap and hold on an app, or another piece of content, if you want to remove it from your Fire tablet. If you’re in the carousel, then you should get the pop-up option to remove or uninstall whatever you’ve long pressed on.

If you’re on the home screen, then you can tap and hold on an app icon to get the Uninstall option to appear in the top right. Now, you can tap to select multiple apps and then tap Uninstall to get rid of all of them at once.

You can also uninstall apps or games one by one by going to Settings > Apps & Games > Manage All Applications. Tap on the app you want to get rid of, and then tap Uninstall in the top right.

How to change your wallpaper

If you’d like to change the background image on your home screen, then you need to choose a new wallpaper. To do so, go to Settings > Display > Wallpaper. You’ll see a few options here, but you can also tap Pick image to use one of your own photos as your wallpaper.

How to manage notifications

Some apps on your Fire tablet will send you notifications that pop up in the notification shade. That can be useful when you have an incoming email or there’s an update worth downloading, but sometimes you’ll get notifications that you simply have no interest in receiving.

If you find that a particular app or game is sending you too many pointless notifications, then you should turn them off. You can do so by going to Settings > Sound & Notification > App Notifications. Tap on the app in question and you can block notifications completely. Conversely, if there’s an app you always want to hear from, toggle Priority on and the app’s notifications will always appear at the top of your notification shade.

How to free up storage space

You may find that you run short on storage space after having your Fire tablet for a while, especially if you use it to take photos or shoot video. If you want to check on how much storage you have, go to Settings > Storage.

If you tap on Internal Storage, you’ll get a detailed breakdown of what’s on your tablet. You can go into each category, and choose to delete files to free up additional space. We’ll look at how to automatically upload photos and videos to the cloud in the next tip.

You can also free up some space by offloading items you haven’t used in a while under the 1-Tap Archive option. Tap View Content to review the candidates for archiving and Archive Now to go ahead and do it. If you need to get the items back, you can always tap on them to download them again from the cloud.

How to back up photos and videos

To preserve your memories and keep the photos and videos you take with your Fire tablet safe, you can automatically back them up to Amazon Drive. Every customer gets 5GB for free, but Prime members also enjoy free unlimited photo storage.

If you want to turn on the automatic backup option, then go to the Photos app, tap to expand the menu via the three horizontal lines in the top left, and choose Settings. You’ll see separate options to turn Auto-Save on for Photos and Videos. You can also choose which files you’d like to back up, choose to only back up when your Fire tablet is plugged in and charging, and manage the backup for your child’s profile if you have one set up on the device.

When a photo or video has not been backed up, it will have a wee icon of a cloud with a line through it in the bottom-right corner. If there’s an arrow, then the file is currently uploading. When photos and videos have been backed up, you can access them in any browser by visiting Amazon Cloud Drive and signing in with your Amazon account.

How to filter out blue light

There’s evidence that blue light can keep you up at night, but Amazon has included a handy feature called Blue Shade that filters out the blue light from your Fire Tablet display. To enable the feature, swipe down from the top and tap the Blue Shade icon. You’ll see a notification that it’s turned on, and your screen color will change. Tap the notification to adjust the color. There’s also an option to set up Automatic Activation, so that Blue Shade turns on by itself when it’s late at night, and turns off again during the day.