
16 Jun

Spotify adds Eventbrite to its roster of concert listings


Spotify has had concert listing recommendations for a while now thanks to a partnership with Songkick. You could purchase tickets from providers such as Ticketmaster, Live Nation and, of course, Songkick itself. Starting today, you can add yet another company to the list: Eventbrite. And unlike with the other services, you can buy tickets in just a couple of taps.

As with the other concert listings, Eventbrite events will show up in your feed based on your listening habits. If you’re interested in going to one, you can tap “Find Tickets” and you’ll be kicked over to the concert page, where you can buy tickets directly. According to Eventbrite, you’ll be able to buy them without having to log in or enter a captcha, which should make it a lot faster — and easier! — to get those tickets before they sell out.

Seeing as Eventbrite bought Ticketfly from Pandora a week or so ago, it wouldn’t be surprising if this new Spotify integration came out of that deal. It would also fall in line with rumors that Spotify is exploring whether it can get into ticketing directly. After all, if Spotify already has your credit card info, it would make sense to use it for concert sales as well.

Source: Eventbrite

16 Jun

‘Steven Universe: Save the Light’ has all of the show’s charm


Cartoon Network’s Steven Universe practically begs for the role-playing-game treatment, with its deep lore, unique characters and gorgeous hand-drawn art style. The show got its first game a few years ago with the mobile title Steven Universe: Attack the Light. Now that game’s sequel, Save the Light, is finally giving the show’s fans the full-fledged RPG they’ve been waiting for. And after playing it for a bit at E3, I think they have plenty of reasons to be excited.

For one, the game simply looks amazing, with all the bright colors you’d expect from Steven Universe. It’s not an exact re-creation of the show because the world is rendered in 3D instead of hand-drawn 2D animation. But it still manages to make you feel like you’re walking through familiar environments from the series. The characters are also polygonal and don’t have as much detail as their traditionally animated versions. But at least the flat style they’re presented in — similar to Nintendo’s Paper Mario games — seems reminiscent of a 2D cartoon.

The game starts off by asking you to pick one of the Crystal Gems — the group of superpowered aliens mentoring and caring for Steven — for your party. I chose Garnet, and she was joined by Steven himself, his sword-wielding best friend Connie and his father, Greg Universe. Our quest: To figure out what happened to the Prism, the villain from the first game. And yes, you should probably play Attack the Light to keep track of what’s going on. Rebecca Sugar, the creator of Steven Universe, was also involved with the development of the game, so it could eventually tie into events in the series.

This time around, developer GrumpyFace Studios implemented a more fully realized battle system that’s reminiscent of Final Fantasy’s Active Time Battle method. Your actions are determined by a star meter, which slowly builds up over time; all the while, enemies can attack you. You also have a chance at inflicting more damage or defending yourself by hitting the action button during attacks. That’s yet another aspect reminiscent of Nintendo’s Paper Mario games — this isn’t a battle system where you just want to blindly hit the attack button.

There’s a definite strategy to every fight because every character has their own strengths and weaknesses. Just like in the mobile game, Steven plays more of a support role, while Connie and the Gems are on offense. Greg Universe is basically a one-man band, playing songs that can heal the team or help in other ways (yes, the songs affect the game’s soundtrack). There’s also a major emphasis on characters working together and building their relationships, one of the show’s major themes. During a boss battle, Steven and Connie were so simpatico they fused into their more powerful form, Stevonnie.

The big takeaway from my short play session with Steven Universe: Save the Light: fans will be pleased. But the game also feels like a solid RPG that could entice people who aren’t familiar with the series. You’ll be able to play Save the Light this fall on the Xbox One and PlayStation 4.

Follow all the latest news from E3 2017 here!

16 Jun

Microsoft Pix Camera imitates Prisma with its AI-powered filters


Microsoft Pix Camera uses artificial intelligence to make your pictures of people better. It uses algorithms behind the scenes to analyze the 10 frames it snaps for every picture you take, looking for sharpness, exposure and even facial expressions to make sure you get the very best shot. It even takes good data from the pictures it doesn’t use to enhance the photos it chooses. The app, launched last summer and just updated, now offers new filters that can help you make your photos look like real works of art.

These artsy filters may sound a lot like what the standalone app Prisma does, but Microsoft’s implementation was developed by Microsoft’s Asia research lab in collaboration with Skype. According to a company blog post, Pix Styles use texture, pattern and tones learned by deep neural networks from famous works of art, instead of altering the photo uniformly like other similar apps. Microsoft researcher Josh Weisberg told Engadget that the app uses two different techniques, run in tandem to save time, to produce these effects. “Our approach lends itself to styles based on source images (that are used to train the network) that are not paintings, such as the fire effect,” he said in an email.

The initial 11 Styles filters are named Glass, Petals, Bacau, Charcoal, Heart, Fire, Honolulu, Zing, Pop, Glitter and Ripples — more will be added in the coming weeks. Pix Paintings creates a timeline of your picture as if it were being painted in real time, giving you a short video of its creation. The Paintings feature is accessed with a button that shows up when you apply a new Style, and you can share or save the resulting short video (or GIF) it makes, too.

“These are meant to be fun features,” said Microsoft’s Josh Weisberg in a blog post. “In the past, a lot of our efforts were focused on using AI and deep learning to capture better moments and better image quality. This is more about fun. I want to do something cool and artistic with my photos.”

All this AI magic works right on your iPhone or iPad and doesn’t touch the cloud, saving your data plan and cutting down your wait time. You can still use Pix’s other features alongside the new Styles, such as adding frames and cropping your still photos. Microsoft Pix Camera is available now in the App Store, and the update is free for existing users.

Source: Microsoft Blog, Microsoft/Twitter

16 Jun

I used E3 to take a very public crash course in ‘Arms’


In hindsight, this was a bad idea. I’m rubbish at almost every kind of competitive, multiplayer game, and motion controls are a waggling “am I doing this right?” nightmare. Still, at E3, I couldn’t resist the chance to try Arms, the spring-loaded boxing game for the Nintendo Switch. So I recruited my handsome colleague (and Arms player extraordinaire) Sean Buckley to give me a crash course on the Engadget E3 stage. The results were, well, mixed. The game is loads of fun, and I love its cast of colorful characters, but I have to accept a harsh truth: I am absolutely dreadful.

If you want to see an Arms master ridicule and pulverize a beginner, click on the video player above. I apologize in advance for the random mess of jabs and grabs that follow.

Follow all the latest news from E3 2017 here!

16 Jun

‘Starlink’ blends gaming and toys in a genuinely intriguing way


Following the likes of Amiibo, Skylanders and the rest, Ubisoft’s latest take on the physical-toy/video-game hybrid, Starlink: Battle for Atlas, already feels like an exciting proposition — even if we didn’t quite get to play the title itself. We saw a hands-off presentation of the spaceship-based gameplay (customizable loadouts, pilot-based superpowers and weird alien threats), as well as how easily the add-on guns and mods appear in-game. We also got to handle the physical toys themselves. All told, it’s clear Ubisoft has done a good job.

To build them, the company apparently poached toy makers and designers from across the industry — Hasbro, to name just one example — and the result is toys that feel solid, fun and, well, nice. They feel like proper playthings that won’t suddenly break or crack. I also liked how the company has created different controller mounts for the PS4, Xbox One and Switch to ensure weight is distributed evenly when you’re playing with your ship attached to the controller. (Naturally, you can still fly the spaceship around in your hand while making swooshing noises.)

Starlink remains over a year away, which means we’re likely to see it at next year’s E3, too, before it finally launches. For now, however, we know that Ubisoft is going in the right direction with the toys. Now it has to ensure that the game itself does them justice.

Follow all the latest news from E3 2017 here!

16 Jun

Lyft relies on autonomous EVs to meet climate impact goals


While Uber has been engulfed in a hurricane of scandal, its ride-hailing competitor Lyft has published its climate impact goals. The company says that with the help of autonomous and electric vehicles it’ll be able to reduce CO2 emissions “by at least 5 million tons per year by 2025.”

It’s an impressive goal, and one that relies heavily on automakers stepping up and actually building these vehicles. While Lyft recently partnered with nuTonomy to help bring autonomous vehicles to the company’s network, it’s still automakers that will have to deliver.

Fortunately, Lyft’s 2025 timeline to have “at least 1 billion rides per year using electric autonomous vehicles” is in line with what the automotive world is promising, at least for highly autonomous electric vehicles coming to market. It also helps that the company has a substantial investment from GM, which has been working to get its all-electric Chevy Bolt ready for an autonomous future.

Still, it’s good to know that Lyft is thinking about how it affects the environment, and I’m sure the timing has nothing to do with Uber’s internal shenanigans.

Source: Lyft

16 Jun

Next-gen supercomputers to get $258M in funding from Department of Energy


Why it matters to you

The Department of Energy is putting its money where its mouth is, in an attempt to put the U.S. back at the forefront of supercomputer development.

U.S. Secretary of Energy Rick Perry has detailed plans for $258 million in funding that is set to be distributed via the Department of Energy’s Exascale Computing Project. The PathForward program will issue the money to six leading technology firms to help further their research into exascale supercomputers.

AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia are the six companies chosen to receive financial support from the Department of Energy. The funding will be allocated to them over a three-year period, with each company providing 40 percent of its overall project cost, bringing the total investment in the project to $430 million.

“Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” Perry said. “These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing — exascale-capable systems.”

The funding will finance research and development in three key areas: hardware technology, software technology, and application development. The hope is that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.

The term exascale refers to a system capable of one or more exaflops — in other words, a billion billion (10^18) calculations per second. This is a significant milestone, as it’s widely believed to be equivalent to the processing power of the human brain at the neural level.
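To put that figure in perspective, here’s a quick bit of arithmetic. The ~93-petaflop number is the widely reported Linpack score for the Sunway TaihuLight mentioned below; treat it as approximate:

```python
# Rough arithmetic on the exascale target. One exaflop is 10**18
# floating-point operations per second; the Sunway TaihuLight's
# widely reported Linpack score is roughly 93 petaflops.
EXAFLOP = 1e18        # flops
TAIHULIGHT = 93e15    # approximate Linpack Rmax, in flops

# How big a jump would an exascale machine be over today's top system?
speedup = EXAFLOP / TAIHULIGHT
print(round(speedup, 1))  # → 10.8, roughly an 11x leap
```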

The PathForward program should help produce systems that are much more powerful than the current standouts, with the broader goal of reasserting the U.S. as a leader in the field. In June 2016, the twice-yearly Top500 list of the world’s most powerful supercomputers featured more systems from China than from the U.S. for the first time, with China’s Sunway TaihuLight claiming the top spot. In a few years’ time, we may well see a supercomputer spawned by this funding debut on the list.




16 Jun

AMD’s Ryzen Threadripper gets its first benchmark results — and it’s fast


Why it matters to you

According to some benchmark results, that AMD Ryzen Threadripper CPU you’re waiting for is one really fast chip.

As the CPU wars continue to heat up, both Intel and AMD have some crazy-fast processors coming soon. Intel will be shipping its Kaby Lake-X and Skylake-X processors starting this month, and AMD’s Ryzen Threadripper monster is coming in the summer in Dell’s Alienware Area-51 Threadripper Edition.

So far, while we have some of the specifications for the new chips, performance benchmarks have been lacking. That’s slowly changing, as it tends to prior to a new component’s release, as people test the chips and those results find their way onto various sites. That’s exactly what happened with AMD’s Ryzen Threadripper, which now has Geekbench results to look at, as Hexus.net reports.

Someone running a 16-core Ryzen Threadripper on an ASRock X399 motherboard tested the configuration using Geekbench 4.1.0, and the uploaded results are quite fast indeed.

As Hexus.net mentions, these are likely unoptimized results, and while they compare well against other high-end processors today, there’s likely still plenty of room for improvement. By comparison, an AMD Ryzen 7 1800X at 3.6GHz with eight cores and 16 threads scored 4,208/23,188, and a quad-core, eight-thread Core i7-7700K at 4.2GHz scored 5,805/19,942.
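For a rough sense of how those quoted scores translate into multi-core scaling, you can divide each chip’s multi-core number by its single-core number. The scores below are the figures from the paragraph above; the scaling factors are my own back-of-the-envelope math, not Hexus.net’s:

```python
# Multi-core score divided by single-core score estimates how much of
# the extra core count shows up in practice (perfect scaling would
# equal the core count).
chips = {
    # name: (cores, single-core score, multi-core score)
    "Ryzen 7 1800X": (8, 4208, 23188),
    "Core i7-7700K": (4, 5805, 19942),
}

for name, (cores, single, multi) in chips.items():
    scaling = multi / single
    print(f"{name}: {scaling:.1f}x uplift from {cores} cores")
```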

We’ll get our first look at a shipping system equipped with the AMD Ryzen Threadripper in the Dell Alienware Area-51 Threadripper Edition due this summer. That machine will offer up to triple-GPU options and up to 64GB of fast DDR4-2933 RAM. We don’t know pricing yet for AMD’s highest-end processors, but the equivalent Intel Core X-Series CPUs cost as much as $1,000, so we’re likely looking at an expensive machine.

Even if you’re an Intel fan, you have to love the impending release of the AMD Ryzen Threadripper. Competition is a good thing, and anything that pushes Intel to release faster chips at reasonable prices moves the industry forward. Once AMD releases its upcoming Vega GPUs, the options for building a superfast gaming system will likely be better than they’ve ever been.




16 Jun

Marimba-playing robot uses deep-learning AI to compose and perform its own music


Why it matters to you

While the idea of a music-generating bot might sound of interest only to people studying music, the bigger questions it raises about computational creativity are only going to get more important as time goes on.

When the inevitable robot invasion happens, we now know what the accompanying soundtrack will be — and we have to admit that it’s way less epic than the Terminator 2: Judgment Day theme. Unless you’re a massive fan of the marimba, that is!

That assertion is based on research coming out of the Georgia Institute of Technology, where engineers have developed a marimba-playing robot with four arms and eight sticks that is able to write and perform its own musical compositions. To do this, it uses a dataset of 5,000 pieces of music, combined with the latest in deep-learning neural networks.

“This is the first example of a robot composing its own music using deep neural networks,” Ph.D. student Mason Bretan, who first began working on the so-called Shimon robot seven years ago, told Digital Trends. “Unlike some of the other recent advances in autonomous music generation from research being done in academia and places like Google, which is all simulation done in software, there is an extra layer of complexity when a robotic system that lives in real physical three-dimensional space generates music. It not only needs to understand music in general, but also to understand characteristics about its embodiment and how to bring its musical ‘ideas’ to fruition.”

Training Shimon to generate new pieces of music involves first coming up with a numerical representation of small chunks of music, such as a few beats or a single measure, and then learning how to sequence these chunks. Two separate neural networks are used for the work — with one being an “autoencoder” that comes up with a concise numerical representation, and the second being a long short-term memory (LSTM) network that models sequences from these chunks.

“These sequences come from what is seen in human compositions such as a Chopin concerto or Beatles’ piece,” Bretan continued. “The LSTM is tasked with predicting forward, which means given the first eight musical chunks, it must predict the ninth. If it is able to successfully do this, then we can provide the LSTM a starting seed and let it continue to predict and generate from there. When Shimon generates, it makes decisions that are not only based off this musical model, but also include information about its physical self so that its musical decisions are optimized for its specific physical constraints.”
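The seed-and-continue loop Bretan describes can be sketched in a few lines. To be clear, this is a hypothetical stand-in, not Shimon’s actual networks: the real system predicts learned chunk representations with a trained LSTM, while `predict_next` below just averages the window to show the autoregressive shape of the process.

```python
WINDOW = 8  # the LSTM sees eight musical chunks and predicts the ninth

def predict_next(window):
    """Hypothetical stand-in for the trained LSTM: maps the last eight
    chunk codes to a prediction for the next one (here, their mean)."""
    return sum(window) / len(window)

def generate(seed, steps):
    """Autoregressive generation: start from a seed, repeatedly predict
    the next chunk, then slide the window forward, as Bretan describes."""
    sequence = list(seed)
    for _ in range(steps):
        sequence.append(predict_next(sequence[-WINDOW:]))
    return sequence

piece = generate(seed=[1, 2, 3, 4, 5, 6, 7, 8], steps=4)
print(len(piece))  # 12: the eight-chunk seed plus four generated chunks
```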

It’s pretty fascinating stuff. And while the idea of a music-generating bot might sound of interest only to people studying music, the bigger questions it raises about computational creativity are only going to get more important as time goes on.

“Though we are focusing on music, the more general questions and applications pertain to understanding the processes of human creativity and decision-making,” Bretan said. “If we are able to replicate these processes, then we are getting closer to having a robot successfully survive in the real world, in which creative decision-making is a must when encountering new scenarios and problems each day.”



