
Amazon’s Handmade store comes to Europe

Amazon has become home to millions of products over the years, but hand-crafted items weren’t something you’d typically come across. That changed late last year when the online retailer launched its Handmade store in the US, giving artisan-goods company Etsy a run for its money in the process. Now, that same store has come to Europe, launching today in the UK, Germany, France, Italy and Spain.

Amazon says that customers across Europe can now shop for over 30,000 “genuinely handcrafted items” on its new store, with more being added every day. Those “factory-free” products range from baby gifts to jewelry and also include lots of interior decorations and artwork. Oh, and if you’re after furniture, there’s also a small selection of tables, chairs and cupboards to choose from.

The store features artisans from over 40 European countries, and Amazon will invite you to learn more about its makers from time to time. As on Etsy, artisans can create a storefront or profile that tells you about them, their inspirations and how their products are made.

Via: Amazon (Businesswire)

Source: Amazon UK


‘Super Mario Maker’ for the 3DS only plays in 2D

If you were hoping that the handheld version of Super Mario Maker played in three dimensions, take a seat. Polygon has stumbled across the GameStop listing for the 3DS edition, the box for which comes with a prominent caveat that it only plays in two dimensions. It’s not that much of a surprise, given how few 3DS titles really harness stereoscopy in a meaningful way — even Pokémon X and Y mostly saved it for battles. Not to mention, of course, that Super Mario Maker is the most two-dimensional of games, and certainly won’t need any extra depth. If you can’t wait to try your hand at becoming the next Miyamoto (spoiler: it’s hard), then it’ll set you back $39.99 on December 2nd.

Via: Polygon

Source: GameStop


Uber adds an advance booking option in NYC

Hailing an Uber is pretty straightforward. You launch the app, choose a pickup location and hit the “request” button. Easy. Uber’s simplicity has been to the detriment of flexibility, however. For the longest time, you couldn’t schedule a ride in advance — say, if you were planning for an early flight, or an important work meeting. That’s now changing, however. Following roll-outs in Seattle, London, and other parts of the UK, Uber is bringing its early booking system to New York City. Starting today, you can hail a ride anywhere between 15 minutes and 30 days in advance.
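That window is concrete enough to sketch: a pickup can be scheduled no less than 15 minutes and no more than 30 days ahead. A minimal validation check (the function name and logic are ours, not Uber’s):

```python
from datetime import datetime, timedelta

# Hypothetical check against the scheduling window described above:
# at least 15 minutes out, at most 30 days out.
def is_valid_pickup(requested, now):
    delta = requested - now
    return timedelta(minutes=15) <= delta <= timedelta(days=30)

now = datetime(2016, 6, 1, 12, 0)
print(is_valid_pickup(datetime(2016, 6, 1, 12, 30), now))  # True
print(is_valid_pickup(datetime(2016, 6, 1, 12, 5), now))   # False (too soon)
print(is_valid_pickup(datetime(2016, 8, 1, 12, 0), now))   # False (too far out)
```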

It’s another blow for the taxi industry. Uber has built its business on convenience, and this new feature will only further that reputation. Given the company’s track record — it’s known for fast, aggressive expansions — we suspect it won’t be long before this option is offered elsewhere too.

Source: Mashable, New York Post


Flywheel’s phone-based taxi meter arrives in NYC

Flywheel is giving taxis in New York City the modern touch. The company is bringing its software meter and app-based hailing system to the Big Apple after its initial release in a handful of cities last year. TaxiOS puts a cab’s navigation, payment, meter and dispatch system on a single phone. It can automatically calculate fares, including tolls, split the total amount between passengers, accept credit card and cash payments and email receipts to customers. Plus, it will allow taxis in the city to accept app payments made through Flywheel’s Uber-like ride-hailing app even if you flag them down in the street.
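Splitting a metered fare (tolls included) between passengers is simple arithmetic, though the leftover cents need distributing somewhere. A toy sketch of how such a feature might work (the names and rounding policy are our assumptions, not Flywheel’s):

```python
def split_fare(metered_fare, tolls, passengers):
    """Split a cab fare (tolls included) evenly, distributing any
    leftover cents so the shares sum to the exact total."""
    total_cents = round((metered_fare + tolls) * 100)
    base, remainder = divmod(total_cents, passengers)
    # The first `remainder` passengers pay one extra cent each.
    return [(base + (1 if i < remainder else 0)) / 100
            for i in range(passengers)]

# e.g. a $14.50 fare plus a $6.55 toll split three ways
print(split_fare(14.50, 6.55, 3))  # [7.02, 7.02, 7.01]
```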

“New York City is the heart of the taxi industry in the U.S., but its drivers and passengers have been left in the dark with outdated technology for quite some time,” the company said in a statement. “Now that TaxiOS has been approved for citywide operation, we’re giving NYC Yellow Cabs the tools needed to compete with rideshare giants. By providing drivers with a smartphone, we’re getting rid of the outdated, and frankly, annoying technology currently in cabs.”

Flywheel is rolling out TaxiOS to a thousand cabs in the Big Apple by the end of the year, making NYC its seventh location after San Francisco, Los Angeles, Seattle, San Diego, Sacramento and Portland. Besides modernizing the taxi meter, the startup also plans to launch a backseat entertainment system for cabs that will allow passengers to enjoy “personalized content, social channels and more” later this year.

Source: Flywheel


Force your pals to make decisions with Facebook Messenger polls

Instead of having lengthy discussions with your friends about which movie to watch or where to go for brunch, you could just offer them a poll with a list of suggestions. Starting today, you can do so on Facebook Messenger. While in a group convo, tap the Polls icon in the compose window, or hit More and then choose Poll. Create your list of choices, submit it, and your friends will be able to see the poll in the conversation and vote accordingly.

Another new feature you might see in Messenger is a chat assist that’ll apparently make it that much easier to send money. You can already do so thanks to a Messenger update last year, but now certain phrases will actually prompt an optional payment link in the conversation. So if you use words like “I owe you” or “pay me back,” you might see a send money button pop up. You can then either choose to use it or not. This feature is still in the testing stages, but it should roll out to a few users starting today. Both the poll and the chat assist test are available only in the latest version of Messenger and only for those in the US.
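The money-sending prompt presumably boils down to matching trigger phrases in a message. A toy sketch of the idea (the phrases come from the article; the matching logic is our guess, not Facebook’s):

```python
# Example trigger phrases mentioned in the article; the naive
# substring matching below is purely illustrative.
PAYMENT_PHRASES = ("i owe you", "pay me back")

def should_offer_payment_link(message):
    """Return True if the message contains a payment-intent phrase."""
    text = message.lower()
    return any(phrase in text for phrase in PAYMENT_PHRASES)

print(should_offer_payment_link("Hey, pay me back for brunch?"))  # True
print(should_offer_payment_link("See you at 8"))                  # False
```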


Build your own Lego drone with these affordable kits

Lego bricks have been the foundation of so many awesome and elaborate creations, it’s no wonder people have already had the idea to send them skyward in drone form. But while there are plenty of DIY tutorials around, as well as the odd prebuilt model, we haven’t seen anything quite as accessible and affordable as these new Lego UAV kits from Flybrix.

Available today for an introductory price of $149 (increasing to $189 after a “limited time”), the basic Flybrix kit contains everything you need to build a quad, hexa or octocopter drone in as little as 15 minutes. We’re talking plenty of Lego bricks, eight motors and propellers, a pre-programmed Arduino brain and all the other necessary bits and bobs — even a minifig captain for you to fly around. And if you want a dedicated controller to pilot your creation instead of using Flybrix’s mobile apps, the $189 deluxe kit comes bundled with one (this will jump to $249 after the introductory period).

Also, there’s no need to be apprehensive when getting to grips with the controls. If anything, you’re encouraged to crash the things, watch them disintegrate, then simply pick up the pieces and build a new one. Flybrix does, however, include ten games as part of the software package that are designed to hone budding pilots’ skills.

While these drone kits promise to be good clean fun, Flybrix’s founders — MIT, Caltech and UW Madison alums — are keen to stress the educational element. Aimed at kids aged 14 and over, the hope is building and flying these Lego drones will impart basic engineering, technology and physics knowledge, among other skills. But who are we kidding? They look just as appealing to big kids in the market for an affordable starter drone, too.

Source: Flybrix


Star and galaxy birth causes giant space blobs’ intense glow

A team of scientists has unraveled the secret behind Lyman-alpha blobs and their intense glow. These massive clouds of hydrogen gas have baffled astronomers ever since they were first spotted. Charles Steidel discovered Lyman-alpha blob 1, or LAB-1, in 2000; it’s so named because it was the first of its kind we’d ever seen. In an effort to get to the bottom of things, a team led by Jim Geach, an astrophysicist at the University of Hertfordshire in the UK, took a closer look at the massive structure.

With the help of the Atacama Large Millimeter Array of radio telescopes and the European Southern Observatory’s Very Large Telescope (VLT), the team found that LAB-1, which is thrice the size of the Milky Way, has two large galaxies in its center. They also found that the two are surrounded by a number of smaller galaxies, based on their observations using the Hubble Space Telescope and the Keck Observatory in Hawaii.

Now, here’s the important part: the two bigger galaxies are in the midst of birthing one star after another. Thanks to cosmic materials funneled in from the smaller ones, stars are forming within the duo at over 100 times the rate of our own galaxy. All the ultraviolet light given off by this rapid star formation is what’s making the blob shine so brightly.

Geach explained:

“Think of a streetlight on a foggy night — you see the diffuse glow because light is scattering off the tiny water droplets. A similar thing is happening here, except the streetlight is an intensely star-forming galaxy and the fog is a huge cloud of intergalactic gas. The galaxies are illuminating their surroundings.”

Since LAB-1 is 11.5 billion light-years away, the light reaching us today left it 11.5 billion years ago, not that long after the Big Bang. The team therefore believes we’re seeing the early stages of the formation of a gigantic elliptical galaxy (the two galaxies at its center are bound to merge) that will become the heart of a cluster. Astronomers now believe that the biggest galaxies in the universe form within bright, colossal blobs just like LAB-1.

Source: Atacama Large Millimeter/Submillimeter Array, Arxiv (PDF)


Oakley and Intel’s sunglasses put a personal trainer in your ears

Running can be a pretty lonely sport, but you may soon get a companion that’s always ready to go. Oakley and Intel teamed up to create a sunglasses-with-smart-earbuds hybrid that will tell you how you’re doing during your run or bike ride. The Oakley Radar Pace will be available on October 1st for $449. I tried out a preview unit and, even though I’m not a serious runner, I’m actually really excited about what the device can do.

I had a love/hate relationship with my former personal trainer, but it was always great to have someone to turn to for feedback on how I was performing. That coaching is the biggest draw of the Pace system. It monitors your distance traveled by tapping into your phone’s GPS and studies your heart rate if you’re wearing a third-party Bluetooth-enabled monitor. Oakley says this feature “will work with any Bluetooth enabled smartwatch or fitness tracker with a heart rate monitor.”

The Radar Pace has what Intel and Oakley call a dual-initiative system, which, in layman’s terms, means that either you or the device can start a conversation. You can ask the Pace how you’re doing, or it can chime in unprompted with advice on how to improve. And if you interrupt each other, the Pace will cache your questions while it’s speaking and get back to you once it finishes what it had to say.

During my demo, Oakley’s rep asked a slew of questions about his pace and cadence while running on a treadmill. The device told him that his stride rate was 85, and then, when he asked how good that was, it told him he needed to speed up and hit 88. All this in a calm, Siri-like voice that, let’s be real, isn’t nearly as motivating as a gruff, buff trainer yelling, “FASTER!” Still, it’s nice to know how you’re doing as you’re running so you can correct your technique during the workout rather than try to fix it afterwards.

Once you’re done, you can tell the Pace to end the workout, and if you haven’t completed the session it designed for you through the companion app (for iOS and Android), it will ask you, tentatively, if you really want to give up (you weakling, you). Through the app, you can create workouts; monitor your heart rate, cadence, distance and pace history; and overlay graphs of each. The interface I saw seemed dead simple, and appeared to have tons of information that avid runners would find useful. Novices like myself will probably be more taken by the glasses themselves, which meet IPX5 standards for resistance against rain, sweat and some splashes.

The lightweight shades don’t have a lot of components onboard. The team didn’t try to squeeze a GPS or heart rate sensors on the Pace, which helped it achieve a 56-gram weight. All it is is the existing Oakley Radar shades with a little micro-USB port on each arm. On the glasses are a touch panel on the left for music playback and Siri control, a three-mic array that Oakley says is optimized to hear you even with wind whipping by at top speed, as well as an embedded system that’s the brains of the Pace. There’s also a battery that will last four hours with continuous music playback and six hours without.

You’ll have to plug in the included earbuds, which can be bent to fit in your ear or stick out parallel to the frame when you don’t need them. During my brief time with them, the ‘buds felt like they were firmly attached to the sunglasses.

I tried on the Radar Pace and it fit snugly on my relatively wide-set face, but was still light and comfortable. Oakley doesn’t yet offer different sizes for the Pace, but said it may do so in the future. I had some trouble trying to put the frames on my face, since I had to keep the earbuds from folding out in the process, but once I figured out what was happening it wasn’t difficult to handle.

Oakley isn’t the first to bring fitness tech to your ears. Samsung, SMS Audio and Bragi are just three of the more notable companies working on earphones with heart rate monitors. Although it doesn’t use Intel’s heart rate tracking earbud technology, the Radar Pace is the first to introduce something similar to sunglasses. And while I balked at the $449 price tag (twice the average $220 price of Oakley’s existing non-tech Radar shades), the device itself is pretty unique. It appeals to a niche market of hardcore fitness enthusiasts willing to shell out for fancy gear, but I can see the Pace taking off and gaining widespread appeal if it adds more features and comes down in price. In the meantime, though, this is a wearable that hardcore joggers will likely love.


Facebook and Intel reign supreme in ‘Doom’ AI deathmatch

On the island of Santorini, Greece, a group of AIs has been facing off in an epic battle of Doom.

This is VizDoom, a contest born from one man’s idea: to improve the state of artificial intelligence by teaching computers the art of fragging. That simple notion spiraled into a battle between tech giants, universities and coders. Over the past few months they’ve all been honing their bots (known as “agents”), building up to one final death match.

Okay, it was a lot more than one match. But that doesn’t sound nearly as dramatic.

The competition is all about machine visual learning. Just like when you or I play Doom, the agents can only make decisions based on what they “see,” and have no access to information within the game’s code.

There were two “tracks” for agents to compete on, offering very different challenges. Track 1 featured a map known to the teams, and rocket launchers were the only weapons. The agents started with a weapon, but were able to collect ammo and health kits.

Track 2 was a far harder challenge. It featured three maps, unknown to the teams, and a full array of weapons and items. While Track 1 agents could learn by repeating a map over and over, agents competing in Track 2 needed more general AI capabilities to navigate their unknown environments. Each track was played for a total of two hours, with Track 1 consisting of 12 10-minute matches and Track 2 of three sets of four 10-minute matches (one set per map).

As you might have expected, the winners for both categories came from the private sector. The agent “F1,” programmed by Facebook AI researchers Yuxin Wu and Yuandong Tian, won Track 1 overall, besting its opponents in 10 of 12 rounds. For Track 2, “IntelAct,” programmed by Intel Labs researchers Alexey Dosovitskiy and Vladlen Koltun, put in a similarly dominating performance, taking the victory and winning 10 of 12 rounds. But while Intel and Facebook may have won the overall prizes, there were other impressive performances. Three standout bots, “Arnold,” “Clyde” and “Tuho” came from students.


Arnold is the product of Devendra Singh Chaplot and Guillaume Lample, two master’s students from Carnegie Mellon University’s School of Computer Science. Their team “The Terminators” competed on Tracks 1 and 2, and saw success on both. In fact, Arnold was the only agent outside of Facebook and Intel to win rounds. On Track 1, each bot had to skip one round, and F1’s departure gifted round 3 to Arnold. In round 6, though, Arnold won outright, besting F1 by 2 frags. The result never looked in doubt, though, and Arnold ended in second place, 146 frags behind F1.

Track 2 was where things got interesting. Arnold was competitive in the first map, but IntelAct already had a 19-frag lead heading into map two. On the second map, however, Arnold suddenly came alive. It won the first two rounds, closing the gap down to just 11 frags at one point, and ending the map 15 behind. But it wasn’t to be. IntelAct excelled at the final map, scoring 130 frags in just four rounds, and destroying the plucky underdog’s hopes of pulling off an upset. Arnold lost the overall count 256 to 164, again ending in second place.

Behind the scenes, though, all the work began several months ago. Arnold is one of the more ambitious efforts in the VizDoom competition, combining multiple techniques. It’s actually the result of two distinct networks. The first is a deep Q-network (DQN), a technique Google DeepMind pioneered to master 49 Atari 2600 games. The second is a deep recurrent Q-network (DRQN). It’s similar to a DQN, but it processes information in a directed cycle, using its internal memory of what’s come before to decide what to do next. Arnold’s DRQN has been augmented to help the agent detect when an enemy is visible in the frame.

In a death match, Arnold can be in one of two states: Navigation (exploring the map to pick objects and find enemies) or Action (combat with enemies), with separate neural networks handling each. The DQN is for navigation. It’s responsible for moving the agent around the level when nothing much is happening, hunting down items and other players. As soon as an enemy shows up on the screen, however, it hands control to the DRQN, which sets about shooting things. Combining these two methods, which can be trained in parallel independently, is the key to Arnold’s success.
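The two-state design described above can be sketched as a simple controller that hands off between policies (everything here is a stub for illustration; the real agent uses a trained DQN and DRQN operating on game frames):

```python
# Toy sketch of Arnold's two-state architecture: a navigation policy
# runs until an enemy is detected in the frame, then a combat policy
# takes over. The detector and both "networks" are stand-in lambdas.
class ArnoldLikeAgent:
    def __init__(self, navigation_policy, action_policy, enemy_detector):
        self.navigate = navigation_policy
        self.act = action_policy
        self.enemy_visible = enemy_detector

    def step(self, frame):
        # Hand control to the combat network only when an enemy is
        # visible; otherwise keep exploring for items and enemies.
        if self.enemy_visible(frame):
            return self.act(frame)
        return self.navigate(frame)

agent = ArnoldLikeAgent(
    navigation_policy=lambda f: "move_forward",
    action_policy=lambda f: "shoot",
    enemy_detector=lambda f: f.get("enemy", False),
)
print(agent.step({"enemy": False}))  # move_forward
print(agent.step({"enemy": True}))   # shoot
```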

But Arnold’s creators aren’t interested in pursuing an unbeatable Doom agent. Instead, they saw VizDoom as a nice application to test their ideas on reinforcement learning. Speaking by phone, Chaplot explained that the networks deployed in Arnold can be applied to robotics in the real world. Navigation and self-localization are a real challenge for machines, and the team is now focused on solving those issues. They’ve published their initial findings from Arnold and VizDoom, and are using what they’ve learned to try to create better robots.


Clyde was created by Dino Ratcliffe, a PhD candidate at the University of Essex in the Intelligent Games and Game Intelligence program. A one-person effort, the AI competed on Track 1 only. Though Clyde never won a round, it was extremely competitive throughout, besting Arnold in five rounds and, in one match, losing to F1 by only one frag. It ended the competition in third place with 393 frags, putting it 20 behind Arnold and 166 behind F1.

It could have gone so differently for Clyde. Ratcliffe began development in order to understand “what the state of the art in general video game playing” was for AI right now. He used asynchronous advantage actor-critic (A3C), a successor to the DQN approach that uses multiple neural networks learning in parallel to update a global network.
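The core A3C idea, multiple workers asynchronously updating one shared set of parameters, can be caricatured in a few lines (the gradients here are made up; real workers derive them from actor-critic losses on gameplay):

```python
# Shared global parameters that every worker updates asynchronously.
global_params = [0.0, 0.0, 0.0]

def apply_worker_gradient(grad, lr=0.1):
    """Apply one worker's gradient to the shared global parameters
    and return a fresh local copy of them (the re-sync step)."""
    for i, g in enumerate(grad):
        global_params[i] -= lr * g
    return list(global_params)

local_a = apply_worker_gradient([1.0, 0.0, 0.0])  # worker A's update
local_b = apply_worker_gradient([0.0, 2.0, 0.0])  # worker B's update
print(global_params)  # [-0.1, -0.2, 0.0]
```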

Ratcliffe told me he took a hands-off approach to training, preferring the agent to learn by itself what enemies are, what death is, what health packs are and so on. “I think it’s dangerous to start encoding your own domain knowledge into these agents as it inhibits their ability to generalize across games,” he explained. “I simply gave it a reward for killing opponents and increasing its health, ammo or armor.”

But a catastrophic failure — Ratcliffe’s PC power supply blew up 24 hours before the competition deadline — meant Clyde completed only around 40 percent of its training regimen, learning from 30 million frames rather than the 80 million necessary. The biggest downside of this incomplete training, Ratcliffe explains, is that the agent still occasionally commits suicide. It’s for this reason that Clyde got his moniker: he’s named for the weakest ghost in Pac-Man, who, rather than pursuing or holding position, just moves around at random.

Clyde learned a simple form of spawn camping

The fully trained Clyde, which wasn’t submitted, is far stronger. Ratcliffe said he’s observed Clyde using a simple form of “spawn camping,” a much-maligned tactic in multiplayer shooters where you wait at strategic points on a map and kill players as they spawn in. “It notices certain corridors that have spawn points close by and shoots more,” he explained. This behavior is apparently in the competition version of Clyde, but not as noticeable.

Before the results were published, Ratcliffe said he didn’t think Clyde would be competitive, so a third-place rosette is definitely above expectations. Ratcliffe has already moved on to a new project: 2D platformers. “I had only started looking into deep reinforcement learning around one week before the competition was announced,” he said. “I pretty much had to learn the whole field in the process of competing, and that was the point of me taking part. So I now have a solid foundation to start my own research this year.” While other agents have mastered 2D platformers, he wants to teach one to play Mario, and then try to apply that learning set to other games without retraining.


The final prize-winning spot was taken by Anssi “Miffyli” Kanervisto, an MSc student at the University of Eastern Finland’s School of Computing. His agent, Tuho (Finnish for “doom”), is a one-person effort, created with oversight from Ville Hautamäki, PhD, at the same university.

Some of Tuho’s best performances came on Track 1, where it managed to finish second place behind F1 in three rounds. It ultimately placed fourth, just outside of the prize rankings. On Track 2, it didn’t get close to challenging F1 or Arnold. It put in a solid performance, though, on the first and last map, which was enough to balance out a disastrous showing on the second map. Tuho ended up in third place with 51 frags. That’s despite spending the four middle rounds killing itself more than others.

Kanervisto built a complex agent in Tuho, with a navigation system based on multiple techniques. The most important is a dueling DQN: a single network split into two streams, one estimating the value of the current state and the other the advantage of each action, whose outputs are combined into better Q-value estimates. Tuho’s shooting system is largely based on image recognition, matching potential enemies against a manually recorded library of images.
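The dueling aggregation itself is compact: the value and advantage streams are combined with the mean advantage subtracted, which pins down the two otherwise-unidentifiable quantities without changing which action is best. A minimal sketch of the standard formulation (our simplification, not Kanervisto’s code):

```python
# Dueling-DQN aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
# In a real network, `value` and `advantages` come from two separate
# heads sharing a convolutional trunk; here they are plain numbers.
def dueling_q_values(value, advantages):
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

q = dueling_q_values(value=2.0, advantages=[1.0, 0.0, -1.0])
print(q)  # [3.0, 2.0, 1.0]
```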

It was trained to prioritize movement speed in order to get it running in straight lines, and the result, Kanervisto says, is a “well-behaving model that was able to move around and not get stuck, although it struggled with doorways.” But the entire training regimen took place on his personal computer with an Ivy Bridge i7 processor and GTX 760 graphics card. You typically need a very powerful computer, or better yet several, to train an AI at a reasonable speed. Because of this, he was limited in the size of the network and input image size.

Everyone’s a winner

It may be a mostly false cliché, but at least with VizDoom, it feels like everyone here is a winner. Arnold’s creators will receive €300 for their agent’s performance on Track 1, and €1,000 for Track 2, leaving them with around $1,450 to share. Ratcliffe earned €200 ($222) for Clyde’s third place. Tuho bagged Kanervisto €500 ($558) for its exploits.

Some are going home with prizes, but all the teams I’ve spoken to have gained a lot from their experience. Take Olivier Dressler, and his agent “Abyss II.” Dressler is a PhD candidate in Microfluidics (bioengineering) at ETH in Switzerland, and had no previous experience in AI. I asked him what he’d learned from participating in VizDoom. “Literally all my machine learning knowledge” was the answer.

Dressler based Abyss II on the A3C algorithm, and had to learn everything as he went along. This led to some big mistakes, but lots of gained knowledge. One such lesson came in training. “Shooting is required to win,” he explained, “but shooting at the wrong moment (which is nearly every moment) will result in suicide.” The map was full of small corridors, and any nearby explosion would kill the agent. Just overcoming that is a challenge in itself.

Abyss II placed seventh on Track 1, but from speaking to Dressler before the contest, it was apparent he would be happy regardless of the result. “Given the short time frame I really don’t expect my bot to perform particularly well but it has been an amazing challenge,” he added. “It has even paid off more than I expected and I can use this knowledge very well in my current work.”

VizDoom will have knock-on effects, too. Google DeepMind and other leaders in machine learning, despite not formally entering the competition, will also have learned a few things. Doom is a highly complex title, and various DQN-, DRQN- and A3C-based agents performed with great success.

I don’t know what methods Facebook and Intel employees used to win the top prizes in their categories, but it’s likely we’ll see papers published from them soon. Regardless, as is often the case with AI, the innovative techniques used to win VizDoom will serve to strengthen every researcher’s knowledge of vision-based machine learning.


watchOS 3: How to Share Activity With Your Friends

When watchOS 3 launched alongside iOS 10, it brought a handful of feature additions and speed improvements to the Apple Watch. One of the new social features is a way for users to share their Activity Rings with friends and family through “Activity Sharing.”

Mainly housed in iOS 10’s Activity app, sharing workout data nevertheless requires an Apple Watch updated to watchOS 3, and for you to be comfortable with certain people receiving live updates on your fitness activity. When you’re ready to start sharing your Activity Rings, and you’ve updated to both iOS 10 and watchOS 3, follow these steps:

Activity Sharing on iPhone

Open “Activity” on your iPhone.
Navigate to the “Sharing” tab on the bottom right.
Tap the “+” icon in the top right corner to add a friend.
The app will offer suggestions from your contacts of users who may own an Apple Watch, so you can tap one of those or type in someone specific in the text box.
You can include multiple invitations at one time, and once you have everyone included in the “To” box, tap “Send.”
From there, simply wait for your friend to accept the invitation. Afterwards, you’ll begin to see one another’s Calorie, Workout, and Standing rings in the same Sharing tab you sent the invite from. Once you amass a group of friends and family members, you can also sort the data in helpful partitions depending on what you’d like to see. Simply tap “Sort” in the top left corner of the Sharing tab and choose from Name, Move, Exercise, Steps, and Workouts as the primary focus.

Tapping on anyone in the list will bring you into a deeper menu about that individual’s Activity that day. The app will break down each ring, as well as showcase a step count, distance walked, and any completed workouts or earned achievements. At the very bottom, there are a few options to mute a friend’s notifications, hide your Activity from them, or delete them as a friend. Within your main friends list, you’ll see your own Activity as well.

An example of a friend’s Sharing card (left), and your own (right)

Activity Sharing on Apple Watch

Although Activity Sharing is more in-depth within Activity on iOS, most of the interactivity of the new social feature takes place on the Apple Watch. Whenever friends begin closing their Activity Rings, completing workouts, and earning achievements, you’ll get push notifications about each accomplishment. From these pop-ups, you can send friendly encouragements (or sly digs) about their hard work.


When you receive an Activity notification from a friend on your Apple Watch, scroll down.
Tap “Reply.”
You can choose from the traditional speech-to-text, emoji, Digital Touch, and Scribble options, or scroll more for some of Apple’s stock Activity responses.
Tap on any phrase to send it.
These responses are integrated directly into Messages in iOS 10, so if you sent “You’re on fire!” for example, Messages would provide slight context above the message with “Mitchel completed a workout.” Until they otherwise mute you for the day, or turn off your notifications completely within Activity on iPhone, each of your friends will receive a notification upon the completion of every workout and earning of every achievement.

If you visit the Activity app on Apple Watch, you can scroll down to get more detailed readouts of each Activity Ring. Swiping right-to-left shows the new Sharing tab, similar to the one on iPhone. You can scroll to see each friend, tap on them for an individualized view, and send a message through a button at the bottom of their profile.

It should be noted that Apple’s stock fitness-related phrases will only appear following a Sharing notification from a friend, and then only on Apple Watch. Any other Messages-related prompt outside of a notification from a friend will simply guide you over to Messages with all of the expected features introduced in the app in iOS 10.

There are plenty of other interesting and notable features to discover in watchOS 3, so be sure to check out the MacRumors roundups for both watchOS 3 and Apple Watch Series 2 to find out more information on each.
