One developer can play ‘Street Fighter II’ anywhere with his AR port
While the mainstream applications of augmented reality have been pretty limited, there has been a bevy of tech demos showcasing what the technology could do with better support. One example was the AR version of Super Mario Bros. that a few folks tried out in 2017. But now, developer Abhishek Singh has made something of a sequel, this time with an updated Street Fighter II AR port.
We included a video above. It shows Singh playing the game all over the city, including in parking ramps, indoors, and even on the city streets themselves. It’s powered by Apple’s recently released ARKit for iPhone, but it’s not currently an official app, nor is there anywhere to download it for yourself. Even so, it’s a remarkably well-executed concept that Singh says is a tribute to his younger years.
“I loved playing this game on an actual arcade as a kid with my sister and wanted to experiment with multiplayer shared AR experiences, and this kinda just popped in my head,” Singh told CNET. “Also realized the linear motion would work well in this kind of shared experience and I also thought it would look cool.”
Even better, perhaps, is that the whole thing even has multiplayer support — a rarity for AR games.
“One player sets up the stage by pointing their phone at any flat surface (streets, tables, etc.), the stage automatically adjusts for smaller surfaces and then the second person points their own phone at the same surface and joins in,” Singh said. He describes it as digital gladiatorial combat wherever you want it.
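Singh hasn’t published any code, but the flow he describes — one device detects a flat surface, scales the stage down for smaller surfaces, and a second device joins on the same anchor — can be sketched roughly. Everything below (names, the 3-meter nominal stage width) is hypothetical illustration, not taken from Singh’s app:

```python
from dataclasses import dataclass

STAGE_WIDTH_M = 3.0  # assumed nominal width of the Street Fighter stage, in meters

@dataclass
class DetectedPlane:
    """A flat surface found by plane detection (street, table, etc.)."""
    width_m: float
    depth_m: float

def stage_scale(plane: DetectedPlane) -> float:
    """Shrink the stage to fit smaller surfaces, as Singh describes,
    but never enlarge it past full size or collapse it entirely."""
    fit = min(plane.width_m / STAGE_WIDTH_M, 1.0)
    return max(fit, 0.1)

# Player 1 anchors the stage on a small table; player 2 would reuse
# the same anchor by pointing their phone at the same surface.
table = DetectedPlane(width_m=1.5, depth_m=0.8)
print(stage_scale(table))  # 0.5 -> stage rendered at half size
```

In a real ARKit app, the detected plane and shared anchor would come from the framework’s plane-detection and multipeer features rather than a hand-rolled dataclass.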
“I hope to release it publicly but need to figure out copyright issues with Capcom before doing anything,” Singh told Digital Trends in an email. “I enjoy AR, think it has a lot of potential to create amazing experiences and bring experience to life in our world, so while I am bullish on its future, only mass consumer adoption can guarantee that and we aren’t quite there yet.”
The Game Developers Conference and its companion, the Virtual Reality Developers Conference, take place in San Francisco next week. The two conferences should bring a wave of new game announcements and similar projects that could prove to be pretty compelling in their own right. But for now, this is perhaps one of the more impressive implementations of the technology.
Editors’ Recommendations
- AR stickers are now available on most Android smartphones with Motion Stills
- Escape reality with the best augmented reality apps for Android and iOS
- ‘Pokémon Go’ levels up its augmented reality abilities with Apple’s ARKit
- Google kills augmented reality project Tango to focus on ARCore
- Apple AR glasses: News and rumors about ‘Project Mirrorshades’
Google Maps is open to mobile AR game developers using Unity
Google said on Wednesday, March 14, that Google Maps now supports the Unity engine to develop mobile games with an augmented reality component. The company’s new APIs — tools for building software — will turn buildings, roads, and parks into “GameObjects.” In turn, developers can add their own textures, styles, and customizations to these objects so they blend in with the game’s theme. This will reduce the rendering overhead caused by generating an entire virtual world on a global scale.
“Game studios can easily reimagine our world as a medieval fantasy, a bubble gum candy land, or a zombie-infested post-apocalyptic city,” Clementine Jacoby, product manager of Google Maps APIs, said in a statement. “With Google Maps’ real-time updates and rich location data, developers can find the best places for playing games, no matter where their players are.”
According to Google, developers using the Unity game engine now have access to more than 100 million 3D buildings, landmarks, parks, and roads scattered across more than 200 countries. Google Maps removes the need for developers to model the player’s physical environment themselves, no matter where players are located across the globe, and it provides a quick means of locating gameplay areas that are safe, pleasant, and fun for AR-based experiences.
“Building on top of Google Maps’ global infrastructure means faster response times, the ability to scale on demand, and peace of mind knowing that your game will just work,” Jacoby adds.
Google Maps support in Unity follows the launch of the company’s ARCore mobile augmented reality platform just before Barcelona’s Mobile World Congress show in February. Built specifically for Android, the kit allows developers to create apps supporting augmented reality on more than 100 million Android smartphones although, right now, it’s only compatible with 13 different handsets ranging from the Google Pixel to the Samsung Galaxy S8. More devices will support Google’s proprietary AR platform later this year.
Augmented reality is a method of placing digital objects in the real world. For instance, Pokémon Go uses the phone’s camera to generate a live video feed on the screen while rendering a virtual Pokémon into that space. You can walk around the virtual creature, approach it, or move away, and it remains in its original position while scaling with the environment.
But that is a simplistic case. Google’s new Google Maps APIs take that idea a big step further by relying on existing real-world objects while allowing developers to transform their appearance. That means wherever gamers move in physical space, the “augmented” environment stays consistent from every angle and position on the screen.
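The “stays in its original position while scaling” behavior comes down to keeping an object’s world coordinates fixed and recomputing its on-screen size from the camera’s distance every frame. A toy pinhole-camera sketch (not Google’s or Niantic’s implementation; the 1,000-pixel focal length is an arbitrary assumption):

```python
def apparent_size(object_height_m: float, distance_m: float,
                  focal_px: float = 1000.0) -> float:
    """Pinhole projection: an object's on-screen height in pixels
    shrinks in proportion to its distance from the camera."""
    return focal_px * object_height_m / distance_m

# The virtual creature stays put in world space; halving your
# distance to it doubles its apparent size on screen.
far = apparent_size(0.5, 4.0)   # 125.0 px
near = apparent_size(0.5, 2.0)  # 250.0 px
print(far, near)
```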
The beauty of Google Maps is that not only do developers have access to all the current mapping information, the AR-based experiences will stay updated as Google continuously adds more buildings, roads, parks, and so on. According to Next Games CEO Teemu Huuhtanen, the added support for Google Maps will make “exploring your surroundings a breathtaking experience.”
Google will display a live demo next week during the Game Developers Conference in San Francisco.
Editors’ Recommendations
- Learn how to use Google Maps with these handy tips and tricks
- Waze vs. Google Maps: Which map app should you be using?
- Google kills augmented reality project Tango to focus on ARCore
- Who you gonna call? ‘Ghostbusters World’ AR game will slime phones in 2018
- ‘Far Cry 5’ brings back the series’ map editor, and it’s deeper than ever
Lyft team-up will build self-driving car systems on a large scale
If Lyft is going to translate self-driving car experiments into production vehicles offering rides, it’s going to need some help — and it’s on the way. The company has formed a partnership with Magna that will see the two jointly fund and develop autonomous car systems both for Lyft and the broader automotive industry. Lyft will lead the development, while Magna will take charge of manufacturing as well as contribute its know-how in vehicle systems, driver assistance and safety. The two hope to make the technology available to the industry on a large scale in the “next few years.”
Magna is also pouring $200 million into Lyft on top of the money needed for the project.
Both sides have strong incentives to team up. If Lyft is going to counter Uber and make driverless ridesharing a staple of its fleet, it needs self-driving systems that are refined and economical at the kind of volumes it needs. Magna, meanwhile, is one of the most influential automotive suppliers in the business — this fast tracks the development of self-driving platforms it can sell to car makers. Whatever money the two invest right now could reap many rewards down the line.
Source: Magna
Brain interface adds sense of presence to bionic limbs
It’s been possible for a while to control bionic limbs with your brain, but there’s been something missing: kinesthetic feedback, or the nervous system signals that give your limbs a sense of presence. You frequently have to stare at your artificial arm to ensure that you’re grabbing a cup instead of operating on instinct. That elusive quality may soon be a regular staple of prosthetics, however. Researchers have developed a neural interface that generates this feedback and makes bionic limbs feel like they’re part of your body.
The trick was to create a two-way interface that vibrates nerves at reinnervation sites (places where amputated nerves have been redirected to remaining muscles), convincing the brain that the limb is really there. Patients in tests could not only perform actions without looking at their prosthetic arms, they could often perform them as elegantly as someone with natural limbs. And importantly, the prosthetics feel like their limbs, not just electronics strapped to their body.
This feedback isn’t very complex at the moment, and researchers intend to develop subtler signals that more closely replicate what you get from natural arms. However, it doesn’t have to be perfect to be effective — it just has to establish that instinctual link that would otherwise be lost. Amputees could spend less time training themselves to use prosthetics and more time actually using them.
Via: Wired
Source: Science, Cleveland Clinic
Switzerland’s new air traffic control system to put drones, planes in same skies
Switzerland is way ahead of the rest of the world when it comes to drones. The country has already used drones to transport lab samples between hospitals and given the necessary approvals for drone deliveries in populated areas. Now, the nation appears to be in line for another world first as it integrates drones into its air traffic management system.
This system will track drones and register drone operators in order to make the airspace safer, whether it’s for unmanned aerial vehicles or, well, much larger manned aerial vehicles.
The initiative will spread its wings in June 2018, when Swiss air traffic control operator Skyguide will start merging its own data and traffic management applications with the digital airspace-mapping platform developed by California-based AirMap Inc. AirMap is the world’s leading global airspace management platform for drones, with existing integration with drones developed by DJI, 3D Robotics, Yuneec, and others.
The project is part of U-space, a European effort aimed at laying down the digital infrastructure to support safe and secure access to European skies for millions of drones. The part of the project debuting in June is described as a pilot phase (no pun intended), after which Skyguide and AirMap will work together to develop a road map for deploying a fully operational drone traffic management system in 2019.
“With Swiss U-space, Switzerland aims to safely open the skies for drone commerce,” said Ben Marcus, CEO of AirMap, in a statement.
There are a number of innovative aspects to the project, such as blockchain-based registration for users and drones, real-time alerts for drone pilots, and dynamic geofencing and instant digital airspace authorization.
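“Dynamic geofencing” boils down to checking, in real time, whether a drone’s position falls inside a restricted volume. A minimal sketch of a circular no-fly-zone check, assuming great-circle distance is precise enough at these scales (the Zurich coordinates and 500 m radius are made-up illustration, not Skyguide or AirMap data):

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(drone, center, radius_m):
    """Alert if the drone is within radius_m of a no-fly zone's center."""
    return haversine_m(*drone, *center) <= radius_m

# Hypothetical 500 m zone near Zurich Airport's reference point.
zrh = (47.4647, 8.5492)
print(inside_geofence((47.4650, 8.5490), zrh, 500))  # True
```

A production system would use polygonal zones, altitude limits, and live airspace data rather than a fixed circle.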
“The establishment of a U-space is the key to improve drone operations’ safety and to satisfy the security and privacy concerns of our citizens,” said Christian Hegner, director general of Switzerland’s Federal Office of Civil Aviation, in a statement. “In order to achieve these objectives, a seamless cooperation between all the partners involved is crucial. I am glad to see that a further important step to tackle this challenge was taken today.”
Depending on how Switzerland’s trials go, we could potentially see similar initiatives arrive in the U.S. Maybe then we’ll finally get to see those Amazon Prime Air drone deliveries we’ve been dreaming of.
Editors’ Recommendations
- Amazon-style drone deliveries come a step closer for U.K. shoppers
- Amazon’s delivery drones could hitch rides on trucks to save power
- Watch a hybrid drone break records by hovering for more than four hours
- This ‘drone gun’ can down rogue quadcopters with the pull of a trigger
- Drone-catching drones to bolster security at this week’s Winter Olympics
A coat of diamonds could make implants more biocompatible
Diamonds are forever but our bodies can barely last a century. We break down, tear up, and eventually decay. Along our way from dust to dust, surgeons try to help keep us intact as best they can, often with implants to keep our physical selves together. But our bodies are fastidious things and have to be tricked into letting foreign objects stay, which can complicate biomedical implants, typically made of titanium and occasionally rejected by the body.
Now researchers from the Royal Melbourne Institute of Technology (RMIT) University may have devised a way to better coax the body into accepting implants, with a strategy that includes 3D-printing diamond-coated devices. It may sound like a luxury afforded only to the materialistic few, but it could actually make implants more accessible and biocompatible.
“3D printing of metals for medical implants is quickly becoming commonplace,” Kate Fox, the RMIT biomedical engineer who led the research, told Digital Trends. “Everyone wants to have an implant that fits their bodies. As a result, many researchers are designing complicated implants which can be 3D-printed specific to need. That is, if you want a hip implant, it can be made the same size and shape as your damaged hip. Titanium is the most common material used for medical implants, as it is inert with the body. This means though that the cells inside the body and the bone won’t ever grow onto it. By adding a diamond coating, we now provide a carbon coating … which the cells can interact with, whilst keeping the personalized 3D-printed shape.”
3D printing has helped create things like complex art and yachts, but some of the most life-changing applications have been made in biotech, where the relatively cheap process makes implants and bionic limbs accessible to those who may not otherwise have them. The method proposed by Fox and her team would entail coating titanium implants with a film of diamonds to be more biocompatible.
“Carbon is 20 percent of the human body,” Fox said. “As such, diamond, which is also carbon, provides a material that the body will readily accept as its own. This means that the body will be less likely to try and remove it. We therefore believe that rejection will be reduced and post-surgical complications due to material compatibility will be ameliorated.”
The diamond that would cover an implant is not the kind that has to be mined. Fox and her colleagues propose synthetic diamonds, made from concentrated carbon called nanodiamonds, that have been chemically altered to form a film and coated onto a 3D-printed titanium part in a plasma microwave.
There is still plenty of work ahead before patients can expect to have diamond-coated implants. Fox and her team need to run pre-clinical and clinical trials, but hope this technology will make it to the market in the next five years.
A paper detailing the research was published last month in the journal ACS Applied Materials and Interfaces.
Editors’ Recommendations
- It may look like a car part, but this is actually a working artificial heart
- Next-gen pacemakers will keep hearts beating with tech inspired by electric eels
- Graphene’s next trick? Creating foil-thin body armor that’s harder than diamonds
- New pressure sensor for medical uses dissolves in the patient’s body
- 14 major milestones along the brief history of 3D printing
Cross a John Deere with a Roomba, and you get this crop-monitoring robot
L. Brian Stauffer
Farms are a hotbed for automation. Robots, drones, and artificial intelligence have been assisting in agriculture for years, and in 2017 they showed they could farm an acre and a half of barley, from planting to tending and harvesting, without a human setting foot on the field.
Now there is a small but robust robot that could take care of the more tedious agricultural tasks. It’s called TerraSentia and the four-wheeled robot developed by engineers at the University of Illinois boasts a variety of sensors that can monitor and transmit crop data in real time. It won’t take full autonomy over a farm but is designed to serve as a little cog in a bigger machine.
“TerraSentia is a small, ultra-compact autonomous robot that can go through plots of crop and determine which plants are doing better than others,” Girish Chowdhary, an agricultural biological engineer at the University of Illinois who designed TerraSentia, told Digital Trends. “It has great utility for breeders who are trying to differentiate between different genotypes of plants. Using this robot, they can determine which variety of plants are doing better for a given environment. Currently, this is all done manually but TerraSentia augments that manual labor to not only get things done quickly but at a higher quality, to keep all of that data available to the breeders.”
TerraSentia is just over a foot wide and weighs in at 24 pounds, making it lightweight enough to traverse a field without seriously damaging crops. With its sensors, the robot monitors plant health by looking at things like growth rate and coloration. Its sensors are also designed to be flexible and customizable to suit the needs of the breeder and growers alike.
“Different people have different requirements,” Chowdhary said. “We’re trying to make the robot teachable so that people can teach it things that they care about. To begin, at the university we’ve taught it to count corn and estimate the width of plants. … The idea is that it can do other things over time, such as disease detection and detection of pests.”
Currently, the robot can cover an 80-acre field in about a day. Chowdhary thus hopes TerraSentia can serve as a “scale neutral” technology: a small-sized solution for work on farms both big and small. The more land that needs to be covered, the more robots do the job.
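Taking the 80-acres-per-day figure at face value, the “scale neutral” idea is just linear scaling of the fleet with acreage:

```python
import math

ACRES_PER_ROBOT_PER_DAY = 80  # coverage figure cited by Chowdhary

def robots_needed(acres: float) -> int:
    """Robots required to survey a field in a single day,
    assuming coverage scales linearly with fleet size."""
    return math.ceil(acres / ACRES_PER_ROBOT_PER_DAY)

print(robots_needed(80))    # 1
print(robots_needed(1000))  # 13
```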
The robot is available for $5,000 through Chowdhary’s company, EarthSense.
Editors’ Recommendations
- Before ‘plantscrapers’ can grow food in the city, they’ll need to grow money
- The world’s first floating wind farm has already exceeded expectations
- The ‘world’s biggest wind farm’ could send power to as many as five countries
- How ‘speed breeding’ will supercharge farming to save us from starvation
- At long last, researchers develop a wearable fit for plants
Social media is flooded with illegal wildlife trade but A.I. can help
Alberto Ghizzi Panizza/Getty Images
In the fight against poachers and illegal trade, animals can use all the help they can get. Thanks to researchers at the University of Helsinki’s Digital Geography Lab, wildlife may find that aid through a popular tool traffickers use to deal their illegal wares — social media.
“With an estimated two and a half billion users, easy access has turned social media into an important venue for illegal wildlife trade,” Enrico Di Minin, a conservation scientist working on the project, told Digital Trends. “Wildlife dealers active on social media release photos and information about wildlife products to attract and interact with potential customers, while also informing their existing network of contacts about available products. Currently, the lack of tools for efficient monitoring of high volume social media data limits the capability of law enforcement agencies to curb illegal wildlife trade. We plan to develop and use methods from artificial intelligence to efficiently monitor illegal wildlife trade on social media.”
Di Minin and his colleagues are designing a system that they hope will be able to comb through social media posts to identify images, metadata, and phrases associated with illegal wildlife trade, be it products or animals themselves. The task is too big for humans to do alone so they are enlisting software, including image recognition and natural language processing (NLP) algorithms, to filter through all the noise and spot suspicious activity.
“Illegal wildlife trade is booming online, in particular on social media,” Di Minin said. “However, big data derived from social media requires filtering out information irrelevant to illegal wildlife trade. Without automating the process with methods from artificial intelligence, filtering high-volume content for relevant information demands excessive time and resources. As time is running out for many targeted species, algorithms from artificial intelligence provide an innovative way to efficiently monitor the illegal wildlife trade on social media.”
As an example, Di Minin said image-recognition software can be trained to detect specific products, such as a rhinoceros horn or an elephant tusk, while metadata can give clues to an image’s location. Audio-video cues, such a particular bird call, can signal illegal pet trade while NLP algorithms can differentiate between a post that features an animal for sale or in the wild. “Potentially, such algorithms can also identify code words that illegal smugglers use in place of the real names … by processing verbal, visual and audio-visual content simultaneously,” Di Minin said.
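The Helsinki system itself isn’t public, but the text-filtering step Di Minin describes — distinguishing a sale post from an innocent wildlife photo — can be caricatured with a keyword co-occurrence check. A real system would use trained NLP and image-recognition models; the cue lists below are purely illustrative:

```python
# Hypothetical cue lists; a deployed system would learn these from data.
SALE_CUES = {"for sale", "dm for price", "shipping", "buyer", "$"}
SPECIES_CUES = {"rhino horn", "ivory", "tusk", "pangolin scale"}

def flag_post(text: str) -> bool:
    """Flag a post only when a protected-species term co-occurs
    with a sale cue, to avoid flagging ordinary wildlife photos."""
    t = text.lower()
    return any(s in t for s in SPECIES_CUES) and any(c in t for c in SALE_CUES)

print(flag_post("Genuine ivory carving for sale, DM for price"))   # True
print(flag_post("Saw an elephant tusk up close on safari today"))  # False
```

The co-occurrence requirement mirrors Di Minin’s point that NLP must separate an animal “for sale” from one “in the wild” — a species mention alone is not evidence of trade.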
A.I. may help spot illegal trade but it can’t alone stop it. For that Di Minin encourages social media platforms to actively crack down on sellers. He admits there is a risk that these dealers will move to other platforms but stressed that partnerships between companies, scientists, and law enforcement are key.
Editors’ Recommendations
- Poachers don’t stand a chance against these A.I.-powered camera drones
- Got A.I? Facial recognition now works on cows, with goal of better milk
- Don’t be fooled by dystopian sci-fi stories: A.I. is becoming a force for good
- Panasonic built a robot gentle enough to pick tomatoes, but not exactly graceful
- Social (Net)Work: What can A.I. catch — and where does it fail miserably?
Huawei Mate 10 Pro vs. Honor View 10: Leica rock

It’s Huawei or the highway.
The Mate 10 Pro is Huawei’s most powerful phone to date, a showcase for the Kirin 970 chipset and its powerful Neural Processing Unit. But there’s a new, cheaper contender from the company’s sub-brand, Honor, in the form of the View 10 — which bears many of the same features as the Mate 10 Pro, including its high-end silicon.
With so many similarities, no one could blame you for wondering why you should spend the extra money on Huawei’s flagship versus Honor’s. We’re here to help you figure that out by taking a deeper dive into what each phone does well — and where each could stand to improve.
Specifications
The View 10 and Mate 10 Pro are very similar on paper … but exactly how similar? Before we get into the intangibles like user experience, battery life, and overall value, it’s good to quickly familiarize yourself with each phone by skimming over the spec sheets.
| Category | Honor View 10 | Huawei Mate 10 Pro |
| --- | --- | --- |
| Operating System | Android 8.0 Oreo | Android 8.0 Oreo |
| Display | 5.99-inch 18:9 IPS LCD, 2160 x 1080, 403ppi | 6.0-inch 18:9 AMOLED, 2160 x 1080, 402ppi |
| Chipset | Octa-core HiSilicon Kirin 970, four 2.4GHz Cortex-A73 cores, four 1.8GHz Cortex-A53 cores, 10nm | Octa-core HiSilicon Kirin 970, four 2.4GHz Cortex-A73 cores, four 1.8GHz Cortex-A53 cores, 10nm |
| GPU | Mali-G72 | Mali-G72 |
| RAM | 4GB/6GB | 4GB/6GB |
| Storage | 64GB/128GB | 64GB/128GB |
| Expandable | Yes (dedicated microSD slot) | No |
| Battery | 3,750mAh | 4,000mAh |
| Water resistance | No | IP67 |
| Rear Camera | 16MP f/1.8 + 20MP f/1.8, PDAF, 4K at 30fps | 12MP f/1.6 + 20MP f/1.6, PDAF + laser autofocus, 4K at 30fps |
| Front Camera | 13MP f/2.0, 1080p video | 8MP f/2.0, 1080p video |
| Connectivity | Wi-Fi ac, Bluetooth 4.2, NFC, GPS, GLONASS, USB-C | Wi-Fi ac, Bluetooth 4.2, NFC, GPS, GLONASS, USB-C |
| Security | Fingerprint sensor (front) | Fingerprint sensor (back) |
| SIM | Dual Nano SIM | Dual Nano SIM |
| Dimensions | 157 x 74.9 x 6.9mm | 154.2 x 74.5 x 7.9mm |
| Weight | 172g | 178g |
| Colors | Black, Aurora Blue, Gold, Red | Diamond Black, Midnight Blue, Titanium Gray |
Why you should buy the View 10
The View 10 isn’t just similar to the Mate 10 Pro on paper; it’s made by the same company, and it has the same EMUI 8 software interface on top of Android Oreo. Where it really starts to differ is in its design; contrary to the Mate 10 Pro, which features a gorgeous, yet fragile glass back, the Honor View 10 boasts a more durable and utilitarian aluminum casing.
It’s also the only phone of the two with a 3.5mm headphone jack. Sure, the Mate 10 Pro comes with an adapter in the box, but let’s be real — having to carry around a dongle everywhere just to listen to wired audio isn’t nearly as convenient, and USB-C headphones are still a crapshoot.

Another key difference between the View 10 and Mate 10 Pro is the design and functionality of the fingerprint sensors. On the View 10, the sensor takes the form of a narrow strip just below the display, as opposed to the Mate 10 Pro’s positioning around back. Thanks to this location, you’re able to reallocate navigation controls to the fingerprint sensor on the View 10, eliminating the need for on-screen buttons and making room for more content at the bottom of the display.
Despite a lower price tag, the Honor View 10 has the same Kirin 970 chipset as the Mate 10 Pro. It’s Huawei’s most powerful processor, and it features the company’s new NPU, which uses artificial intelligence to improve the camera software and prevent performance degradation over time — meaning you don’t have to spend a fortune to get a long-lasting phone.
See at Honor
Why the Mate 10 Pro is worth the extra money
The View 10 is a fantastic phone, but for all its merit, Huawei isn’t going to cannibalize itself with a phone from its subsidiary brand. The Mate 10 Pro immediately feels more premium than the View 10 — though again, that comes at the cost of a much more breakable glass back.
On that back, you’ll notice Leica branding next to the cameras, and if that name sounds familiar it’s because Leica is one of the biggest names in optics and photography. The result of this partnership is a pair of cameras that produce significantly better photos than those of the View 10.

Though both phones feature 6-inch displays, only the Mate 10 Pro utilizes AMOLED technology, which consumes less power and produces more vivid colors than the LCD panel on the View 10. Speaking of power, the Mate 10 Pro lasts significantly longer on a charge than the View 10, thanks to its larger 4000mAh battery. It takes a lot of effort to fully run down the Mate 10 Pro’s battery in one day.
Unfortunately, neither phone is fully waterproof, but the Mate 10 Pro is at least IP67-certified, ensuring resistance against dust and submersion in up to one meter of water.
See at Amazon
Which would you buy?
The Mate 10 Pro is a formidable foe. Its larger battery is longer-lasting, the fingerprint sensor is more reliable, the Leica-branded cameras take better photos, and the IP67 rating helps to better protect it from the elements. Despite all of their similarities, the Mate 10 Pro is simply the better phone — that is, if you’re willing to spend $800 to get one.
The Mate 10 Pro is the better phone, but its hefty price may steer some to the View 10 anyway.
On the other hand, if you don’t need all of the aforementioned benefits of the Mate 10 Pro, you might be better off buying the Honor View 10 at a much more reasonable $499. With a $300 price difference, the View 10 may be the better option after all. Its spec sheet nearly mirrors that of the Mate 10 Pro, and many users will enjoy some of the features that apparently don’t qualify as flagship-grade anymore, including microSD expandability and a 3.5mm headphone jack.
The good news is that with phones so similar, you really can’t go wrong with either one. Assuming you’re shopping for a new phone, the only question left to answer is … which one are you getting?
Huawei Mate 10
- Huawei Mate 10 Pro review
- Huawei Mate 10 series specs
- Huawei Mate 10 Pro U.S. review: Close to greatness
- Join the discussion in the forums
- More on 2016’s Mate 9
Over 3.2 billion bad online ads were removed by Google in 2017
That’s over 100 ads every second.
The Internet is a wonderful thing and a wealth of endless information, but obtrusive online advertisements can quickly diminish the joy of using it. We already knew that Google made a lot of efforts last year to keep bad ads off the web, but now the company has shed light on just how many violations it dealt with.

In 2017 alone, Google says it removed more than 3.2 billion ads that violated its advertising policies. To put things into perspective, that translates to more than 100 ads being removed every single second throughout the year. Looking more closely at that 3.2 billion number, Google notes it removed 79 million ads that tried sending people to malware-heavy sites, completely eliminated 400,000 of said sites, blocked 66 million “trick to click” ads, and got rid of 48 million ads that tricked users into installing unsafe software.
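Google’s “100 ads a second” framing checks out arithmetically:

```python
ADS_REMOVED = 3_200_000_000          # ads Google says it removed in 2017
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 seconds

rate = ADS_REMOVED / SECONDS_PER_YEAR
print(round(rate, 1))  # roughly 101.5 ads removed per second
```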
On the publisher side of things, Google notes:
Last year, we removed 320,000 publishers from our ad network for violating our publisher policies, and blacklisted nearly 90,000 websites and 700,000 mobile apps.
In that same breath, 2 million web pages were removed each month due to policy violations, and 8,700 pages had their Google Ads taken away after Google expanded its online advertising policy last April.
Rounding this out, Google says:
Does an ad with the headline “Ellen DeGeneres adopts a baby elephant!” make you want to click on it? You’re not alone. In recent years, scammers have tried to sell diet pills and weight-loss scams by buying ads that look like sensational news headlines but ultimately lead to a website selling something other than news. We suspended more than 7,000 AdWords accounts for tabloid cloaking violations, up from 1,400 in 2016.
Legitimate online ads are needed to make the Internet work the way it does, but pesky, abusive ones do nothing but ruin that whole experience for everyone. We’re still a ways away from an Internet that’s totally free of these, but it’s reassuring to see Google working so diligently on this matter.