18
Mar

Android Auto can unlock your phone with a swipe


Google understandably shuts down easy access to your phone when Android Auto is turned on, since you’re supposed to be focused on the road. But what if you do need access to your phone while it’s still paired with your car? It should be easy from now on: Google has quietly added swipe-to-unlock access to your phone while Android Auto is active. If you absolutely need to pull over and check a third-party messaging app (or have a passenger do it for you), you don’t have to jump through hoops to get to your home screen.

The feature appears to have only just started rolling out, so don’t be surprised if it isn’t available to you for a while. Google hasn’t formally announced it. The objective seems clear, though: it’s a balance between efforts to curb distracted driving and the convenience of reaching your phone directly if and when you really need it.

Via: 9to5Google

Source: Reddit

18
Mar

FIFA approves use of video referees at 2018 World Cup


Video assistant referees are about to get their biggest test to date. In the wake of an earlier general approval, the FIFA Council has authorized the use of VARs at the upcoming 2018 World Cup in Russia. The tool will help refs make decisions on difficult calls involving goals and penalties, any offenses leading up to those moments, mistaken identities and red cards. In theory, at least, this reduces the chances of a country going home early due to a bad call — a distinct possibility given the messes from the last World Cup.

As with virtually any early use of VARs, there are bound to be criticisms. While officials have vowed to minimize the disruption when observers check an incident, there’s no shortage of people arguing that any use of VARs slows matches to an unacceptable degree. Also, they tend to operate out of central hubs, such as the Moscow hub that will be used for the 2018 World Cup. There are concerns this could lead to overseers exerting undue influence over decisions across multiple matches, such as the alleged manipulation of VAR calls in Germany’s Bundesliga. And of course, VARs only come out for glaring errors or serious missed incidents. If a player takes a dive, only the ref will be involved in the call.

The World Cup implementation could have a significant effect on soccer (aka football) matches around the world. Leagues governed by FIFA aren’t required to use VARs at this stage, but they may be more likely to adopt the tech if they see it in action… especially if it keeps their national team in the running.

Via: Fox Soccer (Twitter)

Source: FIFA.com

18
Mar

Cosmetics giant L’Oreal buys AR beauty company ModiFace


Brenda Stolyar/Digital Trends

Cosmetics juggernaut L’Oreal has bought ModiFace, the Canadian augmented reality (AR) beauty company that powers the makeup filters in the Galaxy S9’s Bixby Vision.

We’ve had our eye on ModiFace for some time now, having first looked at its tech in December 2015. Interest in using AR to see physical changes on one’s face in real time has exploded in the last year, with ModiFace’s software coming to both iOS and Android. The tech works similarly to Snapchat’s filters, applying a layer of makeup directly over your face’s image on your screen and tracking your head movements in real time so you can see your digital makeover from multiple angles.
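
To make that idea a little more concrete, here’s a heavily simplified Python sketch of how a virtual makeup overlay can work in principle. This is our own illustration, not ModiFace’s actual pipeline: the landmark coordinates and the apply_lip_tint helper are hypothetical stand-ins for whatever face-tracking system would supply them in a real app.

```python
import cv2
import numpy as np

# Illustrative sketch only -- not ModiFace's code. Given lip landmark points
# (assumed to come from some face-tracking library), tint that region and
# blend it back over the camera frame.

def apply_lip_tint(frame, lip_landmarks, color=(40, 40, 200), opacity=0.4):
    """Blend a colored polygon over the lip region of a BGR frame."""
    overlay = frame.copy()
    points = np.array(lip_landmarks, dtype=np.int32)
    cv2.fillPoly(overlay, [points], color)                      # paint the "lipstick"
    return cv2.addWeighted(overlay, opacity, frame, 1 - opacity, 0)

# Dummy frame and hand-placed landmarks for demonstration. In a real app these
# points would be re-detected every frame as the head moves, which is what
# keeps the makeup "stuck" to the face from any angle.
frame = np.full((480, 640, 3), 180, dtype=np.uint8)
lips = [(300, 320), (340, 310), (380, 320), (340, 345)]
cv2.imwrite("tinted.png", apply_lip_tint(frame, lips))
```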

ModiFace’s tech rapidly became industry-leading, powering multiple AR makeup apps, including those by Sephora and Benefit. With the Galaxy S9’s Bixby Vision featuring this tech as well as the ability to buy the makeup you “try on,” it’s becoming increasingly obvious that the cosmetics world is taking a strong interest in the potential of AR.

This isn’t L’Oreal’s first foray into tech-savvy business — the company previously launched a smart brush that told users how to care for their hair, and recently showcased wearable tech that warned wearers when they’ve had enough sunlight. L’Oreal has also worked with ModiFace in the past, when the two companies collaborated on L’Oreal’s “Style My Hair” tool, which allowed users to see the effect of different hair dyes.

L’Oreal, along with other cosmetics companies, has long been curious about the application of AR to advertising cosmetics, particularly when combined with growing online sales and social media marketing. MAC’s Virtual Try-On Mirror is one example of the tech being used in this way.

The amount that L’Oreal paid for ModiFace has not been disclosed, but given that L’Oreal now spends 38 percent of its marketing budget on digital campaigns, and given the broad applicability of ModiFace’s tech, it’s unlikely that it skimped on the purchase.

“With ModiFace we’ve acquired … the stock of inventions they’ve already created, but more than that, the ability to look at reinventing the beauty experience in the years to come,” L’Oreal’s chief digital officer Lubomira Rochet said.

L’Oreal’s acquisition of ModiFace is likely to be a major blow to other manufacturers who have previously used ModiFace’s technology in their own apps and websites.

Editors’ Recommendations

  • ModiFace replaces makeup brushes with neural networks, and it’s coming to the S9
  • 5 features you may not have heard about on the Samsung Galaxy S9
  • How to use Samsung’s Bixby assistant for all of your smartphone tasks
  • The Samsung Galaxy S9 has finally arrived — here’s everything you need to know
  • Samsung Galaxy S9 review


18
Mar

Computers can’t keep shrinking, but they’ll keep getting better. Here’s how


Why are modern computers so much better than old ones? One explanation relates to the enormous number of advances which have taken place in microprocessing power over the past several decades. Roughly every 18 months, the number of transistors that can be squeezed onto an integrated circuit doubles.

This trend was first spotted in 1965 by Intel co-founder Gordon Moore, and is popularly referred to as “Moore’s Law.” The results have propelled technology forward and transformed it into a trillion-dollar industry, in which unimaginably powerful chips can be found in everything from home computers to autonomous cars to smart household devices.

But Moore’s Law may not be able to go on indefinitely. The high tech industry might love its talk of exponential growth and a digitally-driven “end of scarcity,” but there are physical limits to the ability to continually shrink the size of components on a chip.

What is Moore’s Law?

Moore’s Law is an observation made by Intel co-founder Gordon Moore in 1965. It states that roughly every 18 months, the number of transistors that can be squeezed onto an integrated circuit doubles.

Already, the billions of transistors on the latest chips are invisible to the human eye. If Moore’s Law were to continue through 2050, engineers would have to build transistors from components smaller than a single atom of hydrogen. It’s also increasingly expensive for companies to keep up: building fabrication plants for new chips costs billions.
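
As a rough sanity check of that 2050 claim, here’s a back-of-the-envelope Python sketch. The 2018 starting point, the 10-nanometer feature size and the square-root-of-two shrink per doubling are our own assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope projection, based on assumed starting values.
HYDROGEN_DIAMETER_NM = 0.1   # roughly 1 angstrom, order-of-magnitude figure
feature_nm = 10.0            # assumed feature size around 2018
year = 2018.0                # assumed starting year

while year < 2050:
    # Doubling transistor count halves the area per transistor, which shrinks
    # linear feature size by a factor of sqrt(2) every 18 months.
    feature_nm /= 2 ** 0.5
    year += 1.5

print(f"Projected feature size by {year:.0f}: {feature_nm:.4f} nm")
print("Smaller than a hydrogen atom?", feature_nm < HYDROGEN_DIAMETER_NM)
```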

As a result of these factors, many people predict Moore’s Law will peter out some time in the early 2020s, when chips feature components that are only around 5 nanometers apart. What happens after that? Does technological progress grind to a halt, as though we’d be stuck using the same Windows 95 PC we owned a couple of decades ago?

Not really. Here are seven reasons why the end of Moore’s Law won’t mean the end of computing progress as we know it.

Moore’s Law won’t end ‘just like that’

Imagine the disaster that would befall us if, tomorrow, the laws of thermodynamics or Newton’s three laws of motion ceased to function. Moore’s Law, despite its name, isn’t a universal law of this kind. Instead, it’s an observable trend, like the fact that Michael Bay tends to release a new Transformers movie in the summer — except, you know, good.

SSPL/Getty Images; Court Mast/Intel via Getty Images

Two Intel 8080 chips from the 1970s (top left), the Intel 486 and Pentium from 1989 and 1992 (top right), the Dual-Core Xeon Processor 5100 from 2006, and an 8th-generation Core i7 from 2017.

Why do we bring this up? Because Moore’s Law isn’t going to end like someone turning off gravity. Just because we no longer get a doubling of transistors on a chip every 18 months doesn’t mean that progress will come to a complete stop. It just means improvements will arrive a bit more slowly.

Picture it like oil. We’ve extracted the easy-to-reach stuff near the surface; now we need technologies like fracking to get at the tougher-to-reach resources.

Better algorithms and software

Think of those NFL or NBA stars who make so much money that they never have to worry about making their savings last. That’s a slightly messy, but still pertinent, metaphor for the relationship between Moore’s Law and software.

Squeezing more performance out of the same chips will become a much higher priority.

While there’s beautifully coded software out there, a lot of the time programmers haven’t had to worry much about streamlining their code year after year, because they know next year’s processors will run it better anyway. If Moore’s Law no longer delivers the same advances, however, this approach can no longer be relied upon.

Squeezing more software performance out of the same chips will therefore become a much higher priority. For speed and efficiency, that means creating better algorithms. Beyond speed, it will hopefully mean more elegant software, with a greater focus on user experience, look and feel, and quality.

Even if Moore’s Law were to end tomorrow, optimizing today’s software would still provide years, if not decades, of growth — even without hardware improvements.

More specialized chips

With that said, one way for chip designers to overcome the slowing down of advances in general purpose chips is to make ever more specialized processors instead. Graphics processing units (GPUs) are just one example of this. Custom specialized processors can also be used for neural networks, computer vision for self-driving cars, voice recognition, and Internet of Things devices.

As Moore’s Law slows, chipmakers will ramp up production of specialized chips. GPUs, for example, are already a driving force for computer vision in autonomous cars and vehicle-to-infrastructure networks.

These special designs can boast a range of improvements, such as greater levels of performance per watt. Companies jumping on this custom bandwagon include market leader Intel, Google, Wave Computing, Nvidia, IBM, and more.

Just like better programming, the slowdown in manufacturing advances compels chip designers to be more thoughtful when it comes to dreaming up new architectural breakthroughs.

It’s no longer just about the chips

Moore’s Law was born in the mid-1960s, a quarter century before computer scientist Tim Berners-Lee invented the World Wide Web. While the theory has held true ever since then, there’s also less need to rely on localized processing in an age of connected devices. Sure, a lot of the functions on your PC, tablet or smartphone are processed on the device itself, but a growing number aren’t.

With cloud computing, a lot of the heavy lifting can be carried out elsewhere.

Cloud computing means that a lot of the heavy lifting for big computational problems can be carried out elsewhere in large data centers, using massively parallel systems that utilize many, many times the number of transistors in a regular single computer. That’s especially true for A.I. intensive tasks, such as the smart assistants we use on our devices.

By having this processing carried out elsewhere, and the answer delivered back to your local machine when it’s calculated, machines can get exponentially smarter without having to change their processors every 18 months or so.

New materials and configurations

Silicon Valley earned its name for a reason, but researchers are busy investigating future chips which could be made of materials other than silicon.

Intel, for example, is experimenting with different ways of packing transistors onto a chip, doing some amazing work with transistors built upward in 3D rather than lying flat. Other materials, such as those based on elements from the third and fifth columns of the periodic table, could take over from silicon because they are better conductors.

Right now, it’s not clear whether these substances will be scalable or affordable, but given the combined expertise of the tech industry’s finest — and the incentive that will go along with it — the next semiconductor material could be out there waiting.

Quantum computing

Quantum computing is probably the most “out there” idea on this list. It’s also the second most exciting. Quantum computers are, right now, an experimental and very expensive technology. They are a different animal from the binary digital electronic computers we know, which are based on transistors.

IBM Research

Instead of encoding data into bits that are either 0 or 1, quantum computing deals with quantum bits, or qubits, which can be 0, 1, or a superposition of both at the same time. Long story short? These superpositions could make quantum computers much faster and more efficient than today’s mainstream computers.
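
For anyone who wants that idea in concrete terms, here’s a minimal Python (NumPy) sketch of a single qubit in superposition. The Hadamard-gate example is our own illustration, not a description of any particular machine.

```python
import numpy as np

# A qubit is a vector of two complex amplitudes; measurement probabilities
# are the squared magnitudes of those amplitudes (the Born rule).
ket0 = np.array([1.0, 0.0])                       # the state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                           # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2

print(state)          # ~[0.707 0.707]
print(probabilities)  # [0.5 0.5] -> a measurement yields 0 or 1 with equal odds
```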

Making quantum computers carries plenty of challenges (they need to be kept incredibly cold, for one thing). However, if engineers can crack these problems, we may be able to trigger enormous progress at a pace so rapid it would make Gordon Moore’s head spin.

Stuff we can’t think of yet

Very few people would have predicted smartphones back in the 1980s. The idea that Google would become the giant that it is or that an e-commerce website like Amazon would be on track to become the first $1 trillion company would have sounded crazy at the start of the 1990s.

The point is that, when it comes to the future of computing, we’re not going to claim to know exactly what’s around the corner. Yes, right now quantum computing looks like the big long-term hope post-Moore’s Law, but chances are that in a few decades computers will look entirely different from the ones we use today.

Whether it’s new configurations of machines, chips made out of entirely new materials, or new types of subatomic research that open up new ways of packing transistors onto chips, we believe the future of computing — with all the ingenuity it involves — will be A-okay.


18
Mar

NASA’s planet-hunting deep space telescope is about to run out of fuel


NASA

The Kepler space telescope is running on empty, and there are no places to fill up when you’re 94 million miles from Earth.

Charlie Sobeck, an engineer for the Kepler mission, announced in an update that the end is near for the nine-year-old deep space observatory. “At this rate, the hardy spacecraft may reach its finish line in a manner we will consider a wonderful success,” he wrote. “With nary a gas station to be found in deep space, the spacecraft is going to run out of fuel. We expect to reach that moment within several months.”

Kepler was launched on March 6, 2009, on what was originally envisioned as a three-and-a-half-year mission. The spacecraft was guided into a solar orbit, trailing the Earth as it circles the sun, on a quest to find Earth-sized planets orbiting distant stars.

The Kepler telescope can’t actually “see” those distant planets, of course. Rather, it looks for tiny dips in a star’s light as a planet passes in front of it. Repeated observations can reveal the size and orbit of the planet.
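
Here’s a toy Python sketch of that transit idea. The numbers (noise level, transit timing, planet-to-star radius ratio) are invented for illustration and have nothing to do with Kepler’s real data pipeline.

```python
import numpy as np

# Toy transit light curve: a star's brightness dips while a planet crosses it,
# and the dip's depth scales with (planet radius / star radius) squared.
rng = np.random.default_rng(0)
time = np.linspace(0, 10, 1000)                  # days (arbitrary)
flux = 1.0 + rng.normal(0, 0.0002, time.size)    # flat light curve plus noise

planet_to_star_radius = 0.1                      # assumed, roughly Jupiter vs. a Sun-like star
in_transit = (time > 4.8) & (time < 5.2)         # planet in front of the star
flux[in_transit] -= planet_to_star_radius ** 2   # transit depth ~ (Rp/Rs)^2

depth = 1.0 - flux.min()
print(f"Observed dip: {depth:.4f} -> Rp/Rs of roughly {depth ** 0.5:.2f}")
```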

Kepler has discovered hundreds of exoplanets over the past nine years. Its mission could have ended in 2013 when a reaction wheel on the spacecraft broke, making it unable to maintain its position relative to the Earth.

The new Kepler mission, called K2, began using the pressure of sunlight to maintain its orientation. Like steering into the current on a river, the new technique let the telescope shift its field of view for a new observation every three months. The team initially estimated that the spacecraft could conduct ten of these “campaigns” before ending its mission, but it’s already on its 17th.

The fuel that Kepler uses is hydrazine monopropellant, as Sobeck explained in a podcast about the mission. “It’s just one fluid that when it goes through the thrusters it ignites, and it provides thrust,” he said. “It’s pressurized in the tank, and that’s what drives it in to the thrusters, down fuel lines just like you have your lines in your car.”

One of the challenges is to retrieve the data that’s already stored on the data recorder. The last drops of fuel will be used to rotate the spacecraft so its parabolic dish is pointed at the Earth. “The data that we’ve spent so much time and effort to get, we want to get it to the ground,” Sobeck said. “It doesn’t help us if it lives on the spacecraft forever. We’ve got to get it to the ground.”

Although this may be the end of Kepler, a new planet-hunter is scheduled to take to the skies later this spring. TESS (Transiting Exoplanet Survey Satellite) will be launched aboard a SpaceX rocket on a mission to survey the 200,000 brightest stars nearest the sun for evidence of exoplanets.

Editors’ Recommendations

  • The new ESPRESSO four-in-one telescope is a next-generation planet hunter
  • SpaceX is blazing a trail to Mars, one milestone at a time
  • For the first time, scientists discover exoplanets in a galaxy far, far away
  • NASA releases first images of Jupiter’s bizarre geometric storms
  • SpaceX has successfully launched its first broadband satellites


18
Mar

In bid to compete with Amazon, Walmart files patents for farming drones


Walmart has been expanding the reach of its grocery business for several years, and may be looking to use technology to make its supply chains more efficient. The retail giant has filed patents for six drones that would help automate the farming process, Business Insider reported. The full details of the drones haven’t been revealed, but we do know that one is meant to pollinate crops, one would work to protect plants from pests, and a third would keep an eye on plant health.

While this would give Walmart more control over its supply chain, it is unlikely that the company is planning on going into the farming business. Instead, Walmart will sell these drones to partner farms in an attempt to make them more efficient. On the consumer side of things, this could mean higher supplies of fruits and vegetables and lower prices, though nothing is certain.

Paula Savanti, a senior consumer analyst at Rabobank, told Business Insider that she believes the drones will give Walmart more insight into what is happening on its farms, and allow for better response to changes in supply.

“I’m guessing that any tech that’s geared toward improving efficiency at the farm level would benefit them. It would allow them to anticipate supply problems and adjust accordingly,” Savanti said.

Overall, this would help make Walmart’s supply chain more predictable and help it better compete with online retailers — most notably Amazon. Savanti noted that the rise of Amazon has triggered a change in the way retailers look at technology. It is no longer enough to simply have a well-designed website.

“Part of the ‘Amazon effect’ is making these [retail] companies start looking into investments in different areas beyond ecommerce,” Savanti said. “It forces them to redirect investments to improving their technological capabilities in general — not just at the end of the supply chain, but in the beginning as well.”

Amazon may have spelled the end of the road for a lot of bookstores and electronics outlets, such as Circuit City, but consumers are still fairly new to the concept of online grocery shopping. Companies like Walmart still have a chance to compete with Amazon in that field.

Editors’ Recommendations

  • Before ‘plantscrapers’ can grow food in the city, they’ll need to grow money
  • Walmart takes on Amazon by offering grocery delivery — for a fee
  • Amazon might offer a pickup service for Whole Foods and other retailers
  • Walmart partners with Rakuten for online groceries and ebooks
  • Amazon-style drone deliveries come a step closer for U.K. shoppers


18
Mar

Charge your phone on the way to your destination with iOttie’s $42 QI Wireless Car Mount


Stay safe and charged up with the iOttie Easy One Touch.

iOttie’s Easy One Touch QI Wireless Car Mount is on sale for only $42.46 at Amazon right now when you enter the promo code S9SAVE15 at checkout. That saves you $8 off its regular price, which hardly ever sees a discount.

This phone mount features wireless charging and can fit most devices securely. It has a telescopic arm that can extend up to eight inches and pivot up to 255 degrees. It also has a built-in USB port so you can charge a second device with it.

Almost 450 reviewers rated this item with 3.9 out of 5 stars at Amazon.

See at Amazon

18
Mar

The rise and rise (and rise) of ’Fortnite’


It’s safe to say that, when a video game that counts Drake among its fans has breakfast TV shows around the world discussing its effect on younger players, it has truly made it. No, we’re not talking about Grand Theft Auto, but Fortnite, Epic Games’ massively multiplayer shooter that has over 40 million players across consoles and PC, and continues to grow at a rapid pace.

When Fortnite launched as a paid Early Access game in July 2017, it was solely as a PvE (player vs. environment) experience. Players completed levels by collecting materials and crafting super-elaborate bases to repel hordes of zombies. Reviews were mostly positive, but Epic kept one eye on the success of another Early Access game — PlayerUnknown’s Battlegrounds (PUBG) — and quickly spun out a new, free-to-play mode, “Battle Royale,” the following September.

The premise of this new mode was simple: 100 people parachute onto an island with only the clothes on their back and a pickaxe. Players can go it alone or enter as a team of up to four. As they forage for weapons and defensive items and generally hunt rival players, the game slowly restricts the game area by way of a “Storm Eye,” which players must remain inside to survive. It’s this mechanic that keeps the game active: The eye periodically shrinks and players are forced to converge on the same areas on the map or risk checking out early.
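
For the curious, here’s a bare-bones Python sketch of what that shrinking-circle loop looks like in code. The map coordinates, shrink factor and phase count are our own guesses for illustration, not Epic’s actual values.

```python
import math

# Toy "Storm Eye": a safe circle shrinks each phase, and any player outside it
# has to move toward the center or take damage.
def in_safe_zone(player_xy, center_xy, radius):
    return math.dist(player_xy, center_xy) <= radius

center = (500.0, 500.0)     # arbitrary map coordinates
radius = 400.0
player = (850.0, 500.0)

for phase in range(1, 4):
    radius *= 0.6           # assumed shrink factor per phase
    status = "safe" if in_safe_zone(player, center, radius) else "in the storm"
    print(f"Phase {phase}: radius {radius:.0f}, player is {status}")
```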

If that idea sounds familiar, it’s because the gameplay loop is what made PUBG so popular in the first place. Battle-royale games existed before — both Rust and H1Z1 enjoyed some level of success — but PUBG, with its expertly designed map and tight shooting mechanics, was the first to truly resonate with gamers.

By September, PUBG had broken the record for most concurrent players on Steam, reporting 1,342,857 players versus Dota 2’s previous milestone of 1,295,114. However, December was when the game truly exploded. It took home “Best Multiplayer Game” at the 2017 Game Awards, and before long the game had 30 million players and concurrents rose to 3.1 million, helped by its release on Xbox One.

When Fortnite: Battle Royale appeared, the similarities between the two weren’t lost on people. Chang Han Kim, CEO of PUBG Corporation, slammed Epic for “replicating” the game and threatened to take further action. The situation was complicated by the fact that PUBG uses Epic Games’ Unreal Engine.

By December, Fortnite was roughly level with PUBG in total players, but trailed it in concurrent numbers. By January 15th, though, the game had added a further 10 million players. On January 19th, Epic said another 5 million people had picked up the game, bringing the total figure to 45 million. It’s likely that the second surge was prompted by Battle Royale’s updated map, which went live on January 18th. The update added various new locations to the game world, including a motel and a big city, as well as biomes, which gave different areas their own unique environments.

That success continued into February. On February 8th, Epic announced that Battle Royale had topped its rival’s concurrent player record, amassing an impressive 3.4 million users (the previous record was 3.2 million). The game was seeing unprecedented demand: rewind a fortnight and that number was around the 2 million player mark. Up until that point, both Fortnite and PUBG had been enjoying growth independently of each other, but February saw the latter’s numbers fall for the first time ever. In January, PUBG had an average of 1.58 million players, with a peak of 3.26 million. In February, those numbers slid down to 1.39 million average and 2.93 million peak.

As Fortnite rose, streamers saw their audience numbers surge. Tyler “Ninja” Blevins is one such personality. Blevins is a former professional Halo player who went all-in on streaming, riding the early wave of interest in H1Z1 before turning his attention to PUBG and then Fortnite. Such was the interest in streams of Epic’s free-to-play hit that Ninja’s broadcasts regularly began pulling in more than three times the number of viewers he usually averaged, consistently reaching 100,000 concurrent viewers at a time.

On March 12th, Kotaku put his total Twitch subscribers at 130,000. Thanks to Twitch Prime and the streaming platform’s varying subscription tiers, it’s impossible to say exactly how much money Blevins would make from those numbers, but it would be at least $350,000 per month. His enthusiastic mannerisms and reactions to kills endear him to younger viewers, but it’s his high level of skill that has won over a more diverse group of fans, including the musician Drake.
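
For a sense of where a figure like that comes from, here’s a quick back-of-the-envelope calculation. The per-subscriber payout is an assumption (roughly half of a basic $4.99 subscription); Twitch’s real splits, higher tiers and Prime payouts vary, which is why the article’s floor sits a little higher.

```python
# Rough lower-bound estimate; the payout per subscriber is an assumption.
subscribers = 130_000
assumed_payout_per_sub = 2.50   # USD per month, roughly half of a $4.99 sub

# ~$325,000 before higher tiers, ads and donations are counted.
print(f"~${subscribers * assumed_payout_per_sub:,.0f} per month")
```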

playing fort nite with @ninja https://t.co/OSFbgcfzaZ

— Drizzy (@Drake) March 15, 2018

Drake’s interest started with an Instagram follow, and before long Ninja promised the duo would team up for a Twitch livestream. On March 15th, Epic Games and Twitch employees looked on in amazement as Ninja not only buddied up with Drake but also rapper Travis Scott, Pittsburgh Steelers wide receiver JuJu Smith-Schuster and Kim Dotcom.

Word of Drake’s involvement quickly spread and made Ninja’s streams even more popular than usual. When Drake tweeted that he was streaming live, however, records were broken. Buoyed by some of the rapper’s 36.9 million Twitter followers, Ninja’s stream passed 635,000 concurrent viewers, destroying the Twitch record for a single streamer. It also pushed his subscriber count to over 180,000, boosting his monthly earnings from the platform close to $500,000.

As Fortnite’s popularity increased, so too did general interest in the game. Both Good Morning America and the UK’s This Morning featured segments discussing the addictive nature of the game and parents’ worries about a possible desensitization to violence. The game does involve classic shooter elements, but it has a vibrant, cartoony art style that doesn’t show blood or gore.

Does Fortnite deserve its place as one of the world’s most popular games? Absolutely. Unlike shooters like Overwatch or Call of Duty, Fortnite doesn’t attempt to put as much emphasis on player ranks. Matchmaking isn’t based on a user’s skill level, which gives players the opportunity to come face-to-face with players both more and less proficient at the game. This is key: If new or lower-skilled players are constantly matched against people who are on or around their level, they may find it hard to make an impact and suddenly the game becomes a grind.

As with other battle royale games, loot is randomly dropped all over the map, and downed players drop any loot they’ve collected. No two visits to the same location will offer the same weapons, health packs and shields. At the start of a game, this is the ultimate leveler: Highly skilled players like Ninja can land, race to a gun, and be beaten to it and eliminated by someone playing for the first time. After being killed, players can either spectate their opponents or join another game.

Fortnite’s success isn’t accidental. Free-to-play gives it a considerable advantage over PUBG as gamers can try it out with no risk. Epic makes almost all its money from in-game items. Gamers can choose to buy a cheap Battle Pass, which unlocks certain in-game items and opens up new challenges, or simply buy the skins or emotes they like via the in-game store. There’s no pay to win; it’s purely cosmetic. That’s an easy sell to parents, who won’t have to put money down for yet another video game.

Unlike PUBG Corporation, Epic is an established developer. Its Unreal game engine is used by many companies, bringing in steady revenues. Its experience and cash reserves allowed it to quickly scale up support and resources, while PUBG slowly ramped up development as player numbers swelled, only releasing “version 1.0” in December of last year. Epic has an entire team dedicated solely to Battle Royale, which pumps out new in-game items, matchmaking modes and bug fixes roughly once a week. Just this month, a new “20 vs. 20 vs. 20 vs. 20 vs. 20” mode has arrived, offering a more clan-based take on the battle royale formula.

This rapid release cycle also enables the company to tap into internet culture. Memes are integrated into the game while they’re still fresh, with in-game emotes of famous dances or gestures. The game’s success has also begun to have an impact on culture in the real world: soccer and rugby players have been recreating moves from the game to celebrate scoring, and it’s likely we’ll see NFL stars do the same when the new season starts.

Following Drake’s recent appearance, gamers are now calling for a “Hotline Bling” emote. You can bet Epic is doing everything it can to make that happen.

how drake be giving ninja items pic.twitter.com/vcpeNwu0c5

— JhbTeam (@JhbTeam) March 15, 2018