Computers can’t keep shrinking, but they’ll keep getting better. Here’s how
Why are modern computers so much better than old ones? One big reason is the enormous advance in microprocessor power over the past several decades. Roughly every 18 months, the number of transistors that can be squeezed onto an integrated circuit doubles.
This trend was first spotted in 1965 by Intel co-founder Gordon Moore, and is popularly referred to as “Moore’s Law.” The results have propelled technology forward and transformed it into a trillion dollar industry, in which unimaginably powerful chips can be found in everything from home computers to autonomous cars to smart household devices.
But Moore’s Law may not be able to go on indefinitely. The high tech industry might love its talk of exponential growth and a digitally-driven “end of scarcity,” but there are physical limits to the ability to continually shrink the size of components on a chip.
What is Moore’s Law?
Moore’s Law is an observation made by Intel co-founder Gordon Moore in 1965. It states that roughly every 18 months, the number of transistors that can be squeezed onto an integrated circuit doubles.
Already the billions of transistors on the latest chips are invisible to the human eye. If Moore’s Law were to continue through 2050, engineers would have to build transistors from components smaller than a single atom of hydrogen. It’s also increasingly expensive for companies to keep up: building fabrication plants for new chips costs billions.
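That 2050 claim is straightforward arithmetic to check. Below is a rough sketch in Python; the 18-month doubling period and roughly 5-nanometer starting point come from this article, while treating each density doubling as a factor-of-√2 shrink in feature size is our simplifying assumption:

```python
# Rough check of the scaling claim: if transistor density doubles every
# 18 months, and doubling density in a fixed area means each feature's
# linear size shrinks by sqrt(2), how small do features get by 2050?

start_year = 2025
start_feature_nm = 5.0       # ~5 nm features, per the article
doubling_period_years = 1.5  # Moore's Law: density doubles every ~18 months

year = start_year
feature_nm = start_feature_nm
while year < 2050:
    feature_nm /= 2 ** 0.5   # each density doubling halves a transistor's area
    year += doubling_period_years

hydrogen_atom_nm = 0.1  # a hydrogen atom is roughly 0.1 nm across
print(f"Extrapolated feature size: {feature_nm:.4f} nm")
print(f"Smaller than a hydrogen atom? {feature_nm < hydrogen_atom_nm}")
```

Seventeen doublings between the mid-2020s and 2050 take features to roughly 0.014 nanometers, well below the ~0.1 nm width of a hydrogen atom, which is why straightforward shrinking can’t continue that long.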
As a result, many people predict Moore’s Law will peter out sometime in the early 2020s, when chips feature components that are only around 5 nanometers apart. What happens after that? Does technological progress grind to a halt, as though we were stuck today using the same Windows 95 PC we owned a couple of decades ago?
Not really. Here are seven reasons why the end of Moore’s Law won’t mean the end of computing progress as we know it.
Moore’s Law won’t end ‘just like that’
Imagine the disaster that would befall us if, tomorrow, the laws of thermodynamics or Newton’s three laws of motion ceased to function. Moore’s Law, despite its name, isn’t a universal law of this kind. Instead, it’s an observable trend like the fact that Michael Bay tends to release a new Transformers movie in the summer — except, you know, good.
Two Intel 8080 chips from the 1970s (top-left), the Intel 486 and Pentium from 1989 and 1992 (top-right), the Dual-Core Xeon Processor 5100 from 2006, and the i7 8th Generation from 2017.
Why do we bring this up? Because Moore’s Law isn’t going to just end like someone turning off gravity. Just because we no longer have a doubling of transistors on a chip every 18 months doesn’t mean that progress will come to a complete stop. It just means that the pace of improvement will slow.
Picture it like oil. We’ve extracted the easy-to-reach stuff near the surface; now we need technologies like fracking to reach the tougher-to-get resources.
Better algorithms and software
Think of those NFL or NBA stars who make so much money that they never have to worry about stretching their existing savings. That’s a slightly messy, but still pertinent, metaphor for the relationship between Moore’s Law and software.
Squeezing more performance out of the same chips will become a much higher priority.
While there’s beautifully coded software out there, much of the time programmers haven’t had to worry about streamlining their code, because they know that next year’s processors will run it faster. If Moore’s Law no longer delivers the same advances, however, that approach can no longer be relied upon.
Squeezing more software performance out of the same chips will therefore become a much higher priority. For speed and efficiency, that means creating better algorithms. Beyond speed, hopefully it will mean more elegant software, with a greater focus on user experience, look and feel, and quality.
Even if Moore’s Law were to end tomorrow, optimizing today’s software would still provide years, if not decades, of growth — even without hardware improvements.
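A toy example of what better algorithms buy on identical hardware: two functions below answer the same question ("do any two numbers in this list sum to a target?"), one with a quadratic pair-by-pair scan and one with a linear single pass. The speedup comes purely from software, with no new silicon involved.

```python
import time

def has_pair_with_sum_slow(nums, target):
    """O(n^2): check every pair of values."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_with_sum_fast(nums, target):
    """O(n): one pass, remembering values seen so far in a set."""
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

nums = list(range(4_000))
target = -1  # no pair sums to -1, so both functions must scan everything

t0 = time.perf_counter()
slow = has_pair_with_sum_slow(nums, target)
t1 = time.perf_counter()
fast = has_pair_with_sum_fast(nums, target)
t2 = time.perf_counter()

assert slow == fast == False  # same answer, very different cost
print(f"quadratic: {t1 - t0:.3f}s, linear: {t2 - t1:.3f}s")
```

On a few thousand inputs the quadratic version already takes on the order of a second while the linear one finishes in milliseconds, and the gap widens rapidly as inputs grow.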
More specialized chips
With that said, one way for chip designers to overcome the slowing down of advances in general purpose chips is to make ever more specialized processors instead. Graphics processing units (GPUs) are just one example of this. Custom specialized processors can also be used for neural networks, computer vision for self-driving cars, voice recognition, and Internet of Things devices.
As Moore’s Law slows, chipmakers will ramp up production on specialized chips. GPUs, for example, are already a driving force for computer vision in autonomous cars and vehicle-to-infrastructure networks.
These special designs can boast a range of improvements, such as greater levels of performance per watt. Companies jumping on this custom bandwagon include market leader Intel, Google, Wave Computing, Nvidia, IBM, and more.
Just like better programming, the slowdown in manufacturing advances compels chip designers to be more thoughtful when it comes to dreaming up new architectural breakthroughs.
It’s no longer just about the chips
Moore’s Law was born in the mid-1960s, a quarter century before computer scientist Tim Berners-Lee invented the World Wide Web. While the theory has held true ever since then, there’s also less need to rely on localized processing in an age of connected devices. Sure, a lot of the functions on your PC, tablet or smartphone are processed on the device itself, but a growing number aren’t.
With cloud computing, a lot of the heavy lifting can be carried out elsewhere.
Cloud computing means that a lot of the heavy lifting for big computational problems can be carried out elsewhere in large data centers, using massively parallel systems that utilize many, many times the number of transistors in a regular single computer. That’s especially true for A.I. intensive tasks, such as the smart assistants we use on our devices.
By having this processing carried out elsewhere, and the answer delivered back to your local machine when it’s calculated, machines can get exponentially smarter without having to change their processors every 18 months or so.
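The offloading idea can be sketched in a few lines. Here a deliberately heavy computation is split into chunks and farmed out to parallel workers; a local process pool stands in for the remote data center, so this is an illustration of the pattern rather than real cloud code:

```python
# Sketch of the offloading pattern: a big job is split into chunks,
# the chunks are processed in parallel "elsewhere," and only the
# combined answer comes back. A local process pool plays the role of
# the data center here.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately naive)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def count_primes_parallel(limit, workers=4):
    """Split [0, limit) into chunks and sum the per-chunk counts."""
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # last chunk absorbs the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(count_primes_parallel(50_000))  # 5133 primes below 50,000
```

The client-side code only ever sees `count_primes_parallel`; whether the chunks run on local cores or on thousands of machines in a data center is invisible to it, which is exactly the point of the cloud model described above.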
New materials and configurations
Silicon Valley earned its name for a reason, but researchers are busy investigating future chips which could be made of materials other than silicon.
For example, Intel is experimenting with new ways to pack transistors onto a chip, such as building them in vertical 3D structures instead of laying them flat. Materials based on elements from the third and fifth columns of the periodic table could also take over from silicon because they carry electrons faster.
Right now, it’s not clear whether these substances will be scalable or affordable, but given the combined expertise of the tech industry’s finest — and the incentive that will go along with it — the next semiconductor material could be out there waiting.
Quantum computing
Quantum computing is probably the most “out there” idea on this list. It’s also the second most exciting. Quantum computers are, right now, an experimental and very expensive technology. They are a different animal from the binary digital electronic computers we know, which are based on transistors.
Instead of encoding data into bits that are either 0 or 1, quantum computing deals with quantum bits (qubits), which can be 0, 1, or a superposition of both at the same time. Long story short? These superpositions could make quantum computers much faster and more efficient than today’s mainstream computers.
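The superposition idea is easy to model on paper. The sketch below represents one quantum bit as a two-number state vector, uses the standard Hadamard operation to put it into an equal superposition of 0 and 1, and applies the Born rule (measurement probabilities are the squared amplitudes). It simulates the math only, not any real quantum hardware:

```python
# A classical bit is exactly [1, 0] ("0") or [0, 1] ("1"). A quantum bit
# can hold any normalized mix of the two amplitudes, which is the
# superposition described above.
import math

ZERO = [1.0, 0.0]  # the qubit prepared in state "0"

def hadamard(state):
    """Standard Hadamard gate: sends a basis state to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitudes."""
    return [amp ** 2 for amp in state]

superposed = hadamard(ZERO)
p0, p1 = probabilities(superposed)
print(f"P(measure 0) = {p0:.2f}, P(measure 1) = {p1:.2f}")  # 0.50 each
```

The catch, and the reason quantum hardware is hard, is that a simulation like this needs 2^n amplitudes for n qubits, so classical machines run out of memory almost immediately; real qubits hold those amplitudes physically.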
Making quantum computers carries plenty of challenges (they need to be kept incredibly cold for one thing). However, if engineers can crack this problem we may be able to trigger enormous progress at a pace so rapid it would make Gordon Moore’s head spin.
Stuff we can’t think of yet
Very few people would have predicted smartphones back in the 1980s. The idea that Google would become the giant that it is or that an e-commerce website like Amazon would be on track to become the first $1 trillion company would have sounded crazy at the start of the 1990s.
The point is that, when it comes to the future of computing, we’re not going to claim to know exactly what’s around the corner. Yes, right now quantum computing looks like the big long-term hope post-Moore’s Law, but chances are that in a few decades computers will look entirely different from the ones we use today.
Whether it’s new configurations of machines, chips made out of entirely new materials, or new types of subatomic research that open up new ways of packing transistors on to chips, we believe the future of computing — with all the ingenuity it involves — will be A-okay.
NASA’s planet-hunting deep space telescope is about to run out of fuel
The Kepler space telescope is running on empty, and there are no places to fill up when you’re 94 million miles from Earth.
Charlie Sobeck, an engineer for the Kepler mission, announced in an update that the end is near for the nine-year old deep space observatory. “At this rate, the hardy spacecraft may reach its finish line in a manner we will consider a wonderful success,” he wrote. “With nary a gas station to be found in deep space, the spacecraft is going to run out of fuel. We expect to reach that moment within several months.”
Kepler was launched on March 6, 2009, on what was originally envisioned as a three-and-a-half-year mission. The spacecraft was guided into a solar orbit, trailing the Earth as it circles the sun, on a quest to find Earth-sized planets orbiting distant stars.
The Kepler telescope can’t actually “see” those distant planets, of course. Rather, it looks for a tiny, periodic dip in a star’s brightness as a planet passes in front of it. Repeated observations can reveal the size and orbit of the planet.
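The core of the transit method can be sketched in a few lines. Real pipelines fit detailed transit models and fold many orbits together to average out noise; this toy version, run on synthetic data, just flags the stretches where measured brightness dips below a threshold:

```python
# Toy transit detection: flag index ranges where a star's measured flux
# drops below its baseline by more than a threshold depth. The spacing
# between flagged ranges gives the orbital period; the depth constrains
# the planet's size relative to its star.

def find_transits(flux, baseline=1.0, depth_threshold=0.005):
    """Return (start, end) index ranges where flux dips below the threshold."""
    dips, start = [], None
    for i, f in enumerate(flux):
        in_dip = f < baseline - depth_threshold
        if in_dip and start is None:
            start = i
        elif not in_dip and start is not None:
            dips.append((start, i))
            start = None
    if start is not None:
        dips.append((start, len(flux)))
    return dips

# Synthetic light curve: flat at 1.0 with two 1%-deep, 5-sample transits.
flux = [1.0] * 100
for i in list(range(20, 25)) + list(range(70, 75)):
    flux[i] = 0.99

print(find_transits(flux))  # [(20, 25), (70, 75)]
```

Real stellar photometry is far noisier than this flat synthetic curve, which is why Kepler needed years of repeated observations to confirm each planet.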
Kepler has discovered hundreds of exoplanets over the past nine years. Its mission could have ended in 2013 when a reaction wheel on the spacecraft broke, making it unable to maintain its position relative to the Earth.
The new Kepler mission, called K2, began using the pressure of sunlight to maintain its orientation. Like steering into the current on a river, the new technique let the telescope shift its field of view for a new observation every three months. The team initially estimated that the spacecraft could conduct 10 of these “campaigns” before ending its mission, but it’s already on its 17th.
The fuel that Kepler uses is hydrazine monopropellant, as Sobeck explained in a podcast about the mission. “It’s just one fluid that when it goes through the thrusters it ignites, and it provides thrust,” he said. “It’s pressurized in the tank, and that’s what drives it in to the thrusters, down fuel lines just like you have your lines in your car.”
One of the challenges is to retrieve the data that’s already stored on the data recorder. The last drops of fuel will be used to rotate the spacecraft so its parabolic dish is pointed at the Earth. “The data that we’ve spent so much time and effort to get, we want to get it to the ground,” Sobeck said. “It doesn’t help us if it lives on the spacecraft forever. We’ve got to get it to the ground.”
Although this may be the end of Kepler, a new planet-hunter is scheduled to take to the skies later this spring. TESS (Transiting Exoplanet Survey Satellite) will be launched aboard a SpaceX rocket on a mission to survey the 200,000 brightest stars nearest the sun for evidence of exoplanets.
In bid to compete with Amazon, Walmart files patents for farming drones
Walmart has been expanding the reach of its grocery business for several years, and may be looking to use technology to make its supply chains more efficient. The retail giant has filed patents for six drones that would help automate the farming process, Business Insider reported. The full details of the drones haven’t been revealed, but we do know that one is meant to pollinate crops, one would work to protect plants from pests, and a third would keep an eye on plant health.
While this would give Walmart more control over its supply chain, it is unlikely that the company is planning on going into the farming business. Instead, Walmart will sell these drones to partner farms in an attempt to make them more efficient. On the consumer side of things, this could mean higher supplies of fruits and vegetables and lower prices, though nothing is certain.
Paula Savanti, a senior consumer analyst at Rabobank, told Business Insider that she believes the drones will give Walmart more insight into what is happening on its farms, and allow for better response to changes in supply.
“I’m guessing that any tech that’s geared toward improving efficiency at the farm level would benefit them. It would allow them to anticipate supply problems and adjust accordingly,” Savanti said.
Overall, this would help make Walmart’s supply chain more predictable and help it better compete with online retailers — most notably Amazon. Savanti noted that the rise of Amazon has triggered a change in the way retailers look at technology. It is no longer enough to simply have a well-designed website.
“Part of the ‘Amazon effect’ is making these [retail] companies start looking into investments in different areas beyond ecommerce,” Savanti said. “It forces them to redirect investments to improving their technological capabilities in general — not just at the end of the supply chain, but in the beginning as well.”
Amazon may have spelled the end of the road for a lot of bookstores and electronics outlets, such as Circuit City, but consumers are still fairly new to the concept of online grocery shopping. Companies like Walmart still have a chance to compete with Amazon in that field.
Charge your phone on the way to your destination with iOttie’s $42 QI Wireless Car Mount
Stay safe and charged up with the iOttie Easy One Touch.
iOttie’s Easy One Touch QI Wireless Car Mount is on sale for only $42.46 at Amazon right now when you enter the promo code S9SAVE15 at checkout. That saves you $8 off its regular price, which hardly ever sees a discount.

This phone mount features wireless charging and can fit most devices securely. It has a telescopic arm that can extend up to eight inches and pivot up to 255 degrees. It also has a built-in USB port so you can charge a second device with it.
Almost 450 reviewers rated this item with 3.9 out of 5 stars at Amazon.
See at Amazon
The rise and rise (and rise) of ’Fortnite’
It’s safe to say that, when a video game that counts Drake among its fans has breakfast TV shows around the world discussing its effect on younger players, it has truly made it. No, we’re not talking about Grand Theft Auto, but Fortnite, Epic Games’ massively multiplayer shooter that has over 40 million players across consoles and PC, and continues to grow at a rapid pace.
When Fortnite launched as a paid Early Access game in July 2017, it was solely as a PvE (player vs. environment) experience. Players completed levels by collecting materials and crafting super-elaborate bases to repel hordes of zombies. Reviews were mostly positive, but Epic kept one eye on the success of another Early Access game — PlayerUnknown’s Battlegrounds (PUBG) — and quickly spun out a new, free-to-play mode, “Battle Royale,” the following September.
The premise of this new mode was simple: 100 people parachute onto an island with only the clothes on their back and a pickaxe. Players can go it alone or enter as a team of up to four. As they forage for weapons and defensive items and generally hunt rival players, the game slowly restricts the game area by way of a “Storm Eye,” which players must remain inside to survive. It’s this mechanic that keeps the game active: The eye periodically shrinks and players are forced to converge on the same areas on the map or risk checking out early.
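That shrinking-circle mechanic is simple to model. The sketch below interpolates a circular safe zone between an old and a new configuration and checks whether a player is inside it; all coordinates and radii here are invented for illustration and are not taken from the actual game:

```python
# Sketch of the "Storm Eye" mechanic: a circular safe zone that shrinks
# in phases toward a new center, forcing players to converge. Values
# below are made up for illustration.
import math

def shrink_zone(center, radius, new_center, new_radius, t):
    """Interpolate the safe zone at time t in [0, 1] through one phase."""
    cx = center[0] + (new_center[0] - center[0]) * t
    cy = center[1] + (new_center[1] - center[1]) * t
    r = radius + (new_radius - radius) * t
    return (cx, cy), r

def in_safe_zone(player, center, radius):
    """A player outside the circle is in the storm and takes damage."""
    return math.dist(player, center) <= radius

center, radius = (0.0, 0.0), 1000.0        # current zone
new_center, new_radius = (200.0, 100.0), 400.0  # next, smaller zone

player = (650.0, 300.0)  # a player who stays put through the phase
for t in (0.0, 0.5, 1.0):
    c, r = shrink_zone(center, radius, new_center, new_radius, t)
    print(f"t={t}: radius={r:.0f}, player safe={in_safe_zone(player, c, r)}")
```

A stationary player who is comfortably safe at the start of the phase ends up outside the final circle, which is exactly the pressure that keeps matches from stalling.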
If that idea sounds familiar, it’s because the gameplay loop is what made PUBG so popular in the first place. Battle-royale games existed before — both Rust and H1Z1 enjoyed some level of success — but PUBG, with its expertly designed map and tight shooting mechanics, was the first to truly resonate with gamers.
By September, PUBG had broken the record for most concurrent players on Steam, reporting 1,342,857 players versus Dota 2’s previous milestone of 1,295,114. However, December was when the game truly exploded. It took home “Best Multiplayer Game” at the 2017 Game Awards, and before long the game had 30 million players and concurrents rose to 3.1 million, helped by its release on Xbox One.
When Fortnite: Battle Royale appeared, the similarities between the two weren’t lost on people. Chang Han Kim, CEO of PUBG Corporation, slammed Epic for “replicating” the game and threatened to take further action. The situation was complicated by the fact that PUBG uses Epic Games’ Unreal Engine.
By December, Fortnite was roughly level with PUBG in total players but still trailed it in concurrent numbers. By January 15th, though, the game had added a further 10 million players. On January 19th, Epic said another 5 million people had picked up the game, bringing the total figure to 45 million. It’s likely that the second surge was prompted by Battle Royale’s updated map, which went live on January 18th. The update added various new locations to the game world, including a motel and a big city, as well as biomes, which provided different areas with their own unique environments.
That success continued into February. On February 8th, Epic announced that Battle Royale had topped its rival’s concurrent player record, amassing an impressive 3.4 million users (the previous record was 3.2 million). The game was seeing unprecedented demand: rewind a fortnight and that number was around the 2 million player mark. Up until that point, both Fortnite and PUBG had been enjoying growth independently of each other, but February saw the latter’s numbers fall for the first time ever. In January, PUBG had an average of 1.58 million players, with a peak of 3.26 million. In February, those numbers slid down to 1.39 million average and 2.93 million peak.
As Fortnite rose, streamers saw exponential rises in audience numbers. Tyler “Ninja” Blevins is one such personality. Blevins is a former professional Halo player who went all-in on streaming, riding the early wave of interest in H1Z1 before turning his attention to PUBG and then Fortnite. Such was the interest in streams of Epic’s free-to-play hit that Ninja’s broadcasts regularly began pulling in more than three times his usual average, consistently reaching 100,000 concurrent viewers at a time.
On March 12th, Kotaku put his total Twitch subscribers at 130,000. Thanks to Twitch Prime and the streaming platform’s varying subscription tiers, it’s impossible to say exactly how much money Blevins would make from those numbers, but it would be at least $350,000 per month. His enthusiastic mannerisms and reactions to kills endear him to younger viewers, but it’s his high level of skill that has won over a more diverse group of fans, including the musician Drake.
playing fort nite with @ninja https://t.co/OSFbgcfzaZ
— Drizzy (@Drake) March 15, 2018
Drake’s interest started with an Instagram follow, and before long Ninja promised the duo would team up for a Twitch livestream. On March 15th, Epic Games and Twitch employees looked on in amazement as Ninja not only buddied up with Drake but also rapper Travis Scott, Pittsburgh Steelers wide receiver JuJu Smith-Schuster and Kim Dotcom.
Word of Drake’s involvement quickly spread and made Ninja’s streams even more popular than usual. When Drake tweeted that he was streaming live, however, records were broken. Buoyed by some of the rapper’s 36.9 million Twitter followers, Ninja’s stream passed 635,000 concurrent viewers, destroying the Twitch record for a single streamer. It also pushed his subscriber count to over 180,000, boosting his monthly earnings from the platform close to $500,000.
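Both earnings estimates are simple multiplication once you assume a per-subscriber payout. Twitch’s standard deal pays streamers roughly half of the $4.99 base subscription price, and top streamers reportedly negotiate more; the $2.70-per-subscriber figure below is our assumption for illustration, not a disclosed number:

```python
# Back-of-the-envelope check on the earnings figures quoted above.
payout_per_sub = 2.70  # assumed average payout, $/subscriber/month

for subs in (130_000, 180_000):
    print(f"{subs:,} subs -> ~${subs * payout_per_sub:,.0f}/month")
```

At that assumed rate, 130,000 subscribers works out to roughly $351,000 a month and 180,000 to roughly $486,000, consistent with the “at least $350,000” and “close to $500,000” figures above.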
As Fortnite’s popularity increased, so too did general interest in the game. Both Good Morning America and the UK’s This Morning show featured slots discussing the addictive nature of the game and parents’ worries about a possible desensitization to violence. The game does involve classic shooter elements, but it has a vibrant, cartoony art style that doesn’t show blood or gore.
Does Fortnite deserve its place as one of the world’s most popular games? Absolutely. Unlike shooters like Overwatch or Call of Duty, Fortnite doesn’t attempt to put as much emphasis on player ranks. Matchmaking isn’t based on a user’s skill level, which gives players the opportunity to come face-to-face with players both more and less proficient at the game. This is key: If new or lower-skilled players are constantly matched against people who are on or around their level, they may find it hard to make an impact and suddenly the game becomes a grind.
As with other battle royale games, loot is randomly dropped all over the map, and downed players drop any loot they’ve collected. No two visits to the same location will offer the same weapons, health packs and shields. At the start of a game, this is the ultimate leveler: Highly skilled players like Ninja can land, race to a gun, and be beaten to it and eliminated by someone playing for the first time. After being killed, players can either spectate their opponents or join another game.
Fortnite’s success isn’t accidental. Free-to-play gives it a considerable advantage over PUBG, as gamers can try it out with no risk. Epic makes almost all its money from in-game items. Gamers can choose to buy a cheap Battle Pass, which unlocks certain in-game items and opens up new challenges, or simply buy the skins or emotes they like via the in-game store. There’s no pay-to-win; it’s purely cosmetic. That’s an easy sell to parents, who won’t have to put money down for yet another video game.
Unlike PUBG Corporation, Epic is an established developer. Its Unreal game engine is used by many companies, bringing in steady revenues. Its experience and cash reserves allowed it to quickly scale up support and resources, while PUBG slowly ramped up development as player numbers swelled, only releasing “version 1.0” in December of last year. Epic has an entire team dedicated solely to Battle Royale, which pumps out new in-game items, matchmaking modes and bug fixes roughly once a week. Just this month, a new “20 vs. 20 vs. 20 vs. 20 vs. 20” mode has arrived, offering a more clan-based take on the royale formula.
This rapid release cycle also enables the company to tap into internet culture. Memes are integrated into the game while they’re still fresh, with in-game emotes of famous dances or gestures. And the success of the game has also seen it begin to have an impact on culture in the real world. Soccer and rugby players have recreated moves from the game to celebrate scoring, and it’s likely we’ll see NFL stars do the same when the new season starts.
Following Drake’s recent appearance, gamers are now calling for a “Hotline Bling” emote. You can bet Epic is doing everything it can to make that happen.
how drake be giving ninja items pic.twitter.com/vcpeNwu0c5
— JhbTeam (@JhbTeam) March 15, 2018
Amazon hires former FDA exec for secret health care team
Amazon has made a big hire for its secret health care division internally known as 1492, according to CNBC: FDA’s first chief health informatics officer Taha Kass-Hout. The e-commerce giant reportedly brought him on board to work under former Google X director Babak Parviz in a business development role. What that role is remains a mystery — it is a secret division, after all — but based on Kass-Hout’s previous jobs, CNBC believes he might help Amazon conjure up a way for people to get easier access to their health records.
His LinkedIn profile describes him as a “physician executive empowering consumers via sustainable health data ecosystems,” and the last job in his list is SVP/Chief Digital Health and Intelligence Officer at Michigan’s Trinity Health. Kass-Hout was FDA’s health informatics head for three years from 2013 to 2016, but before that he also worked for the CDC in the same capacity.
Since other tech titans like Apple are also developing ways to make it easier to access your medical history, Amazon working on a similar project doesn’t sound far-fetched. As CNBC noted, medical errors are one of the leading causes of death in the US — a quick way to get a person’s medical records could mean the difference between life and death.
In addition, Kass-Hout could also help Amazon with the regulatory process if, say, the tech giant is launching new health hardware. Either way, it looks like the corporation is cooking up something big for the space, and we’ll just have to wait and see if it truly is a new health device or medical history software.
Source: CNBC
Whistleblower explains how Cambridge Analytica ‘exploited’ Facebook
Last night Facebook announced bans against Cambridge Analytica, its parent company and several individuals for allegedly sharing and keeping data that they had promised to delete. This data reportedly included information siphoned from hundreds of thousands of Amazon Mechanical Turkers who were paid to use a “personality prediction app” that collected data from them and also anyone they were friends with — about 50 million accounts. That data reportedly turned into information used by the likes of Robert Mercer, Steve Bannon and the Donald Trump campaign for social media messaging and “micro-targeting” individuals based on shared characteristics.
Now, reports by The New York Times and The Guardian reveal what was behind the timing of that Friday night news dump. According to reporters from both outlets, which were collaborating, the social network had downplayed their reporting and even threatened to sue The Guardian, over what they learned from documents and a whistleblower (who Facebook included in its ban list): Christopher Wylie.
Wylie’s account largely fills in the gaps from Facebook’s statement. While Facebook didn’t explain how many users had their data snagged by the “thisisyourdigitallife” app, the reports say it pulled private info from more than 50 million people even though they didn’t know about it or consent — an act that at the time was allowed under Facebook’s rules. About 30 million of those (a number previously reported by The Intercept) contained enough information for Cambridge Analytica to match profiles with other data and complete its “psychographic” work — learning about individuals and trying to target them with personally tailored messages.
That’s the bit that causes Wylie to describe his former employer as engaging in something more like psychological warfare than simple “data analysis.” Cambridge Analytica maintains that “When it subsequently became clear that the data had not been obtained by GSR in line with Facebook’s terms of service, Cambridge Analytica deleted all data received from GSR.” It also claims none of the data was used during the 2016 campaign; however, the NYT notes that its CEO previously said the Trump efforts drew on psychographics it had created for the Ted Cruz campaign.
As for Facebook, that company is staunchly pushing the line that this does not represent a “breach” or a “leak.” Deputy General Counsel Paul Grewal tweeted that “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.” In a series of now-deleted tweets, CSO Alex Stamos said the app’s creator “lied” to users and Facebook about what he was using the data for, but said his use was consistent with its API at the time, and the way some APIs for contact sharing work on platforms like Android and iOS. Facebook updated last night’s release with a new statement:
The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.
That will not be enough to avoid further scrutiny, however — according to The Guardian, the British Information Commissioner’s Office and the Electoral Commission are investigating. In the US, Massachusetts attorney general Maura Healey announced her office is opening an investigation, and they probably won’t be the only ones.
That’s above and beyond the professor suing Cambridge Analytica in the UK to find out the full extent of the data it has acquired. David Carroll’s lawsuit was filed yesterday, and a crowdfunding campaign to back the effort has already raised £30,000.
Source: New York Times, The Guardian (1), (2)
V-Moda’s Crossfade 2 review: Pricey, delightful headphones that can take a beating
When it comes to headphones that run $200 or more, Beats is the name that comes up most often in casual conversation. And while Beats often have decent sound, their build quality and design often leave a lot to be desired. This is where a company like V-Moda really shines, and its Crossfade 2 is a fantastic example of why. Where Beats uses plastic frames with weights in them to feel like quality, V-Moda designed the Crossfade 2 from the ground up to be a work of art.
A Box as Beautiful as the Product

Unboxing the Crossfade 2 is one of those rare occasions in which opening the box is almost as much of a pleasure as using the product. Right off the bat, it reeks of “premium.” Made of thick gauge, lightly textured card stock, the box sleeve is the first thing you see when opening the V-Moda.
A vegan leather carry handle is bolted in place by square, pyramidal studs that match the snap-latch securing the lid in place.
Before you can do anything else, you have to literally cut the ribbon on your new headset – a crimson, silky ribbon secured around the box, sleeve and lid. The attention to detail borders on the ridiculous at times. You can easily, and correctly, guess that a company that puts this much effort into packaging would create an equally beautiful product.
Under the lid is a cap protecting the top of the exoskeleton case with firm, soft foam. It’s a simple affair with matte black card stock glued to the aforementioned foam and emblazoned with a glossy V-Moda logo. All of this adds to the impression that a lot of love went into designing the product.
The Exoskeleton

The included hard-sided case is, as you might expect by now, every bit as premium as the box and headset itself. It features a symmetrical, almost clamshell aesthetic, with adjustable vents on either side of the shell to air out your headphones after use. Even the zipper feels premium. The teeth are hidden and guided by strips of stiff fabric, which ensure it’ll never catch or jam.
Inside the case is a bright orange microfiber lining, as well as two V-shaped elastic storage compartments – perfect for keeping your USB and auxiliary cables. To top it off, the V-Moda branding is stitched into the top patch of the case. The Crossfade 2 fits perfectly inside, but only when completely collapsed, so they’ll require adjusting each time you wear them.

Built like a Titanium Gymnast
Upon first withdrawing the $330 Crossfade 2 from its case, you can’t help but think to yourself how ordinary it looks. Save for the shiny steel forks attached to the ear cups, the headset is completely matte black. It’s neither overly heavy (309 grams), nor particularly bulky (I’m looking at you, Beats).

That all changes, though, when you unfold the ear cups for the first time. They pivot downward on two sets of hinges (one is on the frame and the other is on the cups) and snap into place satisfyingly.
The solidness of the click and smoothness of the motion really showcase the durability of the Crossfade 2; they can twist in pretty much any direction you want without fear of breaking. This is thanks to a nearly indestructible “SteelFlex” headband covered in vegan leather (which could mean seaweed or cork, or it could mean PVC or polyurethane) and memory foam.
There’s very little horizontal movement in the frame. The SteelFlex headband and steel forks offer a surprising amount of flexibility without fear of damage. V-Moda claims the Crossfade 2 can survive having its headband completely flattened 10 times without any ill effects. As any gamer can attest, that kind of bend is typically a death knell for a headset.
Sleek and Industrial
A second glance reveals a more subtle, and somehow obvious, boldness to its design. The twenty-plus hex screws holding the thing together are bare for all the world to see, and the nylon-braided cables run uncovered from the headband to the cans. The stock model features metal earcups and cover. These are hexagonal in shape and secured with tamper-proof hex screws, giving it a sleek, industrial look.

It’s an odd hybrid of aesthetics that combines for a very unique styling. The covers that come in the package are sleek and stylish, but you can also order custom ones to make it truly yours.
Functionally, the Crossfade 2 is both easy to use and occasionally a bit finicky. Rather than a power button, V-Moda opted for a three-stage switch on the side of the right ear cup. Stage one is off, stage two is on, and stage three is pairing mode.
The volume controls are similarly odd with three buttons on top of that same ear cup. The front and back buttons are volume up and down, respectively, and the third button is a multifunction, contextual V button. This button is responsible for a number of important features, including playback control, call control, and summoning your mobile Assistant of choice.
Playback control is a bit tricky: press twice for next or three times for back. It’s not the easiest function to get right every time, but it’s not terrible. Consider that you’ll mostly be using it when connected to a phone that can control playback much more easily.
High-End Sound, High-End Price
I’m not an expert in audio quality. I’m a layman when it comes to high-end sound equipment with my main over-ear headphone experience coming from Turtle Beach and Razer gaming headsets. Keep this in mind as I discuss the sound quality of the Crossfade 2. With that caveat out of the way, though… Wow. The Crossfade 2 really blew me away with its sound.

My $249 Turtle Beach Stealth 700 has a pair of 50mm drivers with a frequency response of 20Hz-20,000Hz. The $330 Crossfade 2 features dual 50mm, dual-diaphragm drivers with a 5Hz-40,000Hz range. That means the Crossfade 2’s quoted range extends two full octaves deeper into the bass and an octave higher into the treble than the Stealth 700’s. Turtle Beach has a reputation for making some of the most popular gaming headsets on the market. And that’s well earned! But even its most expensive headset doesn’t hold a candle to the Crossfade 2.
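Since pitch perception is logarithmic, the gap between those spec sheets is easier to grasp in octaves (each octave doubles the frequency) than in raw hertz. A quick sketch of the arithmetic, using the frequency figures quoted above:

```python
import math

def octaves(f_low: float, f_high: float) -> float:
    """Number of octaves between two frequencies (each octave doubles the frequency)."""
    return math.log2(f_high / f_low)

# Quoted frequency-response limits, in Hz
stealth700 = (20, 20_000)   # Turtle Beach Stealth 700
crossfade2 = (5, 40_000)    # V-Moda Crossfade 2

# How much further the Crossfade 2's quoted range extends at each end
low_ext = octaves(crossfade2[0], stealth700[0])   # bass extension: 20 Hz down to 5 Hz
high_ext = octaves(stealth700[1], crossfade2[1])  # treble extension: 20 kHz up to 40 kHz

print(f"{low_ext:.1f} octaves lower, {high_ext:.1f} octave higher")  # 2.0 octaves lower, 1.0 octave higher
```

Whether anyone can actually hear content at 5Hz or 40kHz is another question, but it illustrates why the numbers shouldn’t be read as “twice the range”: halving the bass floor buys far more musical territory than doubling the treble ceiling.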
In the World
My experience with both headsets backs up those numbers. Be it gaming or listening to music, the Crossfade 2 is simply a joy. It’s got a booming bass that doesn’t drown out the complexity of the highs, and the massive frequency range ensures you catch every detail in the sound. Whether straining to hear the footfalls of an enemy in PlayerUnknown’s BattleGrounds or picking out each and every note of Voodoo Child, V-Moda’s Crossfade 2 is excellent in any situation.
I used the Crossfade 2 for gaming (mobile and console) and listening to music at home and on the go. While most of my music tastes tend toward rock, I also did a few test runs with pop, R&B, EDM, and classical styles to get a feel for how the Crossfade 2 sounded when blasting each genre. In short, it sounds awesome no matter what you put on. Gaming is no different. I played shooters, puzzlers, MOBAs, and RPGs, and all of them sounded wonderful.
V-Moda estimates the Crossfade 2’s battery life at 14 hours. In practical usage, I found it closer to 12 or so. I’m sure that varies based on the features you’re using and the volume at which you’ve set it. Honestly, though, battery life isn’t a concern as long as you’re carrying around the included 3.5mm cable. When plugged in, the Crossfade 2 immediately becomes a zero-latency, zero-power analog headset whose quality isn’t diminished in the least.
Value
$330 is a lot to spend on a headset, no matter who you are. But, in a world where Beats headphones are synonymous with luxury, people are willing to shell out the cash. Do yourself a favor and go against the crowd. If you’re looking to buy an expensive headset, don’t buy a pair of Beats. Buy the boldly-styled, built-like-a-titanium-gymnast, excellent-sounding V-Moda Crossfade 2.
The Crossfade 2 retails for $330, and can be bought from any number of retailers, including Walmart and B&H. However, at the time of this writing the Matte Black model is available from Amazon for just $270, a bargain.
How to get March Madness scores with Google Assistant

Ok Google… who broke my bracket today?
The Big Dance is just beginning, and while there are apps aplenty to keep bracket-makers and die-hard fans watching and trash-talking, not all of us need that. Some of us just want to see how far our local team has gotten so far. Some of us just need a quick way to find out what channel the next good game’s going to be on. Some of us just want to get the score and then get on with our lives.
Some of us just need Google Assistant.
First, a word about the women’s teams

According to Google Home’s sports support page, Google Home only covers basketball scores from the NBA, the WNBA, and Division 1 Men’s teams of the NCAA. So, if you want to follow the Louisville men, you’re fine, but if you want to follow the women, you’ll have to look to other apps instead.
Looking up a particular team score

If you just want to see how far your alma mater has gotten or if the home team is playing today, Google Assistant is all over this. In fact, when you ask about March Madness, Google Assistant recommends asking for specific teams rather than the whole field (we’ll get to why in a moment). There are a few ways to phrase it, but above all else you need to specify that it’s the basketball team; otherwise Google Assistant might default to the football team or regular search results, even though football ended three months ago. Go figure.
- OK Google, is Louisville basketball playing today?
- OK Google, when is the Baylor Bears basketball game?
- OK Google, did Duke basketball win?
- OK Google, who is Notre Dame basketball playing?
First-round games were scheduled Sunday night, and second- and third-round matchups are set hours or days after teams win, so Assistant might not have the matchups the second your team advances. Google Assistant can only tell you about matches that have been piped in from its Knowledge Graph, so keep that in mind if Assistant doesn’t have the next game lined up when you ask.
Following the whole bracket? You’ll have to look elsewhere

If you want to keep up with the whole field, I’m sorry to say that you’re going to have to use an app or the NCAA website, because Google Assistant only shows three games in its search results, and suggests using the NCAA site to see the bracket and live stream the games. During the later rounds, three games might be enough, but during the First Round? Yeah, that’s kinda useless.
So, are you following any particular teams, or do you just want to keep up since it’ll be taking over water cooler conversations for the next few weeks?
If you need apps to watch the games or follow your broken, broken bracket, find them here.
Updated March 2018: This article has been cleaned up and updated for the Big Dance of 2018.