
Archive for March 1

Porsche claims Mission E won’t have Tesla’s performance limits


Porsche knows you’re probably going to compare the Mission E to Tesla’s cars, and it’s determined to prove that its electric performance car is the one to beat. Company EV head Stefan Weckbach has promised that the Mission E has “reproducible” performance Tesla (and particularly the Model S) can’t match. The Model S can only hit its claimed 0-60 MPH time “twice” in short succession, Weckbach claimed, and can’t run at full speed for significant stretches either. The Mission E can maintain its full speed for “long periods,” Weckbach said.

The manager reiterated some of the Mission E’s known abilities, including its rapid charging (20 minutes for 250 miles of driving). He added that the car should be relatively practical, too. On top of decent rear seating, helped by sculpted batteries that allow more foot room, there’s even some respectable cargo space. While the front houses a motor, cooling, and other vital components, it should still offer about 100 liters (about 3.5 cubic feet) of storage. It’s not clear what the back can carry.
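For a sense of scale, here’s the back-of-envelope math behind that charging claim; the roughly 3 miles-per-kWh efficiency figure is our assumption, not a Porsche number:

```python
# Back-of-envelope math behind "250 miles in 20 minutes".
# ASSUMPTION: roughly 3 miles per kWh of efficiency (not a Porsche figure).
miles, minutes = 250, 20
miles_per_kwh = 3.0
energy_kwh = miles / miles_per_kwh          # ~83 kWh delivered
avg_power_kw = energy_kwh / (minutes / 60)  # ~250 kW sustained, on average
print(f"~{energy_kwh:.0f} kWh in {minutes} min -> ~{avg_power_kw:.0f} kW average")
```

Sustaining power on that order is exactly why the 800-volt chargers mentioned below matter.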

The arguments are partly true. Tesla used to limit both the consecutive and overall uses of its Launch Control feature to prevent premature wear on its cars, but it dropped that limit months ago. Porsche has a better argument when it comes to sustained high speed. You need a massive amount of cooling to run an EV at full tilt for an extended time, and that’s not present on Tesla’s upscale sedans and SUV. The new Roadster may change that given its sports car design and three motors.

If you’re sold on Porsche’s concept, you won’t have to wonder where to top up. Porsche’s North America chief, Klaus Zellmer, stressed that the company is equipping all 189 of its US dealerships with 800-volt high-speed chargers, and is working with “other organizations” on a network of charging stations. You’ll probably charge at slower rates most of the time, but the eventual network should help for long (or very enthusiastic) trips.

Digital coin miners purchased more than 3 million graphics cards in 2017


A recent report published by Jon Peddie Research states that cryptocurrency miners purchased more than 3 million add-in graphics cards worth $776 million during 2017. The number is no surprise given the shortage of add-in cards throughout the year, along with the inflated prices customers faced on the cards they actually managed to find. AMD was the big winner of 2017’s cryptocurrency craze, increasing its market share by 8.1 percent.

According to the report, the demand from miners will lessen “as margins drop in response increasingly utilities costs and supply and demand forces that drive up AIB prices.” While that sentence is a rough read, it suggests that many miners already have what they need and likely won’t purchase the newer, higher-priced cards released throughout 2018. The firm may also be basing the expected drop in demand on rumors that Nvidia and possibly AMD will release dedicated cryptocurrency mining cards this year.
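To see why margins matter, here’s a toy payback calculation. The ~$259 average card price follows from the report’s totals above; the revenue and power figures are pure assumptions for illustration:

```python
# Toy mining-margin arithmetic. Only the average card price is derived
# from the report ($776M / 3M cards); everything else is assumed.
card_price = 776e6 / 3e6           # ~$259 average per add-in card
daily_revenue = 2.50               # assumed $ of coins mined per card per day
watts, price_per_kwh = 150, 0.12   # assumed card draw and electricity price
daily_power_cost = watts / 1000 * 24 * price_per_kwh   # ~$0.43 per day
margin = daily_revenue - daily_power_cost
print(f"payback in ~{card_price / margin:.0f} days at these assumptions")
```

Raise the electricity price or the card price and that payback window stretches quickly, which is the squeeze the report describes.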

Despite the drop in demand, Jon Peddie Research believes that graphics add-in card prices will not drop anytime soon. To offset the cost, the firm suggests that PC gamers mine digital coins when they’re not shooting aliens and slaying dragons. 

“GPUs are traditionally a leading indicator of the market, since a GPU goes into every non-server system before it is shipped, and most of the PC vendors are guiding cautiously for Q4’14,” the report states. “The Gaming PC segment, where higher-end GPUs are used, was a bright spot in the market in the quarter.” 

The report shows that the annual shipment growth rate for graphics add-in cards for desktops rose 9.7 percent compared to 2016. Integrated and embedded GPUs fell 8.3 percent, discrete notebook GPUs fell 5.6 percent, and integrated/embedded notebook GPUs fell 6.8 percent. When comparing the third quarter and fourth quarter of 2017, desktop integrated/embedded graphics rose 3 percent and discrete notebook GPUs rose 3.6 percent while all other GPUs took a slight tumble.

Despite 2017’s cryptocurrency craze eating up the add-in graphics card market, gamers will always be the “bread and butter” for Nvidia and AMD regarding GPU sales. Add-in graphics cards are also heavily used by professionals such as 3D animators, architects, video editors, and photographers. Miners didn’t make much of a dent in the “professional” market, given how costly those GPUs can be.

In March, Nvidia is expected to reveal dedicated digital mining cards codenamed “Turing” during its annual developer conference. The name stems from Alan Turing, an English computer scientist, theoretical biologist, mathematician, and cryptanalyst. His knowledge of cryptography helped crack coded messages sent by the Nazis, contributing to the Allies winning World War II. These cards will be based on Nvidia’s latest GPU design codenamed “Volta.” 

Nvidia already supplies a graphics chip for digital coin mining purposes: The Pascal-based P106-100 chip, which is a variant of the GTX 1060 used in add-in cards manufactured by Asus, MSI, and several others. Meanwhile, AMD’s RX 470 chip can be found in dedicated cryptocurrency mining solutions. In both cases, the respective companies tweaked the chips specifically for digital coin mining.  


Wind and solar could supply 80 percent of U.S. energy needs, study says


If the United States were to focus its energy on renewable sources, it could reliably supply 80 percent of its electricity demand through solar panels and wind turbines. That is the result of a study out this week in the journal Energy & Environmental Science, which analyzed 36 years of hourly U.S. weather data to unpack the geophysical barriers holding wind and solar energy back.

To say the U.S. has significant green energy potential is nothing new. Sustainability has long been within our reach with the right amount of effort, investment, and infrastructure. But in the recent study, scientists tried to simplify this assessment and consider how much of our energy needs could be met by these sources, independent of future technologies.

“Previous studies have used complex models with technologies and costs to show that the U.S. could affordably get around 80 percent of our electricity from solar and wind,” Steven Davis, an Earth systems scientist at the University of California, Irvine, and one of the lead authors of the study, told Digital Trends. “We’ve stripped away some of the complexity and in the new paper show that the 80 percent number boils down to natural variability in sun and wind.”

In other words, we could reliably reach that four-fifths goal with current technologies by accounting for seasonal fluctuations in daylight and wind, according to the study. However, if we were to source more than 80 percent of our energy from renewables, we would need significant increases in storage and generation capacity.

“So, for example, we might get 80 percent of electricity from solar and wind with 12 hours’ worth of energy storage,” he said. “But to get 99 percent of our power from those sources alone would require either building twice as many solar panels and wind turbines or else having weeks’ worth of storage.”
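To make that tradeoff concrete, here is a toy simulation in the same spirit; the invented supply curve stands in for the study’s 36 years of weather data, so the absolute percentages mean nothing, but the trend (more capacity or more storage raises the fraction of demand met) is the point:

```python
# Toy model only -- NOT the study's: constant demand served by a
# day/night, seasonally varying source plus a simple storage buffer.
import math

HOURS = 24 * 365   # one year, hourly
DEMAND = 1.0       # constant demand, arbitrary power units

def fraction_met(capacity, storage_hours):
    """Fraction of annual demand met for a given generation capacity
    (in multiples of demand) and hours of storage."""
    cap_store = storage_hours * DEMAND
    level, met = cap_store, 0.0
    for h in range(HOURS):
        daily = max(0.0, math.sin(2 * math.pi * (h % 24) / 24))    # day/night
        seasonal = 0.75 + 0.25 * math.sin(2 * math.pi * h / HOURS) # seasons
        supply = capacity * daily * seasonal
        if supply >= DEMAND:
            met += DEMAND
            level = min(cap_store, level + supply - DEMAND)  # bank the surplus
        else:
            draw = min(DEMAND - supply, level)  # cover shortfall from storage
            level -= draw
            met += supply + draw
    return met / (HOURS * DEMAND)

for cap, hrs in [(4.0, 12), (4.0, 24 * 7), (8.0, 12)]:
    print(f"{cap:.0f}x capacity, {hrs:>3}h storage -> {fraction_met(cap, hrs):.0%} met")
```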

Right now, the main barriers are storage and transmission infrastructure, both of which would require substantial financial investment. Cross-country transmission lines could cost hundreds of billions of dollars. Although that is a lot of money, it’s cheaper than the more than $1 trillion needed to store that amount of electricity in today’s most economical batteries.

In short, the study gives an optimistic outlook for renewables in the U.S., putting seemingly lofty goals within our current reach and emphasizing the importance of energy storage solutions.

“While still a lot to take in, I think what makes the study exciting is that our conclusions don’t rely on assumptions about this or that technology or cost,” Davis said. “Rather, we’re looking at patterns of sun and wind over 36 years and the results describe the fundamental challenge Mother Nature has laid out for us.”



Humanoid firefighting robot will go where it’s not safe for humans to venture


A Walk-Man could one day save your life. No, we’re not talking about the now-retro Sony audio cassette player, but rather a humanoid robot developed by engineers at IIT-Istituto Italiano di Tecnologia, supported by funding from the European Commission. Started back in 2013, the Walk-Man project is designed to serve as a robotic emergency responder that could assist firefighters.

It is able to locate the position of fires, walk toward the offending blaze, and then activate an extinguisher. While it is doing this, it collects images from its environment and sends them back to a human emergency team, who can use the data to analyze the situation and remotely guide the robot. This remote control is carried out by an operator using a virtual interface and sensorized suit, which allows them to relay actions to the avatar-like Walk-Man robot.
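The remote-control pipeline described above boils down to a sense-display-command loop. Here is a minimal, hypothetical sketch; the class and method names are invented for illustration and are not IIT’s actual software:

```python
# Hypothetical teleoperation loop -- names are illustrative, not IIT's code.

class Robot:
    def capture_camera_frame(self):
        return "image-of-environment"       # stand-in for real sensor data

    def execute(self, command):
        print(f"robot executing: {command}")

class OperatorStation:
    def display(self, frame):
        print(f"operator sees: {frame}")    # human team analyzes the scene

    def read_suit_gestures(self):
        return "walk toward fire"           # sensorized suit relays an action

def teleoperation_step(robot, station):
    frame = robot.capture_camera_frame()    # robot collects images...
    station.display(frame)                  # ...and streams them to the team
    command = station.read_suit_gestures()  # operator moves; the suit captures it
    robot.execute(command)                  # the avatar-like robot follows

teleoperation_step(Robot(), OperatorStation())
```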

“The robot will demonstrate new skills, including powerful manipulation, robust balanced locomotion, and physical sturdiness,” project coordinator Nikolaos Tsagarakis told Digital Trends. “To achieve this, the project integrates suitable technologies in compliant robot design, manipulation and locomotion control, perception, and motion planning to develop a humanoid robot capable of walking inside human oriented infrastructures, manipulating human tools and interfaces, and addressing disaster response in hazardous environments.”

The latest iteration of Walk-Man, now in its final evaluation stages, boasts an upgraded upper body that is lighter and more powerful than the first version. The new upper body allowed the team to reduce the overall weight of the robot from 293 to 223 pounds. Despite this reduction, its arms are 40 percent stronger, with a payload capacity of 22 pounds. The onboard computational power has also been increased, along with the robot’s software, to allow it to perform tasks and actions faster than the original system. Its speed is additionally helped by the reduction in weight, while the reduced dimensions of the upper body mean it can now more easily operate in human environments.

During a test run, Walk-Man dealt with a scenario intended to replicate an industrial plant that’s been damaged by an earthquake. With gas leaks and fires making the situation too dangerous for humans, the robot was guided through a damaged room and asked to perform four tasks. These included opening and traversing the doorway, locating the valve which controlled the gas leak and closing it, removing debris in its path, and finally identifying the fire and activating an extinguisher. It passed with flying colors!



HTC’s Vive Focus mobile VR headset uses the same lenses, displays as Vive Pro


A recent hands-on preview of HTC’s Vive Focus mobile virtual reality headset reveals that the device, currently only available in China, relies on the same lenses and displays that will be used in HTC’s next-generation VR headset for the PC: the Vive Pro. It serves as a “premium” alternative to other stand-alone VR headsets sold on the Chinese market, sporting high-end components and a premium price tag of around $525 before tax.

As previously reported, the Vive Focus doesn’t rely on an inserted smartphone. Instead, it contains everything you need for an untethered virtual reality experience. Powering this headset is Qualcomm’s Snapdragon 835 processor typically found in high-end smartphones. Behind each lens is a 1,600 x 1,440 OLED display, higher than the 1,280 x 1,400 per-eye resolution seen with Lenovo’s Mirage Solo headset. Combined, the Vive Focus has a maximum resolution of 1,600 x 2,880. 
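Using only the figures quoted above, here’s how the per-eye pixel counts compare:

```python
# Per-eye pixel counts implied by the resolutions quoted in this article.
vive_focus_per_eye = 1600 * 1440    # 2,304,000 pixels per eye
mirage_solo_per_eye = 1280 * 1400   # 1,792,000 pixels per eye
ratio = vive_focus_per_eye / mirage_solo_per_eye
print(f"Vive Focus pushes {ratio:.2f}x the per-eye pixels of the Mirage Solo")
```

That works out to roughly 1.29 times the pixels per eye.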

As for other features, the Vive Focus sports a refresh rate of 75Hz, slightly below the vomit-preventive 90Hz rendered by the HTC Vive and upcoming Vive Pro. It also provides a 110-degree field of view, a micro SD card slot, built-in Wireless AC connectivity, a built-in battery, and sensors for tracking movement and position. There is no need for a tethered PC or external sensors to track your movements through physical space. 

Typical “mobile” VR headsets, such as Samsung’s Gear VR, rely on a smartphone to provide all the hardware. In this case, the headset relies on a single screen at 2,560 x 1,440 that is divided into two views of 1,280 x 1,440. Audio either travels through the phone’s included headphone jack or its built-in speaker.

One problem with this setup ties directly into performance: It’s a phone after all, so there are multiple tasks running in the background associated with the carrier and messaging along with services related to social networks, Google Play, and so on. Juggling all those processes on top of a VR experience can be taxing on the processor. 

By eliminating the smartphone, a stand-alone VR headset avoids all those extra processes and instead focuses solely on VR. Manufacturers are also beginning to incorporate full motion sensing with world-tracking abilities, so you can experience and move around in VR without worrying about bumping into furniture, walls, people, or annoyed pets, or tripping over cables.

Vive Focus includes a fan to keep the innards cool while the Snapdragon chip does all the rendering and motion processing. This essentially allows the chip to provide better performance versus the same non-cooled chip throttling back because it’s simply getting too hot within the confines of a smartphone. You don’t want a hot chip sitting close to your face, either. 

The headset itself provides six degrees of freedom — translation along forward/backward, left/right, and up/down, plus rotation about three perpendicular axes — while the included controller simply tracks your hand’s orientation (three degrees). Unfortunately, it’s still a mobile-class device; you’re simply not going to see the same high-resolution visuals generated on the PC-tethered Vive and Vive Pro headsets.
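A minimal sketch of that difference in terms of the data each device reports (illustrative types, not HTC’s SDK):

```python
# Illustrative data shapes only -- not HTC's SDK.
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    """Orientation only: rotation about three perpendicular axes, in radians."""
    pitch: float
    yaw: float
    roll: float

@dataclass
class Pose6DoF(Pose3DoF):
    """Adds translation -- left/right (x), up/down (y), forward/backward (z)."""
    x: float
    y: float
    z: float

headset = Pose6DoF(pitch=0.0, yaw=1.2, roll=0.0, x=0.3, y=1.6, z=-0.5)
controller = Pose3DoF(pitch=0.1, yaw=1.0, roll=0.0)  # hand direction, no position
```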

Ultimately, the availability of premium content on Vive Focus will depend on developers and how well they take advantage of the hardware. 



A.I. helps scientists inch closer to the ‘holy grail’ of computer graphics


Computer scientists at the University of California, San Diego, and UC Berkeley devised a way to make animals in movies and video games more realistic by improving the look of computer-generated fur. It might not sound like much but the researchers call photorealistic fur a “holy grail” of computer graphics.

“Creating photorealistic … characters has long been one of the holy grails of computer graphics in film production, virtual reality, and for predictive design,” Ravi Ramamoorthi, a professor of computer science at UC San Diego, who worked on the project, told Digital Trends. “Realistic rendering of animal fur is a key aspect to creating believable animal characters in special effects, movies, or augmented reality.”

To do so, they leveraged artificial intelligence to better capture the way light bounces among the fibers of an animal pelt, which has a surprisingly significant effect on realism.

Existing models were designed to depict human hair and were less focused on animal fur. However, while human hair and fur both contain an interior cylinder called a medulla, the medulla in fur is much bigger than in hair, and creates an unusual scattering of light. Most existing models haven’t taken the medulla into account in this complex scattering of light.

But the team from UC San Diego and UC Berkeley turned to a concept called subsurface scattering and employed an A.I. algorithm to lend a hand.

“A key innovation is to relate fur rendering to subsurface scattering, which has earlier been used for things like clouds or human skin,” Ramamoorthi said. “There are many techniques to render subsurface scattering efficiently, but the parameters are completely different physically from those used to describe fur reflectance. We have introduced a simple neural network that relates them, enabling one to translate a fur reflectance model to comparable subsurface parameters for fast rendering.”
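A stripped-down sketch of the idea in that quote; the parameter names and the untrained, random weights below are placeholders, not the paper’s actual network:

```python
# Sketch only: a tiny MLP mapping fur-reflectance parameters to
# subsurface-scattering parameters. Inputs, outputs, and weights are
# placeholders; the real network would be trained on fitted pairs.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)  # 3 fur params -> 16 hidden units
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)   # 16 hidden -> 3 subsurface params

def fur_to_subsurface(fur_params):
    """e.g. [medulla size, fiber roughness, absorption] ->
    [scattering coeff, absorption coeff, anisotropy] (names assumed)."""
    hidden = np.tanh(fur_params @ W1 + b1)
    return hidden @ W2 + b2

print(fur_to_subsurface(np.array([0.6, 0.3, 0.1])))
```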

In terms of speed, Ramamoorthi said his team’s model can generate more accurate simulations ten times faster than the current state-of-the-art models. They shared their findings last week at the SIGGRAPH Asia conference in Thailand.

The new model has future potential in fields from virtual reality to video games, but Ramamoorthi seemed most enthusiastic about its current use for special effects in films.

“Our fur reflectance model is already used, for example in the Rise of the Planet of the Apes, nominated for a visual effects Oscar this year,” he said.




Moss for PlayStation VR Review: The first chapter in a truly epic story


The hype is real. This is exactly the kind of experience VR exists to deliver. But I’m going to need a lot more of it.


Imagine for a moment you are a giant spirit, your sheer mass allowing you to gaze down upon the world as though we humans were all living in dollhouses. You have the ability to interfere with our lives in small ways, but your primary task in life is to observe the story that is our existence.

That, in a nutshell, is the new PlayStation VR game Moss. You more or less observe the story of Quill, a brave little mouse who is called to greatness. She’s got destiny on her side and you as the hovering spirit to help her out from time to time, and what you experience in this game is the first part of her story.



Listen, learn, and fight!

Moss story and art

Moss is more or less a side-scrolling adventure brought to virtual reality, presented as though you are journeying through pages in a book. The game is broken up into sections you can observe by sitting still, looking from left to right. Quill starts on one side of the screen, and you help her get to the other side. The representation isn’t always left to right, but that is certainly the default. As you complete a section, you hear a page turn and the world around you changes to the next part of the scene. Rinse, repeat.

What makes each section interesting are the puzzles. Quill needs to navigate the world from the perspective of a small mouse, while you observe and help with advantages like seeing through walls and being able to stand up to see the whole area. You can peer around corners Quill cannot, and together you unlock doors and set traps and fight. Neither of you are able to do this alone, but together the puzzles are a lot of fun to play through. The ability to stand up and look around a world is incredible, and on more than one occasion gave me the unique perspective I needed to quickly solve a puzzle and move on with the story.


This is also the real opportunity to take in the art of the game. Moss is built in such a way that Quill sees everything as fairly simple and either dark or colorful through most of the game, but with your superior vantage you see a world utterly destroyed by a massive war, punctuated by stunning moments of untouched simplicity and beauty.

Moss is a shining example of what makes VR gaming so special.

When I say you observe Quill, I’m being a little obtuse. You still very much control Quill with your DualShock controller, with buttons for attack and jump to generally navigate her around. But when you get to a puzzle, she uses a combination of squeaks, American Sign Language, and gestures to show you how to solve it. You as the giant spirit interact with her as an equal, giving her high fives and waving to her when she waves at you, but when it comes to jumping off cliffs and fighting monsters it is very much you in control of the mouse. This split focus is a lot of fun, because you can multitask quite effectively. When it comes to physically influencing the world, your abilities are mostly limited to healing Quill and lifting heavy things when they need lifting, but by seeing everything from your mighty vantage you can better guide Quill as you see fit.

In fact, because your face basically serves as the camera for Moss’s third-person perspective, it often becomes easy to find flaws in the world this game is built on. There are areas the developers have arbitrarily stopped you from climbing on or jumping through because it doesn’t suit the puzzle, even though as the viewer you can clearly see Quill could easily step through a hole in the wall or climb it a different way. Like many games, Moss adds a splash of color to areas it wants you to climb instead of leaving them for the player to discover, which would have made the experience a little more open. It’s an unfortunate bit of user agency restriction in an otherwise fantastic environment.


Don’t get distracted, no matter what

Moss gameplay

Most third-person hack-and-slash games, of which Moss very much is one, feature a hero who can take a couple of hits. Quill is not that kind of character, and that introduces an amazing level of difficulty you don’t frequently experience in this genre. Most of the enemies in this game can seriously damage Quill with a single hit, and as you progress in the story there are several enemies capable of dispatching her with single strikes. Success in these fights relies on Quill’s speed, and your ability to interrupt certain attacks as the giant spirit while also counterattacking as Quill. It’s remarkable how challenging that concept can become when there are multiple enemies in play, because the natural instinct from years of gameplay is to focus on what Quill is doing in a fight. In fact, if you don’t lean back a little and observe the entire arena, Quill is likely to be in a lot more trouble.


Moss is also uniquely challenging in its hidden puzzles. Nearly every section of the game has a hidden scroll for Quill to find, so much so that coming across an area without a scroll can occasionally feel like a failure on your part. These scrolls often require additional planning in a puzzle so you don’t have to start over, or standing up and looking around the map to see areas Quill cannot. This whole aspect of the game invites a level of interaction you can’t get outside of this environment. In this respect, Moss is a shining example of what makes VR gaming so special.

The story itself is very linear, right down to the level of interaction in your gameplay. Everything starts off very casual, but by the end of this part of the story you are on the edge of your seat, rushing Quill from section to section to see what happens next. The enemies become increasingly challenging, the puzzles start to grow beyond individual sections of the story, and eventually you are in a full sprint to the end of the tale. It’s the kind of experience you can thoroughly enjoy in a single sitting because it sucks you in, but also because the story itself isn’t particularly long.

I completed my first run through Moss in a little over three hours to see how the story ended. I missed quite a bit, and plan to go back and fully explore the game from beginning to end, but either way you look at it this is not a long game. The ending makes it very clear this is the first part of a story, which implies there will be more to play at some point in the future.


Should you buy it? Absolutely

I loved every minute of this game. There are a few small things I would change, starting with Polyarc’s decision to start every chapter with a blinding flash of white light directly into my eyeholes, but it’s a small criticism in what was otherwise an exceptional story. There can be no greater sign of a quality experience than getting to the end and wanting more, and Moss delivers that without making you feel like you’ve been shortchanged.

If you own a PlayStation VR, spend the $30 and take a trip through Moss. You will not regret it.


Pros:

  • Incredible scale and immersion
  • Beautiful environments
  • Challenging puzzles

Cons:

  • Physically painful flashes of white light
  • Little user agency
  • Game can be beaten in three hours

Outstanding

4.5/5


YouTube TV adds Seattle Sounders local broadcasts to its MLS slate


YouTube TV has landed another Major League Soccer deal, and this time you might be more likely to notice. As part of a multi-year agreement, the internet TV service is now the official streaming option for all Seattle Sounders FC games. Similar to the LAFC deal, you can watch the 14 nationally televised games on conventional TV networks like ESPN and Fox (including through their online apps), but a dedicated YouTube TV channel will stream the team’s 20 regionally broadcast games online.

This isn’t necessarily a tremendous coup for YouTube given MLS’ relatively modest viewership compared to other American sports leagues. The Sounders are generally considered one of the best-known teams in the league, though, so the deal is bound to command some attention. And like before, this illustrates how YouTube TV’s growth is influencing its ability to make deals. It’s large enough that major sports teams are willing to make exclusive arrangements instead of turning to conventional TV like they might have in the past.

Source: MLS

YouTube moderators inadvertently removed right-wing channels


YouTube’s crackdown on conspiracy theories in the wake of the Parkland mass shooting has had some unintended casualties. The streaming video firm has confirmed to Bloomberg that its human moderators inadvertently removed videos and took down channels from right-wing and pro-gun outlets. Newcomers to YouTube’s moderation team can “misapply” its policies, a spokesperson said, which led to “mistaken removals” of content. The site vowed to reinstate material it had incorrectly pulled.

The company wasn’t specific about which channels fell afoul of the unintentional move, although the Outline (which broke the story) detailed a wide range of affected accounts. Some of them, such as Anti-School and Destroy the Illusion, explicitly peddle conspiracy theories. Others, however, don’t appear to have obvious violations — the Military Arms Channel saw three gun videos taken down. Some of MAC’s videos had been restored as of the afternoon of February 28th, but none of the affected channels were back.

Some outspoken conservatives, including some of those affected by the crackdown, have accused Google of trying to censor right-wing politics through moves like this, including an earlier action against Infowars. They leveled similar accusations against Twitter when it removed thousands of their followers, although that was debunked when it turned out that the social network was purging bots.

The incident highlights the problem YouTube faces when policing its content. It knows that algorithms alone aren’t enough to detect offenders, but it also has to contend with human nature — it can’t guarantee that people will flawlessly interpret the rules, especially if they’re inexperienced. That’s problematic by itself, but it’s worse when it happens in a politically charged atmosphere where people will accuse it of systemic bias.

Source: Outline, Bloomberg