27 Apr

Xiaomi Mi 6 hands-on: Two steps forward, one step back



Xiaomi once again sets the bar for value, but the decision to remove the 3.5mm jack could backfire.

Over the course of the last year, we’ve seen phones in the mid-range segment close the gap on flagships from Samsung, LG, and HTC. The likes of OnePlus 3T, Honor 8, and Xiaomi’s Mi 5 showed that you don’t necessarily have to spend big to get access to high-end internals and dual camera tech.

Xiaomi has built its entire business model on selling phones that offer great value for money. The manufacturer doesn’t make much profit from initial sales, but gets a bigger cut over the lifecycle of a handset as component costs come down. The strategy has worked very well for the Chinese company over the last three years, and the Mi 6 represents its boldest move yet.

The Mi 6 has everything you’d expect in a high-end phone in 2017: a sharp display, Snapdragon 835 SoC, 6GB of RAM, 128GB storage, dual 12MP cameras, and a 3350mAh battery. What sets the phone apart is that it offers all of these features for just $420 (¥2,899), or half the cost of traditional flagships like the Galaxy S8.

The marquee feature on Xiaomi’s 2017 flagship is the dual camera setup, which combines a standard 12MP f/1.8 camera with a secondary 12MP f/2.6 camera that acts as a telephoto lens. The setup is similar to what Apple introduced in the iPhone 7 Plus, but Xiaomi’s implementation is far more elegant, as there’s no ungainly camera bump at the back of the Mi 6.

The design is an evolution from the Mi 5 and Mi 5s, with Xiaomi adopting stainless steel to reinforce the frame and add much-needed heft to the device. The curves now extend out to all corners in what Xiaomi calls “four-sided 3D glass,” leading to a design that belies its price tag.

It’s hard to justify the price of an $800 phone when you can get 90% of the features for half as much.

The home button on the Mi 6 uses Qualcomm’s Sense ID, and is identical to that used in the Mi 5s. The sensor takes a 3D map of your finger’s pores and ridges using ultrasound technology, resulting in a much more detailed picture of your fingerprint.

On the software front, it’s great to see Xiaomi using Android 7.1.1 Nougat. There isn’t a global ROM for the Mi 6 yet, and as such I’ll only be able to talk about the software intricacies in the review. For now, the phone runs on MIUI 8, and other than the new camera modes to take advantage of the dual camera tech, there isn’t a whole lot that’s new from earlier this year.


Xiaomi is introducing several color options for the Mi 6: blue, white, and black. There’s also a silver variant with a mirror finish that will be available in limited quantities, as well as a ceramic black option with 18K gold accents around the camera sensors.

Add in stereo speakers up front, top-notch internal hardware, and a 5.15-inch Full HD display that’s one of the best in this segment, and you’re getting a lot for your money. What you don’t get is the ability to plug in your headphones. With the Mi 6, Xiaomi is joining the USB-C audio bandwagon, and while the standard may well be the future of audio, in 2017 there’s no justification for ditching the ubiquitous 3.5mm jack.


LeEco was the first manufacturer to get rid of the 3.5mm jack last year with the Le Max 2, and Lenovo, Apple, and HTC followed suit. In that time, we haven’t seen any compelling audio products based on USB-C, so if you’re looking for good headphone options for a device without a 3.5mm jack, you’ll have to spring for Bluetooth headphones or use a USB-C to 3.5mm dongle. The former requires an added investment, and the latter is just clunky to use.

It is particularly surprising that Xiaomi chose to ditch the 3.5mm jack, as the company makes a host of affordable audio products designed in collaboration with 1More. Like its phones, Xiaomi’s headphones deliver excellent value for money, and they’re among the few accessories the manufacturer sells directly to consumers in Western markets.


Speaking of distribution, Xiaomi has stated that it will not sell the Mi 6 in the U.S. or Europe. The company is still a few years away from a phone debut in Western countries, which means that if you want to get your hands on the Mi 6 outside of Asian markets, you’ll have to buy it from a reseller.

For now, the Mi 6 is limited to China, but with Xiaomi looking to consolidate its position in India, the phone should make its debut in the subcontinent in the coming months. Xiaomi has seen a resurgence in the budget segment with the Redmi Note 4, and with the Mi 6, it will be looking to get some much-needed momentum going in the mid-range category.

27 Apr

Samsung posts record profits in Q1 2017 even as phone sales decline


Increased sales in the semiconductor business lead to Samsung’s second-most profitable quarter.

Samsung has published its earnings results for the quarter ending in March 2017, and as it forecast a few weeks ago, Q1 2017 was very profitable for the company. Operating profit saw a massive year-on-year increase of 48% to $8.8 billion (9.9 trillion won), leading to Samsung’s second-most profitable quarter ever and best-ever first quarter.


Net profit at $6.8 billion (7.68 trillion won) was also up 46% from the same period a year ago. Although profits increased significantly, overall revenue at $44.7 billion (50.55 trillion won) was a slight increase from Q1 2016’s $44.01 billion (49.78 trillion won). Sales from the mobile unit declined, with the business posting an operating profit of $1.8 billion (2.07 trillion won), down 47% from the $3.4 billion (3.89 trillion won) it netted a year ago.

With the discontinuation of the Note 7, Samsung had to rely on the Galaxy S7 for longer than intended, and the company had to reduce its price to stay competitive. That said, the manufacturer saw healthy sales of the mid-range Galaxy A (2017) series as well as increased momentum in the mid- to low-end segments in emerging markets. Looking forward to Q2 2017, Samsung is bullish on Galaxy S8 and S8+ sales leading to increased profits and revenue. In addition to maximizing sales of the Galaxy S8, Samsung will launch a second flagship, the Note 8, in the latter half of the year.

The bulk of Samsung’s profits came from the semiconductor business, where the company saw increased demand for memory products. Profits from the division amounted to $5.57 billion (6.31 trillion won) on revenue of $13.8 billion (15.66 trillion won). Sales of DRAM and enterprise SSDs increased, as did demand for 14nm application processors for mid-range phones and image sensors for flagships. Looking ahead, Samsung is counting on its recent 10nm node to drive growth, with its 14nm processors branching out into the automotive, IoT, and wearable segments.

27 Apr

Google becomes first foreign internet company to go live in Cuba


After former President Obama reopened America’s diplomatic relations with Cuba, businesses started looking for opportunities to make inroads into the island nation. Google was one of them, with Obama himself announcing that the company would help set up WiFi and broadband access there. Cuba’s national telecom ETECSA officially inked a deal with Google back in December, and today the two finally switched on the service, making the search giant the first foreign internet company to go live on the island.

To be fair, Google already had a head start when it made Chrome available in Cuba back in 2014. The servers Google switched on today are part of the Google Global Cache (GGC), a global network that stores popular content, like viral videos, locally for quick access. Material stored in-country will load much faster than it does over Cuba’s existing setup: piping the internet in through a submarine cable connected to Venezuela. Many Cubans can only access the web through 240 public WiFi spots scattered throughout the country, according to BuzzFeed. While this won’t bring Cuban internet anywhere near the speed of American access, it’s still a huge step forward.

Source: Buzzfeed

27 Apr

Microsoft wasn’t hammered by surveillance requests in 2016


A couple of weeks ago, Microsoft released its Transparency Report revealing that it had received “1,000 to 1,499 surveillance requests for foreign intelligence purposes (known as FISA) from January to June 2016.” There’s only one problem, though: it didn’t. Today, Microsoft updated the report to say that the stat was an error, and that the number of orders it received in the first half of 2016 was actually somewhere between 0 and 499, as it has been in previous years. Unfortunately, the company is not allowed to release more specific figures, so we don’t know whether the number has actually changed, or by how much. A spokesperson told Reuters the mistake was a result of “human error.”

Microsoft:

“Editor’s note on April 25, 2017: Our latest U.S. National Security Orders Report and accompanying blog post contained an error, reporting that from Jan. 1 – June 30, 2016 Microsoft received 1,000 – 1,499 FISA orders seeking disclosure of customer content. The correct range is 0 – 499 FISA orders seeking disclosure of customer content. All the other data disclosed in the National Security Orders Report was correct.

“Microsoft corrected the mistake as soon as we realized it was made to ensure the accuracy of our reporting. We’ve put additional safeguards in place to ensure the numbers we report are correct. We apologize for the error.”

Source: Reuters, Microsoft

27 Apr

2018 FIFA World Cup will be the first with instant replay


Soccer (or, to the rest of the world, football) traditionalists have shunned video replay for years, claiming it would undermine the sanctity of referees’ calls. But well-documented flubbed calls, like those that helped knock England and Mexico out of the 2010 World Cup, have nudged FIFA into embracing the technology. At long last, after being implemented at the professional level, it’s headed to the game’s biggest stage: on-field instant replay is coming to the World Cup for the first time in 2018, when Russia hosts the tournament.

FIFA President Gianni Infantino says video assistant referees will be used at the World Cup for the first time at 2018 tournament in Russia

— Sky News Newsdesk (@SkyNewsBreak) April 26, 2017

Video replay came to American football years ago, reaching professional stadiums in 2007 and college-level ball in 2010. Ref-assisting technology started trickling into soccer thereafter, with FIFA finally bringing goal-line tracking to games in 2012 and video replay to general matches last year. Hopefully, on-field replay will prevent the gaffes that have haunted past World Cups.

Source: CBS Sports

27 Apr

Google experiment promises clean nighttime shots from your phone


Many modern smartphones can take decent photos when the sun goes down, but their noisy, washed-out images still don’t hold a candle to the shots from a high-end DSLR. Google researcher Florian Kainz might have a way of closing that gap at least some of the time, however. In response to a challenge from one of his team members, he wrote an experimental Android app that helps take exceptionally clean photos in even the darkest conditions. The software gives you manual control over exposure, focus distance, and ISO sensitivity, all of which are crucial to low-light photography. When you tap the shutter button, the app takes a burst of up to 64 photos. After that, it’s a matter of some calculation: Kainz eliminates the noise by computing the mean of the frames, and removes artifacts by subtracting the mean of frames shot with tape over the sensor.
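
For the curious, here is a minimal sketch of that averaging idea in Python, using NumPy and imageio. The file names, the 64-frame count, and the choice of libraries are our own illustrative assumptions; this is not Kainz’s actual pipeline, just the frame-averaging and dark-frame-subtraction technique he describes.

import numpy as np
import imageio.v3 as iio

def mean_frame(paths):
    # Accumulate in floating point so 8-bit frames don't overflow or clip.
    total = None
    for path in paths:
        frame = iio.imread(path).astype(np.float64)
        total = frame if total is None else total + frame
    return total / len(paths)

# 1. Average the burst: random sensor noise shrinks as more frames are combined.
light = mean_frame([f"burst_{i:02d}.png" for i in range(64)])

# 2. Average "dark" frames captured with the sensor covered, then subtract them
#    to remove fixed-pattern artifacts such as hot pixels.
dark = mean_frame([f"dark_{i:02d}.png" for i in range(64)])

clean = np.clip(light - dark, 0, 255).astype(np.uint8)
iio.imwrite("night_shot.png", clean)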

As for the results? They’re not perfect (you’re looking at one example up top), but they’re miles above what you’re used to from your handset. Google’s Nexus 6P and Pixel could take well-exposed, color-accurate and largely noise-free photos in extremely dark situations. Kainz even managed a decent photo of the night sky using nothing but starlight. The approach can compensate for motion, too, so you might not need a tripod to stabilize your phone.

Right now, the approach isn’t practical for most people. It requires a lot of after-the-fact processing on a computer, and the pictures still aren’t as high-resolution as what you tend to get out of DSLRs. Also, you certainly wouldn’t want to use this for fast-moving objects. However, Kainz believes that a phone could handle everything by itself with the appropriate app. If the experiment translates from Google’s labs to real-world code, you could take many nighttime photos that are virtually impossible today.

Source: Google Research Blog

27 Apr

Replacing your console with a gaming PC is a brilliant, idiotic idea


There was a time when PC gaming was tied to the desk. Every game, from puzzling adventures to heart-pounding shooters, could be enjoyed only from the same, stodgy, often uncomfortable position. The classic joke of a PC gamer playing from the corner of a basement had a bit of truth to it. It was a hobby that required isolation.

Thankfully, that has changed in recent years. Computers like the MSI Trident, Alienware Alpha, and Zotac Zbox have bridged the gap. They all fit comfortably alongside a PlayStation 4 – many are smaller, in fact – and they offer all the perks of a PC. These console-like computers can be a gamer’s best friend, and their worst enemy.

The performance is unparalleled…

The PC has always enjoyed a performance edge over console hardware, and it’s not restricted to big, heavy desktops. Zotac’s Magnus EN1080K, for instance, packs a seventh-generation Intel processor with Nvidia’s GeForce GTX 1080, a graphics chip that pushes almost nine teraflops of raw compute power. That easily bests even the upcoming Xbox Scorpio, and it’s about five times more powerful than a standard PlayStation 4. Even Alienware’s Alpha, the most modestly equipped of its peers, beats the Xbox One S and PlayStation 4 in raw performance.

Alienware Alpha R2 (Bill Roberson/Digital Trends)

Visually, it pays off. PC titles usually offer more detailed textures, better post-process effects, superior anti-aliasing, and more realistic shadows. Despite that, games usually run more quickly on PC. Consoles almost always target 30 frames per second, while the PC can regularly flex its muscles and obtain 60 FPS or higher. The net result is more attractive, more fluid gameplay.

…but so is the price.

Even the PlayStation 4 Pro retails for just $400. While that’s a lot to some console gamers, it’s chump change in the PC world, where even the Alienware Alpha starts at $500. The MSI Trident 3 we recently reviewed was powerful, but it also retails for $1,300. Top-of-the-line custom models, like Origin’s Chronos, can top $3,000.

The problem is obvious. That’s a lot of money. A living room PC may beat the visual quality of a console, but does it matter enough to justify a price that’s several times higher? Most people will answer no. Even enthusiasts find it hard to tolerate. After all, hardcore PC gamers are guaranteed to already own a fast desktop, and they’re not eager to buy the same hardware twice.

The game library is incredible…

More games come to PC in most genres, and entire genres don’t appear on console.

Of course, there’s more to price than just the hardware. Games are also expensive, and there, PC titles tend to have an edge. They go on sale more frequently, through a wider variety of stores. New titles hit $20 on Steam or Humble Bundle way before they plummet to the same lows on Amazon or the PlayStation Store.

This isn’t because of goodwill towards PC gamers, of course. Prices trend low because there’s a ton of competition. More games come to PC in most genres, and entire genres never appear on console. MOBAs, the world’s most popular game genre, are almost entirely missing on console; on PC, there’s plenty to choose from. The same can be said of real-time and turn-based strategy, massively multiplayer games, and hardcore racing simulators.

…but the controls can suck.

Microsoft’s wise decision to make Xbox controllers work with Windows has unified the controls of games that debut on both platforms. That’s a big deal, and it makes most top-tier games playable from the couch no matter what’s running them.

Yet there are still limitations. In some cases, it’s due to the superiority of native PC controls. You can play a shooter on a living room console, controller in hand, and it works alright in single-player. Jump online, though, and you’ll be dead faster than you can say “thumbsticks suck.”

Other games don’t support couch-friendly controls in any form. This is true of most strategy games, a great number of PC-only indie games, and some racing titles. Yes, you can play from your couch with a wireless mouse. But you probably don’t want to. We’ve tried it many times, and it’s a great way to screw up your shoulders or neck. Keyboards and mice were designed to be used by an office worker sitting upright in a desk chair, not a gamer slouching comfortably on a couch.

The versatility can’t be matched…

Even if it’s no fun to use a keyboard and mouse from the couch, you’ll still want to hook them up. Why? Simple. A living room PC can do so much more than game. Just like a desktop or laptop, a living room PC can be used for almost any task imaginable, from streaming live sports to viewing PowerPoint presentations.

In fact, a living room PC with a decent wireless mouse can replace almost every other device found in a home theater. Forget about a Roku or Chromecast. Forget about your home DVR. Forget about plugging in a USB drive to view family photos. In short, a living room PC is still a PC, and that means it can be adapted for many tasks.

…but the annoyances are hard to tolerate.

Unfortunately, a living room PC is still a PC, and that means it suffers from all the usual bugs and annoyances. An email notification is never going to interrupt a suspenseful episode of Game of Thrones on a Roku, but that’ll become a frequent occurrence on a PC. You’ll have to deal with all the usual updates, as well, so get used to seeing the Windows 10 Update screen.

It remains true that bugs are more common on a computer than a game console.

And then you must deal with the bugs. Console fans often overstate the problems found on PC, and modern game consoles aren’t immune to crashing, but it remains true that bugs are more common on a computer than a game console. We aren’t just talking about hard crashes, which are rare. Instead, it’s the small stuff that becomes an issue. A game might fail to load properly because it wasn’t run in administrator mode, the Wi-Fi adapter might occasionally lose connection, or a USB port might go on the fritz.

Consoles may have problems, but they tend to be simple: a thing works, or it doesn’t. If it doesn’t, a simple reset usually fixes it. If that doesn’t work, the console is likely broken, or the game itself is bugged. The rabbit hole of troubleshooting goes much deeper on PC, and that can become an unwanted time sink.

Beware the living room PC, embrace the living room PC

We say all this as a warning. Buying a PC as a console replacement may seem like a good idea, and it does have its benefits. Yet there are also a lot of issues, none of which are easy to resolve. Alienware, MSI, or Zotac can do little to fix the problems above. The issues – at least for now – are inherent to the PC experience, baked into the operating system, or the way computers are constructed.

If you want to buy a living room PC, do it. Just make sure you’re doing it because you want a gaming PC in your living room, and not because you want a more powerful version of a console.




27 Apr

Man arrested for allegedly assaulting a robot security guard in Silicon Valley


Why it matters to you

Just in case you ever had an intention of harming a robot, be aware that it can get you locked up.

People show their concern about the impending artificial intelligence-fuelled rise of the robots in all manner of different ways.

One man in Mountain View, California, this week decided to take out what we can only imagine to be his fears about the impact of increased automation on the human workforce by allegedly assaulting an innocent robot in a Silicon Valley parking lot.

The robot caught up in the incident was K5, one of the sensor-equipped, 300-pound smart security robots designed by robotics company Knightscope. The assailant allegedly knocked over K5, which immediately registered the assault and sounded an alarm. The suspect attempted to run but was ultimately arrested by Mountain View Police.

“We are incredibly proud of the outcome and believe this to be a true testament to the technology we developed here in Silicon Valley,” Stacy Dean Stephens, Knightscope’s vice president of marketing and sales, told the U.K.-based Independent newspaper. “We are equally happy to report that the robot has recuperated from his injuries and is back on patrol keeping our office and employees safe again.”

K5 reportedly suffered only superficial injuries, including a number of scratches on its back, during the attack.

“I think this is a pretty pathetic incident because it shows how spineless the drunk guys in Silicon Valley really are because they attack a victim who doesn’t even have any arms,” a Mountain View resident told ABC7 News.

While man-on-robot crime is still something of a new phenomenon, it is not totally without precedent. In 2015, an intoxicated man in Japan attacked one of SoftBank’s emotion-reading Pepper robots, kicking and damaging the bot.

Personally, we would steer clear of fighting with robots, regardless of the circumstances. After all, you never know which one might know Boston Dynamics’ terrifying dog robot on a first-name basis.




27 Apr

AMD updates latest Ryzen drivers with new power plan to boost performance


Why it matters to you

If your PC uses a new Ryzen CPU, then you’ll want to install the latest driver update on your Windows 10 64-bit machine.

AMD’s new Ryzen chips based on the company’s Zen CPU architecture are winning some accolades for providing good performance at competitive pricing. The high-end Ryzen 7 series were the first to ship, and the midrange Ryzen 5 series chips have also hit the market.

Anyone looking to optimize a Ryzen-based system is likely keeping up with the latest driver and software updates. With just such people in mind, AMD released its latest AMD Ryzen chipset drivers with an eye on improving performance — at least for Windows 10 64-bit users.

According to AMD, version 17.10 of the drivers is aimed specifically at maximizing the performance of Ryzen-based systems by adjusting for how Windows 10 manages power states. By default, Windows 10 is configured for its “Balanced” power profile, even on desktops, as opposed to the “High Performance” power plan that has been shown to deliver significant performance benefits.

The new Ryzen drivers get around the Windows 10 default by installing a new “AMD Ryzen Balanced” power plan. The primary advantage is that while the standard Windows 10 plan imposes a roughly 30-millisecond delay when switching power states, the AMD plan takes advantage of the Ryzen processor’s ability to switch states within 1 millisecond when the chip itself is in control. An additional benefit is that the new plan avoids “parking” cores, a behavior that adds latency when parked cores must be woken back up.

The new drivers let PCs take full advantage of the processors’ AMD SenseMI technology, which makes those rapid adjustments to voltages and frequencies possible and can thus optimize performance. The new AMD Ryzen Balanced power plan is meant to provide the same performance benefits as the Windows 10 High Performance power plan, and AMD’s own benchmarks indicate that it does.

To enable the new AMD Ryzen Balanced power plan, first install the latest AMD chipset drivers on your Windows 10 64-bit machine. Then, open the Control Panel, click on Power Options, and then select the plan from the list of options.
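
If you’d rather script the switch than click through the Control Panel, the short sketch below uses Windows’ built-in powercfg tool to look up and activate the plan. It assumes the plan appears under the display name “AMD Ryzen Balanced” after the driver install; treat it as an illustration, not an AMD-provided utility.

import re
import subprocess

PLAN_NAME = "AMD Ryzen Balanced"  # assumed display name after installing the 17.10 drivers

# powercfg /list prints lines like:
# Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced) *
listing = subprocess.run(
    ["powercfg", "/list"], capture_output=True, text=True, check=True
).stdout

match = re.search(
    r"Power Scheme GUID:\s*([0-9a-fA-F-]+)\s+\(" + re.escape(PLAN_NAME) + r"\)",
    listing,
)

if match:
    guid = match.group(1)
    # Make the Ryzen plan the active power scheme.
    subprocess.run(["powercfg", "/setactive", guid], check=True)
    print(f"Activated '{PLAN_NAME}' ({guid})")
else:
    print(f"'{PLAN_NAME}' not found; install the chipset drivers first.")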

Once the new power plan is in effect, you should see improved performance in a variety of games and power-hungry applications. Again, the update is only for Windows 10 64-bit machines.




27 Apr

The FCC’s plan to end net neutrality is here, and the fight is going to get ugly


America’s telecom regulator has made public its intention to roll back net neutrality laws. Here’s what you need to know.

This week, the FCC, America’s telecom regulator, announced its intention to bring about the end of net neutrality in an official sense by removing the Title II classification that has applied to the internet and its service providers since 2015.

In a speech, Ajit Pai, a Commissioner under former Chairman Tom Wheeler and now, under President Trump, Chairman of a tonally different regulator, laid out his plan to claw back the consumer protections enabled by Title II. In short, net neutrality prevents internet service providers from discriminating between the types of traffic going across their pipes, both wired and wireless, ruling out “fast lanes” for content providers that choose to pay for them.

Image credit: FCC

In his speech, Pai said that Title II classification was put forth as a way for the FCC at the time to assert power and prove its independence, and that it has hurt innovation and, in turn, consumers. “So what happened after the Commission adopted Title II? Sure enough, infrastructure investment declined. Among our nation’s 12 largest Internet service providers, domestic broadband capital expenditures decreased by 5.6 percent, or $3.6 billion, between 2014 and 2016, the first two years of the Title II era. This decline is extremely unusual. It is the first time that such investment has declined outside of a recession in the Internet era,” he said.

Removing Title II classification from Internet traffic will have the following advantages, according to Pai:

  • It will bring high-speed Internet access to more Americans
  • It will create jobs
  • It will boost competition
  • It is the best path toward protecting Americans’ online privacy

ISPs have put up roadblocks for consumers when given the opportunity.

But opponents of the repeal say that there is no reason to remove the classification, and that competition amongst U.S. service providers has thrived since the change. The FCC, for its part, now argues that it should not be “micromanaging” the internet, and has come out against forcing service providers to stop zero-rating programs like T-Mobile’s Binge On or AT&T’s Sponsored Data, which it says promote a healthy marketplace and provide greater choice to consumers.

In an interview with Reason.com, a libertarian website whose motto is “free minds and free markets,” Pai said that “we were not living in a digital dystopia in the years leading up to 2015. By contrast, actually, the commercialization of the internet in the 1990s up to 2015 represented I think the … one of the most incredible free market innovations in history. With light touch regulation, broadband providers spent 1.5 trillion dollars on infrastructure. Companies like Google and Facebook and Netflix became household names precisely because we didn’t have the government micromanaging how the internet would operate. That Clinton-era framework is something I think served us well and going forward I hope it continues to serve us well.”

“These rules, Title II rules were designed to regulate Ma Bell, and the promise with Ma Bell, the deal with the government was, we’ll give you a monopoly as long as you give universal service to the country. As a result, for decades, we didn’t see innovation in the network we didn’t see innovation in phones and it’s when you have a competitive marketplace and you let go of that impulse to regulate everything preemptively, that you finally get to see more of a competitive environment.”

But ISPs have put up roadblocks for consumers when given the opportunity. One only needs to look at the lawsuits levelled at AT&T and Verizon around their old unlimited plans, which were silently throttled after a particular data cap was hit. These days, those unlimited plans make it very clear when throttling will come into effect. On the broadband side, Verizon was sued by the City of New York for not following through with its contractual commitment to provide Fios access to all New Yorkers.

It’s no surprise that the big U.S. carriers support the decision to remove Title II classification.

Pai says that he isn’t opposed to net neutrality itself, just a heavy regulatory hand overseeing internet service providers that could limit customer choice and, in turn, competition. He thinks that Title I classification, which was established for broadband providers in the Clinton era, is the right compromise, and that under his proposal he would encourage, but not force, ISPs to follow net neutrality rules by codifying them in their terms of service — which could be easily changed, even retroactively, without informing consumers. It’s no surprise that U.S. ISPs are already coming out in support of such a change.

Verizon issued a statement saying that, while it supports net neutrality, “[it] also supports Chairman Pai’s proposal to roll back Title II utility regulation on broadband. Title II (or public utility regulation) is the wrong way to ensure net neutrality; it undermines investment, reduces jobs and stifles innovative new services. And by locking in current practices and players, it actually discourages the increased competition consumers are demanding.”

Sprint said something similar:

“Sprint has always supported an open internet and will continue to do so. We recognize that our customers demand access to the content, applications, and devices of their choice and as a competitive wireless carrier, we always strive to meet our customers’ needs.

“Chairman Pai’s proposed rulemaking provides an opportunity for all stakeholders to share their views and work with the FCC to remove uncertainties and refine the rules that protect and ensure an open internet. Sprint believes that competition provides the best protection to consumers. Promoting robust competition and ensuring consumers have real choice among competing internet providers is the best way for the FCC to achieve its open internet objectives. Sprint looks forward to working with the FCC, consumers, and content providers towards that end.”

T-Mobile and AT&T haven’t yet issued comments, but have both previously come out in support of the reclassification. A group of companies, including Facebook and Google, oppose the change, and have previously filed briefs with the FCC to that effect.

The next step for Pai is to publish the full proposal and then put it to a Commission vote on May 18. If approved, the FCC will open the proposal up to public comment before finalizing the new rules later in the year.