
21 Jan

The Meitu selfie app unlocks your anime beauty and personal data


There’s a price for the beauty that comes from the Chinese selfie app that’s been flooding Facebook, Twitter and Instagram with glowing (with a twist of anime) renditions of your friends: It’s data.

The free Meitu app for iOS and Android asks for (and apparently was granted by users thirsty for glowing-skin likes) far more permissions on Google’s operating system (access to the calendar, contacts, SMS messages, location and external storage, plus auto-launch at startup and the device’s IMEI number) than a normal camera application needs.
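For a sense of how far beyond a camera app’s needs that list goes, here’s a rough sketch that diffs the reported permission set against a minimal camera-app baseline. The identifiers are the standard Android manifest permission names; the Meitu set is reconstructed from press reports, not pulled from the actual APK.

```python
# Standard Android manifest permission identifiers. The Meitu list below is
# an approximation based on reporting, not the app's real manifest.
CAMERA_APP_BASELINE = {
    "android.permission.CAMERA",
    "android.permission.WRITE_EXTERNAL_STORAGE",
}

MEITU_REPORTED = {
    "android.permission.CAMERA",
    "android.permission.WRITE_EXTERNAL_STORAGE",
    "android.permission.READ_CALENDAR",
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECEIVE_BOOT_COMPLETED",  # auto-launch at startup
    "android.permission.READ_PHONE_STATE",        # exposes the IMEI
}

def excess_permissions(requested, baseline):
    """Return the permissions requested beyond what the app's core job needs."""
    return sorted(requested - baseline)

for perm in excess_permissions(MEITU_REPORTED, CAMERA_APP_BASELINE):
    print(perm)
```

Six of the eight reported permissions have nothing to do with taking or saving a photo, which is exactly what set off the alarm bells.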

Let me get this straight…
All of you just installed a photo app from China that requires these permissions? Let me know how it works out. pic.twitter.com/wGDUYbRdSA

— Greg Linares (@Laughing_Mantis) January 19, 2017

The iOS version checks to see if your phone is jailbroken (probably to see if it can use the compromised OS to send more data back to the developer in China) and which carrier you’re using, and it can probably figure out your iPhone’s unique ID. Yeah, not great.

One security researcher noted that the Android app is sending IMEI information to several servers in China.

Now all of this data could be a goldmine if the company sells it to third parties. But according to a statement the developer sent to CNET, the company is not peddling your data to the highest bidder.

Meitu says the reason for all the data collection is that it’s headquartered in China, where the tracking services offered by Apple and Google are blocked. Its workaround is a combination of in-house and third-party information tracking. The developer says that all that data “is sent securely, using multi-layer encryption to servers equipped with advanced firewall, IDS and IPS protection to block external attacks.” It also insisted that its iOS code only asks for permissions allowed by Apple’s developer guidelines.

The developer might in fact only be using the data it collects for tracking right now, but more than a few companies have changed their business practices and terms of service when cash starts to run low. Suddenly all that personal information that was supposed to be used internally is a great way to make a quick buck.

Plus, the hype around the app has already passed at this point, so you might as well delete it and enjoy those photos. They may have taken your data, but they’ll never take away your selfies.

Via: CNET, Ars Technica

21 Jan

Amazon will help train veterans for tech jobs


Last week, Amazon said it would bring 100,000 full-time jobs to the US by 2018. This week, the online retailer announced a registered apprenticeship program with the US Department of Labor that will offer training to veterans. The initiative follows CEO Jeff Bezos’ pledge to hire 25,000 veterans and their spouses over the course of five years. That goal was announced back in May.

This new apprenticeship program will train veterans for “in-demand technical careers” at Amazon. In a press release announcing the initiative, the US Department of Labor said that the first participants will be trained for an AWS Cloud Support Associate position. The Labor Department also explained that over 200 companies, colleges and labor organizations have signed on to participate in the larger ApprenticeshipUSA program. As TechCrunch notes, Amazon and Tesla Motors are the only two big name tech companies listed that offer registered apprenticeships.

Via: TechCrunch

Source: US Department of Labor

21 Jan

Net neutrality foe Ajit Pai tapped to take over the FCC


FCC commissioner and outspoken critic of net neutrality Ajit Pai will reportedly be promoted to the agency’s top post when Chairman Tom Wheeler steps down today. Pai, who was nominated by President Obama and served as the senior Republican commissioner, would not require Senate approval and his new position could be announced as early as Friday afternoon, Politico reports.

Pai’s promotion won’t come as a surprise, but his new role should worry any supporters of a fair and open internet. Last month Pai and the FCC’s other Republican commissioner Michael O’Rielly sent a letter to telecoms and carrier lobbying groups promising to “revisit” the net neutrality rules laid out in 2015 that protect consumers from practices like pay-for-priority access, blocking and throttling. According to Pai and O’Rielly, these rules for carrier transparency and traffic fairness create “unjustified burdens” for service providers and the pair intend to “undo” them.

Net neutrality won’t disappear overnight, however. As Ars Technica noted last month, any rules change would still require months of procedure and public comment. Net neutrality aside, Pai also laid out a Digital Empowerment Agenda in September that he claims will help close the digital divide between the country’s rich and poor by reducing broadband deployment regulations and encouraging mobile broadband adoption. Prior to joining the FCC, Pai worked as an attorney for Verizon and other telecom clients. Politico also notes that his term technically ended last year, but according to the FCC’s rules he can stay on until the end of 2017. After that, he’ll need to be reconfirmed by the Senate.

Source: Politico

21 Jan

How artificial intelligence can be corrupted to repress free speech


The internet was supposed to become an overwhelming democratizing force against illiberal administrations. It didn’t. It was supposed to open repressed citizens’ eyes, expose them to new democratic ideals and help them rise up against their authoritarian governments to claim their basic human rights. It hasn’t. It was supposed to be inherently resistant to centralized control. It isn’t.

In fact, in many countries, the internet, the very thing that was supposed to smash down the walls of authoritarianism like a sledgehammer of liberty, has instead been co-opted by those very regimes in order to push their own agendas while crushing dissent and opposition. And with the emergence of conversational AI — the technology at the heart of services like Google’s Allo and Jigsaw or Intel’s Hack Harassment initiative — these governments could have a new tool to further censor their citizens.

Turkey, Brazil, Egypt, India and Uganda have all shut off internet access when politically beneficial to their ruling parties. Nations like Singapore, Russia and China all exert outsized control over the structure and function of their national networks, often relying on a mix of political, technical and social schemes to control the flow of information within their digital borders.

The effects of these policies are self-evident. According to a 2016 report from internet liberty watchdog Freedom House, two-thirds of all internet users reside in countries where criticism of the ruling administration is censored — 27 percent of them live in nations where posting, sharing or supporting unpopular opinions on social media can get you arrested.

Take China for example. An anonymous source within Facebook last November claimed to the NYT that the company had developed an automated censorship tool for the CPC — a token of loyalty that CEO Mark Zuckerberg hopes will open the Chinese market to the Western social network. While Facebook likely won’t censor user-generated content directly, the effect will be the same if the tool is utilized by a third-party company located in China.

If Facebook is willing to do that in China, what’s to stop the company from doing the same here in America at the behest of a Trump administration? What’s to keep Twitter, Instagram (which is owned by Facebook) or Snapchat from following suit? Twitter, Facebook and Intel all declined to comment for this story. However, Dr. Toby Walsh, a leading researcher of AI and current guest Professor at TU Berlin, believes such an outcome is plausible. “When we think of 1984-like scenarios, AI is definitely an enabling technology,” he said.


Facebook’s Mark Zuckerberg and Alibaba’s Jack Ma speak at the China Development Forum 2016 — VCG via Getty Images

While the country has slowly warmed to capitalist markets and a more open economy, the Communist Party of China (CPC) has long maintained a tight grip on digital culture. Nearly a quarter of the world’s internet users — some 700 million people — are online in China. 90 percent of web users there access the web from a mobile device and, in 2015 alone, more than 40 million new users signed on for the first time.

And yet, some of the biggest cultural stories in China’s modern history simply don’t exist within its borders. All references to the 1989 Tiananmen Square crackdown have been so thoroughly scrubbed from the Chinese national internet that, in 2015, financial institutions were reportedly unable to accept monetary transfers that included a 4 or 6 because those digits refer to the protests’ June 4th anniversary. Of course, there is no such thing as perfect security. “People are creative in how they work around such systems,” Jason I. Hong, Associate Professor at the Human Computer Interaction Institute at Carnegie Mellon University, wrote to Engadget. “In China, people sometimes refer to Tiananmen Square protests as May 35 (June 4), which evaded censors for a while.”
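The “May 35” trick works because naive filters match literal strings. A small sketch (the blocklist terms and the date-normalization rule are invented for illustration, not drawn from any real censorship system) shows both the evasion and the countermove a smarter censor can make:

```python
import re

# A literal-string blocklist of the kind early automated censors relied on.
BLOCKLIST = {"june 4", "tiananmen"}

def naive_filter(post):
    """True if the post contains a blocked phrase verbatim."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

MONTH_DAYS = {"january": 31, "february": 28, "march": 31, "april": 30,
              "may": 31, "june": 30, "july": 31, "august": 31,
              "september": 30, "october": 31, "november": 30, "december": 31}
MONTHS = list(MONTH_DAYS)
DATE_RE = re.compile(r"(%s)\s+(\d{1,3})" % "|".join(MONTHS), re.IGNORECASE)

def normalize_dates(text):
    """Fold overflowed dates back to the real date they encode:
    May has 31 days, so 'May 35' is really June 4."""
    def fix(match):
        month, day = match.group(1).lower(), int(match.group(2))
        while day > MONTH_DAYS[month]:
            day -= MONTH_DAYS[month]
            month = MONTHS[(MONTHS.index(month) + 1) % 12]
        return "%s %d" % (month, day)
    return DATE_RE.sub(fix, text)

post = "Remembering May 35"
print(naive_filter(post))                   # the coded date slips through
print(naive_filter(normalize_dates(post)))  # normalization catches it
```

Every such countermove invites a new evasion, which is exactly the arms race that makes large-scale, learning-based censorship systems attractive to regimes.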

What’s more, according to GreatFire.org, around 3,000 websites had been blocked by the country’s government as of 2015. Those include Google, Facebook, Twitter and the New York Times. This ubiquitous censorship is a testament to China’s top-down design for its national network.

Essentially, Chinese censorship halts the flow of dissenting ideas before they can even start by continually keeping an eye on you. Unlike in the US, Chinese ISPs and websites are legally liable for what their users post, which has forced them to become unofficial editors for the state. So much as linking to political opinions critical of the CPC’s conduct is a prosecutable offense. By keeping ISPs and websites under threat of closure, the government is able to leverage that additional labor force to help monitor a larger population than it otherwise could. A conversational AI system could achieve the same effect more efficiently and at an even larger scale.

State censorship even extends to social media. This past July, the Cyberspace Administration of China, the agency in charge of online censorship, issued new rules to websites and service providers that allow the government to punish any outlet that publishes “directly as news reports unverified content found on online platforms such as social media.” That is, if a news organization runs a tip from a reader on Weibo without verifying it, that organization risks being fined or shuttered.

“It means political control of the media to ensure regime stability,” David Bandurski of the University of Tokyo told the New York Times. “There is nothing at all ambiguous about the language, and it means we have to understand that ‘fake news’ will be stopped on political grounds, even if it is patently true and professionally verifiable.”

It’s not that bad here in America, yet. Over the past 20 years, “self expression has proliferated exponentially. And the Supreme Court, especially the Roberts Court, has been, in the main, a strong defender of free expression,” Danielle Keats Citron, Professor of Law at the University of Maryland Carey School of Law, wrote to Engadget.

Historically, the court has upheld protection for specific forms of speech like snuff films, video game violence and falsified military service claims because they don’t meet the intentionally narrow threshold for unprotected speech — like yelling “fire” in a crowded theater. “At the same time,” Keats Citron continued, “much expression occurs on third party platforms whose speech decisions are not regulated by the First Amendment.”

A sizeable portion of this expression takes the form of online harassment — just look at the Gamergate, Pizzagate, Lizard Squad and Sad/Rabid Puppies debacles, or the cowardly attacks on Leslie Jones for her role in the Ghostbusters reboot. Heck, even Donald Trump, the newly installed President of the United States, has leveraged his Twitter feed and followers to attack those critical of his policies.

“The thing to remember about these platforms is that the thing that makes them so powerful — that so many people are on them — is also what makes them so uniquely threatening to freedom of speech,” Frank Pasquale, Professor of Law at the University of Maryland Carey School of Law said.

All of this hate and vitriol has a stifling effect on speech. When constantly inundated with this abuse, many rational people prefer to remain silent or log off entirely, as Ms. Jones did. Either way, the effect is the same: The harassment acts as a form of peer censorship. However, a number of the biggest names in technology are currently working to leverage machine learning algorithms and artificial intelligence to combat this online scourge. And why not? It certainly worked in League of Legends. The popular game managed to reduce toxic language and the abuse of other players by 11 percent and 6.2 percent, respectively, after LoL’s developer, Riot Games, instituted an automated notification system that reminded players not to be jerks at various points throughout each match.

Intel CEO Brian M. Krzanich speaking at the 2016 Intel AI Day in San Francisco — YouTube

Intel’s Hack Harassment initiative, for another example, is “a cooperative effort with the mission of reducing the prevalence and severity of online harassment,” according to Intel PR. Intel is developing an AI tool in conjunction with Vox Media and the Born This Way Foundation that actively “detects and deters” online harassment with the goal of eventually creating and releasing an open API.

Ina Fried, Senior Editor at ReCode, spoke with Intel’s Lori Smith-DeYoung about the program at the 2016 Intel AI Day in San Francisco last November. “Online harassment is a problem that technology created so it’s actually kind of important that we as an industry help solve it,” Fried explained. ReCode’s role is “really just talking about the issue, amplifying it and bringing voices to the issue showing the problem.” The group has already built a demo app that looks at tweets and identifies content that constitutes harassment. It can warn users about their actions before they hit send or the system could, in theory, be built “into online communities and help monitor [harassment] and prevent some of it from being seen, or at least be seen as prevalently.”

Google has undertaken a similar effort with its subsidiary Jigsaw. The team’s Conversation AI system operates on the same fundamentals as Hack Harassment: it leverages machine learning to autonomously spot abusive language. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” Jigsaw president, Jared Cohen, told Wired. “To do everything we can to level the playing field.”

One major hurdle for these systems is sarcasm — something even people have trouble discerning in online writing without the help of additional contextual clues like emoji. “Context is crucial to many free speech questions like whether a threat amounts to a true threat and whether a person is a limited purpose public figure,” Professor Keats Citron told Engadget. “Yet often the full context of a threat or a person’s public figure status depends upon a wide array of circumstances — not just what happens on Twitter or Facebook but the whole picture of the interaction.”

In Conversation AI’s case, Jigsaw’s engineers educated the machine learning system by inundating it with roughly 17 million flagged comments from the New York Times website. It was also exposed to 130,000 bits of text from Wikipedia discussions. All of the Wiki snippets were also viewed by a crowdsourced 10-person panel that independently determined if each one constituted a “personal attack” or harassment.

Fed all of these examples, Conversation AI can now recognize harassment a startling 92 percent of the time, with only a 10 percent false positive rate, as measured against a 10-member human panel. The results are so impressive that the NYT now employs the system to auto-block abusive comments before they can be vetted by a human moderator. The team hopes to further improve the system’s accuracy by expanding its scope to look at long-term trends like the number of posts a certain account has made over a set period of time.
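The pipeline described here (label examples, learn word statistics, score new text) can be illustrated with a toy Naive Bayes-style classifier. The training sentences below are invented stand-ins for the millions of moderated comments a system like Conversation AI actually learns from:

```python
# Toy supervised text classifier: learn per-class word counts from labeled
# examples, then score new text by summed log-odds. Real systems use far
# richer features and vastly more data; this only shows the shape of the idea.
from collections import Counter
import math

LABELED = [  # (text, label) where 1 = harassment, 0 = benign
    ("you are an idiot and everyone hates you", 1),
    ("nobody wants you here just leave", 1),
    ("go away you worthless troll", 1),
    ("thanks for sharing this great article", 0),
    ("interesting point i had not considered", 0),
    ("i disagree but i see your argument", 0),
]

def train(examples):
    """Count word occurrences separately for each class."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def score(counts, text):
    """Log-odds that `text` is harassment, with add-one smoothing.
    Positive means the harassment class is more likely."""
    total = {c: sum(counts[c].values()) for c in (0, 1)}
    vocab = len(set(counts[0]) | set(counts[1]))
    logodds = 0.0
    for word in text.split():
        p1 = (counts[1][word] + 1) / (total[1] + vocab)
        p0 = (counts[0][word] + 1) / (total[0] + vocab)
        logodds += math.log(p1 / p0)
    return logodds

counts = train(LABELED)
print(score(counts, "you idiot"))             # positive: flagged
print(score(counts, "thanks for the article"))  # negative: passes
```

The same mechanics that flag “you idiot” will flag whatever phrases the operator labels as positive examples, which is precisely why the training set, not the algorithm, decides whether such a system fights harassment or enforces censorship.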

Both of these programs are pursuing a noble goal; however, it’s one that could set a dangerous precedent. As Fried said during a subsequent AI Day panel discussion, “An unpopular opinion isn’t necessarily harassment.” But that decision is often left to those in power. And under authoritarian regimes, you can safely bet that it won’t be the will of the people.

“I’m really surprised there hasn’t been more of a discussion of this post-Snowden,” Professor Walsh told Engadget. “I’m surprised that people were surprised that our emails are being read. Email is the easiest thing to read, it’s already machine-readable text. You’ve got to assume that any email being read is not private.”

Professor Keats Citron made a similar point. “As private actors, intermediaries like Facebook, Twitter, and Google have free rein to decide what content appears online,” she said. “Whereas government cannot censor offensive, hateful, annoying, or disturbing expression, intermediaries can do as they please. For that reason, I’ve urged platforms to adopt clear rules about what speech is prohibited on their sites and some form of due process when speech is removed or accounts suspended on the grounds of a ToS violation.”

These are not small issues and they are not inconsequential, especially given the authoritarian tenor struck by the incoming presidential administration. “What I find most troubling from over the past few weeks is that you have Trump surrogate Newt Gingrich go on the news and say ‘Look, the rules are the president can order someone to do something terrible and then pardon [them],’” Professor Pasquale noted. He further explained that Trump’s current actions are not wholly unprecedented, but rather a “logical extension of the Unitary Executive Theory…which would effectively put the executive branch above the law.”

As mentioned above, even the threat of government oversight is enough to curtail free speech both online and off. “Even though there are many rights, either under the First Amendment or subsequent statutes passed after J. Edgar Hoover’s COINTELPRO program,” Professor Pasquale said, “you barely ever see someone taking advantage of that statute to, say, win monetary damages or otherwise deter the [government’s] activity.”

However, the industry itself is beginning to wake up to the dangers of misusing AI systems. “There’s increasing awareness within the AI community of the risks — both intentional and unintentional — so there are a number of initiatives now to promote best practices to think about some of these ethical questions,” Professor Walsh said. “I’ve been involved with initiatives from IEEE, the largest professional organization within the space, to draw up ethical guidelines for people building AI systems.”

Should our government implement an automated censorship system akin to the one Facebook developed, even if it had only a fraction of the capability of Jigsaw’s Conversation AI, the threat to civil liberties and the First Amendment would be immediate and overwhelming.

“I think Snowden did America and the world a service by revealing the extent of the wiretapping that was going on and the fact that it was not just external parties but citizens of the United States,” Professor Walsh concluded. “I don’t think we’ve seen enough of [the discussion Snowden was attempting to instigate], people are not fully aware of quite how much the intelligence services must already be reading and the technologies that they’re able to bring to bear.”

Lead image: Getty Creative

21 Jan

Tag Heuer sold more $1,500 smartwatches than it expected


I’m not much of a smartwatch guy, but I like my LG G Watch R and its bright OLED screen. An acquaintance recently expressed admiration for it, and to my surprise, came back the next day with a $1,500 Tag Heuer Connected. (“Must be nice to have money,” I thought.) He wasn’t alone, though: In an interview with Swiss site NZZ, Tag Heuer CEO Jean-Claude Biver said that over 56,000 people bought one, tripling expected sales. As a result, the company will release new smartwatch models in May and expects to sell 150,000 units.

As you’d expect of the priciest Android Wear watch available at the time, we found that the Tag Heuer Connected was nice-looking, lightweight (thanks to a titanium housing) and very well built. My first impression when I saw it, though, was of the rather dim screen, which settles for a transflective LCD instead of LG’s much punchier (and more energy-efficient) OLED.

However, Biver said the new watches would have “more powerful displays,” without specifying what type. They’ll also come with a payment function, he said, presumably via Android Pay, a feature that’s set to arrive with Android Wear 2.0. Other improvements include a better GPS that’s accurate to a yard, a stronger wireless receiver and, thankfully, better battery life.


The company is addressing another complaint we had, namely, its lack of unisex appeal — the rather bulky Tag Heuer Connected seems mainly aimed at men. “The new series will feature a smaller watch for women and the Asian market, along with a bigger one than before,” Biver said. “We will also offer different colors and materials.”

Interestingly, Biver sees the device as not just a minor sales success, but a way to drive interest for all of its watches, following a 10 percent sales drop in 2014. “Since the end of 2015, our sales have grown again, most recently by around 15 percent. The smartwatch and the publicity that it brought us have played a role [in that].”

Via: Pocket Lint

Source: NZZ (translated)

21 Jan

Apple sues Qualcomm for $1 billion in royalty dispute


Apple has filed a $1 billion lawsuit against Qualcomm, claiming that for many years, the chip manufacturer has “unfairly insisted on charging royalties for technologies they have nothing to do with,” CNBC reports.

This marks the end of a rough week for Qualcomm: The Federal Trade Commission on Tuesday sued the company for its alleged use of monopolistic and exclusionary tactics within the baseband processor market. Apple’s lawsuit piggybacks on these claims. For reference, baseband processors are the chips that power network connectivity in mobile devices.

According to CNBC, Apple claims Qualcomm charges five times more for its patents than all of the other licensors it does business with combined. The company also argues that Qualcomm withheld nearly $1 billion in payments when Apple cooperated with South Korean authorities as they investigated the company’s unfair trade practices — precisely what the US and Apple are going to court over now. In December, South Korean regulators fined Qualcomm a record $854 million for abusing its power in the smartphone chip market and overcharging device makers.

“We are extremely disappointed in the way Qualcomm is conducting its business with us and unfortunately after years of disagreement over what constitutes a fair and reasonable royalty we have no choice left but to turn to the courts,” Apple says in a statement to CNBC.

We’ve reached out to Qualcomm for comment and will update this story as we hear back.

Source: CNBC

21 Jan

Apple Sues Qualcomm for $1 Billion in Unpaid Royalty Rebates


Following an FTC complaint alleging Qualcomm engaged in anticompetitive patent licensing practices, Apple has filed a lawsuit against Qualcomm claiming the company has charged unfair royalties for “technologies they have nothing to do with.”

According to a statement Apple shared with several news sites, Qualcomm “reinforces its dominance” through exclusionary tactics and high patent licensing fees. Apple’s full statement is below:

“For many years Qualcomm has unfairly insisted on charging royalties for technologies they have nothing to do with. The more Apple innovates with unique features such as TouchID, advanced displays, and cameras, to name just a few, the more money Qualcomm collects for no reason and the more expensive it becomes for Apple to fund these innovations. Qualcomm built its business on older, legacy, standards but reinforces its dominance through exclusionary tactics and excessive royalties. Despite being just one of over a dozen companies who contributed to basic cellular standards, Qualcomm insists on charging Apple at least five times more in payments than all the other cellular patent licensors we have agreements with combined.

To protect this business scheme Qualcomm has taken increasingly radical steps, most recently withholding nearly $1B in payments from Apple as retaliation for responding truthfully to law enforcement agencies investigating them.

Apple believes deeply in innovation and we have always been willing to pay fair and reasonable rates for patents we use. We are extremely disappointed in the way Qualcomm is conducting its business with us and unfortunately after years of disagreement over what constitutes a fair and reasonable royalty we have no choice left but to turn to the courts.”

In the lawsuit, filed in a federal district court in the Southern District of California, Apple accuses Qualcomm of using its position as the supplier of a key iPhone component to drive up patent licensing fees.

Qualcomm supplies the LTE modems used in Apple’s line of iPhones, and up until 2016, the company was Apple’s sole supplier. The iPhone 7 and the iPhone 7 Plus use modems from both Qualcomm and Intel.

Qualcomm reportedly forced Apple to use its LTE chips exclusively in iOS devices and pay a percentage of the total average selling price of an iPhone for access to Qualcomm patents.

Qualcomm is supposed to provide Apple with quarterly rebates, but has failed to do so for the past year because of Apple’s participation in an antitrust investigation against Qualcomm in South Korea. That investigation led to an $850+ million fine against Qualcomm for anticompetitive licensing practices.

Apple is seeking $1 billion in rebate payments that have been withheld.

Earlier this week, the United States Federal Trade Commission filed a lawsuit against Qualcomm that focused in part on Apple and Qualcomm’s licensing deals. According to the FTC, Qualcomm imposes “onerous and anticompetitive supply and licensing terms” on its smartphone partners by abusing its patent portfolio.

Qualcomm has said it has “grave concerns” about the lack of evidence supporting the FTC’s allegations and has promised to defend itself in federal court.

Tags: lawsuit, Qualcomm
