Facebook still has a thing or two to learn about what’s considered acceptable in your timeline. The social network is catching flak after it briefly took down an ad for Cancerfonden’s breast cancer awareness campaign that included cartoon representations of breasts — and very abstract ones at that (they were just pink circles). The company has since restored the post and apologized, but only after Cancerfonden unsuccessfully tried using a ‘safe’ blurry image and posted an open letter that blasted Facebook’s stance. You’d need square breasts to make Facebook happy, the organization argued.
In apologizing for the move, Facebook said that it examines “millions” of ad images each week and sometimes bans them by mistake. There’s no denying that the internet giant has a lot on its plate, and that it would be difficult to completely avoid slip-ups. However, this is just the latest in a string of incidents where Facebook has been overly aggressive with takedowns, only to backtrack after a public uproar. And this time, it can’t pin the removal on ambiguities in its existing policy — it acknowledged that the original image was fine in its mea culpa. Clearly, the company has yet to reach that point where it can reliably tell the difference between potentially offensive content and something that’s merely testing boundaries.
Source: Cancerfonden (1), (2)
Facebook has quietly upgraded its Messenger app for Windows 10 with the ability to make voice and video calls, VentureBeat has discovered. No more leaving the app to ring up a friend through a browser. If the familiar phone or camera icon you're used to seeing on iOS and Android shows a green bubble, your friend is online — just tap either one to start a call.
In case you don’t have the feature yet, you’ll likely get it soon: a Facebook spokesperson told the publication that it only started rolling out last week. When the feature does go live for you, you’ll get call notifications if someone rings you up and be able to leave voicemails in your friends’ inboxes. VentureBeat says you’ll also be able to choose which camera to use, record your video calls and do group voice — not video, unfortunately — calls if the whole squad wants to chat.
Facebook has also updated WhatsApp for Windows Phone with video calling capability, a Spanish website has reported. However, it’s an experimental release exclusively for select beta users, so you’ll have to be really lucky to be able to test it out before everyone else.
A Russian publication has spotted an experimental Instagram feature that obviously comes from its parent corporation's repertoire: live videos. One of T Journal's readers sent in screenshots and a video of a curious icon, clearly marked "Live," lined up with Instagram Stories at the top of the screen. It led to a "popular live broadcasts" page, but it refused to load — not surprising, since the company hasn't even officially announced the feature yet. T Journal also posted a screenshot of the app's camera screen that says "Go Insta!" at the bottom, which we're assuming starts a live broadcast.
Facebook, Instagram’s overlord, launched Live videos to the masses back in January following Periscope’s and Meerkat’s success. While Meerkat had to shut down after being eclipsed by Periscope, Facebook’s Live videos continue to thrive. It makes sense for the mega-social network to bring the capability to its popular photo app, but at this point, it’s still unclear if and when it’ll get a wider release. Those hoping and wishing to get an early glimpse of Instagram Live, though, take note: T Journal’s reader was using a Nexus 6P.
Via: The Verge
Facebook rightly came under fire for censoring "The Terror of War," the iconic, Pulitzer-winning "napalm girl" photo, not that long ago. Now, the social network is altering its course as a direct result. "In the weeks ahead, we're going to begin allowing more items that people find newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards," writes Joel Kaplan, the site's VP of Global Public Policy.
The thing is, Zuckerberg and Co. don't know exactly how they'll do it without stepping on anyone's toes when it comes to local cultural norms. Kaplan says the service is going to tap its community and partners to figure out the right tools and rules enforcement. Specifically: experts (gurus are all on vacation, apparently), publishers, journalists, photographers, law enforcement officials and safety advocates. Why start relying on humans instead of algorithms now, though? Oh, right.
“I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.” — United States Supreme Court Justice Potter Stewart
In 1964, the Supreme Court overturned an obscenity conviction against Nico Jacobellis, a Cleveland theater manager accused of distributing obscene material. The film in question was Louis Malle’s “The Lovers,” starring Jeanne Moreau as a French housewife who, bored with her media-mogul husband and her polo-playing sidepiece, packs up and leaves after a hot night with a younger man. And by “hot,” I mean a lot of artful blocking, heavy breathing and one fleeting nipple — basically, nothing you can’t see on cable TV.
In six simple words, Justice Stewart encapsulated the near-impossible act of creating a single definition of pornography: "I know it when I see it."
Attitudes toward sex have changed significantly since 1964. Soon after Jacobellis faced the Supreme Court, the United States experienced a sexual revolution followed by the porn boom of the 1970s and, more recently, the advent of the internet. Today, anyone with an internet connection can be knee-deep in creampies and pearl necklaces in a matter of seconds. We’ve come a long way, but one thing remains the same: We’re still nowhere close to a universal definition of pornography or obscenity.
Jeanne Moreau and Jean-Marc Bory in the not-so-sexy scene from "The Lovers" at the heart of Jacobellis v. Ohio (Image Credit: Getty Images)
But unfettered access to all things smutty, dirty and questionably filthy has created a surge in censorship tools that, in theory, use algorithms and advanced artificial intelligence programs to identify porn and weed it out. Last year, Twitter acquired Madbits, a small AI startup that, according to a Wired report, created a program that accurately identifies NSFW content 99 percent of the time and alerts users to its presence. Late last month, Yahoo open-sourced its own deep-learning AI porn filter, and there are no doubt similar projects underway at other internet companies.
Big players have been sinking big money into cleaning up the internet for decades. The trouble is, censorship is a slippery slope, and obscenity is inherently subjective. If we can’t agree on what constitutes pornography, we can’t effectively teach our computers to “know it when they see it.” No matter the sophistication of the technology or the apparent margin of error, porn filters still depend on humans to teach them what is and isn’t NSFW.
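To make that dependence on human judgment concrete, here's a minimal sketch in Python of how any such filter ultimately works: a model assigns a score, and a person chooses both the labels it learned from and the cutoff it enforces. The scores, image names and threshold below are all invented for illustration; this is not Yahoo's or Twitter's actual system.

```python
# Illustrative sketch only: a filter is a learned scoring function plus a
# human-chosen threshold. The "know it when you see it" problem lives in the
# training labels and the cutoff, both of which come from people.

def nsfw_score(image_id: str) -> float:
    """Stand-in for a trained model's probability that an image is NSFW.
    A real classifier would run a neural net over pixel data; we use
    canned scores to keep the example self-contained."""
    canned = {
        "vacation_photo": 0.02,
        "mapplethorpe_print": 0.88,  # art that filters historically misfire on
        "napalm_girl_1972": 0.91,    # newsworthy image a naive filter blocks
        "explicit_upload": 0.99,
    }
    return canned.get(image_id, 0.5)

def moderate(image_id: str, threshold: float = 0.8) -> str:
    """The threshold encodes a policy decision, not a fact about the image."""
    return "blocked" if nsfw_score(image_id) >= threshold else "allowed"

for img in ["vacation_photo", "mapplethorpe_print", "napalm_girl_1972"]:
    print(img, "->", moderate(img))
```

Note that the hypothetical model happily blocks the historic war photo alongside genuinely explicit material: the score can't distinguish newsworthiness, because nobody taught it to.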
Sometimes a naked child is more than a naked child.
In the early days of the world wide web, US libraries and schools implemented filters based on rudimentary keyword searches in order to remain in compliance with the Children's Internet Protection Act. The act attempts, as the name suggests, to protect children from the darker side of the internet, specifically "pictures that are: (a) obscene; (b) child pornography; or (c) harmful to minors (for computers that are accessed by minors)."
But that’s not exactly how it played out.
A 2006 report on internet filtering from NYU’s Brennan Center for Justice referred to early keyword filters and their AI successors as “powerful, often irrational, censorship tools.”
“Filters force the complex and infinitely variable phenomenon known as human expression into deceptively simple categories,” the report continued. “They reduce the value and meaning of expression to isolated words and phrases. An inevitable consequence is that they frustrate and restrict research into health, science, politics, the arts, and many other areas.”
The report found that popular filters inexplicably blocked sites belonging to Boing Boing, GLAAD, photographer Robert Mapplethorpe and Super Bowl XXX, among others, and often reflected the political and social prejudices of their creators. While Yahoo and Google’s AI-powered filters have replaced keyword searches with sophisticated image recognition, they still rely on humans to teach them what is and isn’t safe for work. And as Facebook recently discovered, images are no less divisive than words.
(Image Credit: ASSOCIATED PRESS)
The social network faced widespread backlash in early September when it took down the photo above for violating its community standards. The Pulitzer Prize-winning image from 1972 shows a naked 9-year-old girl running away from a napalm attack during the Vietnam War. In defending the removal, Facebook said, "While we recognize that this photo is iconic, it's difficult to create a distinction between allowing a photograph of a nude child in one instance and not others."
But as the New York Times reported, Facebook reinstated the original post after thousands of users posted the photo to their timelines in protest.
“An image of a naked child would normally be presumed to violate our community standards, and in some countries might even qualify as child pornography. In this case, we recognize the history and global importance of this image in documenting a particular moment in time.”
It’s not clear how the image was flagged, but whether it was a human or AI, or some mix of the two, the bottom line is: Sometimes a naked child is more than a naked child.
Sometimes a man with a bullwhip hanging out of his ass is more than a man with a bullwhip hanging out of his ass.
This isn’t the first time Facebook has been criticized for censoring images that many deem to be “clean.” The social network has repeatedly come under fire for deleting posts containing exposed female breasts in the context of nursing photos and information about mammograms. More recently it learned a lesson about the fine line between pornography and art, when it deleted and later reinstated a video of a black woman who painted her naked body white on Facebook Live to draw attention to police brutality and the Black Lives Matter movement.
The real world, too, is rife with examples of the debate over what is art and what is porn. In 1990, the Contemporary Arts Center in Cincinnati and its director faced, and were acquitted of, obscenity charges for an exhibition of Robert Mapplethorpe's photography.
It was the first time such charges were brought against a museum in the US, and the photos in question — depictions of gay S&M — were at the center of a national debate headed by the Republican Party. The prosecution argued that the exhibition, funded by the National Endowment for the Arts, constituted pornography, while the defense defined it as art. That case proved that sometimes a man with a bullwhip hanging out of his ass is more than a man with a bullwhip hanging out of his ass. It also proved that our access to art, no matter how controversial, isn't always guaranteed.
Our personal prejudices continue to undermine our access to information and freedom of expression, despite advances in internet filtering. We may never agree on what NSFW really means, but without a universal definition, our machines will simply act as conduits for our own opinions. Not one of us can claim to know it when we see it, and no amount of code can change that.
Facebook is really trying to show News Feed stories you might want to see, even if you’re on a slow internet connection. Last year, it tweaked its algorithms to serve up stories by caching them on your device if internet speeds slow down. While this guarantees you have some kind of content, it means you might get old or irrelevant articles. So, Facebook devised a new ranking system that weighs “both new stories from the server and unseen stories from the persistent cache.” The result should be posts you’re more interested in reading, even if you’re stuck on a train or in a refugee camp.
The social network says this “client-side ranking” takes advantage of your smartphone’s computing power. It differs from browsers like Chrome, which serves up text only if the internet is slow, loading images later when things speed up. Facebook, by contrast, feels a story isn’t relevant “until its associated media (e.g. images, at least beginning of a video, text of Instant Articles, etc.) have loaded.” As a result, it avoids spinners and grey boxes by “requiring all stories to have all necessary media available before rendering them in News Feed.”
Now, when you start up the app and scroll your feed, the client (i.e., the app on your phone) triggers a request to "get next best story." It then weighs the server ranking based on the type of article and looks at cached stories to see whether the image or video is available, among other things. After taking your internet speed into account, the algorithm scores and sorts all the stories and serves you the top one.
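Facebook hasn't published the actual formula, but the process it describes can be sketched roughly as follows. Every field name, weight and penalty here is an assumption made for illustration, not Facebook's real implementation.

```python
# Hypothetical sketch of "client-side ranking": blend the server's relevance
# score with what the client knows (cached media, connection speed, staleness).
from dataclasses import dataclass

@dataclass
class Story:
    id: str
    server_rank: float   # relevance score assigned server-side
    media_ready: bool    # image/video/article text already downloaded?
    age_hours: float     # how stale a cached story is

def client_score(story: Story, bandwidth_kbps: float) -> float:
    score = story.server_rank
    # On a slow connection, a story whose media is already cached is worth
    # far more than a fresher one that would render as a grey box.
    if not story.media_ready:
        score *= min(1.0, bandwidth_kbps / 500.0)
    # Penalize staleness so the cache doesn't serve week-old posts.
    score *= 1.0 / (1.0 + 0.05 * story.age_hours)
    return score

def next_best_story(stories, bandwidth_kbps):
    return max(stories, key=lambda s: client_score(s, bandwidth_kbps))

feed = [
    Story("fresh_no_media", server_rank=0.9, media_ready=False, age_hours=0.1),
    Story("cached_with_media", server_rank=0.7, media_ready=True, age_hours=2.0),
]
print(next_best_story(feed, bandwidth_kbps=50).id)    # slow connection
print(next_best_story(feed, bandwidth_kbps=5000).id)  # fast connection
```

On the slow connection the cached story wins despite a lower server rank, because its media can actually render; on a fast connection the fresher story wins. That trade-off is the whole point of ranking on the client.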
The motivation for the changes was to improve News Feeds in emerging markets, Facebook says. However, the updates will also benefit everyone, “as we all experience less than ideal internet connections at times.” Zuckerberg & Co. will build on the new foundation and have promised more updates to come.
Presidential candidate Donald Trump has had a rough week. After a tape was revealed of him bragging that he could grab women "by the pussy" and get away with it, several former female acquaintances have come forward accusing him of sexual assault. But that hasn't stopped venture capitalist Peter Thiel, an ardent Trump supporter, from giving him $1.25 million this past weekend. It so happens that Thiel is also a part-time partner of startup incubator Y Combinator and a long-time member of Facebook's board of directors. Yet neither entity has rescinded its support of Thiel. Facebook CEO Mark Zuckerberg's reason? To protect those with "different viewpoints" in the name of "diversity."
Y Combinator president Sam Altman gave a similar reason, saying that we should talk to people who are different from us and that to terminate their relationship with Thiel over his support of a political candidate would be a "dangerous path" to take.
I could see their point if Trump were an ordinary Republican candidate. Facebook came under fire earlier this year for apparently suppressing conservative news, and Zuckerberg has made a few anti-Trump remarks in the past, so it's understandable that he wants to make nice with both sides of the political aisle.
But Trump is not a normal conservative. He has made blatantly xenophobic and racist statements by threatening to ban an entire religion, calling Mexicans rapists and implying that all African-Americans live in crime-infested "inner cities" (even his racism is outdated). He has said he would pay the legal fees of any supporter who attacked protesters. He has not denounced the support he's received from white supremacists and far-right hate groups. He's been accused of sexual harassment, assault and even rape. He has said that, once elected, he wants to have his political rival sent to jail. Worse, he has refused to say whether or not he'd accept the results of the election and has claimed — with absolutely zero evidence — that it's rigged, sowing distrust and inciting conflict.
These are the statements of a misogynist, a racist and a fascist. And by donating such a large sum of money to his campaign, Thiel is essentially endorsing Trump’s hateful behavior. Ellen Pao, a co-founder of a diversity initiative called Project Include, wrote in a blog post that this donation is a “direct contribution to creating hate and instilling fear.”
Zuckerberg wrote, "There are many reasons a person might support Trump that do not involve racism, sexism, xenophobia or sexual assault." That's superficially true, but it suggests that saying hateful, misogynistic things and being repeatedly accused of sexual assault shouldn't automatically disqualify you from the presidency, regardless of your political views.
Even members of Trump’s own party are abandoning him. Arizona Senator John McCain has withdrawn his support, House Speaker Paul Ryan has refused to campaign for him and former GOP rivals like Jeb Bush and John Kasich have denounced him as well. In an op-ed for the Washington Post, Maine Senator Susan Collins wrote that Trump “does not reflect historical Republican values nor the inclusive approach to governing that is critical to healing the divisions in our country.” 2012 Republican presidential nominee Mitt Romney has said “I simply couldn’t ignore what Mr. Trump was saying and doing, which revealed a character and temperament unfit for the leader of the free world.” To them, denouncing Trump has nothing to do with a disagreement over tax codes or public policy. Instead, it’s taking a stand for basic human decency.
And let’s not kid ourselves: There’s a difference between welcoming opposing viewpoints on Facebook and promoting straight-up bigotry. It’s one thing to provide a platform for people to say whatever they want under the guise of free speech, and it’s another thing to basically say it’s OK for a member of your own board of directors to support a hatemonger.
In short, it’s laughable that Zuckerberg is keeping Thiel on Facebook’s board in order to cultivate a culture of “diversity” when Trump’s entire rhetoric is against it. Maybe, instead, Zuckerberg could direct his efforts into real inclusion, by hiring more women and underrepresented minorities. Or inviting a single person of color to join its board. Maybe then, I could believe that Facebook really believed in diversity. But as long as Facebook keeps Thiel on its board of directors, I can’t say that I do.
Bots are one of the big buzzwords of 2016; Google, Microsoft and Facebook have all made them major parts of their strategy this year. Yes, they might not all be panning out quite as planned, but that doesn’t mean bots are out of style yet. Take eBay: the company just launched a shopping bot for Facebook Messenger in beta appropriately called Shopbot.
Once you set it up (go to Facebook Messenger and search for eBay Shopbot), you just tell the bot what you want to buy and it’ll start serving up suggestions and asking you additional questions to refine your search. I searched for the Apple Watch and it showed me appropriate suggestions and also prompted me with different options for band color, case material, size and so forth to make the suggestions more accurate.
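The question-and-answer refinement Shopbot walks you through boils down to repeatedly filtering a candidate set: each answer narrows the results before the next question is asked. Here's a toy sketch; the catalog, attribute names and dialogue are invented for illustration and have nothing to do with eBay's actual implementation.

```python
# Toy sketch of a refine-by-question shopping flow.
CATALOG = [
    {"name": "Apple Watch Series 2", "band": "sport", "size": "42mm"},
    {"name": "Apple Watch Series 2", "band": "leather", "size": "38mm"},
    {"name": "Apple Watch Series 1", "band": "sport", "size": "38mm"},
]

def refine(results, attribute, value):
    """Each answered question narrows the candidate set."""
    return [item for item in results if item[attribute] == value]

results = CATALOG
# Bot: "Which band?" -> user: "sport"
results = refine(results, "band", "sport")
# Bot: "Which size?" -> user: "38mm"
results = refine(results, "size", "38mm")
print(results)  # a single matching listing remains
```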
You can even upload photos and the bot will analyze and search for the items contained within, but just don’t expect to get exact matches every time. A picture of my iPhone 6S brought up a whole range of iPhones from the 4 through the latest model, and an image of the Pixel C tablet brought up a host of no-name convertible devices that I wouldn’t want to drop any cash on.
Photos might be hit or miss, but overall the bot's search functionality seems to be pretty good. Where the experience falls down is in its clunky buying process. When you tap through to an item, it brings up a minimal detail page in Messenger's in-app browser; clicking the prominent "buy" button asks you to log in with your eBay credentials. That all makes sense, but I had the eBay app installed on both phones I tried this on — sending me to the app, where I was already logged in, would have made for a better experience.
Regardless, the eBay Shopbot does meet the company's mission of putting the service out in front of users on a different platform. "We're going to where our users are, versus letting it all play out on eBay.com and our mobile app," eBay chief product officer R.J. Pittman told Bloomberg. The eBay Shopbot is available now in beta and works on iOS and Android versions of Facebook Messenger as well as on the web.
Ever since Peter Thiel drove Gawker Media to bankruptcy in a bid to silence unfavorable press, many have been wondering: why is Facebook keeping Thiel on its board of directors when he’s antithetical to the company’s emphasis on free speech, and is an ardent Trump supporter? Facebook chief Mark Zuckerberg has finally broken the silence… but it’s not going to make everyone happy. In a leaked post (Facebook has since confirmed that it’s authentic), Zuckerberg claims that it’s all about upholding diversity. You can’t just stick up for people you already agree with, he argues — you have to also protect the rights of people with “different viewpoints.”
Zuck adds that it’s possible for people like Thiel to support Trump without embracing racism, sexism or other labels attached to the Republican candidate. It would be wrong to give Thiel the boot if he was really just concerned about smaller government, lower taxes or other typical right-wing views, according to the executive.
The argument is nothing new. Social critic Noam Chomsky has long contended that free speech means protecting the right to hold unpopular views, for example. Thiel’s stance complicates things, however, and his support for Trump isn’t the only reason people have distanced themselves from his venture capitalism. After all, he took down Gawker precisely to restrict freedom of expression, to suppress a view he didn’t like. Why is his free speech allowed to override someone else’s? Zuck clearly has to walk a fine line in such a politically sensitive climate, but it’s hard to ignore the contradiction of claiming to defend free speech while embracing someone bent on destroying it.
Via: CNN Money
Source: Hacker News
It's been a long damn road, getting from there to here, but we're finally at the third and final presidential debate. For the good of democracy, and the country, we're all going to tune in anyway to see what both candidates get up to. After all, the first debate was a good excuse for a stiff drink, and the second gave us a 70-year-old man dry humping a chair, Ken Bone and so many karaoke tweets. Thankfully, no matter where you are and what device you're rocking, there's a way to watch the final showdown between Hillary and Donald. The show begins at 9pm ET / 6pm PT and will be broadcast live from the University of Nevada, Las Vegas.
Same as the last two events, Facebook Live will leverage its deal with ABC News to broadcast the debate without commercials. As before, the social network will add in commentary from viewers as well as additional features not available to those watching on the TV. In addition, plenty of other outfits will use Facebook Live to stream their own versions of the debate, including Buzzfeed, CNBC and the New York Times.
As part of the company's live video push, Twitter will once again stream Bloomberg's feed of the debate. You'll also be able to enjoy the newswire's on-air analysis paired with Twitter's world-famous well-considered and thoughtful one-eyed invective.
When it comes to high-profile events that need streaming video, YouTube's uniquely placed to throw its considerable weight around. The site will serve streams of the debate from NBC, CBS, Fox, PBS, C-Span, the Washington Post, the New York Times, Univision and Telemundo. In addition, YouTube creators The Young Turks and Complex News will offer a different sort of commentary experience live from their smartphones.
If you're not yet wedded to the notion of cord cutting, that's okay, because the traditional broadcasters have you covered. The debate will be shown on ABC, NBC, CBS, Fox, MSNBC, CNN, C-Span, PBS, Telemundo, Univision and Fox News.
Image Credit: AP Photo/John Minchillo (Facebook), Getty (Las Vegas), Daniel Acker/Bloomberg via Getty Images (UNLV Sign) AP Photo/Julio Cortez (Candidates).