Facebook employees are unofficially trying to defeat fake news
Ever since America opted to elect that guy to its highest office, a lot of accusatory fingers have been pointed toward Facebook. After all, the social network has the attention of hundreds of millions of voters and has a reputation for proliferating propaganda. Mark Zuckerberg has denied the accusation that Facebook’s laissez-faire attitude toward fake news contributed to Trump’s win, but his employees disagree. BuzzFeed has spoken to several sources within the company who say dozens of workers are privately investigating the company’s actions.
Facebook is quick to censor posts if they contain nudity or other sexual content, but has been noticeably slow at dealing with political content. In the run-up to the election, another BuzzFeed investigation found that a group of Macedonian teenagers had learned how to game Facebook’s News Feed for money. The result was a slew of pro-Trump content that claimed the Pope had endorsed the candidate, amongst other things. Zuckerberg has often claimed that false content is just “one percent” of the site’s overall material, but one percent — when multiplied across the billions of shares every day — is still a big deal.
Last week, Zuckerberg spoke at a conference where he said the idea that his site influenced the election was “crazy.” One of BuzzFeed’s sources said that the only thing crazy about his statement was that he’d dismiss the problem out of hand. The individual went on to say that “he [Zuckerberg] knows, and those of us at the company know” just how widely fake news had proliferated across the site. The anonymous sources also said that they’d been warned by Facebook chiefs not to talk to the media and risked losing their jobs by doing so.
Facebook’s been something of a leaky ship over the last few days, with many sources telling reporters the site didn’t do enough. Gizmodo reported that the company did develop a tool that would better target and delete fake news from the site. But the tool was shelved when executives found out that it would impact right-wing sites far more than other outlets. It’s believed that they were worried about appearing biased in the wake of an earlier scandal where human curators claimed to have been instructed to suppress conservative news. Facebook has denied all of the above, of course.
Facebook’s CEO has also spoken out about preserving “diversity” on his company’s board by supporting Peter Thiel. At that same conference, Zuckerberg also said that the problem with this election wasn’t that his site was awash with fake news, but with engagement. In the words of former Facebook employee Bobby Goodlatte, “bullshit is highly engaging.”
Source: BuzzFeed
Snowden: We rely too much on Facebook for news
Facebook has been under fire this past week for supposedly influencing the election by not doing enough to stop the rise of fake news. In an interview at the Real Future Fair conference, Edward Snowden said that while this is certainly an issue, the real problem here is that Facebook is where most people get their news. “This gets into a bigger challenge, which is lack of competition,” he said. “This is a danger of a single point of failure.”
Snowden, who is currently in exile in Russia and appeared at the conference via a telepresence robot, did say that he wasn’t sure if Facebook really did influence the election. If it did, he said, it was a “sad indictment of our democracy that our voters could be so easily misled.” Still, this whole affair does point out how dangerous it is to rely on a single company or service for news. “If one company makes a bad decision, we all suffer,” he said.
Instead, Snowden called for a more “federated system,” which would consist of something like 10,000 Facebooks connected together. That way, if one particular network fails, it doesn’t bring down the whole system. He didn’t specify how this would work, but he did say that Silicon Valley’s desire to implement “world-eating services” asks us to accept a status quo in which we set aside competition in favor of scale. “I think we should be cautious in embracing this,” he said. “When we look at monopolies throughout the past, they’ve grown very quietly.” And as soon as they achieve a position of prominence, according to him, no one can stop them.
“They get less careful. They get more muscular. They end up trampling not only their customers but paradigms in ways that we need to be very cautious about,” he said. “One company shouldn’t have the power to reshape the way we think.”
Chat bot helps immigrants complete their visas
Legal-minded chat bots are useful for more than just fighting parking tickets — they might just start your life in a new land. Visabot has launched a namesake Facebook Messenger chat AI that helps you complete US visa applications. After you answer a slew of questions, the bot fills out the relevant forms and gives you instructions on how to send those documents to immigration officials. And it should learn over time — if you voluntarily report officials’ decisions on those forms, you can improve the bot’s approach for future applications.
The assistance is limited to two visas right now (B-2 extensions for business and travel, O-1 for exceptional individuals), but the plan is to extend it to H-1B and L-1 skilled worker visas. Your first session is free, and you can talk to lawyers over Skype if Visabot can’t answer a question.
This won’t take all the hard work out of applying for a visa, and there’s certainly no guarantee that you’ll be approved. The AI can only work with the information you provide — it can’t work miracles. However, it could eliminate some of your initial uncertainty about the process, and spare you from talking to flesh-and-blood lawyers until it’s truly necessary.
Via: VentureBeat
Source: Visabot
WhatsApp finally launches video calling
WhatsApp has introduced a number of new features in recent months but perhaps the most eagerly awaited has been video calling. Some users briefly flirted with video calls back in May but it took until October before a wider set of Android device owners could get involved. Now, the Facebook-owned company is ready to unleash video calls on everybody and in the coming days will roll out the feature to iOS, Android and Windows devices.
When the feature is activated, open a chat and select the phone icon. You’ll then be given the option to place a voice or video call. When we tested the feature, we found that voice and video quality was excellent over strong Wi-Fi, but your mileage may vary if you’re connecting via a mobile or slower broadband connection.
While Facebook Messenger users have enjoyed video calls for over a year and a half, many popular messaging apps still don’t offer the feature. With over one billion users, WhatsApp’s video calls can connect people all over the world, regardless of their choice of mobile operating system, allowing it to stay ahead of apps like Google’s Allo.

Facebook will also cut off fake news sites from ad money
Facebook has followed Google’s example and finally banned fake news sites from using its ad network to generate revenue, according to The Wall Street Journal. Fake news websites now fall under the “misleading, illegal and deceptive sites” category, which are already prohibited from using the company’s Audience Network. Facebook’s Audience Network shows its customers’ advertisements not just on the social network itself, but also on other mobile apps and websites.
If you’ll recall, Mark Zuckerberg and co. came under fire after the election, with people accusing the tech titan of helping skew the results by allowing election-related fake news to propagate. Zuckerberg strongly denied the accusations, claiming that 99 percent of the content you see on the website is authentic. According to a Gizmodo report, though, the company knew it had a problem with fake news websites and had a tool to block them. Facebook reportedly decided not to use that tool because it would have disproportionately suppressed conservative news sources in people’s News Feeds. The company has denied that allegation as well.
As for this new development, a company spokesperson told The WSJ:
“While implied, we have updated the policy to explicitly clarify that this applies to fake news. We vigorously enforce our policies and take swift action against sites and apps that are found to be in violation. Our team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance.”
Source: The Wall Street Journal
Facebook didn’t stop fake news because it’s afraid of conservatives
In the last week, Facebook has been battling the accusation that fake, often inflammatory stories showing up in users’ News Feeds influenced how people voted in the presidential election. The social media giant insists it is taking the issue seriously and is searching for an as-yet-unspecified solution, even as CEO Mark Zuckerberg personally defended Facebook, claiming that over 99% of stories on the network are authentic and that it was “extremely unlikely” fake news affected the election outcome. But within Facebook, a fierce debate has allegedly raged since May over whether to deploy an update that curbs fake and hoax news. The update was never rolled out, reportedly because stories from conservative news sources were disproportionately downgraded and removed from users’ News Feeds.
To be clear, not much is known about how effective the update actually was at scrubbing fake news. But ultimately, it was shelved and buried, sources told Gizmodo. One said, “They absolutely have the tools to shut down fake news,” but added that product decisions (i.e. whether to deploy the update) were affected by fear of offending conservative readers even further after a mini-scandal six months ago.
Back in May, Facebook’s Trending Topics section caught flak over reports that its curation team “routinely suppressed” stories of interest to conservative readers. This likely contributed to the decision in August to pink-slip the team and ditch human control of trending entirely in favor of a supposedly impartial algorithm. That incident cast a long shadow over the company, according to a New York Times report released last weekend: “The Trending Topics episode paralyzed Facebook’s willingness to make any serious changes to its products that might compromise the perception of its objectivity.”
Without their human editors, hoax stories blossomed across the social network. Fake reports circulated that Fox News anchor Megyn Kelly had been fired just after the Trending Topics team was let go, while a 9/11 tabloid conspiracy story rose to the top of the algorithm-controlled Trending Topics just prior to the anniversary.
But it’s election-related fake news that has raised concerns in the last few days. Posts like “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide” or “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement,” Gizmodo points out, were circulating in the lead-up to the election. In his November 12th post addressing the issue, Zuckerberg rejected the premise that those kinds of stories affected voters: not only do they account for less than 1% of content passing around Facebook, he said, but hoax posts showed up on both sides of the aisle — and many don’t involve politics at all.
Even relying on napkin math, that’s still almost 1% of the stories in front of Facebook’s 1.79 billion users. And since 44% of US adults use the social network as a news source, according to a Pew survey, that’s a sizable chunk of Americans who kept seeing fake news, and however many of them trusted those stories’ authenticity kept circulating them. Considering how many states’ electoral votes hinged on margins of less than one percent, the marginal influence of fake news isn’t something to dismiss.
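That napkin math can be sketched in a few lines. Only the 1.79 billion user figure and Zuckerberg’s “less than 1%” claim come from the reporting; the per-user story count is an assumed number purely for illustration:

```python
# Rough sketch of the napkin math above. The daily story count per user is an
# assumption for illustration; the 1.79 billion user figure and the "less
# than 1%" fake-content ceiling are the article's numbers.
monthly_users = 1_790_000_000
fake_share = 0.01                     # upper bound from the "less than 1%" claim
stories_seen_per_user_per_day = 100   # assumed, not a sourced figure

# At the 1% ceiling, an average user would encounter roughly one
# hoax story per day:
fake_per_user_per_day = stories_seen_per_user_per_day * fake_share
print(fake_per_user_per_day)  # → 1.0

# Scaled across the whole user base, that is on the order of
# billions of hoax impressions per day:
daily_fake_impressions = monthly_users * fake_per_user_per_day
print(f"{daily_fake_impressions:.2e}")  # → 1.79e+09
```

The point of the sketch is that a “less than 1%” share is a rate, not a count: applied to a base of nearly two billion users, even a small rate produces an enormous absolute number of impressions.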
Update: A Facebook spokesperson shared this statement with Engadget over email, responding to Gizmodo’s story:
The article’s allegation is not true. We did not build and withhold any News Feed changes based on their potential impact on any one political party. We always work to make News Feed more meaningful and informative, and that includes examining the quality and accuracy of items shared, such as clickbait, spam and hoaxes. Mark himself said “I want to do everything I can to make sure our teams uphold the integrity of our products.” This includes continuously reviewing updates to make sure we are not exhibiting unconscious bias.
Source: Gizmodo
After Math: The new normal
This week sure went sideways in a hurry. After Hillary Clinton’s stunning election night collapse, the American people are now faced with four years of, well, whatever sorts of fresh hell the President-elect’s administration unleashes upon us. Gitmo is getting better Wi-Fi, Facebook denied all responsibility in spreading FUD throughout the election, the internet’s worst people are bolder than ever and the Navy can’t even afford bullets for its brand-new destroyer. Numbers, what are they good for? Not predicting presidential elections, apparently.
Mark Zuckerberg: over 99 percent of Facebook content is authentic
Facebook creator Mark Zuckerberg is clearly riled by allegations that his social network skewed the election by allowing fake news to propagate, and he isn’t having any of it. The CEO has posted a defense of Facebook in which he argues that the low volume of bogus news made it “extremely unlikely” that it gave Trump his election win. According to Zuck, “more than 99%” of the Facebook content you see is authentic, and what fake news exists is neither limited to one side of the political spectrum nor always political. This isn’t to say that Facebook is unconcerned with hoaxes, the exec says, but it has to tread “very carefully” before it purges anything.
The biggest challenge is simply determining whether or not something is a hoax in the first place. In some cases, a story may get core ideas right but omit important details. In other cases, stories may simply reflect opinions that others don’t like, even though they’re technically accurate. Recently implemented tools for reporting fake news should make a difference, as will future efforts to fight it, but Facebook is leery of becoming an “arbiter of truth” in its own right.
Zuckerberg is naturally eager to defend Facebook’s positive roles in the election, such as sparking a surge in voter registration and promoting discussion. “A lot of that dialog may not have happened without Facebook,” he says.
The defense comes as word circulates of the company questioning its level of influence. New York Times sources understand that “several” executives asked each other whether or not Facebook played a pivotal role in the election, and there was enough concern that they agreed to tackle staff anxieties at a quarterly all-hands meeting. They also planned a separate chat with the company’s policy team. While it’s not clear how much uncertainty there is among top brass, rank-and-file staff are worried about the spread of racist memes and a “filter bubble” where you only see posts that agree with your ideology.
The Zuckerberg post has a point in that the company has to be careful, and that there’s a risk of overstating the volume of fake news. However, he sidesteps the question of the significance of the hoaxes that get through — a handful of lies can be very damaging if enough people believe them. University of North Carolina professor Zeynep Tufekci notes that a story falsely claiming that Pope Francis endorsed Trump was shared nearly a million times, for instance. While many of those who read such articles have already made up their minds, it’s difficult to believe that no one was swayed.
Source: Mark Zuckerberg (Facebook)
Facebook Messenger public chats arrive in two countries
Facebook’s plan to revive its public group chat feature just became official. The social network tells the Courier Mail that it’s rolling out a test version of Messenger Rooms on Android devices in Australia and Canada. As leaked code suggested, this is similar to but not quite like the Rooms app of old. All you have to do is create (or search for) a room around a given topic — after that, anyone can join. If you’re worried that you’ll get an influx of trolls, you can require approval for new participants.
It’s not certain if and when Rooms will make it to the US, UK and other countries. The Australian and Canadian launches are really about giving Facebook a way to test English-language features with a smaller audience, to make sure they work smoothly and gauge demand for a wider rollout. However, Facebook’s public announcement of Rooms suggests that you could see the feature in other regions relatively soon. As it stands, this is a safer investment than the Rooms stand-alone app. Facebook doesn’t have to pour as many resources into this as it would dedicated software, and you’re more likely to use the feature if it’s inside an app you already use.
Via: TechCrunch
Source: The Courier Mail
A Facebook bug killed off people before they were dead
Well, this is awkward. Facebook incorrectly flagged some people’s profile pages with a message saying that the person was deceased. And it looks like the glitch was pretty widespread: even founder Mark Zuckerberg had apparently ceased to be. A banner at the top of his profile page read, “We hope people who love Mark will find comfort in the things others share to remember and celebrate his life.” A number of Engadget staffers had passed on as well; we weren’t just pining.
But the bug seemed to depend on who was viewing the page rather than on individual accounts: a few of us didn’t see any issues when scrolling through the social network’s pages. We’ve reached out to Facebook for more information and will update this post should it arrive.