Facebook tests ‘floating’ videos in your News Feed
Pop-Up Video: it’s not just the greatest VH1 show ever, it’s also Facebook’s latest feature. The social network is rolling out floating videos for desktop users that can sit anywhere in your window while you continue browsing your News Feed, just like on Tumblr. You can activate the feature by clicking on a new button at the bottom-right of video embeds, which looks like this:

First spotted by The Next Web, the feature seems to be slowly rolling out to the majority of users. In our informal poll, four out of five people were already seeing the button in their feed. Facebook has been steadily improving its video options in an effort to muscle in on YouTube’s lucrative business of placing ads on user videos. If pop-ups get more users watching more videos for more time, that can only be a good thing when it comes to selling marketing space to potential advertisers.
Filed under: Facebook
Via: The Next Web
What is machine learning?
One area of technology that is helping improve the services we use on our smartphones, and on the web, is machine learning. Sometimes the terms machine learning and artificial intelligence get used as synonyms, especially when a big-name company wants to talk about its latest innovations. However, AI and machine learning are two quite distinct, yet connected, areas of computing.
The goal of AI is to create a machine that can mimic a human mind, and to do that it needs learning capabilities. However, the goals of AI researchers are quite broad and include not only learning, but also knowledge representation, reasoning, and even things like abstract thinking. Machine learning, on the other hand, is solely focused on writing software that can learn from past experience.
What you might find most astonishing is that machine learning is actually more closely related to data mining and statistical analysis than to AI. Why is that? Well, let’s look at what we mean by machine learning.

One of the standard definitions of machine learning, given by Tom Mitchell, a professor at Carnegie Mellon University (CMU), is:
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
To put that a bit more simply, if a computer program can improve how it performs a task by using previous experience, then you can say it has learned. This is quite different from a program which can perform a task because its programmers have already defined all the parameters and data needed to perform it. For example, a computer program can play tic-tac-toe (noughts and crosses) because a programmer wrote the code with a built-in winning strategy. However, a program that has no pre-defined strategy and only has a set of rules about the legal moves, and what counts as a winning scenario, will need to learn by repeatedly playing the game until it is able to win.
This doesn’t only apply to games; it is also true of programs which perform classification and prediction. Classification is the process whereby a machine can recognize and categorize things from a dataset, including visual data and measurement data. Prediction (known as regression in statistics) is where a machine can guess (predict) the value of something based on previous values. For example, given a set of characteristics about a house, a machine can estimate how much it is worth based on previous house sales.
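The house-price example above can be sketched with simple least-squares regression: the "experience" is a handful of previous sales, and the program predicts the price of an unseen house from them. The figures here are invented purely for illustration.

```python
# Minimal least-squares regression: predict house price from floor area.
# The data points are made up for illustration.
sizes = [50.0, 70.0, 90.0, 110.0, 130.0]      # square metres
prices = [150.0, 200.0, 255.0, 300.0, 360.0]  # thousands

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# Slope and intercept of the best-fit line y = a*x + b.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - a * mean_x

def predict(size):
    """Predict a price from previous sales (the program's 'experience')."""
    return a * size + b

print(round(predict(100)))  # estimate for a 100 m² house → 279
```

Add more sales data and the line (the "experience") shifts, improving the predictions — which is exactly Mitchell's definition of learning in miniature.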

That leads us to another definition of machine learning: the extraction of knowledge from data. You have a question you are trying to answer and you think the answer is in the data. That is why machine learning is related to statistics and data mining.
Types of machine learning
Machine learning can be split into three broad categories: supervised, unsupervised, and reinforcement learning. Let’s look at what they mean:
Supervised learning is where you teach (train) the machine using data which is well labeled. That means the data is already tagged with the correct answer (outcome). Here is a picture of the letter A. This is the flag for the UK, it has three colors, one of them is red, and so on. The larger the dataset, the more the machine can learn about the subject matter. After the machine is trained, it is then given new, previously unseen data, and the learning algorithm uses its past experience to give a result. That is the letter A, that is the UK flag, and so on.
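A minimal supervised-learning sketch: train on labeled points by computing the average point of each class, then label unseen points by the nearest class average. The two classes, coordinates, and labels are all invented for illustration.

```python
# Toy supervised learning: a nearest-centroid classifier.
from collections import defaultdict

# Labeled training data: (feature vector, label). The numbers are invented.
training = [
    ((2.0, 1.0), "red"), ((1.5, 1.2), "red"), ((2.2, 0.8), "red"),
    ((6.0, 5.0), "blue"), ((5.5, 5.5), "blue"), ((6.2, 4.8), "blue"),
]

# "Training" here just means computing the average point of each class.
sums = defaultdict(lambda: [0.0, 0.0, 0])
for (x, y), label in training:
    s = sums[label]
    s[0] += x; s[1] += y; s[2] += 1
centroids = {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def classify(point):
    """Assign the label of the closest class centroid."""
    px, py = point
    return min(centroids, key=lambda l: (centroids[l][0] - px) ** 2 +
                                        (centroids[l][1] - py) ** 2)

print(classify((2.0, 1.5)))  # → red
print(classify((5.8, 5.2)))  # → blue
```

The labels in the training set are the "supervision": without them, the program would have no idea what the two groups of points mean.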
Unsupervised learning is where the machine is trained using a dataset that doesn’t have any labeling. The learning algorithm is never told what the data represents. Here is a letter, but no other information is given about which letter. Here are the characteristics of a particular flag, but without naming the flag. Unsupervised learning is like listening to a podcast in a foreign language which you don’t understand. You don’t have a dictionary and you don’t have a supervisor (teacher) to tell you about what you are hearing. If you listen to just one podcast it won’t be of much benefit, but if you listen to hundreds of hours of these podcasts your brain will start to form a model about how the language works. You will start to recognize patterns and you will start to expect certain sounds. When you do get hold of a dictionary or a tutor then you will learn the language much quicker.
The key thing about unsupervised learning is that once the unlabeled data has been processed, it only takes one example of labeled data to make the learning algorithm fully effective. Having processed thousands of images of letters, processing one letter A will instantly label a whole section of the processed data. The advantage is that only a small set of labeled data is needed. Labeled data is harder to create than unlabeled data. In general we all have access to large amounts of unlabeled data, and only small amounts of labeled data.
Reinforcement learning is similar to unsupervised training in that the training data is unlabeled, however when asked a question about the data the outcome will be graded. A good example of this is playing games. If the machine wins the game then the result is trickled back down through the set of moves to reinforce the validity of those moves. Again, this isn’t much use if the computer plays just one or two games. But if it plays thousands, even millions of games then the cumulative effect of reinforcement will create a winning strategy.
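The "trickle the result back down through the moves" idea can be sketched as a toy value-update rule: a win sends credit backwards through the sequence of moves, with earlier moves rewarded less. The move names, discount, and learning rate here are all invented for illustration.

```python
# Toy reinforcement sketch: after a game, trickle the result back through
# the moves that were played, crediting earlier moves less.
values = {}     # move -> learned value estimate
GAMMA = 0.9     # discount: how quickly credit fades going backwards
ALPHA = 0.5     # learning rate: how far each estimate moves per game

def reinforce(moves, reward):
    """Update each played move's value from the final game result."""
    credit = reward
    for move in reversed(moves):              # last move first
        old = values.get(move, 0.0)
        values[move] = old + ALPHA * (credit - old)
        credit *= GAMMA                       # fade the signal going back

reinforce(["open-center", "take-corner", "block", "win-line"], reward=1.0)
print(values["win-line"], values["open-center"])  # later moves credited more
```

One game nudges the estimates only slightly; after thousands of games the moves that consistently lead to wins accumulate high values, which is the "winning strategy" the text describes.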
How does it work?
There are lots of different techniques used by engineers building machine learning systems. As I mentioned before, a large number of them are related to data mining and statistics. For example, if you have a dataset which describes the characteristics of different coins, including their weight and diameter, then you can employ statistical techniques like the ‘nearest neighbors’ algorithm to classify a previously unseen coin. What the ‘nearest neighbors’ algorithm does is look at the classification given to the nearest neighbors and then give the same classification to the new coin. The number of neighbors used to make that decision is referred to as ‘k’, and so the full title for the algorithm is ‘k-nearest neighbors.’
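The coin example maps directly onto a few lines of code: find the k training coins closest to the unseen one and take a majority vote on their labels. The weights, diameters, and coin names are invented for illustration.

```python
# A small k-nearest-neighbors classifier for the coin example.
from math import dist  # Euclidean distance, Python 3.8+

# Training data: (weight in grams, diameter in mm, label) — values invented.
coins = [
    (2.5, 19.0, "penny"), (2.6, 19.2, "penny"), (2.4, 18.9, "penny"),
    (5.0, 24.3, "quarter"), (5.1, 24.1, "quarter"), (4.9, 24.4, "quarter"),
]

def knn(sample, k=3):
    """Label an unseen coin by majority vote among its k nearest neighbors."""
    nearest = sorted(coins, key=lambda c: dist(sample, c[:2]))[:k]
    labels = [label for _, _, label in nearest]
    return max(set(labels), key=labels.count)

print(knn((2.55, 19.1)))  # → penny
print(knn((5.05, 24.2)))  # → quarter
```

The choice of k is a trade-off: a small k is sensitive to noisy neighbors, while a large k blurs the boundary between categories — which is exactly the crossover region the diagram below illustrates.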
However there are lots of other algorithms that try to do the same thing, but using different methods. Take a look at the following diagram:
The picture on the top left is the data set. The data is classified into two categories, red and blue. The data is hypothetical, however it could represent almost anything: coin weights and diameters, number of petals on a plant and their widths, etc. Clearly there is some definite grouping here. Everything in the upper left belongs to the red category, and the bottom right to blue. However in the middle there is some crossover. If you get a new, previously unseen, sample which fits somewhere in the middle, does it belong to the red category or to blue? The other images show different algorithms and how they attempt to categorize a new sample. If the new sample lands in a white area then it means it can’t be classified using that method. The number on the lower right shows the classification accuracy.
Neural Nets
One of the buzzwords that we hear from companies like Google and Facebook is “Neural Net.” A neural net is a machine learning technique modeled on the way neurons work in the human brain. The idea is that given a number of inputs the neuron will propagate a signal depending on how it interprets the inputs. In machine learning terms this is done with matrix multiplication along with an activation function.
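The "matrix multiplication plus activation function" idea can be shown with a single artificial neuron and a tiny two-layer net built from it. The weights below are made up for illustration; a real network would learn them from data.

```python
# One artificial neuron: weighted sum of inputs passed through an activation.
import math

def neuron(inputs, weights, bias):
    """Dot product of inputs and weights, squashed by a sigmoid activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid maps any z into (0, 1)

# A tiny two-layer "net": two hidden neurons feeding one output neuron.
# These weights are invented; training would adjust them.
def tiny_net(x):
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -1.1], 0.2)

out = tiny_net([1.0, 0.5])
print(out)   # a value between 0 and 1
```

A deep neural network is this same pattern stacked many layers deep, with the weights tuned during training rather than written by hand.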

The use of neural networks has increased significantly in recent years, and the current trend is to use deep neural networks with several layers of interconnected neurons. During Google I/O 2015, Senior Vice-President of Products Sundar Pichai explained how machine learning and deep neural networks are helping Google fulfill its core mission to “organize the world’s information and make it universally accessible and useful.” To that end you can ask Google Now things like, “How do you say Kermit the Frog in Spanish?” And because of DNNs, Google is able to do voice recognition, natural language processing, and translation.
Currently Google is using 30-layer neural nets, which is quite impressive. As a result of using DNNs, Google’s error rate for speech recognition has dropped from 23% in 2013 to just 8% in 2015.
Some examples of machine learning
So we know that companies like Google and Facebook use machine learning to help improve their services. So what can be achieved with machine learning? One interesting area is picture annotation. Here the machine is presented with a photograph and asked to describe it. Here are some examples of machine generated annotations:
The first two are quite accurate (although I am not sure there is a sink in the first picture), and the third is interesting in that the computer managed to detect the box of doughnuts, but it misinterpreted the other pastries as a cup of coffee. Of course the algorithm can also get it completely wrong:
Another example is teaching a machine to write. Cleveland Amory, an American author, reporter and commentator, once wrote, “In my day the schools taught two things, love of country and penmanship — now they don’t teach either.” I wonder what he would think about this:

The above handwriting sample was produced by a Recurrent Neural Network. To train the machine, its creators asked 221 different writers to use a ‘smart whiteboard’ and copy out some text. While they wrote, the position of the pen was tracked using infrared. This resulted in a set of x and y coordinates which were used for supervised training. As you can see, the results are quite impressive. In fact, the machine can actually write in several different styles, and at different levels of untidiness!
Google recently published a paper about using neural networks as a way to model conversations. As part of the experiment the researchers trained the machine using 62 million sentences from movie subtitles. As you can imagine, the results are interesting. At one point the machine declares that it isn’t “ashamed of being a philosopher!” Later, when asked to discuss morality and ethics, it said, “and how i’m not in the mood for a philosophical debate.” So it seems that if you feed a machine a steady diet of Hollywood movie scripts, the result is a moody philosopher!
Wrap-up
Unlike many areas of AI research, machine learning isn’t an intangible target; it is a reality that is already working to improve the services we use. In many ways it is the unsung hero, the uncelebrated star which works in the background trawling through all our data to try and find the answers we are looking for. And like “Deep Thought” from Douglas Adams’ Hitchhiker’s Guide to the Galaxy, sometimes it is the question we need to understand first, before we can understand the answer!
WhatsApp may get “like” and “mark as unread” options in next update
According to some reports from beta testers and other sources, WhatsApp may be getting a couple new features when their next update rolls out. The new features include a “Like” button and some ability to mark messages as unread.
One beta tester, Ilhan Pektas, tweeted out a message recently that translates roughly as, “Like Button for divided images in WhatsApp: o #cool”. Facebook has already implemented some existing features from other platforms into WhatsApp, like the “read message” tick marks found in their Messenger app that made their way over to WhatsApp. So it would not be a stretch to imagine Facebook adding a “Like” button to WhatsApp, especially in light of the role the Like button plays in their main service.
The other rumor concerning new features comes from ADSLZone and has to do with marking messages as “unread.” Being able to mark a message as unread is nothing new in technology, as can be seen in virtually any email client. However, given the interactive, two-way nature of WhatsApp, that may impact how an “unread” message feature is implemented. Currently WhatsApp provides the sender with an indicator via a blue tick mark that a recipient has seen their message. If a recipient goes back and marks a message as unread, WhatsApp will need to determine whether the sender would also see that change in status or if this would just be for recipients. ADSLZone came across this information in an internal WhatsApp discussion document.
source: AndroidPIT
Come comment on this article: WhatsApp may get “like” and “mark as unread” options in next update
Facebook quietly redesigns site logo with new font
If you use Facebook often, you might notice some slight changes to the site’s logo over the next few days. The company has (very quietly) announced a redesign of the word “Facebook” on their site, which now uses a slightly more rounded font. Is it a major change? Not at all. But Facebook is almost synonymous with social media at this point, and their previous font has become pretty iconic for the company.
The standard lowercase f you’re used to on the app logo and everywhere else on the web should remain the same, so if you hate change, at least that’ll stay the same.
Any thoughts on the new logo? I’m not a big Facebook user, but I think the new font looks a little more modern and friendly. Combined with the newer tablet interface Facebook is working on, it looks like the company is going to spend 2015 revamping their image.
source: Under Consideration
via: The Verge
Come comment on this article: Facebook quietly redesigns site logo with new font
Facebook sharing more ad dollars with video creators
Facebook’s shiny logo isn’t all that’s new for the social network today: The outfit’s also announced how it plans to split video ad revenue with publishers. Like YouTube, Facebook will give content creators 55 percent of ad revenue and keep the rest, according to Fortune. Early publishing partners include Funny or Die, Fox Sports, Hearst and the NBA. And if you’re curious about how ads will work with video, it doesn’t seem like you’ll have to worry about them auto-playing loud and proud while you’re scrolling through your news feed on mobile. On the handheld platform, when you tap a clip you’ll go to a different screen with “Suggested Videos” and once your selected video finishes, an ad will play before the next one’s served up.
It’s still in testing (and only with what Forbes says is a scant few iOS users), but the feature was opened up a bit more today and will add even more users soon. As is often the case with Facebook and its new stuff, Apple’s mobile ecosystem gets it first, while Android and desktop are slated to pick up the rear here.
[Image credit: Darren Abate/Invision/AP]
Filed under: Cellphones, Internet, Mobile, Facebook
Source: Fortune
Mark Zuckerberg shows off Facebook’s internet lasers
Most of us use Facebook to show off a new car, an engagement or a particularly notable lunch, but Mark Zuckerberg does it a bit differently. In a Q&A session yesterday Zuckerberg referenced his company’s plans for using lasers to connect more areas to the internet, and today he posted a few demonstration pictures from the Connectivity Lab. According to the Facebook founder, we won’t actually be able to see the beams (that’s just for show) but the connections will “dramatically” increase the speed of sending data over long distances, and this is just one of the connectivity projects in development. Last year Facebook mentioned combining this laser tech with drones and satellites to help connect the next billion people with its Internet.org initiative, and it appears that work is still moving along.
As part of our Internet.org efforts, we’re working on ways to use drones and satellites to connect the billion people…
Posted by Mark Zuckerberg on Wednesday, July 1, 2015
Filed under: Internet
Source: Mark Zuckerberg (Facebook)
Facebook has a new logo, but the differences are subtle
Facebook’s last logo update came in 2005, but this year, the folks in Menlo Park felt it was time for a change. While the iconic white “f” and blue square will remain, places where the full name is used will see this new wordmark. Working with Eric Olson of Process Type Foundry, Facebook’s in-house designers created custom lettering to make the logo “feel more friendly and approachable,” according to creative director Josh Higgins. Olson’s Klavika typeface was used in the current mark, and collaborating with him makes sense given the changes. “While we explored many directions, ultimately we decided that we only needed an update, and not a full redesign,” Higgins explained. That decision seems like a good move, since the current logo is so recognizable after 10 years of use.
Filed under: Internet, Facebook
Via: The Verge
Source: Christoph Tauziet (Twitter), Brand New
Facebook rolling out a better UI for Android tablets
Facebook is set to revamp its UI for Android tablets, and thanks to Droid-Life tipster Mike, a sneak preview of what’s to come has been posted online. Currently, Facebook on Android tablets looks decent, but it’s far from having a simple and seamless design. This new overhaul aims to fix some of those problems.
This new change should give Facebook users on Android tablets a more card-like experience that you might see on Google Now. Unfortunately, most Facebook UI changes are done server-side, meaning that even if you’re running the latest build of the app, you may not necessarily see the new changes yet. It could take a few weeks to roll out to all of its users.
Read more: Messenger no longer asks for a Facebook account
Despite taking some time to roll out to everyone, some Android users may not like the new design, as it looks like a near replica of the UI on iOS. Either way, check out some of the preview photos below.
To check out more photos of Facebook’s UI update, hit the source link below.
source: Droid-Life
Come comment on this article: Facebook rolling out a better UI for Android tablets
Zuckerberg reveals Facebook’s AI, VR and Internet.org plans
In a Q&A on his profile today, Mark Zuckerberg explained how he and his team are preparing Facebook for the future. In it, he revealed that he believes the ultimate communication technology will allow us to send thoughts to each other. “You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like,” he said. But until that happens, the company is focusing on developing (1) AI, because the company “think[s] more intelligent services will be much more useful” to consumers, (2) VR, as it’s the “next major computing and communication platform,” and (3) its internet.org project, since it’s “the most basic tool people need to get the benefits of the internet,” including jobs, education and communication.
Zuck says Facebook’s in the midst of building AI systems “that are better than humans at our primary senses.” They’re designing one to be able to detect everything in an image or video: people, objects, animals, backgrounds and locations, among others. If it can understand what’s in an image or video, it could, for instance, tell a blind person what it’s about. The other system they’re working on focuses on language, so that it’ll be able to translate speech to text and text from one language to another, as well as answer questions in conversational lingo. Of course, these AI systems’ most obvious application is being able to surface more relevant News Feed entries for users and giving everyone a new way to consume posts on the site.
The CEO thinks that VR glasses will be part of our everyday lives in time, giving us the ability to share “experiences with those we love in completely immersive and new ways.” He also revealed that the company is working on drones, satellites and even lasers to expand its Internet.org project. “The idea is that in the future, we’ll be able to beam down internet access from a plane flying overhead or a satellite flying way overhead — and they’ll communicate down to earth using very accurate lasers to transfer data.”
One user asked Mark if Facebook plans to end the practice of requiring “real names” on the website. If you recall, it got a lot of backlash from the LGBT community and Native Americans after the company froze a number of accounts whose names were reported to be fake. Based on his answer, it doesn’t sound like the company has any plans to remove the policy, as it believes it helps keep people safe. “We know that people are much less likely to try to act abusively towards other members of our community when they’re using their real names,” he said.
However, he clarified that real name doesn’t have to be your legal name — it’s whatever you want to be called and whatever your friends and family call you. In order to prevent unjust banning, though, Facebook is working to conjure up more ways for a user to prove that his real name is what he says it is. Now, if all these sound trivial, as what you really want to know is why in the world the Poking feature ever existed, Zuck answered that, too: “It seemed like a good idea at the time.”
[Image credit: niallkennedy/Flickr]
Source: Facebook
Facebook Messenger’s money-sending tool arrives for all US users
When it first announced plans to let you send money to your pals in its Messenger app, Facebook said the feature would roll out in the States in the coming months. Well, the time has come. After flipping the switch for folks in New York City and the surrounding areas in late May, the social network is letting users in the rest of the US beam funds to friends, too. To leverage the currency tool, you’ll need to link a debit card before money can be transferred from your bank account to a recipient. For added security, you’ll have to input a PIN before each transaction and iPhone/iPad users can employ Touch ID to verify their identity. And all of the transferred data travels via an encrypted connection. Messenger may not be your first choice to reimburse someone for concert tickets or for picking up your tab, but if you use the app to chat with friends or family, it could come in handy.
Filed under: Software, Mobile, Facebook
Source: David Marcus (Facebook)
















