Facebook is laying odds that artificial intelligence (AI) can trump human intelligence – or, to be more precise, a lack of human intelligence.
When your judgment is compromised by emotion, mind-altering substances or anything else, Facebook itself could be a better friend than any of the hundreds or thousands of friends you supposedly have in that virtual world. It could warn you that something you are about to post might come back to haunt you, professionally or personally.
That capability is one of the major goals of the company’s AI guru, Yann LeCun, the New York University professor and researcher who has also been director of Facebook’s Artificial Intelligence Research lab since December 2013.
In an interview with Wired last month, LeCun said he hopes the form of AI called “deep learning” will be able to recognize your face even when it doesn’t look the way it usually does. That would mean that if you are drunk out of your mind and try to post a selfie in that condition, Facebook will know, and will at least attempt to save you from yourself.
Beyond that, LeCun says deep learning can help Facebook deliver what amounts to designer content, so users see more of what they want to see and less of what they don’t. The site already uses facial recognition to help tag people in posted photos and, according to LeCun, will soon analyze the text of posts to suggest Twitter hashtags.
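To make that last feature concrete, here is a minimal, hypothetical sketch of hashtag suggestion from post text. Facebook has not published how its system works; the keyword-counting approach, stopword list and function name below are all invented purely for illustration.

```python
import re
from collections import Counter

# Hypothetical sketch only: suggest hashtags by mining a post's words.
# This is not Facebook's method; a real system would use learned language
# models rather than raw word counts.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "it", "my", "with", "over", "for", "on", "at"}

def suggest_hashtags(post_text: str, max_tags: int = 3) -> list[str]:
    """Suggest hashtags from the most frequent non-stopword terms."""
    words = re.findall(r"[a-z']+", post_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return ["#" + word for word, _ in counts.most_common(max_tags)]

print(suggest_hashtags("Amazing sunset over the beach tonight, beach bonfire"))
# ['#beach', '#amazing', '#sunset']
```

However a production system ranks candidates, the input-output contract is the same: post text in, suggested tags out.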
All of which sounds benevolent – a free digital assistant that helps keep you out of trouble.
But it also raises the possibility of a computer program knowing users better than they may want to be known. Most might welcome a digital “nanny” that lets them know when they are about to get themselves into trouble, but what assurance is there that such relatively intimate knowledge will always be used for benevolent purposes?
Michele Fincher, chief influencing agent at Social-Engineer, noted that Facebook does offer users ways to manage their privacy.
“But the bottom line is that users are responsible for knowing the limits and their rights,” she said. “After all, it is a public platform for the posting and sharing of information, which is in direct opposition to privacy.”
LeCun himself told CSO that the goal of his research is more, not less, privacy for Facebook users, if that is what they want. The “drunken selfie” warning, he said, is not yet a reality.
“It is a fictitious example to illustrate the use of image recognition technology for privacy protection – not invasion,” he said. “Although the technology is within the realm of what is possible today, it is not an actual product, or even a prototype. But given the amount of press it generated – largely positive – the idea struck a chord.”
LeCun said such a system would not necessarily recognize specific faces, but would be based more on “facial expressions and the context of the scene – is it in a bar, do people hold drinks, etc.”
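Taken literally, that describes a decision made from scene-level signals rather than identity. The following is a minimal sketch of such logic, written under stated assumptions: the detectors, labels, scores and threshold are all invented for illustration and do not correspond to any published Facebook product.

```python
from dataclasses import dataclass

# Hypothetical sketch of the decision logic LeCun describes: flag a photo
# from scene context and facial expression, not from who is in it.
# All fields and thresholds here are invented; the real inputs would come
# from deep-learning models Facebook has not released.
@dataclass
class PhotoSignals:
    scene: str                 # e.g. "bar", "park" (from a scene classifier)
    drinks_detected: bool      # from an object detector
    impairment_score: float    # 0.0-1.0, from an expression model

def should_warn(signals: PhotoSignals, threshold: float = 0.7) -> bool:
    """Return True if the uploader should see a 'post anyway?' prompt."""
    risky_context = signals.scene == "bar" or signals.drinks_detected
    return risky_context and signals.impairment_score >= threshold

# Example: a bar photo with a high impairment score triggers the prompt.
print(should_warn(PhotoSignals(scene="bar", drinks_detected=True,
                               impairment_score=0.85)))  # True
```

The point of the sketch is the design choice LeCun emphasizes: the warning can fire without the system ever identifying who is in the photo.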
If it becomes an actual product and “you found it creepy or not useful, you would turn it off,” he said, adding that Facebook’s blue “privacy dinosaur” already appears when the site thinks the privacy settings for a post may be too broad.
“Again, this is designed to help you protect your privacy, not to help anyone invade it,” he said.
But, of course, the issue is not just privacy from, or visibility to, other users of Facebook, but from Facebook itself. In other words, does “deep learning” go deep enough to become invasive?
The Facebook press office did not respond to an email request for comment.
Fincher said it should be obvious to users that they control how much Facebook knows about them, through what they choose to post or even “like.”
“If information is posted online, it’s not private, period,” she said. “Once information has left your hands, or your computer, you no longer have control over it.”