Facebook is working on a tool to keep you from posting all those drunk pix
From the Pandora’s Box of “is this something we really need” comes the idea for a Facebook digital assistant that would (and likely will) discourage you from posting those super late night pictures that you almost always regret the morning after. So do we need this? Yes, we really need this. This is big, guys. Really big. This could mean far less Facebook remorse, and far less “looks like you had fun Friday night” comments when you get to work on Monday morning.
The proposed tool would use image-recognition technology built on deep learning to spot when you’re posting something embarrassing and remind you that what you’re doing is public. Basically, the software would be able to distinguish between your drunk face and your regular face, and would impose a grace period before it posts what it determines to be a “drunk pic.” This doesn’t bode too well for all those stoic drunks out there, but if you have an obvious buzz-mug, then you are the target demographic.
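To make the flow concrete, here’s a minimal sketch of how a “flag, warn, and delay” pipeline could work. Everything here is hypothetical: the classifier is a stub standing in for a real deep-learning model, and the score threshold and grace period are made-up values, not anything Facebook has described.

```python
GRACE_PERIOD_SECONDS = 3600  # hold flagged posts for an hour (made-up value)
THRESHOLD = 0.5              # made-up confidence cutoff

def drunk_face_score(photo_name):
    """Stub for a deep-learning classifier's confidence score.

    A real system would run a trained model over the image pixels;
    here we just pretend anything tagged 'late_night' looks risky.
    """
    return 0.9 if "late_night" in photo_name else 0.1

def handle_upload(photo_name, now):
    """Return (publish_at, warning) for an uploaded photo.

    Flagged photos get a delayed publish time plus a warning
    reminding the user the post will be public; everything else
    publishes immediately with no warning.
    """
    if drunk_face_score(photo_name) > THRESHOLD:
        return now + GRACE_PERIOD_SECONDS, "Heads up: this post will be public. Post anyway?"
    return now, None
```

The point of the design is that nothing is blocked outright; a flagged photo just gets a pause and a prompt, leaving the final call with the user.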
While this may seem a little futuristic to us laymen, it really is the natural next step for Yann LeCun, a professor at New York University and head of FAIR, the Facebook Artificial Intelligence Research lab (that’s a real thing, I swear). Facebook already uses algorithms to track what we click, post and share, and its face recognition software currently helps you tag your friends in the pictures you post. At this point, Facebook can almost anticipate what you’ll click, and serves up content it thinks you’ll notice and like. It’s like Facebook knows you; I mean really knows you.
This will definitely seem invasive to some of us, who fear our information is being captured and stored for later use, or who just don’t want artificial intelligence (AI) telling us what to do. But LeCun swears the drunk tool (and all the other AI software) is designed to give us more control over what we post. He told Wired that this type of software could spill over into protective applications, alerting you when someone else has posted an unflattering or explicit picture of you to the web.
LeCun, like most everyone else in his field, says the aim is to closely analyze not just pictures, but all sorts of data from Facebook and other media outlets. To do that, he says, “You need a machine to really understand content and understand people and be able to hold all that data.” Doing so would help the folks at Facebook, Google, Twitter, and Amazon better understand our needs, and probably better understand us as consumers. But, real talk, is that what we really want?
Although LeCun’s is a pretty pervasive philosophy in the tech culture of today, it’s hard not to feel like users are being taken for granted. Are we not savvy enough to know what we want to look at, or post, or even say, without being told by someone else? Should we really be okay with social media telling us how to act? Sure, this tool could save us from a few awkward moments in the short term, but is it creating more problems for us in the long term? Advances like this might just condition the logical discernment right out of us. The less we have to decide for ourselves, the less we’ll know how to decide for ourselves. It really is a bit of a brain melt, and it raises some heavy philosophical questions about our reliance and co-dependence on tech. Am I overreacting? Am I under-reacting? What do you guys think?