This New Face Detection Technology Could Redefine What It Means to Leave Your House

Something way scarier than The Singularity is already happening inside our computers, and it could change our lives sooner than you think.

“No one would have believed,” said HG Wells once upon a time, that “as men [everyone was men in those days] busied themselves about their various concerns they were scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water.”

At first glance, the announcement of a new face detection algorithm by Yahoo and Stanford University last week may not look like a big deal. For years, it’s been pretty simple to detect faces in images. If you have a smartphone or a camera, it can probably do it when you take a picture. The trick, devised by Viola and Jones about 15 years ago, is to look for some easy-to-spot features in an image. A bright patch between two darker patches might be a nose. A dark band above a brighter band might be a pair of eyes above some cheeks. If you see those and a few other features near each other in the image, there’s a good chance you’re looking at a face. It works pretty well, as long as you’re looking at a face the right way up, without anything hiding it.
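
To get a feel for how entrenched that older technique is, it ships with common software libraries today. Here’s a minimal Python sketch using the pre-trained Haar cascade bundled with OpenCV, which implements the Viola-Jones approach; the file paths are illustrative.

```python
# A minimal sketch of Viola-Jones-style detection, using the pre-trained
# Haar cascade that ships with OpenCV. File paths are illustrative.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")  # any photo with a face in it
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales for the tell-tale light/dark patterns
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a box around each detected face and save the result
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```

Note the catch: this cascade is trained on upright, frontal faces, so tilt your head or cover half your face and it stops firing.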

The new system, the Deep Dense Face Detector, uses something called a convolutional neural net (CNN). What makes these neural nets special is the way they build up a mental picture of the world. Suppose you have a baby AI and you want to show it the difference between cats and lizards by feeding it a load of images from Google. Most AIs would look at the images, compare each image to a very abstract idea of what a whole cat or a whole lizard looks like and pick the closest match. Their entire understanding of the world is two objects – cat or lizard.

When an AI based on a convolutional neural net looks at the same images, it doesn’t just file them as “cat” or “lizard”; it breaks them down. It spots common features within the images – things like scales, legs, noses, eyes, ears. When you show it an image it hasn’t seen before, it doesn’t just try to classify the whole thing as “cat” or “lizard”; it scours the image for features, picking up clues – a tail, a patch of fur, the shape of a paw – and puts those together before making a decision as to what it’s seeing. It has a deeper understanding of the world, and that’s what people are talking about when you hear reports on “deep learning” in the news.
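
To make that concrete, here’s a toy convolutional net sketched in PyTorch. The layer sizes and the cat/lizard labels are made up for illustration – this isn’t the architecture from the Yahoo/Stanford paper.

```python
# A toy two-class CNN (cat vs lizard), sketched in PyTorch.
# Sizes and labels are illustrative, not from the DDFD paper.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers learn small local features
        # (edges, fur/scale textures, eye- and ear-like shapes)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The final layer combines those features into a cat/lizard decision
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):  # x: a batch of 3x64x64 colour images
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
logits = model(torch.randn(1, 3, 64, 64))   # one random "image"
print(logits.softmax(dim=1))                # probabilities for [cat, lizard]
```

The key point is in that first block: the net never stores a single “whole cat” template. It learns a vocabulary of small features and decides based on how they combine.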

So what does this mean for spotting faces? Well, the old system had a pretty rigid idea of what a face looks like, and needed a portion of the image to match it. DDFD can learn more about the different parts of a face and deal with them even if they’re turned around, partly covered or missing. That means you could be caught in the background of a photo, turned away and partly out of shot, and this thing could still spot your face.
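
Mechanically, the idea behind detectors like DDFD is easy to sketch: slide a face/no-face classifier over the image at several scales and keep the windows it’s confident about. In the sketch below, `face_score` is a hypothetical stand-in for a trained CNN; the rest is a bare-bones illustration rather than the paper’s actual pipeline.

```python
# A bare-bones sliding-window detector. `face_score` stands in for a
# trained CNN that returns a face probability for a small image patch.
import numpy as np

def detect_faces(image, face_score, window=32, stride=8,
                 downsamples=(1, 2, 4), threshold=0.9):
    """Return (x, y, factor, score) for every window scored as a face."""
    hits = []
    for factor in downsamples:
        # Crude image pyramid: subsample by an integer factor so that
        # bigger faces fit inside the fixed-size window
        scaled = image[::factor, ::factor]
        h, w = scaled.shape[:2]
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                score = face_score(scaled[y:y + window, x:x + window])
                if score >= threshold:
                    # Map the hit back to original-image coordinates
                    hits.append((x * factor, y * factor, factor, score))
    return hits

# Dummy run with a scorer that never fires, just to show the shape of the API
blank = np.zeros((128, 128), dtype=np.float32)
print(detect_faces(blank, face_score=lambda patch: 0.0))  # -> []
```

Because the classifier scores every patch independently, a face half-hidden in the corner of the frame is just another window – which is exactly why this approach catches faces the old rigid template missed.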

In practical terms, that means the number of opportunities for your image to be captured, flagged and analysed just increased massively. Face detection isn’t the same as face recognition, but Facebook is working hard on that problem. When you combine that kind of technology with the petabytes of image and video data being captured and uploaded daily, the implications are mind-boggling.

We’re getting near the point where a camera could identify and Google you on sight. Facebook is working to automatically recognise and tag users in photos, but there’s no reason the same technology couldn’t be applied to YouTube videos in time. You could be caught in the background of some guy’s holiday clip walking into a coffee shop with your secret lover, and suddenly find yourselves tagged, the video appearing under a search for your name on Google and landing in your friends’ feeds. CCTV could track you through cities, camera to camera, allowing authorities to build up profiles of where you go and whom you meet. Journalists covering protests could be identified on sight by police.

So how do we survive in this brave new world? Weirdly, fashion could provide the answer. Several years ago, Brooklyn artist Adam Harvey came up with the concept of the “anti-face”. Just as a zebra uses black and white stripes to help break up the outline of its body and fool predators, stylists can use bold patterns of hair and make-up to disrupt the features that face detection algorithms look for.

All this raises an interesting question about our lives in the future. In 2015, we’re still trying to deal with the many problems of online privacy – how to keep our personal information safe. By 2020, those problems could be spilling out into everyday life. Today, we worry about our misjudged tweets going viral, but tomorrow it could be comments we make to our friends on the street that come back to haunt us. We’re on the edge of redefining what it means to leave your house, and that’s a lot scarier than any far-off Singularity.

@mjrobbins