Veerender Jubbal, a Canadian writer, was wrongly identified as one of the Paris assailants in a photo that was circulated shortly after the attacks.
Someone later confirmed to be a Gamergate supporter photoshopped a selfie Jubbal had taken, fitting a suicide bomber's vest to his torso and slapping a Koran decal on his iPad. They also added a dildo to the corner of his bathtub, just for laughs. The media ate up the doctored image: not only did an Italian news site tweet the photo, but it was also splayed across the cover of a major Spanish newspaper.
While citizen journalists have misidentified culprits before—most notably during the Boston bombings—this was the first I'd heard of someone taking advantage of a catastrophe to ruin an online enemy's life. I reached out to Claire Wardle, research director at the Tow Center for Digital Journalism at Columbia's School of Journalism and an expert in digital verification, to find out whether this case is significant and what journalists should do to ensure they aren't complicit in the framing of an innocent man.
Broadly: Have you heard of any other instance where social media was malevolently used to frame someone during a crisis situation?
Claire Wardle: I don't know of any examples like this, where people's images are taken without their knowledge and photoshopped. The example of the Baltimore looters is the nearest, when a group claimed to have taken old photos of African-Americans and made it look like they were looters. There are also the obvious examples where the media have made mistakes in identifying shooters—e.g., Ryan Lanza and the Newtown shooting, and Sherman Lea, who was misidentified as the Virginia journalist shooter Vester Lee Flanagan.
How do crowds online behave differently during crises? Is there more of an impulse to share content without analyzing it first?
The work of Craig Silverman touches on this. He has a nice section on the psychological literature around this sort of behavior. There was also research by the New York Times back in 2011 about what makes people share things online, and I'm pretty sure a big element was wanting to share what you think others don't know, so people in breaking news situations are driven to show that they are the first to know something, without checking.
While it seems obvious in retrospect that Jubbal's image was photoshopped (Korans don't have selfie cams, for one), some in the media didn't pick up on this fact. What kind of authentication techniques could they have used to avoid this?
Yes, there were obvious clues, but news organizations are often running images or videos that have similarly obvious clues. The best test for checking images is to run a reverse image search, either on Google Images or via TinEye. In this situation, since the image originally appeared on Twitter, it probably wasn't indexed.
However, in all the training I run on verification, one of the key checks is looking at the source who shared the image. What is their digital footprint? Where are they from? Do they usually share similar images? Are they in the location where you would expect them to be? What do their other social profiles say about them (people often have the same usernames across different profiles)? Can you find contact information for them to call them up? For a story as serious as this, any journalist should have run these checks.
Jubbal says he's considering suing the publications that unwittingly defamed him. Do you think he has a case?
Yes, absolutely. In AFP v. Morel, for example, a man downloaded [freelance photographer] Daniel Morel's photos from Haiti and retweeted them. AFP contacted him and said, "Can we use your photos?" and he said yes, even though they weren't his. AFP lost the case and had to pay a lot of money, as they should have done the verification to see whether or not the photos were indeed his.
What do you think is most significant about this story?
This is pretty unbelievable behavior and a reminder that you have no control over what you post publicly. There's something that reminds me of revenge porn here: the fact that people can take your image and use it in ways you would never have considered, to cause harm. As a user, that is what's so shocking about this. Yes, newsrooms screwed up, but they're screwing up all the time when it comes to running with user-generated content that they haven't verified. Hopefully this will be an additional piece of evidence for news managers that they have to provide their staff with training.