This article originally appeared on VICE UK.
On February 24, the police department for Craigavon in Northern Ireland issued a warning on Facebook about an "online app character" named Momo, a "freaky-looking creature... run by hackers looking for personal info." Said hackers, the police warned, would send you or your child a "curse contact"—a number that could potentially arrive through any app with a chat function. Should the contact be accepted, Momo would appear on WhatsApp, sending threatening messages with instructions to self-harm.
In part, the police seem to have meant their message to be reassuring. After all, they state, despite the threats, Momo "isn't going to crawl out of your childs [sic] phone and kill them." The danger, they claim, lies with kids "feeling pressured to follow the orders of ANY app via 'challenges,'" or peer pressure which spreads panic. The problem was that by offering any sort of warning about the "Momo challenge" at all, Craigavon police were precisely capitulating to panic and peer pressure: They had fallen for it.
The media panic saw tabloid stories drawing in not only the police, but schools, celebrities, MPs, and the BBC. All of this flourished from a single comment on the "Love Westhoughton" Facebook group, a local page for a suburb of Bolton, based on an anecdotal comment a mother had heard from her son at school.
When the Momo story broke, it was immediately compared to reports from 2016 about the "Blue Whale Challenge," an "online suicide game" that had supposedly been linked to around 200 teen deaths in Russia. To this day, no one seems quite sure if Blue Whale was ever in any way real before irresponsible reporting precipitated a surge of copycats. The specific register of panic that Momo generated also drew comparison with the vast wealth of pre-internet urban legends, like saying "Bloody Mary" three times into the mirror at sleepovers.
But personally, when I saw reports suggesting that a nightmarish bird-woman creature from Japan was trawling WhatsApp and hacking into videos of Peppa Pig to tell children to commit suicide, my thoughts immediately turned to something quite different: Amber Rudd on the Andrew Marr show in April 2017, back when she was still Home Secretary in the UK, claiming that her government was going to fight the spread of extreme material online by hiring people who "understand the necessary hashtags."
Momo wasn't just a story about how the internet was being used to spread panic: It was also a story about how almost no one in any position of power, from law enforcement to schools to news organizations, seems to understand how the internet works.
Seen in this light, Momo becomes part of a trend of stories that also includes the unsurprising revelation that over-65s are more likely than anyone else to share fake news: Younger people recognize the fakes when they see them, as though intuitively, but older people do not. Another example would be the story from this January about YouTube star James Charles (a man who, in fairness, I'd never heard of either) bringing Birmingham city centre to a standstill after he made a 30-second public appearance at a cosmetics store: Police were simply unprepared for the 8,000 fans who, it has been claimed, turned out to see him.
In short, it is becoming increasingly clear that when it comes to the internet and social media, a critical mass of people—many of whom are also tasked with regulating and reporting about it—are functionally illiterate.
At a certain point in your life, it becomes a lot more difficult to learn new languages as you stop being able to internalize new grammars properly; to hear and reproduce the necessary ranges of sounds. The internet, it seems, is not too different from that.
This is a problem that is having real-world effects—and not just because it's prompting schools to issue bafflingly irresponsible letters freaking out ten-year-olds. When it comes to regulating the internet, lawmakers often seem clueless: from GDPR and its massive proliferation of pop-ups, to the US Federal Communications Commission's repeal of net neutrality regulations. The deeply controversial FOSTA-SESTA bill, intended to curb sex-trafficking, is in practice a grave threat to the wellbeing and livelihood of sex workers—in large part because it completely fails to understand the mechanics of how people sell sex over the internet.
These regulations typically serve the interests of established internet giants over those of ordinary users—as does, to give another example, recent legislation passed in the UK to impose age restrictions on porn. When these measures are implemented (probably on April 1 of this year, although they've been subject to delays before), they are widely thought to benefit one company and one company only: MindGeek, creators of age-verification software called AgeID and (by some sort of brilliant, unprecedented coincidence) the owners of PornHub, YouPorn, and RedTube.
Ignorance of how the internet works seems to go along with ignorance about what the internet is, or could be. The other day, MySpace lost all the music uploaded to its platform from 2003 to 2015 during a server migration, instantly deleting around 50 million songs from around 14 million artists. The writer Kate Wagner has compared such losses to the burning of the Great Library of Alexandria, where much of the knowledge of the ancient world was lost. Much of this music may not exist anywhere else—and while there may be a temptation to dismiss MySpace in particular as something silly, old-fashioned, teenaged, the sheer numbers involved here are enough to make the cultural impact of this loss potentially vast. Who knows what future treasures that lost data could have contained?
If MySpace were a physical artifact, there would be laws preserving it, which would mean its owners could be punished for treating it irresponsibly. There needs to be a serious discussion about what legislation can be passed to help preserve our heritage online. But the idea barely seems to have occurred to any politicians at all.
Still more serious was the situation created by the horrendous massacre in Christchurch, prior to which the shooter had posted a 74-page manifesto, "The Great Replacement," to edgelord imageboard 8chan: a document peppered with the "ironic" language of right-wing shitposters, complete with references to memes like "remove kebab" and copypastas designed to mislead any out-groupers looking in. As other writers have pointed out, the vast majority of journalists—including many journalists immediately tasked with responding to the shooting—are simply not equipped to penetrate the dense web of half-meant assertions, endorsements, and misdirects which characterizes the Christchurch shooter's writing, from telling his followers to "subscribe to PewDiePie" to claiming, tongue lolling somewhere either in or around his cheek, that Turning Point USA commentator Candace Owens is too extreme, "even for my tastes."
So what's the solution? Simply put: We need more digital natives in positions of influence and power. Perhaps times are changing. In the US, Alexandria Ocasio-Cortez has recently become the first politician of national prominence to obviously and intuitively understand memes.
But the gulf of understanding between digital natives and their opposites is arguably working to prevent this change from happening as fast as it needs to. Despite everything, the internet continues to suffer from a sort of diminished reality—the idea that there is any sort of meaningful distinction to be made between the internet and the “real” world is roughly as ridiculous as the idea that one might distinguish between phone calls and things people "really" say, but it nevertheless remains common to privilege “real life” interactions over their online equivalents.
Ultimately, of course, digital illiteracy will die out with the last generations not to simply take the internet for granted. The real question is how much damage it might be allowed to do before then.
Follow Tom Whyman on Twitter.