Last night was Motherboard's publisher's birthday, and standing in a bar surrounded by a bunch of people whom I very much care for and many other people I've never seen in my life and probably will never know, I, a person who's dealt with as much social anxiety as any of us, felt more at ease than I have in a while. Why? Well, regardless of whether or not we'd ever actually shoot the shit, I could at least rely on the fact that—barring some sort of They Live situation—everyone in the room was real.
The internet is very real, an existent space where we work and love and no longer have to preface any of those things with "cyber" to denote that they're only half-real. The internet is a real enough space for us to colonize, real enough to lay siege to. But as we further accept the internet as an actual venue that we visit and live in (for better or worse), a little problem that's licking at the edges of our metaverse is only getting bigger: The internet as a whole may be very real, but it's virtually impossible to know just how real its constituent parts actually are.
It's not some grand metaphysical problem, it's just little stuff. Some people spend a little more time honing their tweets to be funnier than they are in real life, others have figured out the perfect angle to contort their faces for more attractive selfies. Lots of people fudge their LinkedIn just a little bit; many, many more say things they'd never say face-to-face.
Humans are incredibly subjective creatures who happen to be piss-poor at perceiving reality in any sort of uniform fashion, and data sets (even enormous ones) based on human experiences are extremely messy. This will change.
There's an off chance you heard about Peeple this week, a so-called "Yelp for People" that aims to answer the question of how shitty we all are. Laying aside the issues of spamming and vote rigging and reputation hacking inherent to a network where you can review anyone, regardless of whether or not they have a profile, I think our own Jordan Pearson hit the nail on the head: Peeple is for employers more than anything else.
This, of course, assumes that Peeple will actually take off. Jason Koebler argues that it won't, which is fair. It's certainly not the first attempt. But it's equally fair to guess that Peeple is just hoping its reputation-verification system gets acquired by LinkedIn, or perhaps Facebook, which is very much interested in ensuring all of its users are just as real online as they are in real life.
Any time I'm looking at résumés I tend to skip right on to a person's Twitter or Instagram account just to try to get a sense of what they're like. I, being notably useless at both, try to take any impression I get with a grain of salt. I'd like to think I wouldn't put too much emphasis on Peeple if it were to take off, but I'd still check it.
For as much as we know that Yelp and its ilk are prone to bias and vote rigging and sheer human idiocy—what kind of dipshit does Joseph K. have to be to think Greenpoint Heights' tacos suck?—as much as anything online, we still use them because there's not really a better alternative. Still, there's that nagging doubt that what everyone else is saying is nonsense, a feeling that's reinforced every time we happen across a poorly rated gem.
So how might we solve this? Data! I think perhaps the most prescient blog post I've read this year comes from Daniel Miessler, who discusses the real promise of the Internet of Things: Real, incontrovertible data about how great or terrible you are, not from a bunch of biased humans, but from your refrigerator.
As Miessler writes:
Because so many of the objects we interact with will be daemonized, we'll be receiving an extraordinary amount of information from the world around us. This information will be used to create full-scope life dashboards that will illuminate and guide our behavior with regard to finances, health, social interaction, education, etc. Personal dashboards will be displayed on our living room walls, showing how the family did that day in food intake, calories burned, steps walked, and Karma gained and lost. Heads of household will see how college saving is going, how the family's investments are doing, and what if any tweaks should be made to existing strategies. The same will exist for businesses, with unified dashboards showing employee morale, cyber risk, public sentiment, logistical efficiency, employee health, and any anomalies worth noting, along with a list of recommendations for improvement.
Problem: How can you trust that someone who says they are detail-oriented isn't blowing smoke up your ass? That the person on Tinder actually cares about adventures and being outgoing and isn't actually glued to his couch, wondering if he can justify wearing a diaper every now and then because it saves effort? How can you trust that anyone on the internet is genuine at all?
Solution: You take a risk, get to know them, and potentially get burned. Future solution: You collate all of the data on how well they clean their toaster and how often they've listened to their gym playlist and whether or not they've bought dog food, just in case that Tinder dog is a hired actor.
The Internet of Things will become a font of data about who you are and whether or not you're remotely responsible or fun or cool. People tend to call this the reputation economy, because that makes it sound like a concept that's going to make people rich. But reputations are a very dated and imprecise solution to the problem of knowing if someone is who they say they are. What we're really talking about is the Internet of Trust.
Relying on hard data seems more trustworthy than Yelp, and it probably is, but you can see where this all gets fucked up. Yes, in a perfect scenario, we'll be better able to judge whether or not we can trust someone after seeing that she hasn't thrown away spoiled milk despite her smart fridge's incessant reminders, or that he activates his house's smart locks way too late for a Tuesday, or that holy shit, this beautiful human also listens to great music.
But that's a perfect world, which isn't ours. As John Welsh so neatly summed up in my favorite blog post of last year, "the internet of things won't work because things don't work." He explains:
If the complexities and bugs of one device are seemingly never ironed out completely, then asking more than one of these devices to talk to each other will only create headaches. Asking them to be seamless, reliable, habit-recognizing, and invisible is a very tall order. What if an interior display on a car windshield freezes for one tenth of a second, causing the outside world to be on a delay? Would you have to turn the whole car off and back on? What if the system that reports traffic is down, and you don't know a road is closed? Would you be late to work? What if you don't know how much turkey you have left? Would you be able to eat an appropriate amount of turkey? These are curmudgeonly questions, but they need to be asked.
This sounds like a massive pain in the ass, and also rather worrisome if our reputations rely in part on a buggy smart toilet. But flawed as it may be, relying on machines to crowdsource reputation data will make the Internet of Trust more efficient than the human crowdsourcing that powers Yelp or Peeple, which is more efficient than just gambling on whether or not someone sucks. And those marginal gains in efficiency in proving you are who you pretend to be on the internet guarantee that the Internet of Trust will arrive.
What does that mean in practice? For one, we're gonna have to come up with some new slang for hacking the Internet of Trust. (Really, is paying a hacker to ghost you into the gym a couple extra times a month any different than wearing that special shirt you only wear a couple times a year because you couldn't really afford it and don't want to wear it out but it makes you look so good and this is a really important date?) But more importantly, it's very hard to argue with data, even if you don't like it, and even if it's flawed.
Here's an entirely hypothetical scenario that will happen at some point: It's 2025, and Joe is puttering along in his autonomous car without a care in the world. Then his tire—which is a little bit low on air and was just on the edge of manufacturing tolerances and oh yeah that pothole hasn't been fixed because the Department of Transportation doesn't have autonomous road-fixing trucks yet—blows out right as his car is making a turn. The car swerves just enough to kill a grandma.
Whose fault is it? Not the car company's, because the company's lawyers can show the car's avoidance algorithm performed as admirably as it could. Is it the government's, for not filling the pothole? Potholes have been around for a century and no one's blaming them for deaths just yet. Is it the tire company's, for letting a tire off the line with a thinner sidewall than perhaps was smart? No, because it was within legal guidelines, which everyone promises to review.
Is it Joe's? Well, he did forget to add air to the tire despite a dozen messages reminding him, but that's nothing new. He also forgot to respond to the toaster's decrumbing texts and he currently has two lightbulbs out and there were those three days last year when he unplugged the fire alarm because he didn't have a battery and those things still do that annoying beeping thing in the future.
Maybe the toaster's buggy, and kept sending those annoying alerts despite Joe's diligent crumb cleaning. Joe's lawyers certainly think this is the case, and their own appliances have deemed them highly trustworthy. But someone's got to be blamed for this tragedy, whose significance is all the greater because of how rare such a death has become, and Joe, we can safely say, seems like a generally negligent person. Right?
You're welcome to argue that this hypothetical is insane. We'll certainly be arguing that when it happens. But how can I trust you're right?