Two groups of people make money from the so-called dark web: Criminals who use it to peddle illegal goods, and the companies who offer to track them on behalf of law enforcement and private clients.
Both are now established trades, with the latter growing at an accelerating rate. A handful of dark web monitoring companies exist, some founded specifically to monitor the dark web, and others expanding their existing services to cover sites on the Tor network. Last month, one such company, Terbium Labs, scooped up $6.3 million in funding, and in January iSight Partners was acquired for $200 million.
But fundamental problems with the very idea of some of these services, such as the difficulty of verifying information gleaned from forums and marketplaces, mean they might be providing an illusion of security rather than the real thing.
In particular, financial institutions, retailers, and e-commerce sites might hire these companies to get a head start on any potential fraud stemming from the sale of stolen data.
"You can use the monitoring of the black market as an early warning system: if you start to see people talk about you, if you start to see data pop up about you, you can see that you're being a target," Adam Meyer, chief security strategist at SurfWatch Labs, told Motherboard in a phone call. SurfWatch offers a "cyber threat intelligence program by providing you with personalized cyber risk intelligence from Dark Web and other related sources," with clients paying anything up to $150,000 per year.
"We're monitoring every day, so if something pops on the black market that is one of our customers, we notify them that day," Meyer continued. With pinched credit card information, banks and credit card companies "get an early jump on it before the transactions start coming behind it, and they can reduce the loss."
This SurfWatch promo video gives a bit of insight into what types of services these companies offer.
Broadly, dark web monitoring companies take two different approaches. Some work with algorithms, automatically scanning and crawling marketplaces for stolen data, such as card info or intellectual property. Terbium Labs' 'Matchlight' product does this, then compares fingerprints of the uncovered data with those of its clients.
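To give a sense of the idea, fingerprint-matching of this kind can be sketched with cryptographic hashes: the client's sensitive records are hashed once, and anything scraped from a marketplace is hashed and compared against that watchlist, so the monitoring service never needs to hold the raw data. This is only a minimal illustration of the general technique, not Terbium's actual Matchlight implementation; the record values and function names here are invented.

```python
import hashlib

def fingerprint(record: str) -> str:
    # One-way fingerprint of a data record; only the digest is stored,
    # never the client's raw data.
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# Hypothetical client watchlist: fingerprints of records the client
# wants alerts about (e.g. card numbers, document excerpts).
client_fingerprints = {
    fingerprint(r) for r in ["4111111111111111", "jane.doe@example.com"]
}

def scan_listing(scraped_records):
    # Hash each scraped record and flag any that match the watchlist.
    return [r for r in scraped_records if fingerprint(r) in client_fingerprints]

matches = scan_listing(["4111111111111111", "unrelated forum chatter"])
# matches now contains only the card number, which is on the watchlist
```

In practice a real system would fingerprint partial and transformed records too, since stolen data rarely surfaces verbatim, but the hash-and-compare core is the same.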
The other tactic is a more human approach, with analysts going undercover in hacking forums or other haunts, keeping tabs on what malware is being chatted about, or which new data dump is being traded. This information is then provided to government and private clients when it affects them, with each monitoring company digesting it in their own particular way.
But there is a lot of misleading or outright fabricated information on the dark web. Often, particular listings or entire sites are scams, and forum chatter can be full of people just trying to rip each other off. For that reason, it's not good enough to simply report anything and everything you see to a customer.
"Anything gathered through open source, there is always that element of believability—is the information true? Is it valuable? Is it manufactured to deceive?" said Jeffrey Carr, CEO of cybersecurity company Taia Global, who has criticized threat intelligence more generally in the past.
"How do you determine the ground truth of an internet-based collection of information?" he continued. "That's been my longstanding dispute with every company that gathers information off the web and proclaims it as intelligence."
For Meyer, a lot of that relies on the reputation of the hacker claiming to have credit card details, a personal dataset, or whatever else they might be selling. From here, SurfWatch gives the threat a "confidence level" of either low, medium or high.
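A reputation-driven rating like this amounts to mapping a seller's track record onto a coarse scale. The thresholds and score range below are invented purely for illustration; SurfWatch has not published how its confidence levels are actually derived.

```python
def confidence_level(seller_reputation: float) -> str:
    # Map a hypothetical seller-reputation score (0.0 to 1.0) onto the
    # low/medium/high scale described above. Cutoffs are made up.
    if seller_reputation >= 0.75:
        return "high"
    if seller_reputation >= 0.4:
        return "medium"
    return "low"
```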
"In some instances, we may communicate to our customers that we're observing something going on, so it's more of a 'be on the lookout' notification, that we may have a low confidence level in, because there just isn't enough information available," Meyer said. Often, apparent threats that SurfWatch notifies a client about turn out to be illegitimate.
"It's 50/50 to be honest, and that's why the confidence level is so important," he continued.
Giving information to a client that turns out to be false "does happen," John Miller, who leads iSight Partners' ThreatScape Cyber Crime product, told Motherboard in a phone call.
Miller said his company looks at the reputation of the person claiming to sell stolen data, and provides a sort of credibility rating for information given to clients. But he declined to say whether the company would obtain samples of data from hackers in order to corroborate it.
"We just have a policy of not talking about specific actions that they have to take for the sake of protecting them, as well as the sake of ensuring that clients get access to helpful data without us disrupting sources through talking about it publicly," he continued, and added that the company is careful to follow local laws.
"There is a voracious appetite for information."
Since not all claims can be verified, it might make sense to report only information that is almost certainly tied to a credible threat. But Meyer asks: what if his company didn't report something that then turned out to be a real problem?
For that reason, "We tend to really, really push the confidence level: we're going to tell you everything we know, but we're going to also give you our confidence level. You can make your own risk determination internally in the company," said Meyer, who added that some clients do opt to receive less information, so as not to be overloaded.
Critics are also concerned that companies may just provide data to clients when there really isn't a need.
"When there isn't useful information the companies need to be mature enough to explain that to customers [versus] establishing useless feeds of data just to meet a quota," Robert M. Lee, a former US Air Force cyber warfare operations officer and CEO of Dragos Security, told Motherboard in an email.
"The reason why they still continue is because there is a voracious appetite for the information," said Carr from Taia Global.
As for whether all of this is actually worth it to companies, Meyer thinks so.
"If a bank is paying us $150,000 a year to monitor this stuff for them, we have the potential of saving them millions of dollars in fraud, depending on the hack. That right there pays for itself," he said.
"Most companies will not benefit at all from these services," Lee said. "Most companies need to focus on being able to detect and respond better in their environments first."
"But for some companies who've done the basics and the more advanced security practices another source of information about potential threat actors or capabilities could add value," he added. "The danger is in the price tag—companies should be very careful in making sure they need these services and are getting a good return on investment."