Palantir Says Faulty AI and Privacy Regulation Are a Risk to the Company

Palantir has never made a profit, and according to its S-1 filing, may never make a profit. But it has no plans to stop now.
Palantir, a Denver-based surveillance firm that contracts with commercial firms and government agencies such as U.S. Immigration and Customs Enforcement, published its public filing documents on Tuesday ahead of plans to go public by the end of September.
The filing, particularly the section that covers potential risks that the company sees for itself, provides a window into Palantir’s weak points. Among the company’s worries are negative press, privacy laws, and blowback over algorithmic bias. 
Since its founding in 2003, Palantir has never earned a profit. In 2019, the filing reveals, Palantir's net loss was $579.6 million—similar to its 2018 loss of $580 million. Perhaps unsurprisingly, the filing warns that Palantir “may not become profitable in the future.” Palantir has opted for a direct listing, meaning that unlike a traditional IPO there will be no new shares offered and no capital raised—it will, however, give investors and employees a chance to cash out and avoid hefty fees from having IPO shares underwritten by an investment bank. 

No new capital means it's not clear how Palantir plans to continue some of its more expensive operations, and the company’s filing reveals that it is in a financially precarious position with expenses expected to increase. 

Its finances, however, are frankly the least interesting and important part of these documents—more and more "startups" are going public despite no clear evidence they can ever earn a profit. While it may be amusing to look at its unprofitability, markets don't actually care how unprofitable a company is, so long as the company can pitch investors a value proposition, no matter how unrealistic. A more important section to draw insight from is the one covering risk factors. 

One risk factor Palantir lays out is reputational harm from news coverage, which the company says "presents, or relies on, inaccurate, misleading, incomplete, or otherwise damaging information.” This is notable, because Palantir has long faced criticism over its controversial customers, namely government agencies like ICE and the Department of Defense. Another risk factor warns that such customers raise concerns about its “support for individual privacy and civil liberties” and might result in “adverse business and reputational consequences.”
For years, the surveillance firm denied reports that it was working intimately with ICE, despite numerous leaks showing otherwise, and split hairs over which ICE division it worked for, claiming it did not work with the agency's deportation arm. Last year, Palantir CEO Alex Karp admitted in an interview with CNBC that his company “[finds] people in our country who are undocumented.”

Palantir also lists as a significant risk factor the fact that its business may be limited by laws and regulations impacting "privacy, data protection and security, technology protection, and other matters” such as the California Consumer Privacy Act, a law that went into effect this year and allows consumers to opt out of having their personal data sold. Changes to these laws could result in negative outcomes such as "claims, changes to our business practices, monetary penalties, increased cost of operations, or otherwise harm our business."

Another risk factor highlighted by Palantir is its "use of artificial intelligence," which could result in "reputational harm or liability" because its algorithms and datasets might “contain biased information” of the sort that runs through seemingly every algorithm and dataset. The risk is even greater for a company like Palantir, as such bias may have "real impact on human rights, privacy, employment, or other social issues" given its close relationship with government agencies concerned with "national security," the filing states.
Another interesting part of the S-1 document is a five-part letter written by Karp. Part IV of that letter argues that Palantir shares "fewer and fewer of the technology sector's values and commitments." While Palantir has "repeatedly turned down opportunities to sell, collect, or mine data," other companies have made billions selling personal data. And yet, Karp writes, the company is constantly attacked simply for wanting to help "target terrorists and keep soldiers safe.”

Karp’s letter is misleading. While he decries collecting and mining data, Palantir created a tool for the NSA called XKEYSCORE Helper that allowed the agency to parse the huge amount of information it was siphoning up. The tool provided a means for the NSA to funnel people’s information directly to Palantir, The Intercept reported. Or take Palantir’s claim about advertising tech being problematic; Palantir has worked extensively with advertisers and adtech companies and boasts of how it "armed advertising teams” at an unnamed US broadcast network “with best-in-class data and metrics on demand.”

Karp’s letter may be ridiculous, and Palantir’s financials bleak, but the worldview the letter expresses shouldn’t be discounted. As Data & Society researcher Moira Weigel put it in a thread on the letter, Palantir and its founders see the world as a land of borders. The goal, Weigel wrote, is to someday profit from creating technologies that make more borders and reinforce them.

Palantir declined to comment.