How to Fix the Internet After the Cambridge Analytica Scandal

The fiasco over third-party apps and the Facebook user data allegedly used to help elect Trump is almost beside the point for researcher Hossein Derakhshan.
Activists set up cardboard cutouts of Facebook chief Mark Zuckerberg in front of the European Union headquarters in Brussels, on May 22, 2018. Photo: John Thys/AFP/Getty Images

This Q+A comes to us by way of Motherboard Italy.

Hossein Derakhshan thinks the Cambridge Analytica scandal was overblown.

Sort of.

“I think there is clearly an exaggeration about the novelty, the impact, and the significance of this case,” said Derakhshan, a research fellow at Harvard Kennedy School’s Shorenstein Center and associate researcher at the MIT Media Lab.

The case involves a Cambridge University psychology lecturer who gave Cambridge Analytica, a UK-based data analytics firm that worked with the Trump campaign during the 2016 US presidential election, data harvested from some 50 million Facebook users. The story set off a worldwide furor over political data mining, targeted ads, and the security (or lack thereof) of user data in the hands of third-party applications. Mark Zuckerberg was forced to testify before Congress in the wake of the fiasco, and you’ve also likely received a flood of policy update notifications from nearly every app and platform you’re on. Cambridge Analytica has since closed its offices.


If anything, the Cambridge Analytica story has laid bare how pervasive Facebook's data-collection apparatus is. The social network, even as it promises users an algorithmically curated newsfeed, is constantly gathering data on our activities and preferences. This surveillance network extends to every website that lets advertisers track our online activity.

And yet, for Derakhshan, Cambridge Analytica is almost beside the point.

“To be honest, I think it was much less important than how the media has portrayed it because I think it was nothing shocking or radically unexpected,” he said.

Earlier this month, I had the chance to chat with Derakhshan at State of the Net, an international conference on just that held in Trieste. Over the course of our talk, Derakhshan touched on the scope of the Cambridge Analytica case, algorithms run amok, the television-ification of the Web, and how to make things better.

The following interview has been edited for clarity and length.

MOTHERBOARD: What was your reaction to the Cambridge Analytica scandal? What is the scope of this scandal?
Hossein Derakhshan: Nearly every company offering services on the internet has done this in recent years. They have shared the personal information of users with third parties, some within the terms of service users accepted, others perhaps illegally. From an ethical point of view, it was nothing new, and the same is true from a legal point of view.


But politically I think it was a novelty, since some people now believe that Facebook actually influenced the results of the US elections. And because of this American-centric approach, we now treat this as the central problem when in fact there have already been cases of influence on political systems in other parts of the world, but nobody talks about them.

"These algorithms tend to relegate us to our sphere of comfort by limiting discussion. And this goes against the meaning of democracy."

During the Arab Spring, social media was seen as a force for good. In the time since, we’ve witnessed a seismic change in the way we perceive platforms like Facebook and Twitter. How did we get to this point?
Part of this trend is related to the American media, but another part is connected to a change we have observed in recent years. We have moved from a decentralized, non-linear space that was used for public discussion, and that was still largely unregulated by algorithms, to one where algorithms control every aspect of our lives. As a result, the platforms have become spaces unsuitable for public discussion and debate. These algorithms tend to relegate us to our sphere of comfort by limiting discussion. And this goes against the meaning of democracy.

Hence, your idea to offer each user the possibility of choosing their own algorithm.
Yes. This is my radical idea. There should be a law that mandates a separation analogous to the one between hardware and the operating system, which in this case means separating the platforms from the algorithms, because algorithms actually work in ways similar to an operating system. If the platforms were obliged to allow the use of algorithms produced by third parties, it would be a good start toward solving many of our problems.


To do this, however, we need to increase competition and build a market around this sector. We can imagine thousands of companies producing algorithms based on different values and priorities, and people buying the algorithms they prefer for their Facebook and Twitter profiles, and even for their self-driving cars. For example, if Google Maps favors the fastest routes, there could be algorithms that favor scenic routes, routes that produce less pollution, or routes that pass through rural areas that are often forgotten.

We know very well that algorithms are not neutral, and precisely for this reason we should be able to decide how they behave, in order to finally regain control over our information.
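Derakhshan's proposal, separating the platform from the ranking algorithm so users can plug in a third-party one, can be sketched in code. This is a purely illustrative toy, assuming a hypothetical interface (the `FeedItem` type and the ranking functions are invented here, not any real platform API): the platform exposes its raw feed, and the user selects which ranking algorithm renders it.

```python
# Illustrative sketch of a pluggable feed-ranking interface.
# All names (FeedItem, the rank_* functions) are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FeedItem:
    author: str
    text: str
    timestamp: int        # seconds since epoch
    engagement: int       # likes/shares observed by the platform
    is_outbound_link: bool  # does the post link outside the platform?

# Any function that reorders a feed is a valid "algorithm."
RankingAlgorithm = Callable[[List[FeedItem]], List[FeedItem]]

def rank_by_engagement(items: List[FeedItem]) -> List[FeedItem]:
    """The platform's default: maximize engagement (the status quo)."""
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def rank_chronologically(items: List[FeedItem]) -> List[FeedItem]:
    """A third-party alternative: newest first, no engagement signal."""
    return sorted(items, key=lambda i: i.timestamp, reverse=True)

def rank_outward_looking(items: List[FeedItem]) -> List[FeedItem]:
    """Another alternative: surface posts that link outside the platform."""
    return sorted(items, key=lambda i: (i.is_outbound_link, i.timestamp),
                  reverse=True)

def render_feed(items: List[FeedItem],
                algorithm: RankingAlgorithm) -> List[str]:
    """The platform applies whichever algorithm the user selected."""
    return [f"{i.author}: {i.text}" for i in algorithm(items)]
```

The point of the sketch is the seam: because `render_feed` accepts any `RankingAlgorithm`, an engagement-maximizing default and a privacy- or surprise-oriented competitor are interchangeable from the platform's side, which is exactly the separation the interview describes.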

In Hackers: Heroes of the Computer Revolution, Steven Levy writes about the various principles of hacker ethics, namely that "information wants to be free," and how hyperlinks are one of the foundations of the web. As you have pointed out, when it comes to saving the web, we are faced with the death of links. What gives?
The web is turning into television because, without links, it loses its truly decentralized and non-linear nature. Links are much more than the simple skeleton of the web; they are also its eyes, able to turn attention outward from the single page by creating connections. Social media, on the other hand, tend to keep images and text within the platform, leading to an inward look.

While fewer and fewer people, especially young people, watch traditional TV, that time is now being spent on social networks instead. And the consequences are just the same as with television, and probably even worse: on television you may at least find content that surprises you, because it is not all managed by algorithms. On social networks this element of surprise is not there. The whole experience is personalized based on your tastes.

What remains of the web and what is your vision for the future?
I believe that all that remains of the web are a few sites and projects. Wikipedia, for example, is one of them. And I think it's really symbolic: as long as Wikipedia is there, I do not lose hope. You can revive some of the original ideals of the web. I think it will take another couple of generations to fix the situation, because all of this is part of a larger problem that I consider a civilizational shift, what I call the passage from the Enlightenment to the Post-Enlightenment.

At the center of the Enlightenment was the concept of education, but to give new impetus to education we must first deal with inequality. I do not think we will have any way out unless we create a more equal, and thereby better educated, society. Unfortunately, all the statistics show that the world is becoming more and more unequal, and it is not surprising that some of these ideals are disappearing. Good education needs equality.
