Last month, the Wall Street Journal (WSJ) carried a story detailing how Facebook overlooked hate speech posted by accounts linked with members of Narendra Modi’s Bharatiya Janata Party (BJP).
Since then, a parliamentary committee has grilled a top Facebook executive. At present, the Delhi Assembly Committee on Peace and Harmony is probing Facebook’s alleged role in the riots that broke out in New Delhi in February 2020, citing “deliberate and intentional” inaction towards hate content during the violence.
The allegations and the ongoing investigation are significant. India is Facebook’s largest market, with a user base of over 290 million.
The reach of the platform has influenced not just India’s society and culture, but also its politics. Following the WSJ report, The Indian Express found that the BJP has been the largest advertiser on Facebook on “social issues, elections and politics” over the last 18 months. The party has spent INR 46 million (over USD 625,000) on the platform since February 2019.
On Wednesday, September 23, the Supreme Court responded to a petition by Facebook India head Ajit Mohan and ordered the Delhi Assembly’s committee not to hear the Facebook matter again.
VICE News spoke to Sarvjeet Singh, Executive Director at the Centre for Communication Governance at the National Law University, New Delhi. Singh is an expert on the intersection of technology, national security and digital rights frameworks in India. He broke down the investigation, what it reveals about Facebook, and why it’s significant.
VICE News: Why does Facebook appear to be reluctant to respond to allegations of hate speech and political bias?
Sarvjeet Singh: There is some reluctance. While Facebook appeared before the Parliamentary Standing Committee on IT and answered questions, they refused to appear before the Delhi Legislative Assembly Committee on Peace and Harmony. This would have been a good way for Facebook to clear its stand on issues of hate speech.
In the Supreme Court on Wednesday, September 23, Facebook’s lawyers stated that Facebook is a US company and that the Delhi Assembly summons has political overtones.
But the WSJ allegation and follow-up articles indicate that [Facebook] favours one political party over the other. The Indian IT minister wrote to Mark Zuckerberg claiming bias against the ruling party. These developments mean Facebook is already in the middle of Indian politics.
We don’t know how the court case will play out. But considering Facebook’s position in India, the number of people and democratic processes they impact, and the fact that we’re one of their biggest markets—just going before the Committee would have been a show of good faith and a chance to answer doubts on hate speech and content moderation issues.
Social media platforms have often promised they will curb misinformation and hate speech, but not done enough. During this investigation too, Facebook said they could have been more careful. What do you make of such promises?
Social media companies have done very little against hate speech, misinformation and disinformation. Because of public pressure, Facebook has, over the years, made efforts. Whether those are enough is another debate.
We should also look at the measures companies like Facebook take in western countries to curb hate speech or misinformation—where they’re a lot more proactive—in comparison to the ones they implement in global south countries. In the global south, they’re not very proactive, which is problematic. In smaller Latin American countries, Facebook didn’t have the resources to deal with problems such as misinformation and fake accounts, so they let it slide.
In India, the government has become more proactive over the last, say, five years. One side of the argument is that they now have more control over the internet and censorship. But there’s no denying that they’re also taking active steps to fight these issues.
India does not have a data protection law yet, nor laws that regulate surveillance. How easy is it for a company like Facebook to function in a country like India?
Companies like Facebook get away with a lot of things because there aren’t laws to protect digital spaces or personal data. While the government needs to be more proactive in how it deals with social media companies, it also needs to put out clear, coherent, rights-respecting laws that these companies are bound to follow. Otherwise, it gets to a point where Facebook can choose whether to apply certain privacy practices, and can even go beyond what Indian law mandates.
Do you think the ongoing investigations will yield any results?
A lot was made out of the fact that Facebook has been called in by the committees, and that they’re being investigated. In a democratic country like India, it’s a good thing. What we missed is that the Parliamentary Committee itself does not have the power to do anything against Facebook. It can only finalise its report on the issue, give it to the government with certain recommendations, and then it’s up to the government whether to take action or not.
The problem is with follow-ups. It’s not enough to say Facebook is biased. You need clearer rules and laws.
What powers does the Delhi Assembly panel have?
My understanding is that the state assembly committee has powers similar to those of the Parliamentary Committee. There will be a report in this case too, and that is the most it can do.
Have these committees been effective in the past in similar cases?
In the recent past, we haven’t really had examples of technology issues being raised in the legislative assembly. The Parliamentary Committee has called social media companies for hearings but did not publish reports on most of the issues it examined over the past few years. Not publishing those reports might allow social media companies to not take these committees seriously.
Would you say that social media platforms have exacerbated India’s disinformation problem, which has seen violent manifestations in the past?
I won’t say they’re responsible for all of it because we’ve had misinformation and disinformation before social media too. All those were perpetrated by individuals. It’s just been made worse by social media’s speed and virality. Some social media companies understand that. In 2018, rumours on WhatsApp led to mob lynchings. After that, WhatsApp made global changes.
I don’t think it’s reasonable to expect social media platforms to be completely free of hate speech, misinformation or illegal and harmful content, given the scale of it. The best we can hope for is that these companies are transparent, consult different stakeholders for solutions, and talk to the government.
In the ongoing investigation, shouldn't the political parties in question also be subjected to the same processes?
The accountability lies with both. The terms of service of many social media companies, and how they are applied, are not uniform or transparent. There is also the question of how these companies operate. In most social media companies, the teams that moderate content also deal with policy. The job of a policy team is to make sure the government, its institutions and laws are favourable to the company. If the same team is responsible for content moderation, there will be a conflict of interest.
It might not be bad intent; it’s just the nature of the job. The companies should separate these departments.
The political parties in a democracy have always been accountable. But for them, there is one important question: Who will hold them accountable?
Follow Pallavi Pundir on Twitter.