Centrelink’s Debt Crisis Shows How the Government Can Abuse Big Data

Nobody has ever accused Centrelink of being an efficient institution. It’s something more akin to a national joke. Still, its new automatic data-matching system, which has been falsely accusing thousands of people of owing money to the government, has taken things to a whole new level.

Federal government policy now dictates that if someone has received a Centrelink payment over the past six years, the income they reported while receiving benefits must be automatically cross-checked against Australian Taxation Office (ATO) records. If there is any discrepancy, the person is invoiced for the difference—in other words, hit with a surprise bill that might total thousands of dollars.

According to the government, the idea behind all this is to catch out people who are cheating the system. Unfortunately, there appear to have been thousands of data matching errors, which means some of the most vulnerable people in Australia are suddenly being billed for thousands of dollars they don’t owe. Don’t worry, though: Centrelink has been helpfully referring them to a suicide hotline.

In basic terms, these errors occur because the ATO and Centrelink look at income in very different ways—the former in terms of years, the latter in terms of fortnights. When an annual income is simply divided by 26 and spread evenly across every fortnight, money can be redistributed into periods when it was never actually earned, creating a false debt.
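To see how the averaging goes wrong, consider a minimal sketch. The figures and the crude flagging rule here are hypothetical; this illustrates the failure mode, not Centrelink’s actual formula. Imagine someone who earned $26,000 across six months of casual work, claimed benefits only while unemployed, and reported their income correctly the whole time:

```python
# A hypothetical illustration of the fortnightly-averaging problem.
# The figures and the flagging rule are invented simplifications,
# not Centrelink's actual calculation.

annual_income = 26_000          # per the ATO: earned in six months of casual work
fortnights_per_year = 26

# What the person actually reported to Centrelink, fortnight by fortnight:
# $2,000 for each of the 13 fortnights they worked (claiming no benefits),
# $0 for the 13 fortnights they were unemployed and on benefits.
actual_reports = [2_000] * 13 + [0] * 13

# What the automated cross-check assumes instead: the ATO's annual total
# spread evenly across every fortnight of the year.
averaged = annual_income / fortnights_per_year   # $1,000 per fortnight

for fortnight, reported in enumerate(actual_reports, start=1):
    if reported < averaged:
        # The algorithm sees "under-reported income" in exactly the
        # fortnights the person was unemployed, and raises a false debt.
        print(f"Fortnight {fortnight}: reported ${reported}, "
              f"ATO average ${averaged:.0f} -> flagged as a discrepancy")
```

Every fortnight the person was genuinely unemployed gets flagged, even though their reporting was accurate all along.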

“The data matching errors mean up to 20 percent of the ‘debts’ are just plain wrong,” Dr Suelette Dreyfus, a lecturer in computing and information systems at the University of Melbourne, tells VICE. “The department has recognised that up to one in five of these letters is based on a false conclusion. If the figures provided by the media are correct, it’s potentially more than 30,000 people who have been falsely accused.”

An automated debt recovery system, it turns out, is about as Orwellian as it sounds. Dreyfus explains that relying on simplistic methods to crunch extremely complex sets of data may lead to high failure rates. To a computer algorithm, your personal circumstances—those which forced you to apply for welfare benefits in the first place—mean absolutely nothing. The numbers are all that matter. Unfortunately, especially when devoid of context, numbers can be wrong.

Despite Centrelink being inarguably at fault—given its own system made the alleged overpayments in the first place—it is the accused citizen who must somehow prove that they haven’t been overpaid, and they have only 21 days in which to do so before facing harsher penalties. This process might involve producing ancient payslips from a six-month period of casual employment three years ago.

“In this system, data matching failures lead straight to debt collectors by design,” Dreyfus says. “In other words, the hapless have to pay the cost of the government’s errors. That’s utterly unfair.”

Perhaps even worse than a surprise debt is actually having to deal with Centrelink in the first place: spending hours on hold to operators just as confused by the whole ordeal as you are, or waiting in a physical queue all day. “It’s impossible for victims of errors in this system to get the problem fixed easily,” Dreyfus acknowledges. “Slim bureaucratic capacity to solve problems quickly for each wronged person is a big failure.”

Despite its obvious failures, politicians continue to defend Centrelink’s data matching system. This perhaps makes sense—how can someone who doesn’t think twice about using their parliamentary entitlements to attend a polo match, or to buy a luxury Gold Coast apartment, understand why any of this is a problem? For a person relying on Centrelink, $3,000 could mean food and rent for three months. For their local member, it’s a couple of charter flights.

“It is astonishing that the government would continue operation of a Centrelink system that has been so clearly shown to be very broken,” Dreyfus says. “If you know a system is broken, you roll up sleeves, get in there, stop it and fix it. If your ATMs were putting out false numbers, you would stop the ATMs’ service as soon as you knew.”

When it comes down to it, Dreyfus says, the Centrelink crisis isn’t something we can blame on faulty computer systems. An algorithm can’t act malevolently, but the person who uses it can. “This is a political failure dressed up as an IT failure,” Dreyfus says. “Big Data combined with data analytics and predictive analytics has the potential to give us better answers on many things. View it as a powerful tool. How that tool is used—for good or evil—depends on how accountable the people are who wield it.”

While the recent Centrelink system errors are making life hard for a lot of cash-poor, vulnerable people, their implications stretch far beyond the welfare system. “I expect we will see more of this kind of failure in the future,” Dreyfus says. “The use of algorithms to deny passports, job applications, and pensions is already happening.”

What concerns Dreyfus about so-called Big Data is the government’s ability to delve deep into the lives of its citizens, and to use or misuse that information to unilaterally cut off services in an automated fashion while abdicating responsibility for failures. “That is one reason that a person’s right to have control over their private information is so very important,” she says.

Her case study is the federal No Jab No Pay policy, which links Child Care Benefit (CCB) payments and immunisations. “The condition is the child must be up to date on vaccinations to get child-related payments,” Dreyfus explains. “Agencies then presumably do data matching with their immunisation register and send letters to parents saying ‘we can’t find a match with our register and your CCB—we’re going to cut off payment if you don’t help us solve this.’”
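Even a single mismatched field can derail that kind of check. The sketch below is hypothetical; the records and the exact-match rule are invented for illustration, not the actual immunisation-register process. It shows how a fully vaccinated child can still be flagged, simply because two agencies spell a name differently:

```python
# A hypothetical sketch of how naive record linkage produces false
# "no match" letters. The records and the exact-match rule are invented;
# this is not the actual immunisation-register process.

immunisation_register = {
    ("Jonathon Smith", "2014-03-02"),   # name as recorded by the GP clinic
}

ccb_recipients = [
    ("Jonathan Smith", "2014-03-02"),   # same child, name spelt differently
]

for name, dob in ccb_recipients:
    # An exact-match join treats any spelling variation as a missing
    # record, so a fully vaccinated child gets flagged anyway.
    if (name, dob) not in immunisation_register:
        print(f"No register match for {name} ({dob}): send warning letter")
```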

There’s massive room for error here, and when errors do occur, desperate parents are left writing letters and making phone calls to sort them out. Yet, as with the Centrelink data matching, these are errors that are almost impossible for a citizen to rectify on their own.

Data matching and analysis have long been used for malicious purposes. During the “redlining” era of the 1970s, major US banks illegally refused loans to people based on their ZIP codes, after crunching the socio-economic statistics of their neighbourhoods and deeming them unsuitable. Similarly, predictive policing methods currently used in both the US and UK profile innocent people as criminal threats in ways that bring to mind a real-life version of Minority Report.

When life starts to resemble a Tom Cruise movie, it’s time to take a step back. “The only way this is going to be fixed is to put heavy penalties on decision makers in government who authorise this kind of digital harassment where they know it will hurt vulnerable people,” Dreyfus says. “And requiring much more transparency of government agencies in how they share and link data, and what data is stored about each of us… In addition, allowing each person to determine exactly how much data their government is allowed to keep about them.”

Data matching is scary, but it’s also the future. And we don’t need to be afraid of it so much as vigilant that its processes aren’t misused.

“Bad applications of IT such as this situation give technology a bad name,” Dreyfus says. “And that’s sad, because technology can do so much to make our lives better—if the humans who operate it don’t use it as a tool of oppression.”

Follow Kat on Twitter