Facial Recognition Will Be Used to Boost Profits in Some Australian Prisons

Digital rights campaigners say the move will create security risks and usher in a new era of oppression.
A facial recognition system is demonstrated on a screen at the World Artificial Intelligence Conference (WAIC) in Shanghai.
Photo by Qilai Shen / Bloomberg via Getty Images

Some Australian prisons will soon be retrofitted with new facial recognition devices, even as backlash over the technology's use continues to attract the attention of watchdogs around the world.

Details of a new deal between Corrective Services New South Wales and the United States-based IT firm Unisys first surfaced early last week. Under the deal, which will cost the NSW government just shy of $13 million over the next four years, the state’s inmates will be forced to hand over their biometric data in an effort to “provide more safety” to people visiting correctional facilities, and to make the state’s prison system more profitable by cutting operational costs.

The deal joins a queue of others like it around the world, and comes just three months after Kmart and Bunnings were referred to the Office of the Australian Information Commissioner for a probe into whether they used the technology beyond the bounds of privacy law. Digital and human rights campaigners say the tech should never have made its way to prisons.

Samantha Floreani, program lead at Digital Rights Watch, told VICE she and her colleagues are concerned about the technology's use in prisons, and about where it might end up next.

“Harmful or controversial technology like this is often tested out on the most vulnerable in society, especially in places where they have limited rights and agency such as prisons, schools and workplaces,” Floreani said. 

“We already know that current facial recognition technology is riddled with racial biases, and that Aboriginal and Torres Strait Islander peoples are overrepresented in Australia’s prisons. The use of facial surveillance in prisons will only exacerbate the harms caused by our biased justice system.”

Australian law enforcement agencies have a chequered track record with the technology, too.

In March 2020, the Australian Federal Police and state police forces across Queensland, Victoria and South Australia were reported to have registered accounts with the controversial facial recognition firm, Clearview AI, without oversight. The tool scrapes social media platforms to form a sprawling database of photos and personal information.

The rollout of facial recognition across NSW prisons next year offers Floreani and scores of other experts immediate cause for concern. Among the most serious are fears that biometric data collected in correctional facilities will be mishandled or stored insecurely.

“Given that the contractor, Unisys, is based in the US, this also raises questions about transferring Australians’ biometric information overseas,” Floreani said. 

“While we don’t know the specific details of the contract, it is reasonable to be concerned about how, where and how long this data is being stored, who else might have access to it, the security mechanisms in place, and what protections are in place to prevent both Corrective Services and Unisys from mishandling the data.”

According to a spokesperson for the NSW Department of Communities and Justice, data captured in the state’s prisons will be stored on Unisys hardware and then managed by Corrective Services NSW.

The spokesperson couldn’t be drawn on what training the government’s staff would undergo before handling the sensitive information, or whether the department has put any safeguards in place at all, other than to say the “solution” will “operate within the existing legislative framework.”

But there are currently no laws governing the use of facial recognition in Australia, where the technology is used freely by businesses and government agencies alike. As a result, experts are calling for its use to be halted until new protections are enshrined into law.

One of them is Australia’s former Human Rights Commissioner, Ed Santow, who is leading a push for a new “model law”, after handing down a landmark report in 2021 that recommended an urgent moratorium on the use of both facial recognition and artificial intelligence in high-risk settings across Australia.

“New technology should give us what we want and need, not what we fear,” Santow said in May last year.

“Our country has always embraced innovation, but over the course of our Human Rights and Technology project, Australians have consistently told us that new technology needs to be fair and accountable,” he said.

“That’s why we are recommending a moratorium on some high-risk uses of facial recognition technology, and on the use of ‘black box’ or opaque AI in decision-making by corporations and by government.”

Floreani called for the same. She said the use of facial recognition in high-risk settings like law enforcement “deserves more scrutiny, not less”. 

“I worry that some people may be tempted to dismiss legitimate concerns about surveillance technologies in a prison context, because of ongoing stigma of incarcerated people, or perceptions that they are an exceptional case or an ‘other’,” she said.  

“But people in prison, and those who visit them, shouldn’t be subject to invasive surveillance technology like facial recognition. What’s more, we would be extremely foolish to believe that technologies used in a carceral context won’t be rolled out more broadly as they become normalised.”

Follow John on Twitter.

Read more from VICE Australia.