The use of artificial intelligence to replace or assist human decision makers in Canada's immigration and refugee system could have "alarming implications" for the human rights of the people subject to it, says a new report, which calls on the government to proceed with caution.
Since at least 2014, Canada has been developing a system of "predictive analytics" to automate some tasks carried out by immigration officials, which could be used "to identify the merits of an immigration application, spot potential red flags for fraud and weigh all these factors to recommend whether an applicant should be accepted or refused."
The government has already been using AI to categorize cases as “simple” or “complex.”
The trend of replacing humans with artificial intelligence, especially in immigration decision making, is creating "a laboratory for high-risk experiments" that puts vulnerable people at risk, according to the report, produced by the International Human Rights Program at the University of Toronto's Faculty of Law and the Citizen Lab.
The use of such tools in immigration decisions is especially alarming, the report notes, because many of them hinge on the decision maker assessing the credibility of a claimant's story — whether a refugee's life story is "truthful," for example, or whether a prospective immigrant's marriage is "genuine."
The government is also asking the private sector for input on using artificial intelligence in immigration decisions and assessments, including humanitarian and compassionate applications and pre-removal risk assessments, the report notes. Officials hope the technology will help with everything from conducting legal research, to using historical data to predict whether a case will succeed, to identifying and summarizing similar past cases.
The report warns, however, that the “nuanced and complex nature of many refugee and immigration claims may be lost on these technologies,” leading to issues like bias, discrimination, and privacy breaches, among others.
The report walks readers through the entire application process and highlights a number of risks posed by automated decision making — for example, a program that flags an application as "potentially fraudulent" could prejudice the applicant's claim for protection.
The report also highlights examples of how algorithms can discriminate — for example, an algorithm used in U.S. courts to help decide whether someone should be held in pre-trial detention, based on their predicted risk of committing a crime, has come under fire because it resulted in racialized and vulnerable communities being detained at a much higher rate than white offenders.
“Vulnerable and under-resourced communities such as non-citizens often have access to less robust human rights protections and fewer resources with which to defend those rights,” wrote the report’s co-author Petra Molnar in an email. “Adopting these technologies in an irresponsible manner may only serve to exacerbate these disparities.”
The report recommends that Ottawa establish an independent, arm's-length body to oversee and review all uses of automated decision systems by the federal government; make all current and future uses of AI by the government public; and create a task force made up of government stakeholders, academics and civil society members to better understand the impacts of automated decision system technologies.
Cover image of Prime Minister Justin Trudeau at a press conference on Parliament Hill in Ottawa on June 7, 2018. Photo by Patrick Doyle/The Canadian Press