News

AI Adviser ‘Hired’ by the Romanian Government To Read People’s Minds

“My role is now to represent you, like a mirror,” the AI, which is called Ion, said at the launch event.

The Romanian prime minister has unveiled a new AI assistant that he hopes will inform the government about Romanians’ wishes “in real time”.

Speaking on Wednesday at the start of a government meeting, Nicolae Ciuca described Ion (Romanian for ‘John’) as his “new honorary adviser” and an “international first”. He said that Romanians would also be able to chat directly with Ion on the project’s website.


“Hi, you gave me life and my role is now to represent you, like a mirror. What should I know about Romania?” Ion's voice said at the launch. 

Ion takes a physical form as a long, mirror-like structure with a moving graphic at the top suggesting it is listening at all times. 

“I have the conviction that the use of AI should not be an option but an obligation to make better-informed decisions,” Ciuca said.

While Ion might be one of the first AI bots to be given a physical presence, Romania is by no means the first government to use artificial intelligence to try to understand how a population feels about policy.

Professor Alan Woodward, a cybersecurity expert at the University of Surrey, told VICE World News that governments around the world use machine learning for ‘sentiment analysis’.

“Some governments like Russia, China, Iran – they look online for sentiment analysis but they look for anyone dissenting. Whereas democracies, they’re effectively trying to conduct pseudo-automated polls. It’s a bit like 15 years ago people held focus groups and now they’re trying to work out the same thing from social media,” he said, speaking over the phone. 
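
Off-the-shelf tools can already do this kind of analysis. Purely as an illustration – the Romanian government has not published how Ion works – a minimal sentiment pass over a batch of posts might use the open-source NLTK library’s VADER analyser:

```python
# Illustrative only: score a batch of posts with NLTK's off-the-shelf VADER
# analyser. This is not Ion's method, which has not been made public.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the sentiment lexicon
analyzer = SentimentIntensityAnalyzer()

posts = [
    "The new road scheme is fantastic, finally some progress!",
    "Another pointless announcement, nothing ever changes.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # 'compound' runs from -1 (most negative) to +1 (most positive)
    print(f"{scores['compound']:+.2f}  {post}")
```

VADER is an English-only lexicon; analysing Romanian-language posts would require a different model, which is one reason such systems need the training Woodward mentions.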

Woodward said it would be hard to interfere with the AI, especially from the outside, to trick the government into thinking a population believed something it didn’t, although such systems do need training to rule out biases.


“One of the things that has been found is that social media is an amplifier for people expressing negative sentiment. The people who are very happy with something don’t tend to go out there and say it, but the people who are unhappy do. That’s all part of sentiment analysis but you have to adjust the models accordingly.”
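
Adjusting the models in the way Woodward describes is, at its simplest, a reweighting problem: if unhappy users post far more often than happy ones, a raw average of sentiment scores will skew negative. A toy sketch of that correction, with an invented posting-rate figure:

```python
# Toy illustration of Woodward's point: if unhappy users post more often,
# a raw average overstates negativity. The 3x multiplier is an invented
# assumption, purely for illustration.
NEGATIVE_POSTING_MULTIPLIER = 3.0  # assumed: unhappy users post 3x as often

scores = [-0.8, -0.6, -0.7, 0.5, 0.9]  # compound scores from the analyser

raw_average = sum(scores) / len(scores)

# Down-weight negative posts to compensate for their over-representation.
weights = [1 / NEGATIVE_POSTING_MULTIPLIER if s < 0 else 1.0 for s in scores]
adjusted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)

print(f"raw average:      {raw_average:+.2f}")
print(f"adjusted average: {adjusted:+.2f}")
```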

Tracy Harwood, a professor of digital culture at De Montfort University in the UK, said that the ethics of scraping data from social media need to be considered. “What data is being scraped and how is personal data that identifies individuals being managed? It is likely that information scraped will include the unique identifiers of each of those posting content,” she said, referring to their social media handles and names as well as what users may betray about their identities from the way they write.
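
One common mitigation for the identifier problem Harwood describes is pseudonymising handles before posts are stored or analysed. A minimal sketch of that idea, with hypothetical field names and simplified key handling:

```python
# Minimal sketch: replace scraped handles with salted hashes before storage,
# so analysts work with stable tokens rather than identifiable usernames.
# Field names and salt handling are hypothetical, for illustration only.
import hashlib
import os

SALT = os.urandom(16)  # in practice the salt/key would be managed securely

def pseudonymise(handle: str) -> str:
    """Return a non-reversible token in place of a social media handle."""
    return hashlib.sha256(SALT + handle.encode("utf-8")).hexdigest()[:16]

record = {"handle": "@example_user", "text": "Example post about policy."}
stored = {"user_token": pseudonymise(record["handle"]), "text": record["text"]}
print(stored)
```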

“Ultimately, there needs to be transparency with implementing this kind of system, not just around the use of data but the intentions for its application, with clear statements for citizens to understand.” 

Nigel Cannings, founder and CTO of software company Intelligent Voice, said: “Recent attempts to rush AI into the market have shown quite how wrong AI can be about humans and human intent.”

“If a journalist can be ‘compared to Hitler’ by a Microsoft-run chatbot,” he added, referring to the recent case in which search engine Bing’s new chatbot told a reporter they were one of the ‘most evil and worst people in history’, “it shows we have a long way to come before we can rely on AI to properly assess what we are thinking and who we are.”

He said: “Letting it run riot over a mass of uncontrolled data runs the risk of giving very misleading results. And worse, it gives rise to the real possibility that bad actors will try to game the system by flooding the internet with information designed to make the algorithm ‘think’ things that are not true, and perhaps harmful to democracy.”