AI Professor Proposes 'Turing Red Flag Law'

Artificial intelligence should have to make clear that it's artificial intelligence and not an actual human.

At the dawn of motor vehicle transportation, the UK parliament passed the Locomotive Act of 1865, aka the Red Flag Act. It mandated that all self-propelled vehicles be accompanied by a crew of three and, in the event that such a vehicle was connected to two or more carriages, a man on foot with a red flag must precede the train by a minimum of 60 yards. A machine is coming: Watch the fuck out.

The law was perhaps overzealous, but at the time no one really knew what we were getting into with these self-propelled vehicles. It's this sentiment—and this uncertainty—that forms the foundation of what Australian artificial intelligence professor Toby Walsh calls the Turing Red Flag Law. The danger in question this time is an artificially intelligent agent, but the warning is the same: A machine is coming.

The problem, as Walsh sees it, is that we may not recognize the machine even if it's sitting right there in front of us. After all, the defining trajectory of artificial intelligence is toward computers that behave as humans, and, following from that, computers that we may not recognize as computers. This is the Turing Test: Can a machine successfully fool a human into thinking that it is not a machine?

As normal people, we shouldn't have to exist as Blade Runners, ready to subject every human-seeming thing to a battery of carefully designed probing questions. Is it so unreasonable to design artificial intelligence to be transparent about its artificiality? This is the essence of Walsh's law, defined below:

An autonomous system should be designed so that it is unlikely to be mistaken for anything besides an autonomous system, and should identify itself at the start of any interaction with another agent.

Walsh is less concerned with the specifics, the actual red flags of the Red Flag Law, than with the basic principle of transparency. "Legal experts as well as technologists will be needed to draft such a law," he writes in the Communications of the ACM. "The actual wording will need to be carefully crafted, and the terms properly defined. It will, for instance, require a precise definition of autonomous system. For now, we will consider any system that has some sort of freedom to act independently."
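Walsh deliberately leaves the mechanics open, but the core requirement, that a system identify itself at the start of any interaction, is simple enough to sketch in code. Below is a minimal, purely illustrative Python sketch; the class and method names are hypothetical and are not part of Walsh's proposal.

```python
# A hypothetical sketch of a "Turing Red Flag" disclosure in software.
# The names here (AutonomousAgent, interact, DISCLOSURE) are illustrative
# assumptions, not anything Walsh specifies.

class AutonomousAgent:
    """An agent that identifies itself before any interaction proceeds."""

    DISCLOSURE = "Notice: you are interacting with an autonomous system."

    def __init__(self):
        self._disclosed = False

    def interact(self, message: str) -> str:
        # Per the proposed law: self-identify at the start of the
        # interaction, then respond normally for the rest of it.
        if not self._disclosed:
            self._disclosed = True
            return f"{self.DISCLOSURE} You said: {message}"
        return f"You said: {message}"


agent = AutonomousAgent()
first = agent.interact("Hello?")
second = agent.interact("Who am I talking to?")
print(first)
print(second)
```

The point of the sketch is only that the disclosure happens once, up front, before any other exchange, which is the shape of the telephone-recording notices Walsh compares his law to.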

Walsh raises four examples of why such a law might be useful, but seems to focus in particular on self-driving cars. Our roads are among the most dangerous environments we routinely subject ourselves to, and we will soon share them with non-human agents, of which we, as human drivers, may have different expectations. Many of those expectations actually have to do with computers being much better drivers than humans, according to Walsh:

There are many situations where it could be important to know that another road vehicle is being driven autonomously. For example, when a light changes we can suppose that an autonomous vehicle approaching the light will indeed stop, and so save us from having to brake hard to avoid an accident. As a second example, if an autonomous car is driving in front of us in fog, we can suppose it can see a clear road ahead using its radar. For this reason, we do not have to leave a larger gap in case it has to brake suddenly. As a third example, at a four-way intersection, we can suppose an autonomous car will not aggressively pull out when it does not have right of way.

In the end, the red flags (or their absence) may serve less to warn us about all-seeing, overcautious machines than about the presence of fallible human drivers.

The three other examples Walsh digs into are personal assistants (Siri, etc.), online poker, and computer-generated text (automated journalism, that is). All of which are pretty obvious, I think, but his ACM piece linked above is free to read.

"In many U.S. states, as well as many countries of the world including Australia, Canada, and Germany, you must be informed if your telephone conversation is about to be recorded," Walsh notes in conclusion. "Perhaps in the future it will be routine to hear, 'You are about to interact with an AI bot. If you do not wish to do so, please press 1 and a real person will come on the line shortly.'"

I wouldn't count on that, but it's a nice idea.