Can lazy machines cruising a synthetic internet help defend networks against sloppy humans?
Researchers at the University of Southern California led by Jim Blythe hope so. They’ve devised a system to test computer-security networks by having machines themselves mimic man’s mistakes. These “seemingly innocuous actions” – users downloading files, or IT personnel ditching security features that can bog down machines – can leave networks exposed to nefarious activity.
An estimated 60 percent of computer-security breaches are triggered by user slip-ups. But Blythe says such factors are typically overlooked when testing security systems, in part because it'd be impractical to steer real user behavior "in ways that could give meaningful results." So he and his team created lazy, sloppy computer programs: cognitive agents that can be programmed to fumble tasks with all the fallibility of real human employees, with the added bonus of leaving precise records of what happened to pick apart after the fact.
Designed as stand-ins for managers or IT staffers – "run-of-the-mill users" – the agents are actually frighteningly individualized. Each is given a distinct set of beliefs, desires and intentions, along with jobs to complete by deadlines. They can be assigned group tasks, which may be swayed by group dynamics. "They can have friends," Blythe explains, "shared interests and power relations, and can trust some agents more than others, all of which affect how quickly they perform the job at hand." All actions and operations associated with jobs are mediated through a standard Windows interface plugged into the security system.
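The team hasn't published the agents' internals here, but the description maps neatly onto the classic belief–desire–intention (BDI) pattern from agent research. Below is a minimal sketch of that idea; the class names, fields and the toy work_rate formula are illustrative assumptions, not the USC code.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    deadline_hours: float  # hours until the task is due

@dataclass
class Agent:
    """Illustrative BDI-style agent; not the USC implementation."""
    name: str
    beliefs: dict = field(default_factory=dict)      # what the agent thinks is true
    desires: list = field(default_factory=list)      # goals it would like to achieve
    intentions: list = field(default_factory=list)   # goals it has committed to
    jobs: list = field(default_factory=list)         # assigned work with deadlines
    friends: set = field(default_factory=set)        # social ties that shape behavior
    trust: dict = field(default_factory=dict)        # per-colleague trust, 0.0 to 1.0

    def work_rate(self) -> float:
        """Assumed model: well-connected, trusting agents finish jobs faster."""
        social_boost = 0.05 * len(self.friends)
        avg_trust = sum(self.trust.values()) / len(self.trust) if self.trust else 0.5
        return 1.0 + social_boost + 0.5 * avg_trust
```

In the researchers' system, of course, those decisions play out through the Windows interface wired into the security setup under test, not through a toy scoring function.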
To err is human, and now computer
It goes beyond personal values and social dynamics, then. We mere mortals get groggy and hungry and occasionally must answer nature's call. So, too, must this software: "We have focused mainly on fatigue," Blythe says, "the physical need to take breaks at regular intervals, or the need to go to the bathroom." The agents can also zone out while cruising around a synthetic internet created by the researchers.
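How that fatigue translates into slip-ups isn't spelled out, but a back-of-the-envelope model would simply accumulate tiredness as an agent works and raise its error rate accordingly. The thresholds and probabilities below are invented for illustration.

```python
import random

class FatigueModel:
    """Toy fatigue model: all numbers are assumptions, not from the USC system."""

    def __init__(self, break_threshold: float = 8.0):
        self.fatigue = 0.0
        self.break_threshold = break_threshold  # assumed hours of work before a forced break

    def work(self, hours: float) -> bool:
        """Simulate a stretch of work; return True if the agent fumbles the task."""
        self.fatigue += hours
        if self.fatigue >= self.break_threshold:
            self.fatigue = 0.0   # bathroom break or rest resets fatigue
            return False         # the agent steps away rather than working tired
        error_probability = min(0.5, 0.02 + 0.04 * self.fatigue)  # assumed curve
        return random.random() < error_probability
```

Under a model like this, long stretches without breaks inflate the chance of exactly the "seemingly innocuous" mistakes – a careless download, a disabled security feature – that the system is built to surface.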
Blythe and his team will present preliminary results on August 9 at the 25th conference of the Association for the Advancement of Artificial Intelligence, before launching a full-scale test later in the year. One intriguing, if unsurprising, result: as users take phishing bait, unknowingly giving away sensitive information online "or allowing code that corrupts work files to be downloaded," the effectiveness of IT staff plummets as fatigue and stress ratchet up.
And no surprise here, but things could get really interesting once money enters the equation. Blythe and his team eventually plan to throw in financial pressures – by restricting, say, an agent's income relative to the money it needs to cover its overheads – which could goad agents into double-dealing. What we'd have, then, is Blythe's agents confirming the old idea that behind every error pegged on a computer lie at least two human missteps, including the misstep of blaming the computer.
The Economist