
With the rapid integration of AI into our lives—in the form of driverless cars, supercomputers, and even Barbie dolls—the science-fiction-inspired fears of a machine takeover no longer seem quite so far-fetched. But while it’s unlikely that robots will turn against us anytime soon, AI can still be dangerous. Roman Yampolskiy, the head of the Cybersecurity Lab at the University of Louisville, explores what happens when AI systems become unsafe for humans—and why we should be worried about a future of ‘malevolent AI.’

What exactly is “dangerous AI”?

Yampolskiy, author of the recent release Artificial Superintelligence: A Futuristic Approach, defines “dangerous” broadly—it includes anything that has a net “negative rather than positive effect.” A dangerous system is one in which “its values and goals would be misaligned with those of humanity.” It is also important to note that the system itself does not have to be “explicitly antagonistic,” said Yampolskiy. Simply being out of alignment with human society is sufficient cause for concern.

The obvious example might be an intelligent robot soldier. “They’re dangerous by design—they kill people,” said Yampolskiy. And like all machines, they could be vulnerable to hacking, malfunction, or misuse. But dangerous AI could also be something like a smart computer virus, or an online chatbot hacked by criminals to steal identities. In the military, drones that capture visual data can be tapped into.

According to Yampolskiy, there are several ways AI systems could change from helpful to harmful. The shift could be “unintentional or intentional on the part of the programmer,” said Yampolskiy. “Unintentional pathways are most frequently a result of a mistake in design, programming, goal assignment or a result of environmental factors such as failure of hardware.”

Dangerous AI could cause serious harm

Most AI systems, said Yampolskiy, have some capacity for danger—they “fall in the middle on the spectrum from completely benign to completely evil.” What does that mean? It includes non-lethal behaviors that are still unsafe, such as:

  • Taking over resources (implicitly or explicitly), such as money, land, water, and rare elements, and establishing a monopoly over access to them.
  • Seizing political control of local and federal governments as well as of international corporations, professional societies, and charitable organizations.
  • Revealing informational hazards. Certain kinds of information are dangerous simply to know. For example, knowing a state secret may make you a target for kidnapping.
  • Establishing a total surveillance state, reducing any notion of privacy—including privacy of thought—to zero.

AI will be everywhere

AI controls more systems than we might think. “Wall Street trading, nuclear power plants, social security compensations…are only one serious design flaw away from creating disastrous consequences for millions of people,” wrote Yampolskiy. And consider the military, which receives the majority of funding for AI research: its drones, robot soldiers, and cyber weapons currently have humans in control—but this, according to Yampolskiy, is simply a matter of how they’re designed today. “It is not a technical limitation,” wrote Yampolskiy, “it is a logistical limitation that can be removed at any time.”

Safety critical to AI success

Current safety work in AI, said Yampolskiy, is not sufficient: it is aimed at systems that may become dangerous because of poor design. The more pressing issue, he argues, is “intentional, malevolent design resulting in evil AI.”

“We should not discount the dangers of intelligent systems with semantic or logical errors in coding, or goal alignment problems,” said Yampolskiy, “but we should be particularly concerned about systems that are unfriendly by design.”

The solution, Yampolskiy believes, is to build safety into AI systems from the start and to involve an AI ethics board in their creation. It is also important to be aware, wrote Yampolskiy, that even if a system is created to be secure, “it doesn’t mean it will not become unsafe at some later point.”

“Few people, even in the AI safety research community, consider dangers of AI designed to be malevolent on purpose,” said Yampolskiy. “But it is the biggest danger we face.”
