An attack by artificial intelligence on humans, said Google software engineer and University of Michigan professor Igor Markov, would be sort of like when the Black Plague hit Europe in the 14th century, killing up to 50 percent of the population.
“Virus particles were very small and there were no microscopes or notion of infectious diseases, there was no explanation, so the disease spread for many years, killed a lot of people, and at the end no one understood what happened,” he said. “This would be illustrative of what you might expect if a superintelligent AI would attack. You would not know precisely what’s going on, there would be huge problems, and you would be almost helpless.”
In a recent talk about how to keep superintelligent AI from harming humans, Markov looked to lessons from ancient history rather than devising technological solutions.
Markov joined sci-fi author David Brin and other influential names in the artificial intelligence community Friday at The AI Conference in San Francisco.
One lesson from early humans that could help in the fight against AI: make friends. Domesticate AI the same way Homo sapiens turned wolves into their protectors and friends.
“If you are worried about potential threats, then try to use some of them for protection or try to adapt or domesticate those threats. So you might develop a friendly AI that would protect you from malicious AI or track unauthorized accesses,” he said.
Markov began and ended his presentation by calling himself an amateur and saying he doesn’t have all the answers, but he also said he has been thinking about ways to prevent an AI takeover for more than a year. He now believes the most important way for humans to prevent the rise of malicious AI is to put in place a series of physical-world restraints.
“The bottom line here is that intelligence — either hostile or friendly — would be limited by physical resources, and we need to think about physical resources if we want to limit such attacks,” he said. “We absolutely need to control access to energy sources of all kinds, and we need to be very careful about physical and network security of critical infrastructure because if that is not taken care of, then disasters can obviously happen.”
Drawing on his background in hardware design, Markov suggested that powerful systems be kept separate and that deficiencies be built in to act as a kill switch, because if superintelligent AI ever arises, it will likely be by accident.
He strongly urged that limits be placed on AI self-repair, self-replication, and self-improvement, and that specific scenarios be considered, such as a nuclear weapons attack or the use of biological weapons.
“Generally, each agent, each part of your AI ecosystem needs to be designed with some weakness. You don’t want agents to be able to take over everything, right? So you would control agents through these weaknesses and separation of powers,” he said. “In the discipline of electronic hardware design, we use abstraction hierarchies. We go from transistors to CPUs to data centers, and each level typically has a well-defined function, so if you’re looking at this from the perspective of security, if you are defending against something, you would want to limit or regulate every level, and you would want the same type of limitations for AI.”
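To make the idea concrete, here is a minimal, hypothetical sketch of what a “built-in weakness” might look like in software. It is not from Markov’s talk; the class names, the capability allowlist, and the step budget are illustrative assumptions, showing an agent that can only use granted capabilities and trips a kill switch when it exceeds its designed limits.

```python
# Hypothetical sketch of a "designed-in weakness": each agent runs behind a
# wrapper that enforces a capability allowlist and a hard resource budget.
# Any violation trips a kill switch instead of letting the agent continue.

class KillSwitchTripped(Exception):
    """Raised when an agent exceeds its designed limits."""


class ConstrainedAgent:
    def __init__(self, allowed_actions, max_steps):
        # Separation of powers: the agent gets only the capabilities its
        # level of the hierarchy needs, plus a finite step budget.
        self.allowed_actions = set(allowed_actions)
        self.max_steps = max_steps
        self.steps_used = 0

    def act(self, action, *args):
        if action not in self.allowed_actions:
            raise KillSwitchTripped(f"capability {action!r} not granted")
        if self.steps_used >= self.max_steps:
            raise KillSwitchTripped("resource budget exhausted")
        self.steps_used += 1
        return action, args  # a real system would dispatch to a tool here


# Usage: an agent allowed to read logs but not to replicate itself.
agent = ConstrainedAgent(allowed_actions={"read_logs"}, max_steps=100)
agent.act("read_logs", "/var/log/auth.log")   # permitted
try:
    agent.act("self_replicate")               # blocked by the built-in weakness
except KillSwitchTripped as e:
    print(f"blocked: {e}")
```

In this framing, each level of the hierarchy (agent, cluster, data center) would get its own allowlist and budget, echoing Markov’s point that every level should be limited or regulated.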
Markov’s presentation drew on predictions made by Ray Kurzweil, who believes that in a decade, virtual reality will be indistinguishable from real life, after which computers will surpass humans. Then, through augmentation, humans will become more machine-like until we reach the Singularity.
Markov also pointed out that there is a range of opinions on malicious AI. Stephen Hawking believed AI would eventually supersede humankind, telling the BBC, “The development of full artificial intelligence could spell the end of the human race.”
In contrast, former Baidu AI head Andrew Ng said last year that people should be as concerned about malicious AI as they are about overpopulation on Mars.