Commentary: What To Do About the ‘Nightmare Scenario’?

We should fear Artificial Intelligence. Not in the future, but now. Ask Sheryl Sandberg, chief operating officer of Facebook. She admitted that her company, with its more than 2 billion users, has built software it cannot fully control. “We never intended or anticipated this functionality being used this way,” Sandberg said, “— and that is on us.” Facebook’s systems had allowed Russian operatives to create accounts and ads aimed at influencing the 2016 U.S. presidential election. The gigantic network appears to have created systems it cannot govern.

Facebook’s problem hints at the extreme dangers lurking within Artificial Intelligence as its use grows around the world. AI experts are already talking about a “nightmare scenario” in which nations’ AI systems could ignite real-time conflicts. Hair-trigger AI systems could eventually control several nations’ military responses, and an error in any one algorithm could lead to a nuclear catastrophe.

Between the Facebook case and the nightmare scenario lies the immediate problem of millions of people losing jobs. Around the globe, programmable machines, including robots, self-driving cars and automated factory equipment, are replacing humans in the workplace. One U.S. government report estimated that automation threatens 80 percent of today’s 3.7 million transportation jobs, including truck drivers, school bus drivers, taxi drivers and Uber and Lyft drivers. Another report indicates AI threatens aspects of many different jobs, including call center operators, surgeons, farmers, security guards, retail assistants, fast food workers and journalists. A 2015 study of robots in 17 countries found that they accounted for over 10 percent of those countries’ gross domestic product growth between 1993 and 2007. Consider China’s Foxconn Technology Group, a major supplier for Apple and Samsung cell phones and computers, which plans to automate 60,000 factory jobs, replacing its existing employees with robots. Meanwhile, Ford’s factory in Cologne, Germany, has not only replaced human workers with robots but also, at some job stations, positions robots beside human workers; these collaborative robots are called cobots.

But these employment issues, troubling as they are, cannot compare to the dangers envisioned by Elon Musk and Stephen Hawking. They are among the dozens of thought leaders who signed an open letter harshly condemning governments’ increasing reliance on AI for military use. Their chief concern is autonomous weapons, another application of AI. The U.S. military is already developing armaments that do not require human operators, weapons created to offer battlefield support for human troops. Autonomous arms are dramatically easier to develop and mass-produce than nuclear weapons, so they are likely to appear soon on black markets around the world, where they are certain to be favored by terrorist groups. To quote from the open letter, these new autonomous weapons would be ideal for dark actions such as “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

Some economic optimists, like MIT’s Erik Brynjolfsson and Andrew McAfee, believe that AI will eventually bring long-term prosperity to the world, but even they admit that finding common ground among economists, technologists and politicians is daunting. Without broader agreement about AI’s potential effects, it will be very difficult to craft legislation to govern it.

We should indeed be fearful of artificial intelligence, not just because it is destined to affect the number of available jobs, including those in middle-class and even upper-middle-class domains, but because its potential military use, if not controlled, could lead to a perilous future. As the open letter signed by Musk and Hawking concluded, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

The author is director of the International Center for Applied Studies in Information Technology (ICASIT) http://policy-icasit.gmu.edu/