What Is The Terminator Conundrum? 'Killer Robots' In Military Raise Ethical Concerns
As artificial intelligence (AI) comes into its own, it is having a significant impact on everyday life. Self-driving cars, industrial automation, space exploration and robotics are just a few of the fields that show how the technology is shaping the future. But AI has also found its way into the defense industry, leading to a worrisome rise in the development of autonomous weapons.
The so-called “thinking weapons” were described by Air Force Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, in a 2016 presentation at the Center for Strategic and International Studies in Washington, D.C. “We’re not talking about cruise missiles or mines. But robotic systems to do lethal harm… a Terminator without a conscience,” he said, referring to the 1984 cult science fiction film “The Terminator,” starring Arnold Schwarzenegger.
A huge question mark hangs over the future of AI in warfare. The term “Terminator Conundrum” has been used over the years by scientists and military personnel to describe the destruction such lethal autonomous weapons could cause. Although the origin of the term is unknown, it regularly surfaces in conversations about killer robots, much like the cyborg in the cult film.
The advantages of such weapons were discussed in a New York Times article published last year, which noted that humans could not match their speed and precision. Automated weapons would also significantly lower the cost of warfare and reduce the number of soldiers exposed to potential death, the report said.
Despite these benefits, many scientists are reluctant to see automated weapons put to use, citing concerns that mostly revolve around ethics, the pace at which AI is developing, and the political fallout. Such weapons could include, for instance, armed quadcopters programmed to search for and eliminate people who meet certain predefined criteria.
The official stance of the United States on such weapons was outlined at the Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems, held in Geneva in 2016, where the U.S. said that “appropriate levels” of human approval were necessary for any engagement of autonomous weapons involving lethal force.
An article published by the aviation news website Flight Global in 2016 stated that the U.S. government had initiated and then canceled several unmanned combat aerial vehicle (UCAV) and missile programs that would have autonomously identified and destroyed targets based on “hard-coded” decision metrics.
According to the Times article, a small, unarmed drone was tested in the summer of 2016 on Cape Cod in Massachusetts; after taking flight, it decided on its own how to execute its orders. The drone could also be easily armed. The project was run by the Defense Advanced Research Projects Agency, or Darpa, a Pentagon research agency that develops software for machines that could work alongside small units of soldiers.
Major Christopher Orlowski, a program manager at Darpa, said the drone did not need to be remotely controlled. “It works with you. It’s like having another head in the fight,” he said.
During the presentation, Selva also said the U.S. was about a decade away from having the technology to build a fully independent robot that could decide whom to kill and when. However, the Department of Defense has no intention of actually building one, he said, acknowledging that the process of building such weapons was "governed by law and by convention.”
“That ethical boundary is the one we’ve drawn a pretty fine line on. It’s one we must consider in developing these new weapons,” he said.
In 2015, numerous scientists and experts signed an open letter warning that developing such intelligent weapons could set off a global arms race. The warning was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky. “Autonomous weapons will become the Kalashnikovs of tomorrow,” the letter read, cautioning that extremists could gain access to the independent robots.
A similar letter, urging the United Nations to ban killer robots, or lethal autonomous weapons, was signed by leaders of the world’s top AI and robotics companies at the International Joint Conference on Artificial Intelligence (IJCAI), held in Melbourne in August. The signatories, including SpaceX founder and CEO and Tesla co-founder Elon Musk, stated in the letter: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The UN’s conference of CCW states parties established a Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems in 2016 to formalize its efforts to deal with the challenges raised by AI weapons, which are thought to become the “third revolution in warfare.” The first two revolutions on that list give a sense of the scale of the problem: the invention of gunpowder and the development of nuclear bombs.