Elon Musk Stands Behind the Open Letter Against AI Weapons

Big names in robotics and artificial intelligence are using their voices to urge the United Nations to prevent the development and use of killer robots.

Tesla’s Elon Musk and Google’s Mustafa Suleyman are leading a group of 116 specialists from 26 countries who have united to demand a ban on autonomous weapons.

Recently, the UN voted in favor of beginning formal discussions on weapons such as drones, tanks, and automated machine guns. Ahead of this, the group of founders of AI and robotics companies sent an open letter to the UN asking it to prevent the arms race for killer robots that is currently underway.

In their letter, the founders warn that lethal autonomous weapons threaten to become the third revolution in warfare, after gunpowder and nuclear arms.

They also wrote that once developed, these lethal autonomous weapons will allow armed conflict to be fought on a greater scale than ever before, and at speeds faster than humans can comprehend. They warned that such weapons could easily be turned to horrific ends and misused. They added that there is little time left to act, and that once these weapons are released into the world, they will be very hard to stop.

As experts have warned before, AI technology has reached a point where deploying autonomous weapons is feasible within years, not decades as once thought. And while AI can indeed be used to make the battlefield safer for soldiers, experts also warn that weapons operating on their own would escalate the scale of warfare and cost more human lives.

The letter is being launched today at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne. Behind it stand high-profile figures in the robotics field who strongly stress the need for urgent action.

The letter’s authors say lethal autonomous weapons are morally wrong, and they are asking for them to be added to the list of weapons banned under the UN’s Convention on Certain Conventional Weapons (CCW), brought into force in 1983, which includes chemical and intentionally blinding laser weapons.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, commented that almost every technology, artificial intelligence included, can be used for both good and bad. In contrast to the outcome the founders fear from AI weapons, Walsh noted that the same technology can also be used to tackle many of the pressing problems humanity faces, such as inequality and poverty, the challenges posed by climate change, and the ongoing global financial crisis.

But he also says the technology can industrialise war, and warns that the decisions we make today will determine whether we get the future we want.

Musk, one of the most prominent signatories of the open letter, has repeatedly warned of the importance of proactive regulation of AI. But as much as Musk sees AI as dangerous, some believe any such threat lies much further in the future than he does.

Ryan Gariepy, the founder of Clearpath Robotics, responded that no matter what some people think, this is not just science fiction: these weapons have a real potential to cause significant harm to innocent people, along with global instability.
