Should algorithms control nuclear launch codes? The US says no

Last Thursday, the US State Department outlined a new vision for developing, testing, and verifying military systems, including weapons, that use AI.

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents a US effort to guide the development of military AI at a critical time for the technology. The document is not legally binding on the US military, but the hope is that allied nations will agree to its principles, creating a sort of global standard for building AI systems responsibly.

The declaration states, among other things, that military AI must be developed in accordance with international law, that countries should be transparent about the principles underlying their technology, and that high standards should be applied in verifying the performance of AI systems. It also says that only humans should make decisions about the use of nuclear weapons.

When it comes to autonomous weapons systems, US military leaders have often offered the reassurance that a human will remain “in the loop” for decisions about the use of lethal force. But the official policy, first issued by the DOD in 2012 and updated this year, does not require this to be the case.

Attempts to establish an international ban on autonomous weapons have so far come to nothing. The International Committee of the Red Cross and campaign groups such as Stop Killer Robots have pushed for a deal at the United Nations, but some major powers, including the US, Russia, Israel, South Korea, and Australia, have proven unwilling to commit.

One reason is that many within the Pentagon see increased use of AI across the military, including in non-weapons systems, as essential and inevitable. They argue that a ban would slow US progress and handicap its technology relative to adversaries such as China and Russia. The war in Ukraine has shown how quickly autonomy in the form of cheap, disposable drones, which are becoming more capable thanks to machine learning algorithms that help them perceive and act, can give one side an edge in a conflict.

Earlier this month, I wrote about one-time Google CEO Eric Schmidt’s personal mission to bolster the Pentagon’s AI capabilities and ensure the US doesn’t fall behind China. It was just one story to emerge from months of reporting on efforts to deploy AI in critical military systems, and on how such efforts are becoming central to US military strategy, even as many of the technologies involved remain in development and have not yet been tested in any crisis.

Lauren Kahn, a research fellow at the Council on Foreign Relations, hailed the new US statement as a potential building block for more responsible use of military AI around the world.

A few countries already have weapons that operate in limited conditions without direct human control, such as missile defense systems that must react with superhuman speed to be effective. Greater use of AI could lead to more scenarios where systems operate autonomously, such as when drones operate out of communication range or in swarms that are too complex for a human to manage.

Some claims about the need for AI in weapons, especially from companies developing the technology, still seem somewhat far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes, but these have not been verified, and in truth many soldiers may be wary of systems that rely on algorithms that are far from foolproof.

And yet, if autonomous weapons cannot be banned, their development will continue. That makes it vital to ensure that the AI involved behaves as expected, even if the engineering required to fully enact intentions like those in the new US declaration has yet to be perfected.
