Weaponised AI is the future of war

Last month marked the 17th anniversary of September 11. All these years later, the wars of 9/11 continue, with no end in sight. By embracing the latest tools that the tech industry has to offer, the US military is now creating a more automated form of warfare — one that will greatly increase its capacity to wage war.

The US defence department is about to launch the Joint Enterprise Defence Infrastructure (Jedi), an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as $10 billion (Dh36.7 billion) over 10 years, which is why big tech companies are fighting hard to win it. The real force driving Jedi is the desire to weaponise AI — what the defence department has begun calling “algorithmic warfare”.

The US has also established the Joint Artificial Intelligence Centre (JAIC), which will oversee the roughly 600 AI projects currently under way across the department at a planned cost of $1.7 billion. And in September, the Defence Advanced Research Projects Agency (DARPA), the Pentagon’s storied R&D wing, announced it would be investing up to $2 billion over the next five years into AI weapons research.

AI has already begun rewiring warfare, even if it hasn’t (yet) taken the form of literal Terminators. There are less cinematic but equally scary ways to weaponise AI. You don’t need algorithms pulling the trigger for algorithms to play an extremely dangerous role.

To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself isn’t particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and some 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.

The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy. But who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?

This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries — a boon to the contractors, bureaucrats and politicians who make their living from US militarism. But the vagueness of the enemy also creates certain challenges. It’s one thing to look at a map of North Vietnam and pick places to bomb.

It’s quite another to sift through vast quantities of information from all over the world in order to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labour-intensive. This is where AI — or, more precisely, machine learning — comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.

Pathfinder project

The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.
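To make the idea concrete, here is a minimal sketch of the kind of pipeline such a system implies: running a pretrained object detector over video frames and flagging detections for review. It is purely illustrative and built on open-source tools (OpenCV and PyTorch/torchvision), not anything connected to Maven itself; the input file name, the classes of interest and the confidence threshold are all assumptions.

```python
# Illustrative sketch only: a generic object-detection pass over video frames.
# This is NOT Maven's code; "footage.mp4", the classes of interest and the
# score threshold are hypothetical choices made for the example.
import cv2
import torch
import torchvision

# Pretrained COCO detector (torchvision >= 0.13 weights API).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# COCO category IDs an analyst might care about: 1=person, 3=car, 8=truck.
CLASSES_OF_INTEREST = {1: "person", 3: "car", 8: "truck"}
SCORE_THRESHOLD = 0.8  # assumed confidence cut-off

cap = cv2.VideoCapture("footage.mp4")  # hypothetical input file
frame_idx = 0
with torch.no_grad():
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % 30:  # sample roughly one frame per second of 30fps video
            continue
        # OpenCV yields BGR uint8; the model expects RGB float tensors in [0, 1].
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        detections = model([tensor])[0]
        for box, label, score in zip(
            detections["boxes"], detections["labels"], detections["scores"]
        ):
            if score >= SCORE_THRESHOLD and label.item() in CLASSES_OF_INTEREST:
                # Flag the detection for a human analyst to review.
                print(frame_idx, CLASSES_OF_INTEREST[label.item()],
                      round(score.item(), 2), box.tolist())
cap.release()
```

Note what a sketch like this does and does not do: no algorithm pulls a trigger. It simply surfaces candidate people, vehicles and buildings for human analysts, which is precisely the labour-saving, judgment-shaping role described above.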

In the case of weaponised AI, the tools themselves are blunt knives. There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms — algorithms that can’t recognise black faces, or that reinforce racial bias in policing and criminal sentencing. But even a flawless algorithm would be dangerous in this context, because the line between civilian and combatant is highly porous in the era of the forever war. US officials have at times counted any military-age male killed in a strike zone as a combatant, and the so-called “signature strikes” conducted by the US military play similar tricks with the concept, marking people for death on the basis of patterns of behaviour rather than confirmed identities. The problem isn’t the quality of the tools, in other words, but the institution wielding them. And AI will only make that institution more brutal.

AI also has the potential to make war more permanent, by giving some of the country’s largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military. But algorithmic warfare will bring big tech deeper into the military-industrial complex, and give billionaires a powerful incentive to ensure the forever war lasts forever.

— Guardian News & Media Ltd

Ben Tarnoff is a technology specialist, columnist and co-founder of Logic Magazine.

