Published in The Observer, 21 Nov 2010
Big money is pouring into military robotics to design aircraft and vehicles that make their own decisions on where to go – and when to fire. Jon Cartwright asks how this could change warfare, and what legal and ethical challenges it brings
Faced with an enemy fighter jet, there’s one sensible thing a military drone should do: split. But in December 2002, caught in the crosshairs of an Iraqi MiG, an unmanned US Predator was instructed to stay put. The MiG fired, the Predator fired back and the result, unhappily for the US, was a heap of drone parts on the southern Iraqi desert.
This incident is often regarded as the first dogfight between a drone, properly known as an unmanned aerial vehicle or UAV, and a conventional, manned fighter. Yet in a way, the Predator hardly stood a chance. American and British UAVs are operated remotely by pilots sitting thousands of miles away on US turf, so manoeuvres are hobbled by signal delays of a quarter-second or more. This means evading missiles will always be nigh-on impossible – unless the UAVs pilot themselves.
In July this year, amid a haze of dry ice and revolving spotlights at the Warton aerodrome, Lancashire, BAE Systems launched a prototype UAV that might do just that. With a development cost of more than £140m, the alien-looking Taranis was billed by the Ministry of Defence as a “fully autonomous” craft that can fly deep into enemy territory to collect intelligence, drop bombs and “defend itself against manned and other unmanned enemy aircraft”. Lord Drayson, minister for defence procurement from 2005 to 2007, said Taranis would have “almost no need for operator input”.
Taranis is just one example of a huge swing towards autonomous defence systems: machines that make decisions independently of any human input, with the potential to change modern warfare radically. States with advanced militaries, such as the US and the UK, view autonomy as a route to longer reach, greater efficiency and fewer repatriated body bags. The government’s Strategic Defence and Security Review, published last month, cited it as a means to “adapt to the unexpected” in a time of constrained resources. But behind the technological glitz, autonomous systems hide a wealth of ethical and legal problems. […]
The rest of this article is available here.