In an open letter, more than 100 leading robotics experts and artificial intelligence (AI) specialists have urged the United Nations to take action to prevent the development of ‘killer robots’.
Warning of the “third revolution in warfare”, the letter cautions that, “once developed, [lethal autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”.
AI is beginning to play a bigger role in our everyday lives. Many of us have been familiar with Apple’s voice-powered personal assistant, Siri, for some time now, but wider applications of AI are creeping in incrementally.
Great strides are being made in autonomous vehicles, for example, with automatic cruise control, braking and parking becoming more common features of new vehicles. A point often made in favour of driverless cars is the technology’s potential to eradicate human error, which accounts for the vast majority of accidents. Put an artificial intelligence in charge of a vehicle and it won’t drive while tired or be distracted by a phone, so the theory goes, reducing the number of deaths on our roads.
Similar arguments are being made in favour of next-generation weapon systems. Military drones are already being used by several nations, but I suspect the discomfort for many of us will be around how technology will develop in the future; will robots ultimately decide for themselves when to kill?
As Mary Wareham – one of the founders of the Campaign to Stop Killer Robots, which calls for a global ban on fully autonomous weapon systems – argued in an interview with Channel 4 News: “We draw the line at the weaponisation of artificial intelligence to the point that we lose human control over the critical functions of a weapon system – that is the selection of a target and the use of force to fire on it.”
Do the benefits of AI outweigh the negatives? Or are we opening a Pandora’s box by overlooking the potential pitfalls?
Two tech titans on opposite sides of the fence – Tesla co-founder and CEO Elon Musk, who was among the open letter’s signatories, and Facebook chief Mark Zuckerberg – have publicly clashed over just that. For Zuckerberg, Musk is a “naysayer” drumming up an unhelpful “doomsday scenario”; for Musk, Zuckerberg’s enthusiasm is born of his “limited” understanding of the subject. Ouch.
Undoubtedly, AI introduces significant risks. The question, then, is whether sufficient safeguards can be built in, and whether they can keep up with the pace of change.
For more on AI, read Government Inquiry Into Impact of AI – It’s Time to Have Your Say