
War, ethics and AI

Microsoft employees protest against use of AI for weapons in Gaza


KIRAN KARNIK

STUDIES on the ethics of AI-based autonomous weapons are being funded by the US Defense Advanced Research Projects Agency (DARPA) to the tune of millions of dollars. The significance of this recent announcement, especially for those not fully plugged into this area, is best appreciated with a brief and simple explanation of the backdrop.

Artificial Intelligence (AI) and its offshoot, Generative AI (Gen AI), have taken the world by storm, especially through user-friendly apps. ChatGPT and its like have enabled even those with minimal computer literacy to access and use AI. The ability of these apps to understand text or audio and to respond almost instantly with creative (if not original) text or video images seems truly magical. Further progress towards artificial general intelligence (AGI), and ultimately artificial superintelligence, opens up mind-boggling possibilities, to the point of being scary.

There is much concern about job losses because AI, in its various forms and levels of sophistication, is already capable of taking over many functions currently performed by humans. Tasks like data analysis, computer coding, software testing, even writing programs, are being increasingly automated, as are accounting and book-keeping. On factory shop floors, people are already being replaced by automated systems and robots. This is displacing tens of thousands of workers in offices and factories; more sophisticated forms of AI will cause greater disruption and will reach higher-level jobs.

This writer has argued, based on historical evidence, that technology disruptions ultimately create more jobs. However, there is a lag, and most of the new jobs require skills that existing workers do not possess. There is, therefore, a painful interim period of fewer jobs. Governments must proactively plan for this by immediately providing re-skilling opportunities and adequate social safety nets to tide workers over a period of higher unemployment.

Beyond this, there are serious worries about more radical changes, with machines, having long surpassed human physical capabilities, now overtaking human intelligence. The practical and philosophical dimensions of this are still being debated, with no consensus on what it might mean or how to deal with it. Some renowned experts have suggested a moratorium on the development of AI till there is clarity on a way forward. The concern stems from what autonomous intelligent systems could do, given their superiority over humans. Yet there is already data showing that driverless cars are safer than human-driven ones.

According to figures from Google-backed Waymo, its driverless taxis in San Francisco and Phoenix, over the 33 million miles they completed in 2024, had 81 percent fewer collisions causing injuries and 64 percent fewer accidents reported to the police. Handing over many tasks to AI machines, especially those involving split-second reactions and those related to safety, therefore seems sensible and even inevitable.

Automated cars, though, face ethical dilemmas: for example, when avoiding a serious collision necessarily means hitting either a young person or a senior citizen. The algorithm for decision-making in such cases will inevitably reflect the values of the programmer. Similarly, a choice may implicitly be made on the basis of gender, race or colour. Biases and preferences, articulated or unconscious, get embedded in the algorithm and rest on values and ethics. As we hand greater control and decision-making to machines, these considerations of ethics will assume great importance.
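To make this concrete, here is a deliberately simplified, purely hypothetical sketch (in Python) of such a decision rule. Every name and number in it is invented for illustration; no real vehicle's software is being described. The point is that the “ethics” of the system reduce to weights that somebody chose.

```python
# Hypothetical illustration only: how a programmer's values can end up
# hard-coded in a collision-avoidance decision. Not any real system.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    age: int

# The "ethics" of the system live in numbers like these. Whoever picks
# the weights decides, implicitly, whose safety counts for more.
HARM_WEIGHT_BY_AGE = {
    "child": 1.0,   # assumed: harm to a child is weighted highest
    "adult": 0.8,
    "senior": 0.6,  # ...and harm to a senior citizen lowest
}

def harm_weight(p: Pedestrian) -> float:
    """Map a pedestrian to a harm weight; the mapping is a value judgment."""
    if p.age < 18:
        return HARM_WEIGHT_BY_AGE["child"]
    if p.age < 65:
        return HARM_WEIGHT_BY_AGE["adult"]
    return HARM_WEIGHT_BY_AGE["senior"]

def choose_swerve_target(options: list[Pedestrian]) -> Pedestrian:
    """Pick the 'least harmful' target according to the chosen weights."""
    return min(options, key=harm_weight)

# Faced with a 12-year-old and a 70-year-old, this policy swerves
# towards the senior citizen, purely because of the weights above.
print(choose_swerve_target([Pedestrian(age=12), Pedestrian(age=70)]))
```

Swap the weights and the same car makes the opposite choice; the moral judgment lives entirely in those numbers, not in the machinery around them.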

It is in this context that the report we began with takes on special importance. Defence establishments around the world are now working on AI-enhanced technology to create more lethal weapons, including so-called lethal autonomous weapon systems (ironically abbreviated as LAWS, and more commonly and appropriately called “slaughterbots”). These are AI-based robotic weapons that can independently, without human assistance, identify, select and attack targets, using software that includes facial recognition from a given profile or photo. Unlike unmanned military drones, which are controlled by a remote human “pilot” who decides whether or not to fire a weapon, these autonomous weapons make their own decisions without any human intervention or control. They will doubtless be useful for identifying and attacking a tank, a gun emplacement or a military bunker. The problem arises when the target is a human. Mission-focused LAWS may well kill innocent women and children too, writing off such deaths as what is euphemistically called “collateral damage”.

The new ethics projects are being funded by DARPA, famed for its blue-sky, pioneering work (DARPA was instrumental in creating the internet, for example). The effort, with a budget of $5 million this year and $22 million next year, will involve philosophers, amongst others, in studying the ethics of such autonomous weapons. This would be a laudable initiative, but for the report that most of the contractors are armament-makers. This is rather like asking an encounter- and bulldozer-happy police force to enforce human rights! Yet, one hopes the effort will set some ethical guard rails and, more importantly, stir up debate.

Who will follow any ethical guidelines is, however, not clear. A recent attack, using explosives implanted in pagers and cell phones, killed dozens of innocent civilians in the process of targeting a few leaders of Hamas and Hezbollah. And despite (or because of?) human control, precision bombs have killed tens of thousands of innocent civilians in Palestine, including children and babies, making a mockery of ethics and human values. Sadly, this, like the earlier killing of hundreds of Israeli civilians, was the work of democratically elected governments.

Studies and research on runaway AI ruling over humans and on the ethics of autonomous weapons are necessary and most welcome. Equally relevant, however, would be studies of the ethics of presidents and generals who use technology to pursue their goals inhumanely. That humans are humane and machines are not is supposedly what puts us in a superior position. Yet the facts, including genocides in Sudan, Myanmar, Gaza and elsewhere, belie this assumption about humanity. Are we, as a species, evolving towards the insensitivity of machines?

 

Kiran Karnik is a public policy analyst and author. His most recent book is ‘Decisive Decade: India 2030, Gazelle or Hippo’.
