Daniele Amoroso

AI-Based Decision Support Systems and War Crimes: Some Remarks in the Light of a Recent Independent Investigation


Abstract

Recent reports by +972 magazine have highlighted the extensive use of AI-based decision support systems by the Israeli Defense Forces (IDF) in military targeting. In particular, the investigation into the use of the so-called ‘Lavender’ system to identify members of the Hamas military wing describes decision-making processes that, if confirmed, could support allegations of war crimes such as ‘Attacks against the civilian population’ and ‘Excessive collateral damage’ under Art. 8(2)(b)(i) and (iv) of the ICC Statute. To be clear, Lavender is not an Autonomous Weapon System (AWS), but a decision support system that merely provides recommendations for the identification of legitimate targets and the execution of attacks. The final decision on target selection and engagement, in other words, remains with human commanders and operators. Yet, the +972 investigation illustrates how the pervasive integration of AI into decision-making processes, even when it does not take humans out of the loop, can profoundly affect the conduct of hostilities. Against this backdrop, this article will use the +972 investigation into Lavender as a case study to discuss the implications, for international criminal law, of the increasing reliance on AI systems in targeting processes, considering both the hurdles to establishing individual criminal responsibility resulting from such reliance, and the new ways of perpetrating criminal conduct made possible by this technology.

Keywords

  • Lavender
  • artificial intelligence
  • individual criminal responsibility
  • mens rea
  • proportionality


