Artificial Intelligence (AI) is widely used in society today. The (mis)use of biased data sets in machine learning applications is well known, resulting in discrimination against and exclusion of citizens. Another example is the use of non-transparent algorithms that cannot explain themselves to users, resulting in AI that is not trusted and therefore not used even when its use would be beneficial. Responsible Use of AI in Military Systems lays out what is required to develop and use AI in military systems in a responsible manner. It discusses current developments in the emerging field of Responsible AI as applied to military systems in general, not merely weapons systems. The book takes a broad and transdisciplinary scope, with contributions from the fields of philosophy, law, human factors, AI, systems engineering, and policy development. It is divided into five sections: Section I covers practical models and approaches to implementing military AI responsibly; Section II focuses on the liability and accountability of individuals and states; Section III deals with human control in human-AI military teams; Section IV addresses policy aspects such as multilateral security negotiations; and Section V focuses on 'autonomy' and 'meaningful human control' in weapons systems.