Abstract: The European Union (EU) has adopted policies addressing the research and development of artificial intelligence (AI). In light of technology-assessment debates that focus on risks to humans and on questions of human control of AI, the EU has promoted an ethical, human-centred approach to the application of AI. Identifying how the EU envisions AI is important, as this vision may shape emerging norms in AI governance as well as today's research and development of (weaponised) AI. Building on work in Human-Computer Interaction (HCI), this article derives the actor's understanding of human-AI interaction, including its conceptualisations of explainability, interpretability, and risk. An analysis of EU documents on the implementation of AI as a general-purpose technology and for military applications reveals that explainability and risk identification are crucial elements of trust, which is itself a necessary condition for the uptake of AI. Interdisciplinary approaches allow for a more detailed understanding of actors' fundamental views on human control of AI, which in turn contributes to debates on technology assessment in professionalised political contexts.