The report draws on socio-legal theory to address growing concerns over the fairness, accountability, and transparency of societally applied artificial intelligence (AI) and machine learning. Its purpose is to contribute to a broad socio-legal orientation by describing the legal and normative challenges posed by applied AI. To do so, the article first analyses a set of problematic cases, e.g. image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI within computer science research on AI. The article finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies, and concludes by arguing for a multidisciplinary approach in AI research, development, and governance.
Author: Stefan Larsson, Associate Professor at LTH, Lund University / affiliated researcher at AI Sustainability Center
This is an inventory of the state of knowledge on the ethical, social, and legal challenges of artificial intelligence, carried out in a Vinnova-funded project led by Anna Felländer. Based on a survey of reports and studies, a quantitative and bibliometric analysis, and in-depth studies of health care, telecom, and digital platforms, three recommendations are given. Sustainable AI requires that we 1. focus on regulatory issues in a broad sense, 2. stimulate multidisciplinarity and collaboration, and 3. recognise that trust is central when developing the use of socially applied artificial intelligence and machine learning, which requires more knowledge of the relation between transparency and accountability.
Authors: Stefan Larsson, Lunds universitet / Fores; Mikael Anneroth, Ericsson Research; Anna Felländer, AI Sustainability Center; Li Felländer-Tsai, Karolinska institutet; Fredrik Heintz, Linköpings universitet; Rebecka Cedering Ångström, Ericsson Research
Available in both English and Swedish
Fredrik Heintz is an Associate Professor of Computer Science at Linköping University, where he leads the Stream Reasoning group within the Division of Artificial Intelligence and Integrated Systems in the Department of Computer Science. His research focuses on artificial intelligence, especially autonomous systems, stream reasoning, and the intersection between knowledge representation and machine learning. He is the Director of the Graduate School for the Wallenberg AI, Autonomous Systems and Software Program, the President of the Swedish AI Society, and a member of the European Commission High-Level Expert Group on AI.
Stefan Larsson is a lawyer and Associate Professor in technology and social change at LTH, Lund University, and head of the Digital Society Program at Swedish think tank Fores. He has a PhD in Sociology of Law and a PhD in Spatial Planning. His research focuses on issues of trust and transparency on digital, data-driven markets, and the socio-legal impact of autonomous and AI-driven technologies.
Li Felländer-Tsai is Professor and Senior Consultant in Orthopaedics at Karolinska Institutet and Karolinska University Hospital. She is also Director of the Center for Advanced Medical Simulation and Training (CAMST) at Karolinska University Hospital. Li has previously been registrar of the Swedish National Knee Ligament Register and is active in registry-based and clinical research as well as surgical technology. She is co-editor of Acta Orthopaedica and a co-opted member of the Board of the European Federation of National Associations of Orthopaedics and Traumatology (efort.org).