ON THE GOVERNANCE OF ARTIFICIAL INTELLIGENCE THROUGH ETHICS GUIDELINES
This article uses a socio-legal perspective to analyse the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU. Particular emphasis is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on Artificial Intelligence, published by the EU Commission in February 2020.
Author: Stefan Larsson, Associate Professor at LTH, Lund University/ Affiliated Researcher at AI Sustainability Center
TRANSPARENCY IN ARTIFICIAL INTELLIGENCE
This conceptual paper addresses issues of transparency as linked to artificial intelligence (AI) from socio-legal and computer-scientific perspectives. The authors discuss and show the relevance of the fact that transparency expresses a conceptual metaphor of more general significance, linked to knowing, carrying positive connotations that may have normative effects on regulatory debates. They also outline a possible categorisation of aspects related to transparency in AI, or what is interchangeably called AI transparency, and argue for the need to develop a multidisciplinary understanding in order to contribute to the governance of AI as applied in markets and in society.
Authors: Stefan Larsson, Associate Professor at LTH, Lund University/ Affiliated Researcher at AI Sustainability Center; and Fredrik Heintz, Linköpings universitet/ Affiliated Researcher at AI Sustainability Center
Co-founder Anna Felländer has jointly published a paper on the role of artificial intelligence in achieving the Sustainable Development Goals (SDGs), together with Max Tegmark, Ricardo Vinuesa and other prominent researchers within the field. The authors conclude that AI can enable the accomplishment of 134 targets across all SDGs, but it may also inhibit 59 targets.
The article draws on socio-legal theory in relation to growing concerns over the fairness, accountability, and transparency of societally applied artificial intelligence (AI) and machine learning. The purpose is to contribute to a broad socio-legal orientation by describing legal and normative challenges posed by applied AI. To do so, the article first analyses a set of problematic cases, e.g. image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI within computer-scientific AI research. The article finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies, and concludes by arguing for the need for a multidisciplinary approach in AI research, development, and governance.
Author: Stefan Larsson, Associate Professor at LTH, Lund University / affiliated researcher at AI Sustainability Center
This is an inventory of the state of knowledge on the ethical, social, and legal challenges of artificial intelligence, carried out in a Vinnova-funded project led by Anna Felländer. Based on a survey of reports and studies, a quantitative and bibliometric analysis, and in-depth studies of the areas of health care, telecom, and digital platforms, three recommendations are given. Sustainable AI requires that we 1. focus on regulatory issues in a broad sense, 2. stimulate multidisciplinarity and collaboration, and 3. recognise that trust is central when building the use of societally applied artificial intelligence and machine learning, which requires more knowledge of the relationship between transparency and responsibility.
Authors: Stefan Larsson, Lunds universitet, Fores; Mikael Anneroth, Ericsson Research; Anna Felländer, AI Sustainability Center; Li Felländer-Tsai, Karolinska institutet; Fredrik Heintz, Linköpings universitet; Rebecka Cedering Ångström, Ericsson Research
Available in both English and Swedish
Fredrik Heintz is an Associate Professor of Computer Science at Linköping University, where he leads the Stream Reasoning group within the Division of Artificial Intelligence and Integrated Systems in the Department of Computer Science. His research focuses on artificial intelligence, especially autonomous systems, stream reasoning, and the intersection between knowledge representation and machine learning. He is the Director of the Graduate School for the Wallenberg AI, Autonomous Systems and Software Program, the President of the Swedish AI Society, and a member of the European Commission High-Level Expert Group on AI.
Stefan Larsson is a lawyer (LLM) and Associate Professor in Technology and Social Change at Lund University, Department of Technology and Society. He holds a PhD in Sociology of Law as well as a PhD in Spatial Planning, and his research focuses on issues of trust and transparency in digital, data-driven markets, and the socio-legal impact of autonomous and AI-driven technologies. In addition, Dr Larsson is a scientific advisor for the Swedish Consumer Agency, the AI Sustainability Center (AISC), and the Swedish Agency for Digital Government (DIGG).
Latest publication: Transparency in Artificial Intelligence, co-authored with Fredrik Heintz
Dr. Ricardo Vinuesa is an Associate Professor at KTH Royal Institute of Technology. He obtained his Bachelor's degree in Mechanical Engineering at the Polytechnic University of Valencia (Spain), and he completed his MS and PhD in Mechanical and Aerospace Engineering at the Illinois Institute of Technology in Chicago (USA). His main area of research is turbulent flows in complex geometries, such as the flow around airplane wings or in urban environments. His work is mainly based on high-performance simulations on large supercomputers and on the use of machine-learning methods to analyze these flows. His research team has pioneered the use of deep neural networks to model and predict wall-bounded turbulence. He has led international efforts to combine experimental, numerical, and AI-based methods to understand complex flows, which are responsible for around 25% of the energy used by industry worldwide. He is the recipient of several awards, including the 2020 Göran Gustafsson Foundation Award to Young Researchers. He has published over 50 articles in high-impact international journals and has an h-index over 20. He recently led an international multidisciplinary effort to assess the impact of AI on the 17 UN Sustainable Development Goals.
Li Felländer-Tsai is Professor and Senior Consultant in Orthopaedics at Karolinska Institutet and Karolinska University Hospital. She is also Director of the Center for Advanced Medical Simulation and Training (CAMST) at Karolinska University Hospital. Li has previously been registrar of the Swedish National Knee Ligament Register and is active in registry-based and clinical research as well as surgical technology. She is co-editor of Acta Orthopaedica and a co-opted member of the Board of the European Federation of National Associations of Orthopaedics and Traumatology (efort.org).