The authors analyze several member states’ national AI strategies in order to determine the extent to which they have been influenced by European AI policy. The countries examined are Portugal, the Netherlands, Italy, the Czech Republic, Poland, Norway and the Nordic countries as a whole.

The anthology’s starting point is the European Commission’s approach to “trustworthy AI”, as expressed in the Commission’s guidelines for ethical and reliable AI, in its 33 policy and investment recommendations for trustworthy AI, and in its White Paper on Artificial Intelligence. A central, guiding question is to what extent the elements of the Ethics Guidelines, such as the ethical principles and key requirements, are reflected in the national strategies. Furthermore, the authors examine how the countries discuss and promote AI in terms of applicability, consumer awareness, entrepreneurship, environmental concern and inequality.

Contributors: Stefan Larsson, Associate Professor at LTH, Lund University / Affiliated Researcher at the AI Sustainability Center, and Fredrik Heintz, Linköping University / Affiliated Researcher at the AI Sustainability Center

Human-centred AI in the EU


A variety of deep learning architectures have been developed for predictive modelling and knowledge extraction from medical records. Several models place strong emphasis on temporal attention mechanisms and decay factors as a means to capture highly relevant information about the recency of medical events while facilitating medical-code-level interpretability. In this study we utilise such models with a large Electronic Patient Record (EPR) data set consisting of diagnoses, medication, and clinical text data for the purpose of adverse drug event (ADE) prediction. The first contribution of this work is an empirical evaluation of two state-of-the-art medical-code-based models in terms of objective performance metrics for ADE prediction on diagnosis and medication data. Secondly, as an extension of previous work, we augment an interpretable deep learning architecture to incorporate numerical risk and clinical text features, and demonstrate how this approach yields improved predictive performance compared to the other baselines. Finally, we assess the importance of attention mechanisms with regard to their usefulness for medical-code-level and text-level interpretability, which may facilitate novel insights into the nature of ADE occurrence within the health care domain.
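The code-level interpretability described above comes from inspecting attention weights: the model assigns each medical code in a patient's record a weight, and the weights indicate which codes drove the prediction. The paper's actual architecture is not reproduced here; the following is only a minimal, dependency-free sketch of dot-product attention over toy code embeddings (all names and values are illustrative assumptions):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_over_codes(code_embeddings, query):
    """Dot-product attention over a patient's medical-code embeddings.

    Returns a context vector (a weighted summary used for prediction)
    and per-code attention weights, which serve as the code-level
    interpretability signal: larger weight = more influential code.
    """
    scores = [sum(c * q for c, q in zip(code, query)) for code in code_embeddings]
    weights = softmax(scores)
    dim = len(query)
    context = [sum(w * code[d] for w, code in zip(weights, code_embeddings))
               for d in range(dim)]
    return context, weights

# Toy example: three medical codes with 2-dimensional embeddings,
# queried by a (hypothetical) patient-state vector.
codes = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]
context, weights = attention_over_codes(codes, query)
# weights sum to 1; codes aligned with the query receive larger weights
```

In the models evaluated in the paper, such weights are additionally modulated by temporal decay so that recent events count for more; that mechanism is omitted from this sketch.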

Author: Jonathan Rebane, AI and Machine Learning Expert at the AI Sustainability Center

Journal of Artificial Intelligence in Medicine


This article uses a socio-legal perspective to analyse the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU. Particular emphasis is placed in this article on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on Artificial Intelligence, published by the EU Commission in February 2020.

Author: Stefan Larsson, Associate Professor at LTH, Lund University/ Affiliated Researcher at AI Sustainability Center



This conceptual paper addresses the issue of transparency as linked to artificial intelligence (AI) from socio-legal and computer-scientific perspectives. The authors discuss and show the relevance of the fact that transparency expresses a conceptual metaphor of more general significance, linked to knowing, whose positive connotations may have normative effects on regulatory debates. They also draw up a possible categorisation of aspects related to transparency in AI, or what is interchangeably called AI transparency, and argue for the need to develop a multidisciplinary understanding in order to contribute to the governance of AI as applied in markets and in society.

Authors: Stefan Larsson, Associate Professor at LTH, Lund University / Affiliated Researcher at the AI Sustainability Center, and Fredrik Heintz, Linköping University / Affiliated Researcher at the AI Sustainability Center



Co-founder Anna Felländer has co-authored a paper on the role of artificial intelligence in achieving the Sustainable Development Goals (SDGs), together with Max Tegmark, Ricardo Vinuesa and other prominent researchers in the field. The authors conclude that AI can enable the accomplishment of 134 targets across all SDGs, but that it may also inhibit 59 targets.




The report draws on socio-legal theory in relation to growing concerns over the fairness, accountability and transparency of societally applied artificial intelligence (AI) and machine learning. The purpose is to contribute to a broad socio-legal orientation by describing the legal and normative challenges posed by applied AI. To do so, the report first analyses a set of problematic cases, e.g. image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI within computer-scientific AI research. The report finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies, and concludes by arguing for the need for a multidisciplinary approach in AI research, development and governance.

Author: Stefan Larsson, Associate Professor at LTH, Lund University / affiliated researcher at AI Sustainability Center


This is an inventory of the state of knowledge on the ethical, social, and legal challenges of artificial intelligence, carried out in a Vinnova-funded project led by Anna Felländer. Based on a survey of reports and studies, a quantitative and bibliometric analysis, and in-depth studies of health care, telecom, and digital platforms, three recommendations are given. Sustainable AI requires that we (1) address regulatory issues in a broad sense, (2) stimulate multidisciplinarity and collaboration, and (3) recognise that trust is central to the use of societally applied artificial intelligence and machine learning, and requires more knowledge of the relationship between transparency and responsibility.

Authors: Stefan Larsson, Lund University / Fores; Mikael Anneroth, Ericsson Research; Anna Felländer, AI Sustainability Center; Li Felländer-Tsai, Karolinska Institutet; Fredrik Heintz, Linköping University; Rebecka Cedering Ångström, Ericsson Research

Available in both English and Swedish

More from our researchers

Fredrik Heintz

Fredrik Heintz is an Associate Professor of Computer Science at Linköping University, where he leads the Stream Reasoning group within the Division of Artificial Intelligence and Integrated Systems in the Department of Computer Science. His research focuses on artificial intelligence, especially autonomous systems, stream reasoning, and the intersection between knowledge representation and machine learning. He is the Director of the Graduate School of the Wallenberg AI, Autonomous Systems and Software Program, the President of the Swedish AI Society, and a member of the European Commission’s High-Level Expert Group on AI.

Stefan Larsson

Stefan Larsson is a lawyer (LLM) and Associate Professor in Technology and Social Change at Lund University, Department of Technology and Society. He holds a PhD in Sociology of Law as well as a PhD in Spatial Planning, and his research focuses on issues of trust and transparency in digital, data-driven markets and on the socio-legal impact of autonomous and AI-driven technologies. In addition, Dr Larsson is a scientific advisor to the Swedish Consumer Agency, the AI Sustainability Center (AISC), and the Swedish Agency for Digital Government (DIGG).

Ricardo Vinuesa

Dr. Ricardo Vinuesa is an Associate Professor at KTH Royal Institute of Technology. He obtained his Bachelor’s degree in Mechanical Engineering at the Polytechnic University of Valencia (Spain), and he completed his MS and PhD in Mechanical and Aerospace Engineering at the Illinois Institute of Technology in Chicago (USA). His main area of research is turbulent flows in complex geometries, such as the flow around airplane wings or through urban environments. His work is mainly based on high-performance simulations on large supercomputers, and on the use of machine-learning methods to analyze these flows. His research team has pioneered the use of deep neural networks to model and predict wall-bounded turbulence. He has led international efforts to combine experimental, numerical and AI-based methods to understand complex flows, which are responsible for around 25% of the energy used by industry worldwide. He is the recipient of several awards, including the 2020 Göran Gustafsson Foundation Award to Young Researchers. He has published over 60 articles in high-impact international journals, and he has an h-index of 24. He has recently led an international multidisciplinary study assessing the impact of AI on the 17 UN Sustainable Development Goals.

Li Felländer-Tsai

Li Felländer-Tsai is Professor and Senior Consultant in Orthopaedics at Karolinska Institutet and Karolinska University Hospital. She is also Director of the Center for Advanced Medical Simulation and Training (CAMST) at Karolinska University Hospital. Li has previously been registrar of the Swedish National Knee Ligament Register and is active in registry-based and clinical research as well as surgical technology. She is co-editor of Acta Orthopaedica and a co-opted member of the Board of the European Federation of National Associations of Orthopaedics and Traumatology (efort.org).