When: December 3rd, 2019, 14.00-16.30
Where: Lindholmen Science Park, Gothenburg
The AI Sustainability Center, Chalmers AI Research Centre, and AI Innovation of Sweden welcome you to join a discussion on the challenges and opportunities of the upcoming EU regulatory framework, how organizations can stay ahead of the regulatory curve, and the dilemmas of AI transparency.
14.00 – Doors open, mingle
14.15 – Welcome and introduction
14.20 – Introduction to General Insights and AI Sustainability Center and CHAIR partnership
14.30 – Introduction of today’s theme, ‘Transparency and evolving legal frameworks’
14.35 – Introduction to a new research report on the socio-legal dilemmas of AI and transparency
Speaker: Stefan Larsson, Associate Professor LTH / affiliated researcher at AI Sustainability Center
14.55 – Q&A
15.05 – Panel introduction by moderator
15.10 – Panel: Transparency and the evolving legal and regulatory landscape
David Frydlinger – Managing Partner, Cirio law firm
Olle Häggström – Professor, Mathematical Sciences, Chalmers
Agnes Hammarstrand – Lawyer/Partner, Delphi
Moderated by: Elaine Weidman Grunewald, co-founder, AI Sustainability Center
15.30 – Q&A / Discussion
15.45 – Coffee and Networking
16.30 – Event end
Chalmers AI Research Centre and AISC share a joint focus on AI and ethics, where transparency is a key concept: it is often seen as a cornerstone for understanding data-driven technologies and as vital for achieving accountability. Going forward, legal and ethical frameworks will be necessary for companies to ensure that AI is developed and implemented ethically, and to maintain trust in both the technology and the organizations using it.
The report draws on socio-legal theory in relation to growing concerns over the fairness, accountability, and transparency of societally applied artificial intelligence (AI) and machine learning. Its purpose is to contribute to a broad socio-legal orientation by describing the legal and normative challenges posed by applied AI. To do so, the report first analyses a set of problematic cases, e.g. image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI within computer-scientific AI research. The report finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies, and concludes by arguing for a multidisciplinary approach to AI research, development, and governance.