The AI Sustainability Center offers practical methods and tools for organizations to mitigate ethical and societal risks, ensuring human values are at the core.
A new and just approach to AI is possible: one that weighs the positive and negative impacts on people and society equally with commercial benefits and other efficiency gains. We call it sustainable AI.
We are creating a world-leading center for identifying, measuring and governing the ethical and societal implications of AI and data-driven technology. The AI Sustainability Center is a collaborative, research-focused environment for applying AI sustainability strategies and frameworks. The center focuses on mitigating ethical risks as well as helping organizations, society and individuals realize substantial gains by acting proactively. It will provide an opportunity for partners not only to avoid future pitfalls, but also to gain a competitive edge by applying AI in a sustainable way.
The AI Sustainability Center and MetricStream are jointly offering an automated tool for risk scanning of data and AI solutions. By combining the MetricStream cloud-based governance, risk management, and compliance platform with AISC’s framework for Sustainable AI, the tool allows organizations to gain insight into their use of AI and data-driven solutions and into where ethical and societal risks might occur.
Industry professionals, researchers, policymakers and regulators, and individuals need to be aware of the ethical and social implications of AI. Risks such as privacy intrusion, discrimination, and the proliferation of fake news can result in negative consequences, intended or unintended, if AI is not governed in a sustainable way.
Unintended ethical breaches are often the result of algorithms trained on historical data that incorporates bias. Programmers, who often lack the skills and knowledge to understand AI’s broader societal implications, can embed values intentionally or unintentionally. Breaches can also result from immature AI, for example when algorithms are trained on a data set that is insufficient or not diverse enough.
Organizations and individuals may rely on being GDPR compliant, but because technology moves ahead of regulation, this can create a false sense of security. There is a regulatory blind spot.
The AI application or solution could be overly intrusive (using open data that is too broad or too deep), or it could be used for unintended purposes by others.
Values and bias are intentionally or unintentionally programmed by the creator, who may also lack the knowledge and skills to anticipate how the solution could scale in a broader context.
Insufficient training of algorithms, as well as a lack of representative data, can lead to incorrect or inappropriate recommendations.
The data available may not be an accurate reflection of reality, or of the preferred reality, and can lead to incorrect and unethical recommendations.
• How far can we go in our collection of personal information for credit risk scoring?
• Would you want your algorithms to pick or support your next decision about your CEO?
• Is it OK for media to recommend articles to a person with racist opinions that reinforce his or her views?
• Should public agencies be contacted if a person drives drunk?
• Is it OK for a gaming app to capitalize on people with gambling addictions?
Billions in VC funding went into AI startups in 2019.
Funding for AI startups grew at an average annual rate of over 48% between 2010 and 2019.
AI is predicted to contribute $15.7 trillion to the global economy by 2030.
*Statistics according to National Venture Capital Association, the AI Index 2019 Report, Stanford HAI, and towardsdatascience.com
Together with the Massachusetts Institute of Technology (MIT) and the Royal Institute of Technology (KTH), the AI Sustainability Center hosted a webinar discussing the sustainable development of AI and the findings of our study published in Nature Communications, which assessed the potential positive and negative impact of AI on all 17 of the SDGs adopted by the United Nations. Speakers included Max Tegmark (MIT), Francesco Fuso Nerini and Ricardo Vinuesa (KTH), Anna Felländer and Elaine Grunewald (AI Sustainability Center), Manuela Battaglini (Transparent Internet) and Shivam Gupta (University of Bonn). View the recording here.
“The risk of biased robots is being investigated”
The AI expert: “We can reach paradise instead of the backyard”
“The 10 Most Influential Women in Technology 2020”
Ethics, trustworthiness and human control are at the core of European guidance documents for artificial intelligence. In the Coordinated Plan on Artificial Intelligence, member states have been encouraged to develop their own national AI strategies built on the work done at the European level. This raises the question: how well is the European view of trustworthy AI echoed in national strategies and member state decision-making? Read the anthology here.
Located at Epicenter – a community of digital scale-ups, corporates, intrapreneurs and entrepreneurs.