

What we do

We are creating a world-leading center for identifying, measuring and governing the ethical and societal implications of AI and data-driven technology. The AI Sustainability Center is a collaborative, research-focused environment for applying AI sustainability strategies and frameworks. The center focuses on mitigating ethical risks as well as facilitating, through proactive action, the vast gains that AI can bring to organizations, society and individuals. It offers partners an opportunity not only to avoid future pitfalls, but also to gain a competitive edge by applying AI in a sustainable way.

The AI Sustainability Center Ethical Risk Profiler, powered by MetricStream

The AI Sustainability Center and MetricStream jointly offer an automated tool for risk scanning of data and AI solutions. By combining MetricStream’s cloud-based governance, risk management, and compliance platform with AISC’s framework for Sustainable AI, the tool helps organizations gain insight into their use of AI and data-driven solutions and into where ethical and societal risks might occur.

AI poses ethical risks that are difficult to predict

Industry professionals, researchers, policymakers, regulators and individuals need to be aware of the ethical and social implications of AI. Risks such as privacy intrusion, discrimination, and the proliferation of fake news can result in negative consequences – intended or unintended – if AI is not governed in a sustainable way.

Unintended ethical breaches are often the result of algorithms trained on historical data that incorporates bias. Programmers, who often lack the skills and knowledge to understand AI’s broader societal implications, can embed values intentionally or unintentionally. Breaches can also result from immature AI, for example when algorithms are trained on a data set that is insufficient or not diverse enough.

Technology is ahead of regulation


Organizations and individuals may rely on being GDPR-compliant. But because technology is ahead of regulation, this can create a false sense of security: there is a regulatory blind spot.


Misuse / overuse

An AI application or solution could be overly intrusive (drawing on open data that is too broad or too deep), or it could be used by others for unintended purposes.

The bias of the creator

Values and biases are intentionally or unintentionally programmed in by the creator, who may also lack the knowledge and skills to anticipate how the solution could scale in a broader context.

Immature AI

Insufficient training of algorithms, as well as a lack of representative data, can lead to incorrect or inappropriate recommendations.

Data bias

The available data is not an accurate reflection of reality – or of the preferred reality – and may lead to incorrect and unethical recommendations.

Ethical questions need to be at the top of the agenda

• How far can we go in collecting personal information for credit risk scoring?

• Would you want an algorithm to pick, or support your next decision about, your CEO?

• Is it acceptable for media to recommend articles to a person with racist opinions that reinforce his or her views?

• Should public agencies be contacted if a person drives drunk?

• Is it acceptable for a gaming app to capitalize on people with gambling addictions?




Funding for AI startups increased at an average annual growth rate of over 48% from 2010 to 2019.


AI is predicted to contribute $15.7 trillion to the global economy by 2030.

*Statistics according to the National Venture Capital Association, the AI Index 2019 Report, Stanford HAI, and towardsdatascience.com

Join the movement

Video spotlight

Together with the Massachusetts Institute of Technology (MIT) and the Royal Institute of Technology (KTH), the AI Sustainability Center hosted a webinar discussing the sustainable development of AI and the findings of our study, published in Nature Communications, assessing the potential positive and negative impact of AI on all 17 of the SDGs adopted by the United Nations. Speakers include: Max Tegmark (MIT), Francesco Fuso Nerini and Ricardo Vinuesa (KTH), Anna Felländer and Elaine Grunewald (AI Sustainability Center), Manuela Battaglini (Transparent Internet) and Shivam Gupta (University of Bonn). View the recording here.

Human-centred AI in the EU

Ethics, trustworthiness and human control are at the core of European guidance documents on artificial intelligence. In the Coordinated Plan on Artificial Intelligence, member states have been encouraged to develop their own national AI strategies building on the work done at the European level. This raises the question: how well is the European view of trustworthy AI echoed in national strategies and member state decision-making? Read the anthology here.

Join the conversation

How to find us

  • Address: Mäster Samuelsgatan 36, 111 57 Stockholm
  • E-mail: info@aisustainability.org

Located at Epicenter – a community of digital scale-ups, corporates, intrapreneurs and entrepreneurs.
