The AI Sustainability Center and MetricStream are jointly offering an automated tool for risk scanning of data and AI solutions. By combining MetricStream's cloud-based governance, risk management, and compliance platform with AISC's framework for Sustainable AI, we enable organizations to gain insight into their use of AI and data-driven solutions and into where ethical and societal risks might occur.
Industry professionals, researchers, policymakers and regulators, and individuals need to be aware of the ethical and social implications of AI. Risks such as privacy intrusion, discrimination, and the proliferation of fake news can result in negative consequences – intended or unintended – if AI is not governed in a sustainable way.
Unintended ethical breaches are often the result of algorithms using historical data that incorporates bias. Programmers, who often lack the skills and knowledge to understand AI's broader societal implications, can embed values intentionally or unintentionally. Breaches can also result from immature AI, for example when algorithms are trained on a data set that is insufficient or not diverse enough.
Organizations and individuals may rely on being GDPR-compliant, but because technology is ahead of regulation, this can create a false sense of security. There is a regulatory blind spot.
The AI application or solution could be overly intrusive (drawing on open data too broadly or too deeply), or it could be used by others for unintended purposes.
Values and bias can be programmed in, intentionally or unintentionally, by a creator who may also lack the knowledge and skills to foresee how the solution could scale in a broader context.
Insufficient training of algorithms, as well as a lack of representative data, can lead to incorrect or inappropriate recommendations.
The available data may not accurately reflect reality, or the preferred reality, and can lead to incorrect and unethical recommendations.
• How far can we go in collecting personal information for credit risk scoring?
• Would you want an algorithm to pick, or support the decision about, your next CEO?
• Is it OK for media to recommend articles that reinforce a person's racist opinions?
• Should public agencies be notified if a person drives drunk?
• Is it OK for a gaming app to capitalize on people with gambling addictions?
Billions of dollars of VC funding went into AI startups in 2019.
Funding for AI startups increased at an average annual growth rate of over 48% between 2010 and 2019.
AI is predicted to contribute $15.7 trillion to the global economy by 2030.
*Statistics according to the National Venture Capital Association, the AI Index 2019 Report (Stanford HAI), and towardsdatascience.com
Co-founder Elaine Weidman Grunewald moderating the conversation with Jamie Metzl and Max Tegmark in ‘Hacking Humans: A unique conversation about gene-editing and AI’, part of Leaps Talk at the Norrsken House in Stockholm. How do we bring our ethics and values up to the level of advancement in technology and the biosciences today? A discussion on how the rapid development of gene-editing and AI needs to be guided by standards, ethics, and shared values in order to achieve breakthroughs safely.
“Sustainability Leadership – The Swedish Way”
Forbes
“The 10 Most Influential Women in Technology 2020”
Analytics Insight
“Äntligen övervakad” (“Finally Under Surveillance”)
SVT Utrikesbyrån
“Protecting Privacy While Monitoring Employee Health in the Age of COVID-19”
Triple Pundit
“Sex etikfällor” (“Six Ethical Pitfalls”)
HR People
“New Global Hub Helps Companies Avoid the Ethical Pitfalls of AI”
Triple Pundit
“AI-teknologin måste gå att lita på” (“AI Technology Must Be Trustworthy”)
Entreprenör