Systemic risks of artificial intelligence
- Project team:
Carsten Orwat (project leader), Jutta Jahnel (project coordinator), Alexandros Gazos, Lucas Staab
- Funding:
Federal Ministry of Education and Research
- Start date:
2024
- End date:
2025
- Research group:
Project description
So far, systemic risks have become widely known mainly through the financial crisis and climate change. Although they are conceptualized in very different ways, complex relationships, interactions, and interdependencies often play a role, as do emergent effects, feedback loops, and tipping points. Such risks can lead to widespread dysfunction or even the failure of an entire system. The systemic risks of AI applications have received little research attention to date, especially with regard to fundamental rights. However, there are already concrete indications of systemic risks arising from the coupling of AI-based services and products with foundation models such as large language models. The subproject analyzes the various causes, specific mechanisms, and forms of damage associated with systemic risks of AI in order to derive suitable forms of governance and regulation from these analyses.
This subproject on systemic risks is part of the two-part interdisciplinary project “Systemic and Existential Risks of AI”, which investigates such risks both theoretically and empirically. The aim is to combine knowledge from different disciplines to enable sound, in-depth assessments and recommendations. In addition to the team’s own research, external expert opinions on specific issues will also be commissioned.
In contrast to the direct consequences of AI for affected individuals or companies, there are no consistent approaches to assessing and acting on the potential systemic or existential risks of AI. Recently, however, there have been concrete indications of systemic risks, for example from the consequences of generative AI for society. Furthermore, the media and various stakeholders in the AI field point to existential risks for humanity as a whole that could arise from the unlimited and uncontrolled development of AI. Early consideration of possible risks and concerns is essential for the successful and socially acceptable implementation of AI technologies, and thus crucial for their economic success.
Contact
Karlsruhe Institute of Technology (KIT)
Institute for Technology Assessment and Systems Analysis (ITAS)
P.O. Box 3640
76021 Karlsruhe
Germany
Tel.: +49 721 608-26116