The digitalization of large spheres of human life is progressing at a rapid pace. It entails not only technical and economic but also social change. IT infrastructures make our modern society work. Networked information and communication technologies make intelligent power grids, new medical applications, and new forms of work possible. Behind them are algorithms and huge amounts of data, which hold great potential but also pose societal challenges, such as the need for adequate regulation.
The scope and dynamics of digital transformation are reflected in many ITAS projects. On the one hand, researchers work on conceptual questions such as trust and risk, governance of and by algorithms, or the ethics of learning systems. On the other hand, they analyze the advancing automation and digitalization of society on the basis of concrete digital technologies. Of particular importance is the question of how to deal responsibly with the developments of digitalization.
Digital work
Technical advances in robotics, sensor technology, and digital processes are fundamentally changing the world of work. The digital transformation affects all qualification levels. At the same time, new business areas are emerging, both for digital corporations and for start-ups. In addition to changes in industrial production, ITAS is increasingly investigating changes in the service sector. For example, the EU project Crowdwork 21 focuses on the phenomenon of platform work, where work tasks are distributed over the network. In this context, visions for a successful design of the digital transformation of the world of work are also taken into account.
Governance and algorithms
Be it the allocation of loans, jobs, or study places, the assessment of legal penalties, or even the identification of terrorists: more and more often, computer systems “have a say” in decisions that significantly influence the possibilities of free personal development. ITAS investigates the risks of discrimination by algorithms and, in the GOAL project, deals with structures and design options for the governance of algorithms, especially with regard to risks to fundamental rights and other social values.
Artificial intelligence
Another research focus at ITAS is on learning systems. Here, researchers are investigating the fundamental question of social trust in technologies that use artificial intelligence. For example, ITAS examined policy options for dealing with deepfakes - photo, audio, and video recordings that are manipulated by AI but appear realistic - on behalf of the European Parliament. In addition, the researchers are investigating societal perspectives on digitalization and the use of AI in agriculture and the bioeconomy, for example in the DESIRA project. Prevailing discourses and societal needs are also subjects of research, for example with regard to design options or the inclusion of disadvantaged groups.
(In-)Security, risk, and politics
Security is of great importance for modern societies. Nevertheless, or precisely because of this, they must constantly deal with insecurity. Politicians in particular often have to rely on uncertain knowledge when making decisions. The MOTRA project examines how the quest for security and the emergence of new insecurity are intertwined in the field of extremism prevention.
Experts
- Artificial intelligence: Reinhard Heil
- Digital work: Dr. Linda Nierling
- Digitalization in agriculture: Dr. Christine Rösch
- Governance of/by algorithms: Dr. Carsten Orwat
- (In)security, risk and politics: Dr. Christian Büscher
Further contact
Jonas Moosmüller
Public relations
Tel.: +49 721 608-26796
Projects on the topic
- DESIRA
- EU project Crowdwork
- Governance of and by algorithms (GOAL)
- MOTRA-TM
- PhD College Accessibility through AI-based Assistive Technology (KATE)
- Real-world Lab “Robotic Artificial Intelligence”
- Risks of discrimination by algorithms
- Social trust in learning systems
- Tackling Deepfakes in the new AI Legislative Framework
- Visions and best practices for the digital transformation
Publications on the topic
Organising AI for safety: Identifying structural vulnerabilities to guide the design of AI-enhanced socio-technical systems
2025. Safety Science, 184, 106731. doi:10.1016/j.ssci.2024.106731
Ask Me Anything! How ChatGPT Got Hyped Into Being
2024. Center for Open Science (COS). doi:10.31235/osf.io/jzde2
The trustification of AI. Disclosing the bridging pillars that tie trust and AI together
2024. Big Data & Society, 11 (2). doi:10.1177/20539517241249430
Demokratische Technikgestaltung in der Arbeitswelt
2024. WSI-Mitteilungen, 77 (1), 50–57. doi:10.5771/0342-300X-2024-1-50
Digital transformation of fruit farming in Germany: Digital tool development, stakeholder perceptions, adoption, and barriers
2024. NJAS: Impact in Agricultural and Life Sciences, 96, Article no.: 2349544. doi:10.1080/27685241.2024.2349544
New and emerging perspectives for technology assessment: Malevolent creativity and civil security
2024. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 33 (2), 9–15. doi:10.14512/tatup.33.2.09
Malevolent creativity and civil security: The ambivalence of emergent technologies
2024. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 33 (2), 8–54. doi:10.14512/tatup.33.2.08
Structuring different manifestations of misinformation for better policy development using a decision tree‐based approach
2024. Policy & Internet. doi:10.1002/poi3.420
Künstliche Intelligenz außer Kontrolle?
2024. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 33 (1), 64–67. doi:10.14512/tatup.33.1.64
KI-Textgeneratoren als soziotechnisches Phänomen – Ansätze zur Folgenabschätzung und Regulierung
2024. KI:Text – Diskurse über KI-Textgeneratoren. Ed.: G. Schreiber, 341–354, De Gruyter. doi:10.1515/9783111351490-021
Algorithmische Differenzierung und Diskriminierung aus Sicht der Menschenwürde
2024. Künstliche Intelligenz und ethische Verantwortung. Ed.: M. Reder, 141–166, transcript Verlag. doi:10.14361/9783839469057-009
Normative Challenges of Risk Regulation of Artificial Intelligence
2024. NanoEthics, 18 (2), Article no.: 11. doi:10.1007/s11569-024-00454-9
What do algorithms explain? The issue of the goals and capabilities of Explainable Artificial Intelligence (XAI)
2024. Humanities and Social Sciences Communications, 11 (1), Article no.: 760. doi:10.1057/s41599-024-03277-x
Drawings for Insight on Preschoolers’ Perception of Robots
2024. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 920–924, Association for Computing Machinery (ACM). doi:10.1145/3610978.3640608
TA for human security: Aligning security cultures with human security in AI innovation
2024. TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis, 33 (2), 16–21. doi:10.14512/tatup.33.2.16
Trust “in the field”: reflections on a real-world lab deploying social robots in childcare settings
2024. Proceedings of the SCRITA 2024 Workshop on Trust, Acceptance and Social Cues in Human-Robot Interaction at IEEE RO-MAN 2024, 26th–30th August 2024, Pasadena, CA, CEUR-WS.org