Recommended actions for “trustworthy AI”
“Artificial Intelligence” (AI) is already influencing our lives today and will do so even more in the future. AI-based technologies bring many advantages, but also harbor risks. In this context, a new policy paper by the research consortium “Forum Privacy”, which is coordinated by Fraunhofer ISI, makes 15 recommendations on how to not only preserve, but even promote human self-determination in spite of AI.
AI technologies have made rapid progress over the last few years and are already used in a huge variety of settings in business, industry, medicine and many other fields. The focus here is frequently on the undeniable advantages that AI brings: Using AI technologies in healthcare for people with impaired vision or hearing, for example, can help to compensate for limited sensory capabilities. Another widespread application is personalized advertising, which is tailored to the personal profiles of social media platform users.
However, it becomes problematic when advertising is personalized according to psychometrically determined personality traits, i.e. when a psychological profile is created or an individual's personality is read digitally using “micro-targeting”. The individualized marketing or election campaign messages generated in this way can foster behavioral manipulation, which is fundamentally opposed to the idea of human self-determination. This also applies when algorithms “decide” on people's creditworthiness and educational opportunities, or when self-learning computer systems synthetically create deceptively real images, texts or videos that are then used to harm people (deepfakes).
Foundations for “trustworthy AI”
In order to prevent such misuse, AI should follow the idea of technology development oriented towards the common good, which aims to preserve and expand democratic values and basic rights. It is therefore especially important to lay the foundations for trustworthy AI now, particularly as complete transparency and control of AI – as with other complex technologies – is neither possible nor realistic. Despite this, people should be able to understand AI-based decision logic, which is why principles such as transparency and traceability have an important role to play.
In this context, Forum Privacy, the research consortium coordinated by Fraunhofer ISI, has written a policy paper on the topic of “Artificial intelligence and self-determination” and drawn up 15 recommended actions for trustworthy AI that not only preserve, but can even promote human self-determination in spite of artificial intelligence.
One of the recommendations is that algorithmic decision-making systems that affect human co-existence must be based on democratically determined fairness criteria. In addition, the level of protective mechanisms – for instance, regulation by the legislator – should increase as the damage potential of AI increases. Besides stricter obligations for the product or technology manufacturers of potentially dangerous AI applications, the representation of data subjects' interests should also be strengthened. In such a concept, civil society initiatives and intermediaries such as data protection supervisory authorities or consumer protection organizations would offer advice about the risks of AI, and would be involved in the critical assessment of AI applications, their certification and the representation of data subjects' interests.
Strengthen critical evaluation skills
Murat Karaboga, a privacy researcher at Fraunhofer ISI and co-author of the policy paper, emphasizes: “In future, more weight should be given to collective participation in shaping AI. In addition to promoting consumers' critical evaluation skills, this means strengthening institutions, such as data protection authorities or consumer protection organizations, in order to be able to conduct independent checks on AI. In view of Germany's and Europe's objective of pursuing the so-called 'third way' of AI for the common good – and not the purely profit-oriented digital capitalism of globally dominant IT businesses or the totalitarian digital authoritarianism of the Chinese variety – it should follow its words with action.”
The policy paper is part of Fraunhofer ISI's research activities in the field of Artificial Intelligence. These analyze the development and implementation of AI from an innovation perspective. AI research at Fraunhofer ISI is based on a systemic approach that considers technical, economic, political and social aspects and identifies both the potentials and the critical effects of AI.
The Fraunhofer Institute for Systems and Innovation Research ISI analyzes the origins and impacts of innovations. We research the short- and long-term developments of innovation processes and the impacts of new technologies and services on society. On this basis, we are able to provide our clients from industry, politics and science with recommendations for action and perspectives for key decisions. Our expertise is founded on our scientific competence as well as an interdisciplinary and systemic research approach.