How can trustworthy AI be implemented?

Research questions

  • How can social and economic perspectives be integrated into AI projects?
  • How can responsible, sustainable AI be realized that is compatible with people?
  • What methods of participatory technology design are suitable for AI?
  • What role do privacy, surveillance, self-determination, and automation play in the use of AI?

Projects

DatenTRAFO – New data protection governance – Technology, regulation, and transformation

The DatenTRAFO project is investigating why privacy-enhancing technologies (PETs) such as homomorphic encryption, secure multi-party computation (SMPC), differential privacy, and federated learning have only seen limited adoption to date, despite the strong regulatory push from the GDPR. In addition, the project explores ways of strengthening European providers of these technologies.
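One of the PETs named above, differential privacy, can be illustrated with a minimal sketch: a counting query is protected by adding Laplace noise whose scale is calibrated to the query's sensitivity (1 for a count) and a privacy budget ε. The function names and parameters below are illustrative examples, not taken from the project.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon):
    # ε-differentially private count: a counting query has
    # sensitivity 1, so Laplace noise with scale 1/ε suffices.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example (hypothetical data): noisy count of records above a threshold.
ages = [17, 25, 33, 41, 16, 58]
noisy_adults = dp_count(ages, lambda a: a >= 18, epsilon=1.0)
```

A smaller ε gives stronger privacy but noisier answers; production systems also track the cumulative budget spent across queries, which this sketch omits.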

PRETINA – Privacy-preserving and legally compliant eye tracking in everyday digital life

The aim of “PRETINA” is to provide developers with concrete, practical recommendations for the privacy-preserving and legally compliant use of eye tracking on behalf of users. To this end, the technological consequences, possible privacy risks, and how these risks are perceived are examined holistically. From these findings, the researchers derive criteria for using eye tracking in a way that complies with data protection regulations and is socially acceptable. These criteria are then incorporated into specific technical solutions.

Deepfakes and manipulated realities

In a study for TA-Swiss, Fraunhofer ISI investigated the topic of deepfakes: the study first compiled the technological and scientific state of the art on deepfakes and examined how they are perceived by the Swiss population. It then analyzed possible effects in journalism, law, politics, and business, and derived recommendations for action in these areas.

Trustworthy AI

This project aims to present successful examples of the implementation of trustworthy AI and to introduce the methods used by local stakeholders. In particular, it focuses on involving non-technical researchers in the development and implementation process, i.e., “embedded social scientists”, as well as on dedicated system design procedures that can be used to implement the various dimensions of trustworthy AI.

Platform Privacy – Research for a self-determined life in the digital world

Drawing on technical, legal, economic, and social science approaches, the research consortium Platform Privacy is developing an interdisciplinary understanding of the role of privacy in the digital present. The consortium develops concepts for (re)defining and guaranteeing informational self-determination and privacy in the digital world.