
We need a common mindset on the use of data

In Germany, we mainly see the risks when it comes to data use. This is also due to the reputation of the General Data Protection Regulation, says law professor Christiane Wendehorst. The co-chair of the Data Ethics Commission hopes for a new mindset that also focuses on the opportunities.

Prof. Dr Wendehorst, the Corona pandemic has shown that Germany has some catching up to do in terms of digitalisation and the use of artificial intelligence (AI). What are the biggest hurdles we have to overcome - intellectually and legally?

In Europe, and especially in Germany, we tend to associate data use with something potentially threatening. This is especially true for the use of personal data and even more so for health data. The huge potential that lies precisely in the use of this data, on the other hand, is not sufficiently anchored in the public consciousness. We have seen this many times during the pandemic, from the discussion about contact tracing apps to the evaluation of vaccination data. For example, the fact that health data was analysed for research purposes in Israel, in a way that would not have been possible in Europe, was criticised very one-sidedly. Now, many things may have gone too far in Israel when it comes to the specific disclosure of health data. But the often ill-considered assumption that it is per se illegitimate or even illegal to conduct research with health data scares me. For me, this points to a mindset that has not properly grasped the potential of data and cannot distinguish between good and dangerous data use. Yes, there is a lot of misuse of data, and we must take decisive action against it. But we must not lump everything together and sleep through technical developments because we focus only on the misuse.

Can you give an example of this?

Across Europe, we are currently wrestling with the question of what impact a person's vaccination status should have on certain freedoms. But we lack reliable scientific data for such a decision. We do not have sufficient studies on whether or not a vaccinated person can pass the virus on to other people. There are studies from Israel and also a few studies from the United Kingdom, but practically no data from Europe. That should not be the case. When we ask experts why there is no data, we sometimes hear that it is not possible in Europe, with references to the General Data Protection Regulation (GDPR) or to our rules on clinical studies. Even if these may only be subjectively perceived obstacles, the bottom line is that we lack the data basis for such existential questions as whether vaccination protects against transmission. Apparently, there are also great inner inhibitions about creating or developing this data basis.

Why are people in this country willing to disclose their personal data to US companies via social networking apps, while at the same time having concerns about a privacy-compliant German app like the Corona warning app?

I am a lawyer, not a psychologist. But in my estimation, there is a whole range of reasons. It has a lot to do with lying to ourselves and with convenience. Many social networking apps offer us terrific services that make our daily lives enormously easier: communication with family and friends, for example. That is tempting. And then there is the invisibility of the price we pay for it. And also - an important difference from the Corona warning app - an enormously skilful and non-transparent communication about possible risks. The result is that we very quickly say: if I want to stay in contact with the others, I have to use it and have no choice but to agree to the conditions. With the Corona warning app, the case is quite different. It is not convenient, it soberly informs me about the processing of my personal data, and I suffer no noticeable disadvantage if I do not use it. These are all levers that ultimately lead people to refuse the processing of their personal data.

You are working together with US colleagues on the project "Principles for a Data Economy". Where do the views on the handling of personal data differ most fundamentally?

There is no single US view. The USA is a divided country, and this also applies to the issue of data use. Very conservative circles see the use of data as the use of information protected by the First Amendment to the Constitution, that is, freedom of speech and expression. As a result, they consider much of what regulates data use to be unconstitutional. For us, this is difficult to understand. Very progressive circles, on the other hand, are making a massive plea for stronger data protection legislation. Some of them are even lobbying to introduce something similar to the GDPR in the US. It is really curious to see that some of the big Silicon Valley companies are lobbying for the GDPR as well. They have long since prepared themselves at great expense and recognised that they can continue their business models even under the GDPR. It would now be a competitive advantage for them if the USA were to put something like the GDPR into force across the board.

Do you see a possible common path?

In principle, yes; after all, many American companies would even be prepared to accept something similar to the GDPR. In my research, I personally tend to advocate a "traffic light model" with three areas, each differentiated according to the type or purpose of data use: a large green area in which data use is per se legally justified - an area, by the way, with a lot of room for AI applications. The yellow area is a corridor of data uses with which one must be careful and which therefore require the free consent of the data subject. And then there is a red area of harmful data uses that are prohibited from the outset.

What would that achieve?

Both the economy and the individual could gain a lot here. For the economy, a "safe harbour" would be created: as long as I do not get near the yellow and red areas, as long as I do not harm anyone and do not create unjustifiable risks, I can do business with data freely. Only if I come close to the critical area with my business model do I have to check more closely whether the use of data is permissible. This would bring much more freedom and security to the economy. But the individual would also gain. Under the GDPR, we often have merely a semblance of autonomy. In many cases, we have the feeling that we can decide something, but actually we cannot decide anything at all. We click and agree to 50 pages of terms of use that we neither want to read nor, often, can understand. In the end, we have no choice but to click "OK". I take the view that one must instead create an area of rational indifference for the individual: I click OK and can trust that no one may - and no one will - go into the red area. Then I can move relatively safely in the data society. This means more opportunities for innovation for the economy, a better quality of life for the individual, and much more freedom for both sides.

What would be an example of the red area?

For example, intrusion into the most private area of my life, into my innermost inclinations and weaknesses. However, we should look less at the type of data and more at the effect of data use. I can process very "sensitive" health data in a completely harmless way, and I can extract highly sensitive inferences from seemingly completely harmless data - like browsing behaviour or typing speed - and use them for highly risky purposes. The GDPR looks too much at the origin, that is, at the type of data, where it comes from and how the controller obtained it. It tends to look too little at what the controller does with the data. If a data use cannot violate individual data subjects' rights at all, then I would place it in the green area. If, on the other hand, it is clear from the outset that a data use can harm the individual, for example because their vulnerability is exploited in pricing, we may already be in the red area, and then we have to define very precisely where the boundary between the yellow and red areas lies.

Where should we start in Germany to create trust among people without hindering new technologies?

There are many starting points. First of all, it is simply a matter of raising awareness: in schools, in training, in further education, in the media. But we should also create new institutions. I believe very strongly in the potential of data trusteeship models. Data trustees stand between those who provide data and those who use it - for example, between patients and a research institution. The trusteeship ensures that the data may only be used in certain ways, but also that it can be used. In this way, we create more freedom and potential for business and research, and at the same time more security for the individual, who is often overwhelmed by declarations of consent. We need a green area of good data uses in our mindset. At the moment, everything is focused on the areas that are potentially yellow or red.

Do you anticipate that the introduction of the European data infrastructure project GAIA-X will change the data mindset in Germany?

Things will certainly change for the better. When we talk about the data mindset, GAIA-X is an important building block for creating trust. At the moment, we do not yet know all that can emerge under the GAIA-X umbrella. But it has a lot of potential.

What else is important to change the data mindset so that we see the opportunities rather than the threats?

For me, the keywords "data awareness" and "data skills" are central here, because we tend to perceive as a threat what we do not understand and do not master ourselves. At the moment, unfortunately, most people have little or no data literacy. We need to develop basic data literacy at a young age. But actually, we need it at all ages, so that people once again feel that they understand what it is all about, that they can have a say and a part in shaping it, and do not see data use as threatening. This would have to be a core element of such a strategy towards a new data mindset.

Christiane Wendehorst has been Professor of Civil Law at the University of Vienna since 2008. Her research currently focuses on legal aspects of digitalisation, and she has worked as an expert on topics such as digital content, the Internet of Things, AI and the data economy, for example for the European Commission, the European Parliament, the German Federal Government, the ELI and the ALI.