Wednesday, 24 May 2023
“The use of AI will fail without social acceptance”
Interview with Werner Achtert about the challenges and limits of AI in the public sector.
Mr. Achtert, AI technologies are used by many people in their private everyday life and in their professional environment. How far along is the use of AI in public administration?
So far, AI has been used only selectively in a few places in public administration. There are a number of chatbots for answering citizens' questions. Another area of application is AI-based document recognition. We, for example, have developed a learning system for the Berlin Landesdenkmalamt (historic site management) that recognizes connections between documents and images from the historic site management area. In real decision-making processes, however, AI is hardly found to date.
Why is that?
Essentially, this is for legal reasons. Administrative law sets strict limits on any kind of automation of administrative decisions. Only if there is no latitude in the decision-making process can that process be automated. For example, vehicle tax can be calculated and determined automatically. In addition, every administrative decision must be clearly comprehensible in accordance with the principles of the rule of law. And that is practically impossible with real AI systems – i.e., those based on learning neural networks.
There is actually no statement in administrative law on the use of artificial intelligence, only on the use of automation. In many specialist procedures, automation is already used today through the application of algorithms and rule-based systems to prepare decisions for the clerk or – if permissible – to also make decisions. In this case, however, the human being is still in control, whether as the developer of the specialist procedure or as the person in charge.
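The distinction drawn here – automation only where there is no latitude – can be made concrete with a small sketch. The rates and formula below are purely hypothetical and not the actual German vehicle tax schedule; the point is that such a calculation is fully deterministic, so the same input always yields the same result and every step can be traced and explained.

```python
# Sketch of a rule-based, no-latitude administrative decision.
# The rates below are invented for illustration, NOT real tax law.

def vehicle_tax(engine_cc: int, co2_g_per_km: int) -> float:
    """Deterministic calculation: fully traceable, no discretion involved."""
    base = (engine_cc // 100) * 2.00               # hypothetical rate per 100 cc
    surcharge = max(0, co2_g_per_km - 95) * 2.00   # hypothetical CO2 surcharge
    return base + surcharge

print(vehicle_tax(1400, 120))  # 28.0 + 50.0 = 78.0
```

Because every intermediate value follows mechanically from the input, a clerk – or a court – can reconstruct the decision line by line, which is exactly what a learned neural network cannot offer.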
A true AI system learns with the help of training data and can make decisions independently on that basis, but in individual cases these decisions are beyond human control. Ultimately, it's about applying statistical methods to large volumes of data.
If it's mainly about statistics, what role does the selection of training data play?
The training data for a learning system is the basis for the system's behavior. AI itself does not discriminate against anyone; the problem is not so much the algorithms behind an AI as the selection of data with which a model is trained. If an AI system is only shown images of light-skinned people during training, it will naturally be unlikely to recognize dark-skinned people. If an AI system is to identify crime hotspots in a big city, a representative database must be provided as a basis. We cannot solve ethical issues such as discrimination technically, but only through the correct selection of training data.
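The point about training data can be illustrated with a deliberately tiny sketch. All numbers here are invented: faces are reduced to a single made-up feature value, and the "recognizer" is nothing more than a distance check. The algorithm never changes – the gap in behavior comes entirely from what the training set covers.

```python
# Illustrative sketch (hypothetical numbers): a minimal "recognizer"
# that accepts a query if it lies close enough to any training example.

def recognizes(train_feats: list[float], query: float, threshold: float = 0.1) -> bool:
    """Recognize a face iff some training example is within the threshold."""
    return min(abs(f - query) for f in train_feats) < threshold

light_only = [0.18, 0.20, 0.22]            # training set covers one group only
print(recognizes(light_only, 0.21))         # True  — in-group query is matched
print(recognizes(light_only, 0.80))         # False — out-of-group query is missed

representative = light_only + [0.78, 0.80, 0.82]
print(recognizes(representative, 0.80))     # True  — fixed by better data alone
```

The same code, retrained on a representative sample, stops failing – which is the interviewee's point that the remedy is data selection, not a different algorithm.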
Besides the legal aspects, how important is the social acceptance when using AI in public administration?
In Germany, we have very pronounced expectations of government action. On the one hand, we expect a caring state that makes its services available to citizens as easily as possible. On the other hand, we expect citizens to be protected from state surveillance. Since digitalization is always associated with the collection and storage of data, this creates a difficult tension. For example, the healthcare card has been discussed for more than 15 years without any practical result. The main problem is not technical issues, but the fear of the “transparent citizen”.
Any kind of digitalization requires social acceptance. This is even more true for a technology such as AI, which is difficult for the layperson to understand.
The use of AI in the public sector affects not only the citizens as users but also employees working in public administration. How do they need to be prepared to work together with the technology?
AI has the potential to massively change the working environment in public administration, too. Unfortunately, the current discussion focuses heavily on negative aspects such as job cuts and the loss of human control in AI-controlled workflows. Due to demographic change, in ten to twenty years we will no longer have enough skilled workers to perform all government functions in the way we are used to today. So we will have to handle even more tasks with fewer skilled workers, and that can only be done with more digitalization and greater use of AI to assist clerks.
Currently, education and training in public administration are not sufficiently prepared for this. Neither managers nor clerks are given a sufficient understanding of digital technology in their training. The “Qualifica Digitalis”[1] project is analyzing these shortcomings and designing suitable training content.
How do the regulations compare internationally? How liberally or how restrictively are these topics regulated in Germany?
In principle, there are three global regions that need to be considered: the US, Europe and China. The US has a fundamentally different understanding of technology applications and data protection than Europe and therefore takes a much more liberal approach to the topic. China's political system is based on a different relationship between citizen and state than in western democracies and therefore has fewer restrictions on the use of AI by public institutions. Europe is characterized by very high hurdles regarding data protection and the protection of privacy rights. Within the EU, the Nordic countries fundamentally handle digitalization differently than we do. Although they also have high data protection standards, the digitalization of public administration is much further along there than in Germany. Denmark, for example, has been handling all contact between public authorities and citizens by e-mail for years. Germany has a lot of catching up to do here, as the slow implementation of the OZG (Onlinezugangsgesetz, the online access act) clearly shows.
What does this mean for the EU's competitiveness in global comparison?
The EU Commission argues that particularly high data protection standards and the planned regulation of AI are a competitive advantage for the EU. I would strongly disagree with that. When European companies develop software for the European market, they cannot use all the technological possibilities because of the regulation. A practical example: When taking out insurance, it is common practice in the US to perform a background check on the customer. For this purpose, AI systems are used that analyze publicly accessible content from social networks. In Europe, such a procedure would presumably not be permitted under the planned regulation. Software developed according to European rules would not be competitive in the US.
How do you assess developments in the regulation of AI solutions in Germany and Europe?
The AI Act is currently the subject of controversial debate at EU level. And there are also significantly different opinions on it in Germany – even within the governing coalition. In my opinion, the tendency in Germany is more toward stronger restrictions. And that could have far-reaching consequences. Experts expect that – depending on which side prevails – 60 percent of existing AI solutions could be categorized as high-risk applications and thus be banned.
Interestingly, a number of international experts are currently calling for AI development to be temporarily suspended. I don't think the proposal is very practical because research and economic development cannot simply be stopped by decree. That would be like banning the further development of the steam engine 150 years ago because boilers occasionally burst during its operation.
In Europe, we need balanced rules on the use of AI in order to take into account people's legitimate interests in protection on the one hand, but not to disconnect ourselves from technological developments on the other. We need to explain AI to people, thereby allaying their fears and thus achieving social acceptance.