Monday, 12 September 2022
Artificial intelligence (AI) has a key role to play in business transformation. To keep these AI systems focused on benefiting people, the European Commission's AI Act is expected to come into force in 2024. It will regulate AI across the EU, and it will do so in a profound way. Mark-W. Schmidt has been head of AI at msg for several months. In this viewpoint interview, he and his colleague Christian Meyer explain why companies should start preparing for the European AI Act now at the latest.
Is artificial intelligence, like that in the classic film "2001: A Space Odyssey", a danger to humans? Why does the EU want to regulate it?
Schmidt: Certain AI applications could certainly pose dangers, and we need to address them. AI systems are trained on data that is constantly changing through updates. This means that even a system that is free of discrimination today could develop discriminatory behavior tomorrow due to data shift. One way out is to test AI systems for reliability. After all, for all the economic benefits AI can provide, we need to ensure that the technology primarily benefits people. By "reliability" we therefore mean aspects of an AI system ranging from freedom from bias, autonomy, explainability and robustness to security and data protection. The EU Commission's AI Act stipulates that this auditing of AI systems be understood as an ongoing process.
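To make the idea of ongoing auditing concrete, here is a minimal, illustrative sketch of a data-shift check in Python. It uses a two-sample Kolmogorov-Smirnov test from SciPy; the feature, threshold and data are hypothetical and not taken from any particular audit procedure.

```python
# Minimal sketch of a data-shift check, assuming numeric features.
# The feature ("age"), threshold and data are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray,
                         current: np.ndarray,
                         alpha: float = 0.05) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flags drift when the
    production distribution of a feature differs significantly
    from the training-time reference distribution."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha

# Example: a feature whose production distribution has shifted.
rng = np.random.default_rng(seed=42)
training_ages = rng.normal(loc=40, scale=10, size=1_000)
production_ages = rng.normal(loc=55, scale=10, size=1_000)  # shifted
print(detect_feature_drift(training_ages, production_ages))  # True
```

Run periodically per feature, a check like this turns the abstract requirement of continuous auditing into a concrete monitoring step.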
Does the EU's regulatory approach also entail risks?
Schmidt: The EU regulation defines AI too broadly, which means that too many applications count as AI systems. This creates the risk that we regulate too much and stifle the great potential of AI in Europe. Europe is currently striving for the important goal of digital sovereignty: we do not want to become dependent on either American or Chinese digital solutions. But how can we achieve this goal if over-regulation at the same time ensures that the most innovative technologies are not developed here in Europe?
How can we conceptualize the AI Act?
Meyer: The AI Act divides AI systems into four risk classes, depending on how much risk to human rights and consumer protection lawmakers see in each system. The classes range from Level 4, prohibited practices, through Level 3, high-risk AI systems, and Level 2, systems with transparency obligations, down to Level 1, minimal-risk systems. The high-risk level includes, among others, systems for financial services or personnel management; the transparency level covers applications such as chatbots.
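One way to picture this classification in code, purely as an illustration: the class names and example systems below are a simplification for readability, not the legal wording of the AI Act.

```python
# Illustrative mapping of the four risk classes described above.
# Names and example systems are simplified, not the legal text.
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = 4    # Level 4: prohibited practices
    HIGH_RISK = 3     # Level 3: e.g. financial services, personnel management
    TRANSPARENCY = 2  # Level 2: e.g. chatbots with disclosure duties
    MINIMAL = 1       # Level 1: minimal-risk systems

# Hypothetical examples of how individual systems might be classed.
EXAMPLE_CLASSIFICATION = {
    "hr_candidate_ranking": RiskClass.HIGH_RISK,
    "customer_service_chatbot": RiskClass.TRANSPARENCY,
    "email_spam_filter": RiskClass.MINIMAL,
}
```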
So AI systems in a wide variety of areas are affected by the regulation?
Schmidt: Exactly. And that's why companies should start taking internal inventories now: Which AI systems are they using? How would the EU Commission classify them? Are employees equipped for the demanding task of complying with the upcoming transparency and reporting obligations?
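Such an inventory can be anchored in a simple record structure. The following is a hypothetical sketch: the fields mirror the questions above and are assumptions, not requirements taken from the regulation.

```python
# Hypothetical record type for an internal AI-system inventory.
# Fields are assumptions for illustration, not regulatory requirements.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str
    provisional_risk_class: str  # e.g. "high-risk", "transparency"
    responsible_owner: str
    transparency_obligations: list[str] = field(default_factory=list)
    reporting_obligations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="applicant screening model",
        business_purpose="pre-sorting job applications",
        provisional_risk_class="high-risk",
        responsible_owner="HR analytics team",
        transparency_obligations=[
            "inform applicants about automated processing",
        ],
    ),
]
```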
And that will be costly.
Meyer: Costly? Yes. But it doesn't have to be chaotic, as it was when the GDPR was introduced. Back then, too, companies had years to prepare, yet when the regulation came into force in 2018 there was still widespread uncertainty and confusion. Companies now have the chance to prepare fully and in good time. Cross-departmental experts need to come together to address all aspects of the new regulation, from legal requirements to technical necessities to new approaches such as "Datasheets for Datasets" and "Model Cards for Model Reporting".
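As an illustration of the latter approach, here is a minimal model card sketch, loosely following the section headings proposed by Mitchell et al. in "Model Cards for Model Reporting"; all field values are hypothetical.

```python
# Minimal model card sketch; section names loosely follow Mitchell
# et al. (2019). Every value here is hypothetical.
model_card = {
    "model_details": {
        "name": "credit_default_classifier",
        "version": "1.2.0",
        "owner": "risk modelling team",
    },
    "intended_use": "supporting, not replacing, human credit decisions",
    "training_data": "internal loan applications, 2015-2021",
    "evaluation_metrics": {"roc_auc": 0.87, "false_positive_rate": 0.06},
    "fairness_considerations": "performance checked per age and gender group",
    "limitations": "not validated for applicants outside the EU",
}
```

Kept alongside each system in the inventory, documentation like this goes a long way towards the transparency and reporting obligations mentioned above.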
How can companies deal with this in the best possible way?
Schmidt: The AI Act challenges companies and institutions to apply its sometimes very abstract requirements to the actual functioning of their AI applications. This is another reason why a concrete audit procedure for AI systems, one that translates the requirements of the AI Act into workable, real-life practice, is essential. With its reliable AI audit procedure, msg already has a tried and tested method in place to help our customers take their first steps towards compliance with the AI Act.