Helping companies implement the EU’s AI Act

31 Jan 2025

Companies should be able to easily test their AI systems to determine whether they fulfill the new requirements. Funded by the Bavarian Ministry for Digital Affairs, researchers from LMU, TUM, and UTN are devising a system for automatic verification.

The European legal initiative to regulate AI (Artificial Intelligence Act) has been in force since 1 August 2024. As of 2 February 2025, the first elements of the complex set of regulations become binding. This poses a particular challenge to small and medium-sized enterprises (SMEs) and start-ups that want to benefit from artificial intelligence and pursue innovations. A new project under the scientific leadership of LMU Professor Gitta Kutyniok is designed to support companies in fulfilling the new requirements and thereby lower barriers to the use of artificial intelligence. This Bavarian AI Act Accelerator is funded by the Bavarian Ministry for Digital Affairs and implemented by the appliedAI Institute for Europe. Other principal contributors include scientists from the Technical University of Munich (TUM) and the University of Technology Nuremberg (UTN).

Fewer manual tasks, more transparency

The new AI Act from Brussels bans, for example, AI systems posing an “unacceptable risk.” Furthermore, employees or external parties who use or offer AI systems must receive adequate training so that they possess a certain measure of “AI literacy.” The project addresses such challenges through a combination of research, education, and technological innovation. Over the course of the project, Bavarian research teams are developing scientifically grounded approaches for the practical implementation of the requirements of the AI Act. At the same time, project members are devising targeted educational offerings such as training courses and workshops to prepare companies for the new standards. Businesses will receive help with risk assessment, documentation, auditing, and practical solutions for meeting regulatory demands. Under current plans, automated verification systems will eventually permit continuous and efficient assessment of AI systems for legal compliance. These tools will minimize manual tasks and increase transparency.
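To illustrate the idea of automated verification described above, the sketch below encodes two of the mentioned requirements (the ban on unacceptable-risk practices and the AI-literacy training duty) as machine-checkable rules. Everything here is hypothetical and greatly simplified: the class names, the tiny list of banned practices, and the check logic are illustrative assumptions, not the project's actual system, and real compliance assessment involves legal interpretation that no such rule set can capture.

```python
from dataclasses import dataclass, field

# Illustrative subset only; the AI Act's prohibited practices are far broader
# and require legal interpretation.
BANNED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

@dataclass
class AISystem:
    """Hypothetical metadata record describing an AI system under review."""
    name: str
    practices: set = field(default_factory=set)
    operators_trained: bool = False  # stand-in for the "AI literacy" duty

def check_compliance(system: AISystem) -> list:
    """Return human-readable findings; an empty list means no issues found."""
    findings = []
    banned = system.practices & BANNED_PRACTICES
    if banned:
        findings.append(
            f"{system.name}: prohibited practice(s): {', '.join(sorted(banned))}"
        )
    if not system.operators_trained:
        findings.append(f"{system.name}: no documented AI literacy training")
    return findings
```

Once requirements are formalized this way, the same checks can run continuously and produce identical, transparent results for every system assessed, which is the efficiency gain the paragraph above refers to.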

“The goal of the joint research project between LMU, TUM, and UTN is to formalize the legal requirements of the AI Act, because once this is accomplished, explanation and assessment of the AI Act can follow very clear guidelines,” says Gitta Kutyniok, Chair of Mathematical Foundations of Artificial Intelligence at LMU. “An automatic verification system built on this foundation would then allow all companies, as well as testing centers throughout Europe, to evaluate AI technology in a straightforward, consistent, and fair manner. In conjunction with appliedAI, our research findings could contribute to the development of technical and legal standards, which European lawmakers could then consider.”