New collaborative projects: Artificial intelligence and gripping robots

8 Nov 2024

The BMBF is funding two new collaborative projects with LMU participation. One teaches AI models causal relationships, the other refines the tactile abilities of robots.

CausalNet: AI that understands the principle of cause and effect

Prof. Stefan Feuerriegel | © LMU

Today's machine learning models are typically based on correlations, not on causality. That is to say, they derive their results from statistical associations and probabilities, without recognizing the underlying relationships. This can lead to errors and, ultimately, poor performance. It is why ChatGPT, for instance, occasionally produces answers that sound plausible but are factually wrong or nonsensical. The inability of such programs to establish causal relationships also limits the potential of artificial intelligence for medical applications: with a model that links cause and effect, doctors could make more targeted therapy decisions. The same applies to applications in science, business, and the public sector.
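
The difference can be made concrete with a small, hypothetical example (not taken from CausalNet): when a hidden confounder drives both a treatment and an outcome, a purely correlational model attributes the effect to the wrong variable, while adjusting for the confounder recovers the true (here: zero) causal effect.

```python
# Minimal sketch of confounding (illustration only, not CausalNet code).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(size=n)               # hidden confounder: illness severity
treatment = severity + rng.normal(size=n)   # sicker patients are treated more often
outcome = -severity + rng.normal(size=n)    # ground truth: treatment has zero effect

# Correlational view: regress the outcome on the treatment alone.
naive = LinearRegression().fit(treatment.reshape(-1, 1), outcome)
print("naive estimate:", naive.coef_[0])    # clearly negative -> "treatment harms"

# Causal adjustment: control for the confounder as well.
adjusted = LinearRegression().fit(np.column_stack([treatment, severity]), outcome)
print("adjusted estimate:", adjusted.coef_[0])  # close to the true effect of 0
```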

The new collaborative project CausalNet, which has been awarded almost two million euros in funding by the German Federal Ministry of Education and Research (BMBF), has set itself the goal of getting a new generation of machine learning off the ground within three years. “We want to develop novel methods for the integration of causality into machine learning models,” says Professor Stefan Feuerriegel, who heads the Institute of Artificial Intelligence (AI) in Management at LMU and is the spokesperson of CausalNet. To integrate the principle of cause and effect into future AI models, Feuerriegel is collaborating with experts from Helmholtz AI, the Technical University of Munich (TUM), Karlsruhe Institute of Technology, and Economic AI GmbH.

The team plans to tackle the unique challenges of causal machine learning in high-dimensional environments with the help of tools from representation learning, the theory of statistical efficiency, and specific machine learning paradigms. “Furthermore, we will establish the effectiveness and robustness of our methods with theoretical results,” says AI expert Feuerriegel. This is important to ensure the reliability of the proposed methods. “Then we will incorporate causal machine learning into real applications and demonstrate the concrete benefits for business, the public sector, and scientific discoveries.”
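
As a rough illustration of what such methods look like in practice, the following sketch uses the standard doubly robust (AIPW) estimator of an average treatment effect, a textbook building block of causal machine learning. It is shown only for orientation and is not CausalNet's actual methodology; the data and models are invented.

```python
# Doubly robust (AIPW) estimate of an average treatment effect -- a generic
# textbook technique, shown for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def aipw_ate(X, t, y):
    """Augmented inverse-propensity-weighted estimate of E[Y(1) - Y(0)]."""
    e = GradientBoostingClassifier().fit(X, t).predict_proba(X)[:, 1]   # propensity
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0]).predict(X)
    psi = mu1 - mu0 + t * (y - mu1) / e - (1 - t) * (y - mu0) / (1 - e)
    return psi.mean()

# Synthetic check: the true treatment effect is 2.0.
rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 3))
t = (rng.random(5_000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
y = 2.0 * t + X @ np.array([1.0, -0.5, 0.0]) + rng.normal(size=5_000)
print(aipw_ate(X, t, y))
```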

CausalNet additionally plans to facilitate the practical use and further development of the technology by making the software, tools, and results publicly available according to the open-source principle. “Over the next three years, we will raise machine learning to a new level and increase the flexibility, efficiency, and robustness of AI applications,” says Feuerriegel.

GeniusRobot: Robots that see and grasp better through AI

Prof. Gitta Kutyniok | © LMU

The reliable gripping and manipulation of objects of all shapes and sizes is one of the major challenges in robotics – from manufacturing to medical applications. In this context, control methods that dynamically adjust the grip are still relatively under-researched. “These approaches require targeted prediction of the effects of an interaction between the robot and its environment. In our project, generative AI will be doing this work,” explains Professor Gitta Kutyniok, Chair for Mathematical Foundations of Artificial Intelligence at LMU. Without targeted prediction, robots cannot adjust flexibly, resiliently, and efficiently to changes in the environment, in the object to be gripped, or in the activity itself. “But with this capability, robots can respond immediately – for example, when an object is slipping out of their hand,” says Professor Björn Ommer from the Chair of AI for Computer Vision and Digital Humanities/the Arts.
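
How such an immediate response might look in a control loop can be sketched very roughly as follows; all sensor and actuator functions and thresholds here are invented placeholders and have nothing to do with GeniusRobot's actual software.

```python
# Hypothetical, highly simplified grip controller: tighten the grip as soon as
# the measured shear force suggests the object is starting to slip.
import time

SLIP_SHEAR_THRESHOLD = 0.8   # assumed shear-force threshold in newtons
FORCE_STEP = 0.2             # assumed grip-force increment per cycle in newtons

def control_grip(read_shear_force, set_grip_force, initial_force=2.0, cycles=1000):
    """Reactive loop; placeholder callbacks stand in for real sensors/actuators."""
    target = initial_force
    set_grip_force(target)
    for _ in range(cycles):
        if read_shear_force() > SLIP_SHEAR_THRESHOLD:  # slipping detected
            target += FORCE_STEP                       # respond immediately
            set_grip_force(target)
        time.sleep(0.001)                              # 1 kHz control cycle
```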

Such control methods require not only visual and tactile sensor systems capable of registering contact and shearing forces, but also corresponding multimodal AI models that are able to integrate and interpret sensory information from multiple complementary sources. This is precisely where the “GeniusRobot” project comes in – a collaboration involving the research groups of Gitta Kutyniok and Björn Ommer at LMU along with partner institutions such as the University of Technology Nuremberg and Dresden University of Technology.
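
A schematic picture of such a multimodal model, with an architecture invented purely for illustration (PyTorch assumed, not the project's code): camera images and tactile readings are encoded separately and fused into a joint representation, here used to judge whether a grasp is stable.

```python
# Illustrative visuo-tactile fusion network (architecture invented for this sketch).
import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    def __init__(self, tactile_dim=32, embed_dim=128):
        super().__init__()
        # Small CNN encoder for a 3 x 64 x 64 camera image (size assumed).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(embed_dim),
        )
        # MLP encoder for a vector of contact and shear-force readings.
        self.touch = nn.Sequential(nn.Linear(tactile_dim, 64), nn.ReLU(),
                                   nn.Linear(64, embed_dim))
        # Joint head, e.g. a logit for "the current grasp is stable".
        self.head = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, image, tactile):
        fused = torch.cat([self.vision(image), self.touch(tactile)], dim=-1)
        return self.head(fused)

model = VisuoTactileFusion()
stability_logit = model(torch.randn(4, 3, 64, 64), torch.randn(4, 32))
```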

Prof. Björn Ommer | © Ansgar Pudenz | Deutscher Zukunftspreis

“Our goal is to develop new, interpretable AI models that make it possible to use methods from generative AI to derive tactile information from image data in robotics,” explains AI expert Kutyniok. In this way, tactile sensor data will be predicted from camera data when planning gripping motions. “Conversely, these predictions will be converted back into camera images using a further generative model, so that objects undergoing change can be visualized directly through the robot's movements and manipulations,” adds AI researcher Ommer. This also permits the manipulation of occluded objects that the camera can only partially capture.
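
The two directions described in the quotes can be pictured with a pair of deterministic encoder-decoder stand-ins; real generative models would be far more elaborate, and the architectures below are invented for illustration only (PyTorch assumed, not GeniusRobot code).

```python
# Illustrative stand-ins for the two mappings: image -> tactile map and
# tactile map -> image (invented architectures, not the project's models).
import torch
import torch.nn as nn

class ImageToTactile(nn.Module):
    """Predict a coarse 8x8 map of contact pressure from a camera image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image):        # (B, 3, H, W) -> (B, 1, 8, 8)
        return self.net(image)

class TactileToImage(nn.Module):
    """Render a rough 32x32 view of the (possibly occluded) object from touch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 16 -> 32
        )

    def forward(self, tactile_map):  # (B, 1, 8, 8) -> (B, 3, 32, 32)
        return self.net(tactile_map)

predicted_touch = ImageToTactile()(torch.randn(2, 3, 64, 64))
reconstructed_view = TactileToImage()(predicted_touch)
```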

One of the main focuses of the development work is the interpretability of the models, which is essential for the use of generative AI in safety-critical environments. In the future, the results could therefore also open up new application scenarios in automated manufacturing and human-machine interaction, as well as furnish new scientific insights in the field of safe and multimodal AI.
