What AI can really do

9 Dec 2024

Where is artificial intelligence heading? LMU researchers on the use and limitations of the technology in medicine, business, and society.

Artificial intelligence (AI) is a key technology that enables a wide range of applications. It is already part of our everyday lives: it powers translation tools and chatbots and is used in areas such as medicine.

What are the challenges associated with AI? What aspects are important for the technology’s further development? Scientists from various disciplines across LMU shed some light on the following issues in this article:

  • AI explainability
  • Democratizing AI
  • Hate speech and AI
  • AI in medicine
  • New business models with AI
  • Fake news and AI
  • AI and copyright
  • Data quality within AI
  • AI and literature

Prof. Dr. Gitta Kutyniok is an expert in the mathematical foundations and explainability of AI. | © LMU

How can humans manage to stay in control when AI is being used more and more widely?

“Thanks to artificial intelligence (AI), we are now on the cusp of the 4th Industrial Revolution — our entire society can expect to see radical change in the next five to ten years. If we are to remain in control of this rapid development, we need to have a fundamental, i.e. mathematical, understanding of AI systems. Unfortunately, that is something we have not yet fully developed, especially with regard to the use of AI in critical infrastructure; likewise, there are still major reliability problems with AI systems.

With its ‘right to explanation,’ the EU’s AI Act gives us a framework for retaining control. Specifically, it means that every AI system must provide an explanation for every decision it makes. In the simplest case, the components of the input data that are most relevant to the AI’s decision are marked. For example, if an AI rejected your loan application, you would expect the explanation to highlight factors such as your salary. Explainability algorithms therefore offer an excellent way of checking whether an AI’s decision makes sense.”

Prof. Dr. Gitta Kutyniok is Chair of Mathematical Foundations of Artificial Intelligence at LMU.

Prof. Dr. Gitta Kutyniok: Understanding how machines learn
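
To make the idea of marking decision-relevant input components more concrete, here is a minimal sketch of one simple explainability technique, occlusion-based feature attribution, applied to a toy loan-approval model. The features, data, and model are illustrative assumptions for this article, not part of Kutyniok’s research.

    # A minimal sketch of occlusion-based feature attribution, one simple way to
    # "mark" the input components most relevant to a decision. The loan features,
    # data, and model below are illustrative assumptions, not a real credit system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["salary", "existing_debt", "employment_years"]

    # Synthetic training data: approval loosely depends on salary and debt.
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    def explain(applicant):
        """Score each feature by replacing it with the training mean and
        measuring how much the approval probability shifts."""
        base = model.predict_proba(applicant.reshape(1, -1))[0, 1]
        attributions = {}
        for i, name in enumerate(features):
            occluded = applicant.copy()
            occluded[i] = X[:, i].mean()
            attributions[name] = base - model.predict_proba(occluded.reshape(1, -1))[0, 1]
        return base, attributions

    prob, contribs = explain(np.array([-1.5, 1.0, 0.2]))  # low salary, high debt
    print(f"approval probability: {prob:.2f}")
    for name, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:>18}: {value:+.2f}")

Occlusion is only one of many attribution methods; the common thread is that the explanation points to the parts of the input that drove the decision, which is what makes a sanity check by a human possible.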

Björn Ommer conducts research on computer vision, i.e. the development of algorithms that enable machines to understand visual data on a semantic level. | © Ansgar Pudenz

How can we democratize AI?

“Generative AI has established itself as a key enabler of future technological progress. GenAI is what makes it possible to create content like images or texts on demand. However, for a long time only the big tech companies were able to use it, as not only the training but also the use of these models demands huge computing capacities.

Democratizing AI therefore means making these models so compact that they can be used on affordable consumer hardware and no longer require costly data centers. With Stable Diffusion, we have developed an approach that teaches AI a more efficient language with which it can, for example, render images precisely and resource-efficiently on the computer. That means more compact AI models can produce detailed images on small-scale hardware. This opens AI up to wider use and leads to more innovation, because start-ups, companies, and researchers can freely use and further develop GenAI.”

Prof. Dr. Björn Ommer is Chair of AI for Computer Vision and Digital Humanities/the Arts at LMU.

For more on the research of Prof. Dr. Björn Ommer, see: Structures in the fog
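
As an illustration of what this democratization can look like in practice, here is a minimal sketch of generating an image with an openly available Stable Diffusion checkpoint via the Hugging Face diffusers library on a single consumer GPU. The checkpoint identifier, prompt, and hardware figures are illustrative assumptions, not prescribed by the researchers.

    # A minimal sketch of generating an image with an open Stable Diffusion
    # checkpoint via the Hugging Face diffusers library. The checkpoint name and
    # the prompt are illustrative assumptions.
    import torch
    from diffusers import StableDiffusionPipeline

    # Latent diffusion is what keeps this affordable: a 512x512x3 image is encoded
    # into a 64x64x4 latent (roughly a 48x reduction), and the expensive denoising
    # steps run in that compact latent space rather than on raw pixels.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",  # assumed checkpoint identifier
        torch_dtype=torch.float16,  # half precision, fits in a few GB of GPU memory
    )
    pipe = pipe.to("cuda")  # a single consumer GPU is enough for inference

    image = pipe("a watercolor painting of the Munich skyline at dusk").images[0]
    image.save("skyline.png")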

Prof. Dr. Sahana Udupa is conducting research on AI and extreme speech as part of several projects. | © Andreas Focke

Does AI help to catch hate speech?

“AI can help but it is also part of the problem. AI-powered manipulations have made their way into political discourse. They have expanded from bot activities in previous years to deep fakes that can be created more easily today. On the other hand, there are significant advances in AI-assisted content moderation. Still, it is difficult for current models to detect culturally coded and shifting expressions of hate and to grasp complex dimensions of speech as power.

Frameworks for responsible AI use have to be developed that keep community voices at the forefront, an approach we describe as ethical scaling. Corporations have kept their processes out of researchers’ reach, which makes any intervention even harder.”

Prof. Dr. Sahana Udupa is Professor of Media Anthropology at LMU.

Prof. Dr. Sahana Udupa: Moderating hate speech

Prof. Dr. med. Clemens Cyran

Are AI doctors better at diagnosis than human doctors?

“For radiologists at the Department of Radiology of LMU University Hospital, artificial intelligence has been part of everyday clinical routine for years. We operate a fully integrated AI platform that provides various algorithms for automated image analysis of conventional X-rays, CT, and MRI scans, delivering diagnostic information in real time to support radiologists in their diagnoses.

At LMU University Hospital, we use AI algorithms in a broad range of applications – for detecting bone fractures and lung metastases, and for diagnosing breast cancer and prostate cancer. At the Munich Oktoberfest 2024, where the on-site trauma CT scanner was staffed by the LMU Department of Radiology and relied on our AI infrastructure, staff radiologists were supported by an AI algorithm for the detection of intracranial hemorrhage following head trauma.

Results of an accompanying prospective clinical trial showed that diagnostic confidence for the detection of intracranial hemorrhage was significantly higher when the AI algorithm was included in the diagnostic workflow as a second reader. AI can make a valuable contribution in narrowly defined use cases like this, especially as a second reader in the background. By highlighting abnormalities, artificial intelligence helps radiologists further improve diagnostic accuracy when all clinical information is drawn together. AI is a supporting tool that complements human expertise; the final diagnosis remains in the hands of experienced radiologists.”

Prof. Dr. med. Clemens Cyran is Vice Chair of the Department of Radiology at LMU University Hospital.

Prof. Dr. Michael Ingrisch leads the research group for Clinical Data Science at the Department of Radiology at LMU University Hospital.

Prof. Dr. Stefan Feuerriegel’s research focuses on the challenges of AI and digitalization for companies. | © Florian Generotzky

What challenges does AI pose for companies and their processes?

“The great risk is failing to seize the opportunities that AI technology offers. AI will fundamentally change the way we work in many professions. The biggest challenge is often not the technology itself, but successfully integrating it into business processes. That means not only implementing software but also rethinking business models.

Old IT systems, rigid structures, and reluctant managers often act as a brake on progress. How many employees are already using AI on a daily basis? Probably not many, and often not in the C-suite either. But companies that don’t invest today will find themselves irrelevant tomorrow.”

Prof. Dr. Stefan Feuerriegel is Head of the Institute of Artificial Intelligence (AI) in Management at LMU.

Prof. Dr. Matthias Leistner deals, among other things, with questions of intellectual property law. | © private

How will AI copyright disputes end? What can AI be trained on?

“Looking at the global dimension of current disputes, Japan has the most liberal framework: there, AI training can be done virtually copyright-free. In the United States, most of the legal disputes are still pending. The flexible fair use doctrine of US copyright law could mean that more liberal conditions will ultimately emerge there too, but that is not certain.

The EU chose a different solution years ago. Here, training for non-commercial research purposes is free under certain conditions. In principle, there is also a copyright exception for commercial AI training. However, rightsholders can opt out of this rule and then license their material if they wish to (as is already the case with very high-quality material from individual media groups). The catch is that the opt-out must be declared digitally in ‘machine-readable’ form. The requirements for that are highly controversial; Europe’s first court ruling on the issue has just been handed down by a district court in Hamburg. However, this is only a lower-court decision, and it will be many years before the European Court of Justice rules on the open legal questions around this issue. Until then, there will inevitably be some legal uncertainty.

A crucial point will be how this legal uncertainty will affect copyright holders on the one hand, but also Europe as a place for innovation on the other. European copyright law only applies if the models have been trained in Europe (which is not the case for the big US providers). The EU has now attempted to make some amendments in the AI Act to extend the regulations (which fundamentally only apply within the region) to all AIs that are offered in Europe. It remains to be seen what effect this will have.

All in all, much still seems unclear at present. In any case, it will be essential that human authors, as the actual creators, continue to find their audience and their markets, possibly partly with the use of AI, and that they are appropriately remunerated for their work. This is not only in the interests of society at large, but also in the interests of the tech companies, which will remain crucially dependent on high-quality human input in terms of training materials for their AI models.”

Prof. Dr. Matthias Leistner is Chair of Civil Law, Intellectual Property Law with Information and IT Law at LMU.

Prof. Dr. Frauke Kreuter is an expert in data science and data quality. | © Fotostudio klassisch-modern

How good are the data that AI is built on?

“For addressing social and societal questions, the data used in AI systems often leave much to be desired. Achieving a truly representative depiction of society is challenging, and relying solely on arbitrary data scraped from the internet is unrealistic. In the coming years, national data infrastructure projects, including BERD@NFDI, will face the intriguing challenge of preparing and providing high-quality data from social processes as training data, all while protecting individual privacy and minimizing risks to individuals.

In my view, however, continuing to train AI models on poor-quality data poses even greater risks. Survey methodology is a discipline that has spent years developing useful frameworks for assessing data quality. We are currently working in interdisciplinary teams to adapt these frameworks to the AI world. One example is the ICML position paper Insights from Survey Methodology can Improve Training Data.”

Prof. Dr. Frauke Kreuter holds the Chair of Statistics and Data Science at LMU Munich.

Prof. Dr. Frauke Kreuter: The data treasure hunter

Prof. Dr. Alexander Wuttke investigates what endangers and strengthens liberal societies. He also uses digital data and AI to do this. | © LC Productions

Can (AI-driven) fake news campaigns potentially influence election results?

“Rumors and half-truths have been a feature of democratic election campaigns since the very beginning. But in this day and age, trust in traditional information watchdogs such as public media outlets and newspapers is crumbling. Fewer and fewer people are turning to institutions that fact-check sources according to the codes of journalism. Instead, social media are gaining importance as decentralized sources of information.

Against this backdrop, artificial intelligence opens up a new dimension, given that GenAI can create fake images and videos that are often initially indistinguishable from the real thing. This makes trustworthy sources all the more valuable. We are therefore experiencing two opposing things at once: the need for guidance is at a peak, while trust in the traditional authorities that could provide such guidance is at a low.”

Prof. Dr. Alexander Wuttke is Professor of Digitalization and Political Behavior at LMU’s Geschwister Scholl Institute.

Prof. Dr. Alexander Wuttke: Studying how democracies keep going

Prof. Dr. Julian Schröter takes a digital approach to studying literature. | © LMU / LC Productions

Will AI write bestsellers in the future?

“Depending on what you mean by ‘AI,’ AI has long been involved in the production of novels, some of which have become bestsellers. If you approach the question in this way and see large language models like ChatGPT as examples of AI, then the question is not very spectacular. It only becomes more explosive if you pose it more radically and ask whether AI will produce bestsellers without human intervention.

Thanks to the latest software, which, according to a report from October 2024, can apparently assess whether a book will be a bestseller, it is possible to imagine a scenario where all you have to do is tell AI to ‘write a bestseller!’ The public concern that was expressed following this announcement was interesting. It was reported in the newspapers that good literature is not a matter of recombining existing patterns, but must offer something new. An algorithm that predicts bestseller potential would not foster the creation of something new, and thus a work of art, but rather prevent it.

However, this concern rests on confusing the normative concept of art with the market phenomenon of the bestseller. Many novels that become bestsellers can more or less be described as a recombination of tried-and-tested patterns. I can well imagine a future in which stochastic large language models will be able to generate texts that meet all the requirements for becoming bestsellers.

Let’s end with a thought experiment: Suppose AI-generated novels regularly become bestsellers in the future. From that moment on, it will be impossible to prevent publishers, as commercial enterprises, from instructing AI to write bestsellers. Thousands of novels will be generated in that way every day. But to become bestsellers, they need to be bought. And people’s willingness to buy cannot be scaled up as easily as text production can. Which means that only a small proportion of the texts produced in this way can actually become bestsellers.

If AI-produced literature is cheaper to publish than books written by humans, then the model of the AI-generated entertainment product will prevail. Would that be a bad thing? For those who write novels for a living, yes. But works of art in the emphatic sense can still be written by humans — with or without the help of AI. Whether AI can produce art was not the question of this article.”

Prof. Dr. Julian Schröter is Professor of Digital Literary Studies at LMU’s Institute for German Philology.

How human should AI be or behave?

Prof. Dr. Sven Nyholm is an expert on ethical issues that arise in the development and application of AI. | © Angeline Swinkels

“The more human-like an AI technology is, the greater the risk of deception. But at the same time, the more human-like an AI technology is, the more user friendly it can potentially be. Basically, AI technologies should be as human-like as they need to be to foster important human values and interests.

However, AI technologies must not be so human-like that they are misleading or degrading, or violate human dignity. It’s also important to bear in mind what making AI technologies human-like means: it means making them imperfect and flawed in some ways, since being human is, at least in some respects, based on imperfection and limitations.

When you ask them to do too much, humans sometimes say, ‘I’m not a robot!’ So if we want to make AI technologies human-like, they can’t be like robots – so that’s a paradox right there! However, since we want AI technologies to help us with difficult tasks, we often have an interest in them being more capable than humans in certain ways, so that they’re not too flawed.

It’s important that we don’t let human-like AI technologies take over the tasks we would rather keep for ourselves as human beings. Things that are human-specific – and that are part of what makes being human beautiful and meaningful – should remain with humans and not be handed over to human-like AI technologies. It’s better to try to hand over inhuman tasks to AI technologies and leave humanity to the humans.”

Prof. Dr. Sven Nyholm is Professor of Ethics of Artificial Intelligence at LMU.
