How artificial intelligence is changing teaching and learning
6 Nov 2023
From mathematics to theater studies, teaching at LMU is embracing the possibilities of artificial intelligence (AI).
“With AI, every student has their personal research assistant at their fingertips,” says Professor Frauke Kreuter. “One that is super-efficient, but that also makes mistakes and is liable to make things up.” Kreuter, who holds the Chair of Statistics and Data Science in Social Sciences and the Humanities, is one of a growing number of teachers at LMU who are imparting the methods, applications, and risks of AI to their students – in subjects ranging from Assyriology to business studies, and from mathematics to pedagogy.
In the social sciences, explains Kreuter, there is currently a lot of interest in large language models (LLMs) such as ChatGPT. “When it comes to academic research, they are not yet able to go deep into the field, but they can reproduce and summarize basic findings.” Moreover, she notes, they can write code for statistical analyses in the social sciences. “So I give a voice input saying which analysis I need, and I get out the appropriate code. That makes life a great deal easier, and even makes complex programming languages like Python or R accessible to other disciplines.”
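By way of illustration (not part of Kreuter’s course material), the snippet below is a minimal sketch of the kind of Python code a language model might return for a spoken request such as “regress income on education and age”. The dataset and variable names are invented, and, as Kreuter cautions, any generated code still needs to be checked before it is trusted.

```python
# A minimal sketch of the kind of code a language model might return for the
# request "regress income on education and age" -- the data here is a tiny
# invented example standing in for a real survey dataset.
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in for survey data (in practice this would be loaded from a file)
df = pd.DataFrame({
    "income":    [2800, 3400, 4100, 3000, 5200, 4600],
    "education": [10, 12, 16, 11, 18, 16],   # years of schooling
    "age":       [25, 31, 40, 28, 52, 45],
})

# Ordinary least squares regression: income explained by education and age
model = smf.ols("income ~ education + age", data=df).fit()

# Coefficient estimates, standard errors, and fit statistics
print(model.summary())
```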
According to Kreuter, the new tools also support text production and data collection: “The challenge is to integrate them in ways that are useful. We encourage students, for example, to get AI to draw up standardized lists of sources or formulate questionnaires for empirical papers. LLMs can adapt them to a target group such that, say, a 12-year-old can understand them – and translate them into 24 languages to boot. That saves a tremendous amount of time.”
Courses on AI in the social sciences do not go deep into the mathematics, but explain how to use AI and lay out the opportunities and risks. “Our approach is akin to that of a driving school,” says Kreuter. “We explain the rules of the road, how to handle the car, and how to steer it safely through traffic, but not how the body is built or how the engine works.” As her colleague Dr. Anna-Carolina Haensch points out, few students have substantially engaged with AI before now, aside perhaps from “putting a joke question to the AI.” For her part, she positively recommends trying out AI when working on papers. “The best way is to start with a familiar topic – to get a feel for where it can help and where it cannot.”
When it comes to reliability in particular, there are still “big question marks,” cautions Kreuter, who is a fellow at the Konrad Zuse School of Excellence in Reliable AI. This advanced training and research program, recently founded by LMU and the Technical University of Munich (TUM), teaches master’s and doctoral students the background to such reliability problems. As the director representing LMU, Professor Gitta Kutyniok, explains, the existence of such problems is reflected not only in the EU AI Act and the G7 Hiroshima AI Process, but also in the fact that self-driving cars are still not roadworthy.
“Even stickers on road signs can cause the system to make incorrect decisions, while in the field of medicine, AI can generate wrong diagnoses or ‘hallucinate’ structures in MRI images,” says Kutyniok, Chair of Mathematical Foundations of Artificial Intelligence at LMU. In addition to Medicine & Healthcare, and Robotics & Interacting Systems, the Konrad Zuse School also teaches courses in Algorithmic Decision Making and Mathematical & Algorithmic Foundations. “After all, solutions to the roots of reliability problems cannot emerge from fields of application, but only from theory,” says Kutyniok. The mathematical basis comprises domains such as linear algebra, stochastics, statistics, and optimization. If we wanted to dig even deeper, we could add functional analysis, logic, and approximation theory – “and in one way or another, almost all mathematical domains.”
The Konrad Zuse School, which offers theory and internships as well as residencies abroad at universities such as Stanford and Princeton, generated interest right from the outset. In the first application round, it received “a lot of applications from all over the world,” and its “Mathematics of AI” event at LMU was “always packed to the rafters.” Contrary to what you might expect, AI professorships are not limited to disciplines such as mathematics, statistics, and IT, but extend to fields like natural language processing, business studies, archeology, and the arts. “I would venture that we’re the only university in Germany that properly reflects current social changes in the breadth of topics we cover at the highest level.”
Arts students, for example, can attend the master’s seminar “Creating Art(efacts): Computer-based Image Generation and Editing” taught by Professor Björn Ommer, who himself has developed an AI-based image generation program. Meanwhile, astrophysics students can take an elective course in Bayesian inference, a commonly used statistical approach in machine learning. For its part, the Institute of Near Eastern Archeology offers the seminar “Archeology in the age of AI – No-coding tools and interactive software,” which teaches students, for instance, how to gauge when the use of AI is effective and justifiable.
As of this semester, bachelor’s students from 27 disciplines (and counting) can take the “AI as a major Minor” program at LMU. This allows them to supplement their major – whether that be English studies or geography, theater studies or political science, Scandinavian studies or musicology – with an educational grounding in a field that makes them fit for the future world of work and opens the door to the thriving AI research sector. “AI technologies – along with their huge potential for various applications – have now arrived in almost all branches of science,” explains Professor Eyke Hüllermeier. “Our new study program equips students with the requisite basic knowledge to use AI effectively in their own field of expertise.”
Building on mathematical and statistical foundations as well as programming skills, the students learn methods that enable them to analyze questions pertaining to their major subject. When it is not formally possible to take this program as a minor subject, students can choose to take an additional AI specialization instead. To this end, there are agreements in place with disciplines such as economics, business studies, psychology, and physics. In each case, the curriculum includes application blocks with contents that are determined by the respective major subject: in astronomy, this can be data evaluation for analyzing signals from space; in archeology, the digitalization of excavation sites; or in medicine, the analysis of X-ray images. Teachers do not go into the math “any deeper than strictly necessary,” notes Hüllermeier. The minor subject is therefore offered in two variants – one for natural science courses, where students already have advanced mathematical skills, and one for the humanities and social sciences.
LMU physics educationalist Professor Jochen Kuhn agrees that students of subjects like English studies or pedagogy do not need to get deep into the math to be able to apply AI to their disciplines. “It’s more about gaining an understanding of the basic principle: If I order something on Amazon, for example, the company collects data, which it feeds into an AI model, thereby training it. This model can then make predictions, on the basis of which certain adverts are displayed to me, and so on. That’s all AI.”
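As a rough illustration of the pipeline Kuhn sketches, the toy example below trains a simple classifier on invented purchase data and uses its prediction to decide whether an advert is shown; the numbers and the choice of a scikit-learn model are purely illustrative.

```python
# A toy illustration of the pipeline Kuhn describes: past behaviour becomes
# training data, a model is fitted, and its predictions drive what is shown
# next. The numbers are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [bought_books, bought_electronics, bought_garden_tools] for one customer
purchase_history = [
    [3, 0, 1],
    [0, 5, 0],
    [2, 1, 0],
    [0, 4, 2],
]
# Label: did this customer later click on an electronics advert? (1 = yes)
clicked_electronics_ad = [0, 1, 0, 1]

# "Training" the model on the collected data
model = LogisticRegression().fit(purchase_history, clicked_electronics_ad)

# Prediction for a new customer decides whether the advert is displayed
new_customer = [[0, 3, 1]]
print(model.predict(new_customer))        # predicted class: show the ad or not
print(model.predict_proba(new_customer))  # estimated probability behind the decision
```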
As head of the junior research group for AI at the Chair of Physics Education, Dr. Stefan Küchemann works with Jochen Kuhn to investigate learning with and about artificial intelligence at schools and universities. “Math at school is usually presented as an exact science,” he explains. “AI, however, is about uncertainties, imprecisions, probabilities. In essence, it involves complex optimization functions built on concepts like derivatives, which school students learn about in the senior grades.” But AI algorithms contain a large number of parameters, sometimes billions, observes Küchemann, which are set by optimizing these functions. “That’s very, very abstract.”
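To make that optimization principle slightly more concrete, here is a minimal sketch in which a single parameter is adjusted step by step using the derivative of a loss function; the data and step size are invented for illustration, and real AI models repeat the same procedure for millions or billions of parameters.

```python
# A minimal sketch of the idea Küchemann describes: a parameter is adjusted
# step by step using the derivative of an optimization (loss) function.
# Here a single parameter w is fitted so that w * x approximates y.
x_data = [1.0, 2.0, 3.0, 4.0]
y_data = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x, with some noise

w = 0.0             # start with an arbitrary parameter value
learning_rate = 0.01

for step in range(200):
    # Derivative of the mean squared error with respect to w
    gradient = sum(2 * (w * x - y) * x for x, y in zip(x_data, y_data)) / len(x_data)
    # Move the parameter a small step against the gradient
    w -= learning_rate * gradient

print(round(w, 3))  # close to 2.0; large models do this for billions of parameters
```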
In a positive development, the new PISA competence framework has “uncertainty and data” as a category. “So when future students come to us at university, they should have the basic skills required for AI and other aspects of data science.”
In addition to the use of AI in teaching, Kuhn and Küchemann also study what AI means for examinations. “There’s scarcely a genre of text left where we can clearly identify what was written by a human and what was written by a machine. That’s a fundamental problem, not least for university exams,” says Kuhn. If the use of AI in exams is to be prevented, “exams would have to be written by hand in an examination hall, with no digital devices allowed.”
Distance learning poses a problem here, as large language models are often capable of solving multiple choice tasks. So the teacher setting the exam would have to check in advance whether large language models are able to crack their specific exam. “Other strategies would be to make tasks more complex to compensate for the help given by AI, or to include arithmetic or contextualized questions, which the language model cannot reliably handle.”
As well as attending to limits and risks, however, it is equally important to sound out and utilize the great potential of AI for teaching and learning and to study its effectiveness. “As a teacher, I can use a large language model, for instance, to draw up diversified exercises, multiple choice answers, and even lesson plans, and to facilitate individualized learning,” explains Kuhn. This can be accomplished much faster with AI, with a larger corpus and greater variety, “with the proviso that the human checking process is as critically important as the prompting itself.”
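As a hypothetical illustration of this kind of workflow (not a description of the tools Kuhn uses), the sketch below asks a language model, via the OpenAI Python client, to draft multiple choice questions; the model name and prompt are placeholders, and the returned draft still requires the human check Kuhn emphasizes.

```python
# A minimal sketch, assuming the OpenAI Python client and an API key configured
# in the environment; the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create three multiple-choice questions, each with four answer options "
    "and the correct answer marked, on Newton's second law for an "
    "introductory physics class."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model choice, not an endorsement
    messages=[{"role": "user", "content": prompt}],
)

draft_questions = response.choices[0].message.content
print(draft_questions)  # the teacher reviews and corrects this draft before any use
```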
LMU educationalist Dr. Florian Schultz-Pernice also believes that AI applications will “fundamentally transform” learning and teaching at schools and universities. “Applications like ChatGPT offer us a hugely powerful tool,” says the Director of Media Education at LMU’s DigiLLab for students training to become teachers. “They can help educators produce a resource quickly and easily – such as a quiz or a text with questions about the content for testing comprehension.” They can also provide feedback on the spelling, grammar, and style of one’s own texts, he adds. “AI can even generate suggestions on the selection and structuring of teaching contents and devise tasks of different difficulty levels, which are tailored individually to the learners.” Moreover, it can assist with correcting articles and seminar papers. “This frees up time resources,” says Schultz-Pernice, “which can be redirected to giving students individual support or to other activities that AI cannot – yet – do.”
“After completing a bachelor’s degree in economics and political science in Heidelberg and a master’s in social data science in Oxford, I’m now doing a doctorate at the Department of Statistics at LMU. I’m researching AI safety, and more specifically what knowledge language models learn and store about us humans and the world. We’re living at a time in which it feels as if new developments in AI are happening every day. That’s super exciting, but it’s also hard sometimes to keep up with the pace of developments.
It’s vitally important that the systems we use are reliable. Research into and knowledge about ‘reliable AI’ will therefore benefit us all in the future. The Konrad Zuse School is such an interesting place, where the topic of AI is examined from a wide variety of perspectives. Because of the multifaceted nature of AI, interdisciplinary research is tremendously important – not to mention fun.”
Sarah Ball, doctoral candidate in statistics
“What does AI mean? How can I use it in my profession? What risks does it hold? In my bachelor’s and master’s degrees, I had already done deep dives into uncertainties in decision theory and statistics. Uncertainty Representation and Quantification in Machine Learning is now the topic of my dissertation. Adequately representing and quantifying uncertainties in machine learning requires a deep understanding of statistical and probabilistic methods as well as broad knowledge in the field – this can be intimidating at first.
The Konrad Zuse School is fascinating. I love the way you can engage with people from various academic disciplines about mathematical, sociological, and philosophical aspects of AI – and can do so from theoretical and practical perspectives. I want to apply my expertise in future to helping make machine learning and AI more reliable.”
Yusuf Sale, doctoral candidate in information technology
“I’m doing my doctorate at the interface between neuroradiology, neurology, and psychiatry on the effects of contact sport on the brain. I’d already encountered AI before my studies, when I was at a laboratory in the United States and discovered how they used AI for image processing. Because I’m passionate about passing on this knowledge and want to impart exciting insights to the next generation of doctors, I co-founded the LMU AIM (Artificial Intelligence in Medicine) initiative a few years ago. With our programming courses, organized tours to start-ups, lecture series, and much more, we’re preparing medical students and young doctors for the 21st century. After all, if we are to leverage AI applications to help patients, first we need to understand what they’re actually about.”
Tim Wiegand, doctoral candidate in medicine