Clear formulation of problems, precisely adapted algorithms, quality data: Stefan Feuerriegel, business information systems expert, explains how to successfully use AI in management contexts. From INSIGHTS research magazine.
As we left our interview with Stefan Feuerriegel, Head of the LMU Institute of Artificial Intelligence (AI) in Management, one of the researcher’s observations stood out above all else: “I spend a lot of time explaining to my students, but also to managers from a wide variety of businesses, all the things that AI cannot do.” His job, he says, consists to a large extent of “dialing back the excessive expectations. When I’ve accomplished that, then we can start to work successfully with AI.”
“One size fits all?” There is no such thing in AI, says Stefan Feuerriegel. Companies must define in advance “what exactly their problem is. Then we develop the precise algorithms to match.”
This observation resonates because AI tends to be seen either as a universal panacea or as a superintelligence threatening humanity, and in either case as an obscure system whose operations cannot be second-guessed. People are simultaneously impressed by the creative outputs of AI and fearful of the emergent autonomy of computer intelligences, which could seize the initiative to actively shape society, culture, and history. According to these fears, which sometimes go well beyond general unease, AI is not a tool but an actor. Those who hold them picture AI as a self-organizing black box whose decision-making processes exceed the capacities of human thought and are shrouded in mystery. AI thinks faster, so the logic goes: it processes more information and takes more parameters into account than people ever could. AI has become a superior adversary that will soon be lording it over us. Is there any substance to such fears?
Not much, if you follow Stefan Feuerriegel and his approach to successfully implementing these technologies.
Successful in its digital habitat
“We think it’s vital to put human decision-makers – whether from the worlds of management, research, or politics – in the driver’s seat in all phases of AI implementation,” says Feuerriegel. “Their specific problems must be solved and they must understand how the AI arrives at its proposed solutions in the case at hand.” The specific applications of AI, he explains, are limited to a relatively small subset of problems, especially in the domain of business management. Given that far-reaching decisions such as the merger, takeover, or closure of companies touch upon fundamental legal and ethical questions, “they must remain in the hands of managers.” Accordingly, AI can serve only as an input for human decision-making processes, not as a substitute. “AI does not formulate any fundamental management positions,” clarifies Feuerriegel. “Rather, it helps managers make well-founded decisions.”
AI is a purely mathematical intelligence. It is software, algorithms, run on computers. The limits of this purely mathematical implementation of intelligence – aside from the size of the datasets on which it is trained – thus reside in the limits of mathematics itself. AI can successfully handle only those real-world problems and tasks that can be represented as mathematical functions. In its digital habitat, AI is very good at recognizing patterns, synthesizing images, texts, voices, and music, and identifying errors in the design of products. But it cannot administer justice, do politics, or find somebody’s true love. It falls down when confronted with many complex real-life issues.
Artificial neural networks structured as deep-learning networks are a cornerstone of today’s AI. These are mathematical simulations running on computers, built on the assumption that they can model the information processing of the brain. All machine learning is learning from data. This means that for the mathematical function an AI learns to be able to model a problem, there must be sufficient underlying data.
Feuerriegel’s team consistently puts theories to the test. His large language models (LLMs), for example, were trained to track the long-term effects of hate speech on social media. “We were able to quantitatively demonstrate how infectious hate speech is. The cascades in which it spreads online are larger, last longer, and exhibit greater structural virality than the cascades of ‘normal’ posts,” says Feuerriegel. Equally, this method helped his team recognize the connections between emotions and the virality of fake news. Moreover, it allowed researchers to identify pro-Russian propaganda since the start of the illegal attack on Ukraine and to track, for instance, the activity peaks of pro-Russian bot armies in the wake of the military invasion.
Feuerriegel advises his students and clients not only to assess the size and consistency of their data landscape before undertaking any AI implementation, but also to precisely define what problem they want an AI to solve. “On this basis, we develop, implement, and evaluate new AI algorithms to optimize people’s decision-making processes. And it cannot be emphasized often enough: people decide,” explains Feuerriegel. “And therefore the suggestions that AIs make must remain plausible and understandable.”
We think it’s vital to put human decision-makers in the driver’s seat in all phases of AI implementation.
Prof. Stefan Feuerriegel, Head of the Institute of Artificial Intelligence (AI) in Management at LMU
His use cases for companies are designed in such a way that the AI framework employed in a given case empowers managers to make evidence-based decisions according to their self-defined conditions. Feuerriegel describes this “trustworthy AI” approach, which is always focused on decision-makers, as incredibly versatile. The LMU scientist uses it, for example, to help investors identify promising startups on venture capital platforms. Machine learning also makes it possible, say, to efficiently allocate resources for development aid, or to track the effects of innovations in the use of new climate technologies.
Opaque engine room
Despite the advantages of this human-in-the-loop approach, companies still find it difficult to effectively combine manager roles and AI applications in collaborative approaches to solving problems. One of the most obvious reasons is that responsibility cannot be delegated to machines, which leads organizations to defer changes. Moreover, misgivings about AI persist: When and how should managers intervene in these computerized processes? Should they sign off on every sub-step of the decision-making process? Or only on the high-risk ones, where the AI delivers results but it remains unclear how exactly its opaque engine room came up with them?
Furthermore, almost all companies hold highly specific data assets, built up over decades, as well as their own experts with deep tacit knowledge. Indeed, the success of companies is often founded on such in-house knowledge. So how can this exclusive data be used productively for AI forecasts without letting that exclusive expertise fall into the hands of third parties? “That’s why interpretability remains the key,” answers Feuerriegel. “When we advise managers to dial back their expectations of AI, we don’t do so to disillusion them, but to get them on board.”
Feuerriegel’s use of AI can be roughly divided into two categories: inherently interpretable models and post-hoc explanation models. With inherently interpretable algorithms, the decision-making process can be directly understood by humans – for example, by consulting the classification rules in a decision tree or by inspecting the coefficients of the linear regression the algorithm applies. Linear regression means “plausibly reconstructing” the relationship between variables: the AI uses existing data to make statements and predictions about how the variables relate, “regressing” one on the others to show how changing one variable affects another.
“When managers hear that we use the technique of linear regression in our AI,” says Stefan Feuerriegel, “that is to say, something they learned in the second semester of their degree courses, they realize that AI is not witchcraft, but just a bit of math combined with computing time and good data.” Using such a generalized linear regression model, Feuerriegel and his team have investigated, for example, how electricity prices can be predicted from weather data. And they have been able to generate statements about how long a company can continue to use a certain machine before it will have to be replaced.
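The interpretability Feuerriegel describes can be seen in a few lines of code. The following sketch is purely illustrative – the numbers are invented, loosely echoing the electricity-price example (a hypothetical temperature-to-price relationship), and the closed-form least-squares fit is the same second-semester math he alludes to:

```python
# Illustrative only: a one-variable linear regression fitted with the
# closed-form least-squares solution. The data below are invented,
# loosely echoing the article's example of predicting electricity
# prices from weather data (temperature in degrees C -> price in EUR/MWh).

def fit_linear_regression(xs, ys):
    """Return (intercept, slope) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

temps = [0, 5, 10, 15, 20]        # hypothetical temperatures
prices = [90, 80, 70, 60, 50]     # hypothetical prices

b0, b1 = fit_linear_regression(temps, prices)
print(b0, b1)  # intercept and slope of the fitted line
```

The point for a manager is that the model’s entire “reasoning” is the pair of coefficients: here the slope says directly how much the predicted price changes per degree, with nothing hidden inside a black box.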
Put to the test: Feuerriegel’s AI solutions help improve the distribution of aid supplies, ...
Post-hoc explanation techniques, the other variant of AI model preferred by Feuerriegel, are used when the internal processes of an AI algorithm are too complex to be understood by people, such as the operations that take place in deep neural networks. These are methods applied after the fact to explain the results of black-box algorithms by representing them with a simpler model, such as a heatmap, which visually highlights the areas most relevant to the predictions made by the AI. Experts can then compare the AI’s explanations against their professional knowledge and either validate them or overrule the AI when it is wrong.
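One simple post-hoc technique of this kind is occlusion: cover one part of the input at a time and record how much the black-box prediction changes. The toy model and data below are invented for illustration – a real application would probe a deep network over image patches – but the mechanism is the same:

```python
# Illustrative only: a tiny occlusion-based "heatmap" explanation for a
# black-box scoring function. The model and data are invented; the point
# is the post-hoc idea -- perturb each input region and record how much
# the prediction changes, without looking inside the model at all.

def black_box_score(grid):
    # Stand-in for an opaque model: it secretly relies only on the
    # top-left 2x2 region of the 3x3 "image".
    return grid[0][0] + grid[0][1] + grid[1][0] + grid[1][1]

def occlusion_heatmap(model, grid):
    """Importance of each cell = score drop when that cell is zeroed out."""
    base = model(grid)
    heat = [[0] * len(grid[0]) for _ in grid]
    for i, row in enumerate(grid):
        for j, _ in enumerate(row):
            occluded = [r[:] for r in grid]   # copy, then occlude one cell
            occluded[i][j] = 0
            heat[i][j] = base - model(occluded)
    return heat

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
for row in occlusion_heatmap(black_box_score, image):
    print(row)
# Cells the model actually relies on light up; the rest stay at zero.
```

An inspector reading such a heatmap can check whether the highlighted regions match the defect they would flag themselves, and overrule the model when they do not.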
We provide companies with two crystal balls. One shows the effects of a decision, while the other shows what happens if they duck the decision.
Stefan Feuerriegel
To assist quality management for a semiconductor manufacturer, for instance, Feuerriegel created “digital twins” of the real production setup – real-time virtualizations of the corresponding real, physical decision environments – in which the AI marks faulty products with heatmaps alongside human inspectors and justifies its identification. Feuerriegel likes to call such assistance “minimally invasive AI.”
Another model used by the LMU researcher goes by the name of counterfactual inference – conclusions drawn from a what-if question. This involves getting an AI to simulate what would have happened under other conditions, with other variables, and therefore with other results. Counterfactual inference thus investigates hypothetical situations with the same mathematical meticulousness it devotes to factual ones. This is useful for understanding causal relationships and clarifying whether a certain decision actually had a positive effect. Feuerriegel offers a medical example: Let’s say a patient takes a medication and gets healthy. The counterfactual simulation calculates what would have happened if the patient had not taken the drug, or had taken a different dose. Would he or she still have become healthy? As such, counterfactual inference helps us ascertain how much of an influence a decision or an event actually has on an outcome.
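The medical what-if can be sketched in code. In real counterfactual inference the outcome model is learned from data; here, purely for illustration, we assume a known toy equation (an invented recovery formula with a saturating drug effect) so the factual and counterfactual runs are easy to compare:

```python
# Illustrative only: counterfactual inference with a toy structural model.
# In practice the outcome model is estimated from data; here we assume a
# known equation so the "what if" comparison is easy to follow. The
# numbers and the recovery formula are invented for the article's
# medical example.

def recovery_probability(severity, dose_mg):
    """Toy structural equation: sicker patients recover less often;
    the medication helps, with diminishing returns at higher doses."""
    drug_effect = 0.4 * (dose_mg / (dose_mg + 50))  # saturating effect
    prob = 0.8 - 0.1 * severity + drug_effect
    return max(0.0, min(1.0, prob))                 # clamp to [0, 1]

severity = 4  # observed patient covariate

factual = recovery_probability(severity, dose_mg=100)       # what happened
counterfactual = recovery_probability(severity, dose_mg=0)  # what if no drug?

print("with drug:   ", round(factual, 3))
print("without drug:", round(counterfactual, 3))
print("estimated effect of the drug:", round(factual - counterfactual, 3))
```

Holding the patient’s covariates fixed and varying only the decision (the dose) is exactly the what-if comparison described above: the difference between the two runs is the estimated influence of the decision on the outcome.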
Feuerriegel also collaborated with the website operators of a national daily newspaper to find out the best time and place for putting certain content on its homepage.
“I think of myself as a problem solver,” says Feuerriegel. “To be able to solve problems, however, organizations must do their part and define in advance what exactly their problem is. Then we develop the precise algorithms to match.” There is no such thing as “one size fits all in AI” – each problem needs to be addressed on its individual merits. “We provide companies with two crystal balls. One shows the effects of a decision, while the other shows what happens if they duck the decision.”
“It cannot be emphasized often enough: people decide,” explains Stefan Feuerriegel. “And therefore the suggestions that AIs make must remain plausible and understandable.”
Prof. Stefan Feuerriegel is Head of the Institute of Artificial Intelligence (AI) in Management at LMU. He has a dual professorship at the LMU Munich School of Management and at the Faculty of Mathematics, Informatics, and Statistics at LMU. Feuerriegel studied at RWTH Aachen and then obtained his doctorate with a dissertation at the Chair of Information Systems Research at the University of Freiburg. Feuerriegel was associate professor at ETH Zurich before coming to LMU in 2021.