What is the basis of “artificial” intelligence?
Thomas Seidl, Chair of Database Systems and Data Mining
“In essence, the term ‘artificial intelligence’ (AI) refers to computer systems that mimic behaviors that are regarded as intelligent. In a formal sense, AI can be understood as a mathematical function. Situations, observations, problems and tasks drawn from the real world serve as inputs that are then ‘mapped’ onto appropriate responses, decisions and courses of action as outputs. First-generation AI systems encoded these functions as lists of manually established rules. For example, systems designed to interpret natural languages were modeled by formally encoding all the semantic and syntactic regularities – and many of the irregularities – of the language concerned. Such systems soon ran into problems, since everyday language use is characterized by a much larger set of irregularities than anticipated. Current AI systems learn these functions automatically from carefully selected text samples. This is done by choosing functional architectures based, for example, on the principles used to construct decision trees or neural networks, which are then progressively and automatically adapted to the characteristics of the training data. This approach permits examples of language found in newspapers, books, lecture texts or transcripts of parliamentary debates to be used as training material, which makes it possible to formally capture as much as possible of the structure of real languages.
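The idea of learning a function from examples, rather than hand-coding rules, can be sketched in a few lines of Python. The one-split “decision stump” and the toy dataset below are hypothetical stand-ins for the decision trees and neural networks mentioned above; real systems fit far richer functions the same way, by adjusting the model to minimize errors on the training data:

```python
# A minimal sketch of "learning a function from examples": fit a
# one-split decision stump to a hypothetical labeled dataset by
# trying each candidate threshold and keeping the one with the
# fewest misclassifications.

def fit_stump(samples):
    """Find the threshold on a single feature that best separates the labels."""
    best_threshold, best_errors = None, len(samples) + 1
    for threshold, _ in samples:
        # Predict 1 for values >= threshold, 0 otherwise, and count mistakes.
        errors = sum((value >= threshold) != bool(label)
                     for value, label in samples)
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

# Hypothetical training data: (feature value, label).
training_data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
threshold = fit_stump(training_data)

def predict(x):
    """The learned function: map a new input onto an output."""
    return int(x >= threshold)
```

The learned `predict` is exactly the “mathematical function” of the passage above: inputs mapped onto outputs, with the mapping shaped by the training examples rather than by hand-written rules.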
One of the keys to successful automatic (or machine) learning is the use of reinforcement learning. This involves adjusting the system’s behavior by providing appropriate feedback in response to successful or failed attempts to learn the set task. Programming in this context is largely restricted to formulating, in mathematical terms, criteria for the measurement of success or failure – for example, by defining suitable and effective rewards and penalties for the desired outcomes and the incorrect inferences respectively. On the basis of feedback of this sort, a learning system can alter its subsequent behavior in an appropriate fashion. In this sense, AI closely resembles the learning process in humans and other organisms. Both are able to refine their behavior according to the principle of trial and error.”
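The trial-and-error loop described above can likewise be sketched in miniature. The scenario below is a hypothetical two-action “bandit” with made-up reward probabilities; the agent receives a reward or penalty after each attempt and nudges its value estimates accordingly, mostly exploiting the action it currently believes is best:

```python
import random

# A minimal sketch of reinforcement learning by trial and error:
# an epsilon-greedy agent chooses between two actions, receives a
# reward (+1) or penalty (-1), and updates its estimates. The
# reward probabilities are hypothetical and hidden from the agent.

random.seed(0)
reward_prob = {"a": 0.2, "b": 0.8}   # the environment (unknown to the agent)
estimates = {"a": 0.0, "b": 0.0}     # the agent's learned value estimates
counts = {"a": 0, "b": 0}

for step in range(2000):
    # Mostly exploit the best-known action; sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimates, key=estimates.get)
    # Feedback: success earns a reward, failure a penalty.
    reward = 1.0 if random.random() < reward_prob[action] else -1.0
    counts[action] += 1
    # Incremental average: shift the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(estimates, key=estimates.get)  # with enough trials, this is "b"
```

Nothing here is programmed except the measure of success and failure; the preference for the better action emerges entirely from the feedback, which is the point of the passage above.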
Will robots cognitively surpass humans?
Markus Paulus, Chair of Developmental and Educational Psychology II
“A fundamental distinction can be made between the two. Human intelligence is embedded in our culture. Our ideas and concepts have developed against the background of our history; they are an integral component of a social way of life. Only an entity that grows up in this culture can really understand it. We are now in a position to build robots that can simulate certain aspects of this culture in ways that resemble human behavior. But human intelligence encompasses so much more! A person knows that red is a color and what it looks like, knows the kinds of objects in the world around us that are normally red, and knows whose favorite color it is – a whole web of knowledge is linked to the concept. The ability to perceive and respond to nuances and humor is another component of human intelligence. And much of what intelligence involves can only be understood if one possesses a body that is capable of registering sensory impressions. This is often referred to as ‘embodied cognition’. One must be able to feel pain in order to understand what it means. And one cannot develop empathy with other feeling beings if one cannot understand how they feel. The gulf between robots and people is so wide that AI will never attain anything comparable to human intelligence. Robots may well surpass humans in many individual capacities. But in terms of the breadth and flexibility of our capabilities, and their fundamental significance for us, AI will never come near us.”
Will machines write tomorrow’s bestsellers?
Prof. Dr. Oliver Jahraus, Chair of Modern and Contemporary German Literature and Media
“That artificial intelligence, coupled with artistic creativity, might someday invade the domain of literature – a domain which is preeminently, and in a very special sense, devoted to each individual’s image of the world and quarrel with the self – is an appalling notion. Equipped with empirically acquired knowledge – which we so much enjoy reading – the machine then writes the new Werther for us, which touches our innermost soul. Perhaps debates on the future of AI would be less fraught if we all paid more attention to what one can learn from literature. Literature has always been a matter of communication between an impersonal medium (a script and its semantics, which its readers share) and a highly personalized story. Literature always implies the existence of, and attribution to, an author. The question is not whether AI can write a bestseller, whether it can deceive us into believing that it was written by someone like us, or whether writing robots can replace Goethe. The question is whether AI can be integrated into a complex system of imputations and attributions, within which the relationships between the general and the particular, the collective and the individual must be probed and constantly renegotiated. AI will only be able to assume this function if and when its products acquire selfhood. And I believe that such a development is – not only technically, but structurally – impossible.”
Will machines soon be able to translate texts better than any human?
Hinrich Schütze, Chair of Computer Linguistics and Director of the Center for Information and Speech Processing (CIS)
“In some cases, machines will soon be able to translate texts better than people. A computer can do it faster, and has greater access to specialized technical terminology. Computers will have no problems with simple text formats. But there are limits to what they will be able to do. It is doubtful that machine translation could ever convey such subtleties as irony, sarcasm, or indeed all forms of literary texts, with their diverse linguistic registers, expressive nuances and enigmatic allusions. Another point is that, while algorithms can in part enable machines to catch up with human abilities, the user also adapts to the computer. For instance, provided that both sides use simple language, Google Translate can already make it possible for us to communicate with someone who only speaks Thai. That is analogous to how we have learned to communicate with search engines. We type in, say, ‘Einstein birthday’ and get an immediate response, although we would never put the question like that if we were speaking to someone else. Nevertheless, search engines are a powerful innovation, and have become a useful tool for us all. Similarly, Google Translate has great potential. That is not to say that we are dealing with intelligence in the sense in which we understand the term. But then, what do we take artificial intelligence to mean? Chess computers were once seen as an example of AI. Would anyone take that view today? Clearly, what we choose to define as AI has changed a lot, and it will keep changing.”
How will conversing with chatbots change us?
Prof. Dr. Sarah Diefenbach, Department of Psychology, Professor for Market and Consumer Psychology
“In spheres in which logic is dominant, where situations, decisions and actions can be framed in strictly logical terms and reduced to formulas, artificial intelligence can certainly outdo us and do a great deal for us. But what really interests me as a psychologist are the areas in which emotional factors play a larger role – the whole question of how AI will alter our social lives and our everyday interactions with each other. Take the services sector, for example. What effect does it have on us when we suddenly realize that we have been conversing with a chatbot for the last 10 minutes – do we feel cheated, insulted, diminished? Does our self-esteem take a knock? Consider the role of bots in social networks. In her Master’s thesis, one of my students is analyzing the impact of Instagram likes on users’ self-esteem. One pertinent question here is whether the source of the accolade – real person or bot profile – makes a difference. In another project, we are looking at social robots and what sort of ‘personality’ such robots should exhibit – in nursing homes, for example. Should robots always be respectful and unassuming, like an obsequious servant? Or might a grumbling robot with needs of its own, with rough edges and moods – a real personality – be a welcome asset in such situations?”
Are machines likely to replace journalists in the near future?
Neil Thurman, Professor at LMU’s Department of Media and Communication
“Can journalism be automated, taking place with little or no direct human control? At the moment, no – not across all its modes and methods. The need to break tasks down into regular, repeatable routines – as authors of automation’s algorithms must do – and machine learning’s reliance on ‘training data’ drawn from past examples set limitations on the scope for automation in the complex, creative, and above all contemporaneous task that is journalism.
These limitations have not, however, prevented automation encroaching on some of the tasks that journalists perform, including the identification of story leads, the creation of news texts, and decisions about which stories to publish to whom and with what priority.
Advances such as ‘robot writing’ have, some say, the potential to buttress journalism’s shaky finances and even free up resources for investigative journalism. However, there are also concerns about the privacy implications of computers with a ‘nose for news’, and about the filter bubbles that may be formed by news personalization.
As algorithms and AI develop, we must ensure that journalism can continue to serve the public sustainably, transparently, and accountably.”
Will AI make decisions on the economy for us?
Professor Monika Schnitzer, Chair of Comparative Economics
“Artificial intelligence generates ‘best estimates’ from sets of data. For example, on the basis of available statistics, a credit card company can calculate the probability that a card that has just been used to make a purchase has been stolen from its real owner. Depending on the data, the risk can be quantified on a relative scale, and payment can be rejected or not, as the case may be.
Assessments of this kind are the basis of many business models. And the application of artificial intelligence can drastically reduce the cost of such estimates, because it enables large amounts of data relating to comparable situations in the past to be analyzed very rapidly. What current methods cannot yet do is to independently evaluate the consequences of actions. Which of the available options should be taken into account, and what safety margins should be built into the assessment to avoid nasty surprises? These are the types of decisions that will still have to be made by people in the future. Machines can only execute tasks assigned to them by people.”
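The kind of estimate described in the credit-card example can be sketched very simply. The transaction history, the single risk feature and the threshold below are all hypothetical illustrations; real scoring systems weigh many variables at once, but the logic – estimate a probability from comparable past cases, then apply a fixed decision rule – is the same:

```python
# A minimal sketch of a "best estimate" from past data: estimate the
# probability of fraud from (hypothetical) historical records sharing
# one characteristic, then apply a fixed risk threshold.

# Hypothetical past transactions: (was_foreign_purchase, was_fraud).
history = [
    (True, True), (True, False), (True, True), (True, False), (True, True),
    (False, False), (False, False), (False, True), (False, False), (False, False),
]

def fraud_probability(foreign):
    """Relative frequency of fraud among comparable past transactions."""
    matching = [fraud for was_foreign, fraud in history if was_foreign == foreign]
    return sum(matching) / len(matching)

def decide(foreign, threshold=0.5):
    """Reject the payment when the estimated risk exceeds the threshold."""
    return "reject" if fraud_probability(foreign) > threshold else "accept"
```

Note that the machine only executes the rule: the choice of threshold – the safety margin mentioned above – remains a human decision.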
Will algorithms determine who gets the job?
Markus Bühner, Chair of Psychological Methodology and Diagnostics
“Algorithms are already being used in personnel selection. They may be of assistance in decision processes, but they cannot replace them. I believe that many of the claims being made for the programs that are on the market will turn out to be unrealistic. Very often, it is unclear to me what real benefits these algorithms are supposed to have. Of course, an assessment of a candidate’s suitability made by an algorithm which takes millions of variables into consideration may in theory be better. What matters is the quality of the assessment. For example, if an evaluation makes use of vocal characteristics, what is the fate of the candidate who happens to have a cold or is not a native speaker? According to the DIN standard for job-related proficiency assessment, the information collected in the context of personnel assessments must be directly relevant to the job demands. One is not permitted to collect and analyze every bit of data one can get one’s hands on. It will be very interesting to see how the application of algorithms can be reconciled with the provisions of our legislation on data protection, which enshrines the right to transparency. And even if the ethical and legal aspects of the practice can be satisfactorily clarified, the question of the validity of the algorithms will remain. If they are to make reliable predictions about the suitability of candidates with new skills, they must be constantly redeveloped.”
What is the meaning of ‘intelligence’?
Dr. Orsolya Friedrich, Institute for the Ethics, History and Theory of Medicine
“A more exciting counter-question at this point is what it means to be smart. Grasping the meaning of intelligence already presents us with considerable problems. Equating intelligence with the ability to process information rapidly and efficiently for problem solving would be inadequate, both as a description of the potential of AI and as an attempt to capture the myriad possible connotations of human intelligence. AI has already demonstrated a capacity to come up with creative solutions to problems in certain limited, highly structured contexts, such as chess. Currently, AI has far more difficulty in achieving comparable successes in more complex, less clearly structured settings. Where things become complex in our everyday lives, it is usually considered an intelligent approach to ask the right questions first, to consider the issue from different angles or to try to find links between different aspects of the matter in hand. But one could also argue for a concept of intelligence, or ‘being smart’, that depends on adequate emotional perception or good adaptation to the environment. It is conceivable that AI could achieve a certain degree of success in simulating these aspects of intelligence. In humans, however, these capabilities are generally assumed to be existential: one reason for the human ability to perceive and interpret emotional states, or to adapt flexibly to the environment, can be seen in our existential need to find ways of coping with the world and getting along with other humans. Whether such a difference in turn necessarily restricts how smart AI will get is another of those conceptual questions which we humans have to think about and answer.”
“We are never going to manufacture real partners”
Julian Nida-Rümelin, Professor of Philosophy and Political Theory
“At the moment, the field of artificial intelligence still finds itself in search mode, so to speak. There is a clear trend in robotics towards the design of machines that can imitate human capabilities, such as face recognition. It is, however, rather unlikely that this will determine future developments in the field. Attempts to animate non-living matter as projections of ourselves would be ill-advised. We are never going to manufacture real interlocutors or partners. Instead, we should concentrate on the productive core of the economy. If efforts to create Industry 4.0 are primarily focused on developing instrumental and technical capacities, digitalization can contribute a great deal to economic progress.”