Human and machine

Artificial intelligence – what does it mean? And what does its emergence mean for humans? Four theses on the new relationship between humans and algorithms.

What is artificial intelligence and what can it do?


$15.7 trillion

is the additional value AI is expected to add to the global economy by 2030.

The human factor – Four theses on the new relationship between humans and algorithms.


1 | Algorithms are only human, too: The deceptive semblance of objectivity

Software now analyzes data in more and more areas of life, and the algorithms behind it are gaining power as a result. Yet they are only as objective and impartial as the humans who program them. Algorithms absorb the conscious or unconscious prejudices and life experiences of their creators. A loan may be refused or made hard to obtain because the software places the applicant in a problematic neighborhood, even though the person is perfectly solvent. Seemingly objective software for predicting crime (“predictive policing”) can likewise reinforce existing discrimination: if it leads police to carry out a particularly high number of checks in socially deprived areas, more crime will likely be recorded there, and that recorded crime will in turn be weighted more heavily in future forecasts.
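The feedback loop described here can be made concrete with a toy simulation. This is a hypothetical sketch, not a model of any real policing system: it assumes two districts with identical true crime rates, a small initial patrol bias, and an allocation rule that shifts patrols toward wherever more crime was recorded. Under those assumptions alone, the initial bias amplifies itself.

```python
# Toy model (all numbers invented): two districts, identical true crime
# rates, but a slight initial patrol bias toward district 0. Recorded
# crime rises with patrol presence, and patrols follow recorded crime.

def simulate(years: int = 20) -> list[float]:
    true_rate = [0.10, 0.10]      # identical underlying crime rates
    patrols = [60.0, 40.0]        # small initial bias toward district 0
    for _ in range(years):
        # More checks in a district mean more crime gets recorded there.
        recorded = [p * r for p, r in zip(patrols, true_rate)]
        # Shift 20% of the "quieter" district's patrols to the "hotter" one.
        hot = 0 if recorded[0] >= recorded[1] else 1
        shift = 0.2 * patrols[1 - hot]
        patrols[hot] += shift
        patrols[1 - hot] -= shift
    return patrols

print(simulate())  # district 0 ends up with nearly all patrols
```

Although both districts are equally safe by construction, the forecast “confirms” the initial bias because it measures recorded crime, not crime itself.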


2 | The world is a story to be told – not a thing to be counted: True value creation results from a combination of data analysis and human expertise

As early as 2008, Chris Anderson announced the “end of theory.” “The data deluge makes the scientific method obsolete,” wrote the editor of Wired magazine. He believed that, thanks to big data, algorithms could find patterns that research and science would never discover with their own methods. “With enough data, the numbers speak for themselves,” Anderson said. Some ten years on, however, that euphoria has given way to a more sober assessment. One reason is high-profile failures such as Google’s Flu Trends program, which in the 2012/2013 season forecast a flu epidemic far more severe than the one that actually occurred. In complex situations, correlation cannot take the place of causation, nor assumption the place of evidence. Human expertise is required here.
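Why the numbers do not simply “speak for themselves” can be shown in a few lines. The sketch below is illustrative only (the “search terms” are pure random noise, not real data): among thousands of random series, at least one will track a target series fairly closely by chance alone, which is exactly the trap of trawling big data for correlates without a causal theory.

```python
# Illustrative only: a fake "flu" series and 5,000 random "search-term"
# series. With this many candidates, the best chance correlation looks
# impressively strong despite having no causal link whatsoever.
import random

random.seed(0)
n_weeks, n_terms = 52, 5000
flu = [random.gauss(0, 1) for _ in range(n_weeks)]

def corr(x, y):
    # Pearson correlation coefficient, computed by hand for transparency.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

best = max(
    corr(flu, [random.gauss(0, 1) for _ in range(n_weeks)])
    for _ in range(n_terms)
)
print(round(best, 2))  # a strong-looking correlation, purely by chance
```

The more candidate signals one screens, the stronger the best spurious correlation becomes; only domain expertise can separate it from a real causal relationship.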

Programmers stand behind the achievements of artificial intelligence.

3 | Computers propose, humans dispose: We have to maintain control over technology

Artificial intelligence (AI) systems are already developing independently. Using mathematical logic, they learn from their mistakes and make their own decisions. Humans can rarely grasp exactly how the neural networks behind AI reach their conclusions. That is an area with an urgent need for improvement, says informatics professor Alan Bundy of the University of Edinburgh in Scotland. Meanwhile, politicians and consumer-rights advocates are demanding recognized standards for AI algorithms as they grow smarter. They also want their workings to be open to independent examination – especially when AI operates in socially sensitive areas. And they are calling for AI systems to issue a warning before acting outside their area of competence, so that humans can assist them – or take back complete control.
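The “warn before acting outside your competence” demand has a simple technical counterpart: a system that abstains and hands control back to a human whenever its confidence is too low. The sketch below is a minimal illustration; the probabilities, labels, and the 0.85 threshold are invented for the example, not taken from any standard.

```python
# Minimal sketch of a "defer to human" guard: act only when confident,
# otherwise warn and hand the decision back. Threshold is illustrative.

def decide(probabilities: dict[str, float], threshold: float = 0.85):
    # Pick the most likely label and check how confident the model is.
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return ("defer_to_human", label, confidence)  # warn, give back control
    return ("act", label, confidence)

print(decide({"approve": 0.97, "refuse": 0.03}))  # confident: acts
print(decide({"approve": 0.55, "refuse": 0.45}))  # uncertain: defers
```

Real systems would need calibrated confidence estimates for such a guard to be meaningful, but the principle – computers propose, humans dispose – fits in a few lines.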

A machine-learning computer in Tokyo plays Japanese chess (shogi) to a professional standard.

4 | Supercomputers are not automatically better than humans: Human experience and values cannot be replicated

Machines are making rapid progress in learning, thanks to AI and machine-learning algorithms. Researchers regard superintelligence – machines superior to humans in many or all areas – as possible within a few decades. This calls for a rational look at the limits and scope of the new technologies. In specific areas, artificial intelligence can draw on solid data to make objectively better decisions than humans, who often act intuitively. In complex decisions that go to the heart of how humans live together, however – such as the fight against poverty – there is no right or wrong answer. Behind such decisions stand humans, making choices according to their respective experience, values and goals. This essence of a humane, democratic society cannot be delegated to smart machines.

Will artificial superintelligence outpace us?

Pro and con: The momentum of artificial intelligence allows both skepticism and hope to flourish.

Klaus Mainzer
Professor of Philosophy and the Theory of Science and expert in artificial intelligence, TU Munich, Germany


Artificial intelligence (AI) systems are already capable of doing things that we achieve through learning, experience and intuition. They do so purely through great computing power and sophisticated mathematics. It is, without doubt, a significant innovation, and we should make use of it. Artificial intelligence can beat any poker expert, without emotion or consciousness, even though the game is regarded as a byword for intuition. Moreover, poker is merely a prototype for situations in which humans have to make decisions with incomplete information. Sooner or later, these algorithms will also be used in decision-making situations in business and politics. They could support us, for example, in complex contracts – but not take our place. Thanks to big data, it is now possible to identify the opinions of specific groups very precisely. Technologically, through AI, something like government by “perfect populism” is imaginable. Some authors even believe democracy is endangered by highly intelligent algorithms. This debate could be dismissed as science fiction; I take it seriously. We must therefore take care to ensure that AI systems remain our servants.


We are still extremely far from seeing intelligent machines take over the world. If a Google computer is better than a human at mastering a very complex game, there will be awestruck comments: “Oh, these machines are smarter than us.” But a program of that sort can actually only do that one thing. Even the AI systems used in autonomous driving still have a very limited focus. The artificial intelligence won’t be saying to itself, “Hmm, the way I’m acting right now might not be particularly safe.” Humans, however, can take a step back and look at things from a broader perspective. The real danger isn’t technology becoming too smart; it comes from overrating stupid machines, from entrusting them with tasks that overstrain them – for instance, when doctors rely too heavily on AI in their diagnoses. But these are threats to individual people, not to humanity as a whole.

Alan Bundy
Professor of Automated Reasoning, School of Informatics, University of Edinburgh, Scotland
