Artificial intelligence is increasingly shaping our lives: chatbots answer questions in the lecture hall, real-time sensors control factories, and digital surgical techniques support surgeons. Business, healthcare, universities: everyone is looking for qualified specialists to help shape this technological revolution. The 2019 Research Summit and the 2019 Science Year, as well as the Federal Government's AI Strategy and the Franco-German Robotics AI Strategy, all testify to the efforts and discussions this involves. But what exactly is AI, and where does it begin? What can it do, and what can it not do? What is the state of research, including at the international level? And how must we act as a society?
The term “artificial intelligence” is currently difficult to pin down. Coined in the 1950s, it broadly describes all machine activities that would require intelligence in humans, and even here a striking issue becomes apparent: our understanding of human intelligence itself is not clearly defined.
In 1950, the British mathematician Alan Turing outlined an idea for describing artificial intelligence: a person should hold conversations with two counterparts, one of them a human being and the other a technical system. Turing proposed that the conversations, conducted solely via keyboard and screen, be compared in order to reveal any difference between human and artificial intelligence. Nowadays, the Turing test is considered unsuitable for clarifying this difference. Even if a piece of software, a chatbot, or an avatar were able to imitate a human being, that would say little about whether any of these systems is, or possesses, human intelligence.
At the moment there are no research approaches that lead to a “strong artificial intelligence”, meaning a system with cognitive abilities comparable to those of humans: one that chats with us today, publishes on ethical questions tomorrow, and develops mathematical models describing cancer cells the day after that. Today's systems are “weak AIs”; they are used in clearly defined areas and solve specific problems. There they sometimes achieve amazing things. The computer program AlphaZero, for example, quickly became a master of strategic board games such as chess and Go after being given only the rules. In millions of games against itself, without further human intervention, the program developed abilities that go far beyond those of a human being.
In previous decades, AI research was characterised by great promises and equally great disappointments. At first, hopes were placed primarily on “symbolic AI”, in which logical rules and a knowledge base are programmed by hand. However, it soon became apparent that the complexity of reality had been underestimated and that such systems were extremely difficult to adapt to new findings. This heralded the “AI winter” of the 1980s, a period of declining research intensity.
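To make the idea of symbolic AI concrete, here is a minimal sketch of the approach the paragraph describes: a hand-written knowledge base of facts and if-then rules, with no learning involved. All facts and rules below are hypothetical toy examples, not taken from any real system.

```python
# A symbolic-AI style system: hand-coded facts plus if-then rules.
# New knowledge must be derived purely by applying the rules.

facts = {"has_feathers", "lays_eggs"}

# Each rule: if all conditions are present in the fact base,
# the conclusion may be added.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

# Forward chaining: keep applying rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("is_bird" in facts)  # → True
```

The sketch also illustrates the brittleness the text mentions: any new finding that the rules do not anticipate requires a human to rewrite the rule base by hand.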
When we talk about artificial intelligence today, we almost exclusively mean artificial neural networks. These are trained on data using learning rules and are thereby made capable of solving comparatively complex tasks. But even though the terminology is based on biological models, none of these neural networks has yet developed capabilities comparable to those of the human brain. There is now debate about whether these systems are gradually reaching their limits. In particular, the need for ever larger amounts of data, which in turn demand ever higher computing power, makes such networks harder to use. While infants often learn new concepts from one or two examples, an artificial neural network needs thousands or millions of photos to develop a concept of a cat or a pedestrian.
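The training process described above can be sketched in miniature. The following is an illustrative example, not code from any system the article mentions: a single artificial neuron whose weights are adjusted by a simple learning rule over many labeled examples; the data and function names are hypothetical.

```python
# A single artificial neuron trained with a simple learning rule
# (the classic perceptron update) on labeled example points.

def train_neuron(examples, epochs=20, lr=0.1):
    """Adjust weights and bias so the neuron separates two classes."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # Prediction: 1 if the weighted sum exceeds zero, else 0.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Learning rule: nudge the weights toward the correct answer.
            error = label - pred
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# The point made in the text: learning requires many labeled examples.
# Here, points above the line y = x are class 1, points below are class 0.
data = [((0.2, 0.8), 1), ((0.1, 0.9), 1), ((0.9, 0.1), 0),
        ((0.8, 0.3), 0), ((0.4, 0.6), 1), ((0.7, 0.2), 0)]
w, b = train_neuron(data)

# Classify a new, unseen point above the line.
print(1 if w[0] * 0.3 + w[1] * 0.7 + b > 0 else 0)  # → 1
```

Real networks stack thousands of such units and use far more sophisticated learning rules, but the principle is the same, which is why the data and compute requirements grow so quickly.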