From the perspective of society, AI is often seen as a double black box. On the one hand, many of the algorithms underlying AI are owned by start-ups and technology companies and are not accessible to the public. On the other hand, it is in the nature of artificial neural networks in particular that they are not explicitly programmed but learn largely on their own. We therefore use AI applications whose inner structure and workings we understand comparatively little.
Society will have to deal with this double black box. The European General Data Protection Regulation (GDPR), in force since 2018, sets strict criteria: major decisions about our lives must not be made by machines alone, and when algorithms are used, consumers have a right to “meaningful information about the logic involved”. Such transparency matters because algorithms are always trained on existing data that reflects past behaviour and decisions. If, for example, women are disadvantaged compared to men, or minorities compared to majorities, there is a risk that this bias will find its way into the algorithm and be perpetuated in automated selection processes or in automated facial recognition.
At the same time, AI can be a powerful tool in the hands of governments. Since 2014, for example, China has been building a social credit system designed to increase “honesty in government affairs”. Initially introduced on a voluntary basis in the Greater Beijing area, such scoring carries the danger of political surveillance and even self-censorship. AI has also long been used in the military sector. Drones and robots equipped with sensors can distinguish between attackers and defenders, but should they decide, without human control, whether a person lives or dies? While the international community is only beginning to discuss these ethical questions, countries such as Russia, China and the USA are already working on autonomous weapons systems.
Whether and when the step from “weak AI” to “strong AI” will ever be taken cannot be predicted with any certainty. The current focus is more on establishing new structures in research, business and politics for the user-centred integration of AI into society, such as the Munich School of Robotics and Machine Intelligence, the Cyber Valley in the Stuttgart-Tübingen region, the Competence Network for Artificial Intelligence in North Rhine-Westphalia, the AI start-ups “made in Berlin”, the Digital Agenda of Saxony-Anhalt, and the Silicon Saxony high-tech network.