
ments in practical applications with some very mysterious mistakes that the machines seem to make if there's some noise in the image. We don't understand why, because we don't know what the algorithm is really doing inside," says Becker-Asano.

Humans can recognise a stop sign even when the image is noisy or the colours are wrong. But an AI can be confused by something as small as a sticker on the stop sign, or by weather and light conditions that differ from those in the test environment.

"When a mistake does happen and the car fails to stop at a stop sign because the AI has classified something wrongly, for instance, theoretically we could then analyse the machine's memory. There are masses of data on the computer, and you can take a snapshot of the neural network at the very moment the machine makes the mistake. But all you discover is a load of data," says Becker-Asano, explaining the problem.

Even simply adding further training data does not guarantee that something that worked before will work again in the future, Daniel Rückert emphasises. "Precisely because we don't exactly know what's going on inside the black box."

According to Tobias Matzner, making the black box transparent would thus be an important step in AI development, so that those who use AI can trust it. It is important to him that people understand what happens to their data when they use artificial intelligence and that they are told how their data can influence

› AI has enormous potential for our society, but it's up to us how we decide to use it.
Aimee van Wynsberghe, Alexander von Humboldt Professor for AI at the University of Bonn
