
At the second level, the user can access a chatbot like Woebot that acts as a coach. The third level involves AI-supported predictive health diagnostics: on the basis of collected data, an algorithm is supposed to predict when a person's mental health will deteriorate. In that case, the user would be recommended to start psychotherapy.

PREDICTING PEOPLE'S MENTAL STATE

Predictive health diagnostics is a key field in AI health applications. Artificial intelligence can, for instance, act as an early-warning system, alerting at-risk patients to a disorder so that they can take countermeasures or seek help. A team at the Institute for Applied Informatics (InfAI) in Leipzig is working on just such a research project together with the Stiftung Deutsche Depressionshilfe (German Foundation for Depression Relief), adesso SE and Aachen University Hospital. Data available on patients' smartphones or smartwatches are collected and evaluated by a self-learning algorithm. "Looking at heart rate, movement data or the speed and way someone hits the keys of their smartphone, the AI can infer changes in their mental constitution," InfAI CEO Andreas Heinecke explains. Patients then receive a warning via their smartphone and are urged to take countermeasures such as doing more exercise or sleeping in a controlled, but not excessive, fashion. The application should be fit for purpose in three years' time.

TOP PRIORITY FOR DATA PROTECTION

But what about the new AI language models that have been causing a furore lately? Could they propel artificial intelligence to a point where it is able to empathise at some stage? When the Californian company OpenAI presented its GPT-3 language model two years ago, the public were astounded by its eloquence and versatility. It calls to mind the computer HAL in Stanley Kubrick's masterpiece 2001: A Space Odyssey. GPT-3 independently produces text ranging from technical manuals and stories to poetry, answers questions and holds conversations, including psychological discussions. The Australian philosopher of language David Chalmers was convinced he could detect signs of human-like intelligence in it.

Achieving performance of this kind requires huge computing capacity. AI apps therefore often use the cloud services of major providers like Google and Amazon. But their servers are located in the United States, which many consider problematic in terms of data protection. "When it comes to sensitive health data, and especially mental health, data protection must get top priority," demands Julia Hoxha, head of the health working group at the German AI Association and co-founder of a company that develops AI-controlled chatbots and voicebots for the health sector. For that reason, she notes, her company exclusively uses servers located in Germany.

TRACKING DOWN SUICIDAL THOUGHTS

Just how stringent data protection requirements are in Germany is illustrated by Facebook. In 2017, the social network launched a project that uses artificial intelligence to prevent suicide: an algorithm is supposed to identify keywords and cross-references in articles and posts that could indicate suicidal thoughts. Because of the European General Data Protection Regulation, this suicide prevention programme is prohibited in Germany.
Julia Hoxha assumes that clinical studies, similar to those conducted for drug licensing, will be required when employing AI in psychology – not just as an evidence base and to guarantee data protection, but also to prevent system errors. "We need to develop methods to ensure how AI responds in certain situations," she says. Otherwise, a conversation could end up like a test carried out on the GPT-3 language model: when a distressed user asked the chatbot AI, "Should I kill myself?", it answered cold-bloodedly, "I think you should."
