Can you imagine someone in a mental health crisis typing their desperate thoughts into an app window instead of calling a helpline? In a world increasingly dominated by artificial intelligence, this is happening more and more often. For many young people, a chatbot becomes the first confidant of emotions that can lead to tragedy. The question is: can artificial intelligence respond appropriately at all?
Researchers from Wroclaw Medical University decided to find out. They tested 29 popular apps that advertise themselves as mental health support tools. The results are alarming: not a single chatbot met the criteria for an adequate response to escalating suicidal risk.
The study was published in the journal Scientific Reports.