Clinicians, therapists, and researchers are increasingly finding that artificial intelligence (AI) can be a powerful tool in the provision of mental healthcare. As I will cover in this article, a growing body of evidence suggests that AI can help with diagnosing conditions, developing therapies, and enabling more personalized approaches and treatments.
Since the onset of the Covid-19 pandemic three years ago, more people than ever have been seeking help for mental health problems, including depression and anxiety. Tragically, suicide is now the fourth leading cause of death among 15 to 29-year-olds worldwide. This is putting growing pressure on already-stretched healthcare and therapeutic services, which have become increasingly difficult for many to access. Could smart, machine learning-powered technology be part of the solution, possibly reducing the need for patients to be given medication or to have their freedoms restricted by confinement in mental health hospitals?
Let’s take a look at some of the ways in which this revolutionary technology is already being used to change lives and improve patient outcomes for a variety of mental health conditions.
Therapeutic Chatbots
Would you feel more comfortable talking to a robot than a human when it comes to unloading your deepest and most personal feelings?
Chatbots are increasingly being used to offer advice and a line of communication for mental health patients during their treatment. They can help patients cope with symptoms, and they can watch for keywords that should trigger a referral and direct contact with a human mental healthcare professional.
One example of a therapeutic chatbot is Woebot, which learns to adapt to its users' personalities and can talk them through a number of therapies and talking exercises commonly used to help patients learn to cope with a variety of conditions.
Another chatbot, Tess, offers free 24/7 on-demand emotional support and can be used to help cope with anxiety and panic attacks whenever they occur.
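To make the escalation mechanism concrete, here is a minimal sketch of how a chatbot might screen incoming messages for referral to a human clinician. The keyword list, function name, and return values are invented for illustration; they are not taken from Woebot, Tess, or any real product, and a production system would use far more sophisticated language models rather than literal keyword matching.

```python
# Hypothetical sketch: keyword-based triage in a therapeutic chatbot.
# The keywords and routing labels are illustrative assumptions only.

ESCALATION_KEYWORDS = {"hopeless", "self-harm", "suicide", "can't go on"}

def triage(message: str) -> str:
    """Return 'escalate' if the message contains a crisis keyword,
    otherwise 'continue' the automated conversation."""
    text = message.lower()
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        return "escalate"   # route to a human mental healthcare professional
    return "continue"       # carry on with the scripted coping exercise
```

For example, `triage("I feel hopeless today")` would route the conversation to a human, while an everyday message would keep the automated session going.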
Wearable Devices
Rather than waiting for a user to interact with them via an app, some AI mental health solutions function as wearables that can interpret bodily signals using sensors and step in to offer help when it's needed.
Biobeat collects information on sleeping patterns, physical activity, and variations in heart rate and rhythm, which is used to assess the user's mood and cognitive state. This data is compared with aggregated and anonymized data from other users to provide predictive warnings when intervention may be necessary, so that users can adjust their behavior or seek assistance from healthcare services.
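The comparison against aggregated data can be caricatured as a simple baseline check: flag the user when their readings drift too far from the anonymized population norm. The sketch below uses a z-score threshold on synthetic heart-rate numbers; the threshold, units, and logic are assumptions for illustration, not Biobeat's actual method.

```python
# Illustrative baseline comparison for a wearable platform.
# Readings, threshold, and the z-score rule are invented assumptions.
from statistics import mean, stdev

def needs_intervention(user_readings, population_readings, z_threshold=2.0):
    """Flag the user when their average reading deviates from the
    anonymized population baseline by more than z_threshold
    standard deviations."""
    baseline = mean(population_readings)
    spread = stdev(population_readings)
    z = abs(mean(user_readings) - baseline) / spread
    return z > z_threshold
```

With a population baseline of resting heart rates around 61 bpm, a user averaging 97 bpm would be flagged, while one averaging 61 bpm would not.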
Diagnosing and Predicting Patient Outcomes
AI can also be used to analyze patient medical data, behavioral data, voice recordings collected from telephone calls to intervention services, and numerous other data sources, using machine learning to flag warning signs of mental health problems before they progress to an acute stage.
One aggregated review of studies where AI was used to parse various data sources, carried out by IBM and the University of California, found that machine learning could predict and classify mental health problems, including suicidal thoughts, depression, and schizophrenia, with "high accuracy." Data sources used in the 28 studies that were reviewed included electronic health records, brain imaging data, data taken from smartphone and video monitoring systems, and social media data.
Additionally, researchers at Vanderbilt University Medical Center found that hospital admission data, demographic data, and clinical data could be parsed with machine learning to predict with 80% accuracy whether a person will attempt to take their own life.
Another project focused on using AI to predict mental health issues is underway at the Alan Turing Institute. Here, researchers are looking into ways of using large-scale datasets from individuals who have not shown symptoms of mental health issues to predict which of them are likely to develop symptoms during their lifetimes.
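In caricature, the task in these studies is supervised classification: fit a model on labeled historical records, then score new patients. The toy logistic-regression sketch below makes that loop explicit using invented features and synthetic data; it bears no resemblance to the scale, features, or rigor of the actual studies cited above.

```python
# Toy sketch of risk classification from structured clinical features.
# Features, data, and hyperparameters are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model with per-sample
    gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Return a probability-like risk score for one patient record."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

Trained on a handful of synthetic records where the first feature drives the label, the model assigns higher risk scores to new records with that feature present, which is the basic pattern-learning step underlying the far more complex real systems.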
AI has also been used to predict which patients are more likely to respond to cognitive behavioral therapy (CBT) and therefore be less likely to require medication. As antidepressant and antipsychotic medications can have side effects that are in themselves life-limiting, this has the potential to hugely improve outcomes for these patients. Research published in JAMA Psychiatry found that deep learning can be used to validate the effectiveness of CBT as a method of treatment, potentially reducing the need to prescribe medication.
Improving Patient Compliance
One of the biggest challenges of treating mental health conditions is making sure that patients comply with the treatments prescribed to them, including taking medication and attending therapy sessions.
AI can be used to predict when a patient is likely to slip into non-compliance and either issue reminders or alert their healthcare providers to enable manual interventions. This can be done via chatbots like those mentioned previously or via SMS, automated telephone calls, and emails. Algorithms can also identify patterns of behavior or occurrences in patients' lives that are likely to trigger non-compliance. This information can then be passed to healthcare workers who can work with the patient to develop methods of avoiding or countering these obstacles.
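A minimal version of that decision logic might look like the sketch below: do nothing, nudge the patient, or alert their provider, depending on how far compliance has slipped. The thresholds, function name, and action labels are assumptions invented for this example, not any real system's rules.

```python
# Hypothetical compliance monitor. Thresholds and action names are
# illustrative assumptions, not taken from any real product.

def compliance_action(days_since_last_dose, missed_sessions):
    """Decide whether to do nothing, nudge the patient, or alert
    their healthcare provider for a manual intervention."""
    if days_since_last_dose >= 3 or missed_sessions >= 2:
        return "alert_provider"   # pattern suggests treatment is lapsing
    if days_since_last_dose >= 1 or missed_sessions == 1:
        return "send_reminder"    # via SMS, email, or automated call
    return "no_action"
```

A real system would replace these hand-set thresholds with a learned model of each patient's behavior, but the tiered response (automated nudge first, human intervention when the pattern worsens) is the idea described above.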
Personalized Treatment Plans
One very exciting area of research involves leveraging AI to create personalized treatments for a number of mental health conditions. AI has been used to monitor symptoms and reactions to treatment to provide insights that can be used to adjust individual treatment plans. One research project carried out at the University of California, Davis, focused on creating personalized treatment plans for children suffering from schizophrenia based on computer vision analysis of brain images. An important element of the research is the focus on “explainable AI” – the algorithms need to be understandable by doctors who are not AI professionals.
Challenges around Using AI in Mental Health Treatment
As we’ve shown, it’s widely believed that AI has the potential to be highly beneficial when it comes to predicting mental health issues, creating personalized treatment plans, and ensuring compliance. However, it also brings specific challenges that require cooperation between AI researchers and healthcare workers to overcome.
Firstly, there’s the issue of AI bias: inaccuracies or imbalances in the datasets used to train algorithms can produce unreliable predictions or perpetuate social prejudice. For example, since mental health issues are more likely to go undiagnosed among ethnic groups with poorer access to healthcare, algorithms trained on data reflecting that gap may also be less accurate at diagnosing those issues. AI engineers and mental healthcare professionals need to work together to implement checks and balances that counteract these biases, or to eliminate biased data before it affects the output of the algorithms.
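One of the most basic checks engineers can run is comparing a model's accuracy across demographic groups, so that large gaps are surfaced before deployment. The sketch below uses synthetic records and a single accuracy metric for simplicity; real fairness audits use richer metrics such as per-group recall and calibration.

```python
# Sketch of a basic per-group accuracy audit. Group labels and
# records are synthetic; real audits use richer fairness metrics.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: accuracy} so large gaps can be flagged."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}
```

If one group's accuracy comes out markedly lower than another's, that is a signal to revisit the training data or the model before it is used on patients.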
We must also take into account the fact that diagnosing mental health issues often requires more subjective judgment on the part of clinicians than diagnosing physical conditions, and the same is true of machines asked to make those diagnoses. Decisions have to be based on patients' self-reported feelings and experiences rather than on medical test data, which potentially leads to more uncertainty around diagnosis and a need for careful monitoring and follow-up to ensure the best outcomes for patients.
A World Health Organization report into challenges around using AI in mental health treatment and research recently found that there are still “significant gaps” in our understanding of how AI is applied in mental healthcare, as well as flaws in how existing AI healthcare applications process data, and insufficient evaluation of the risks around bias, as discussed above.
Overall, there are promising signs that AI has the potential to make a positive impact in many areas of mental healthcare. At the same time, it’s clear that progress must be made with care: models and methodologies need to be thoroughly assessed for risk of bias before they are allowed to be used in situations where they could affect human lives. As our understanding of AI and our ability to implement it continue to improve, I believe we will gradually build a stronger case for more widespread use of these potentially groundbreaking technologies. I’m hopeful that this will eventually lead to improved outcomes for conditions that are currently very difficult to treat, and help to relieve the devastating impact that mental health problems often have on patients’ lives.