Mental health support workers improve empathetic responses with help from AI

Artificial intelligence is already making waves in medicine, where it is finding potential clinical applications and helping improve care for patients, and it is now also making its way into mental health.

HAILEY, an AI chat interface, is showing promise as a tool to help mental health peer support workers as they interact with individuals seeking support online.

A study of HAILEY’s performance is published in the journal Nature Machine Intelligence.

Mental health issues are rife in the population. According to a 2021 survey by the federal government’s Australian Institute of Health and Welfare, more than two in five Australians (44%) are estimated to have experienced a mental disorder in their lifetime. In the 12 months leading up to the survey, an estimated 21% of Australians – 4.2 million people – experienced a mental disorder.

Among these, anxiety disorders are the most prevalent, followed by affective disorders and substance use disorders.

In many cases, long-term help is difficult to obtain due to cost and the impacts of the disorder itself. Often, access to therapy and counselling is limited.


Peer-to-peer support in non-clinical settings can offer some care, and has been shown to be strongly linked with improvement in mental health symptoms. In particular, the accessibility of online mental health support services makes them an integral part of helping those with a mental health condition, with the potential to be life-changing or even life-saving. Psychological studies show that, in these settings, empathy is critical.

Tim Althoff, an assistant professor in computer science at the University of Washington, and his colleagues designed HAILEY to improve conversational empathy between support workers and support seekers.

The chat interface uses a previously developed language model specifically trained for empathic writing.

To test HAILEY, the team recruited 300 mental health supporters from the peer-to-peer platform TalkLife for a controlled trial. Participants were split into two groups, one of which received help from HAILEY. The supporters responded to real-world posts, filtered to avoid harm-related content.

For the HAILEY group, the interface suggested phrases to insert into a response, or replacements for phrases already written; the supporters could then choose to adopt or ignore each suggestion. For example, HAILEY suggested replacing the phrase “Don’t worry” with “It must be a real struggle.”
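The paper doesn’t publish HAILEY’s code, but the human-in-the-loop mechanism it describes can be sketched in a few lines of Python. Everything in this sketch is illustrative rather than taken from the study: `Suggestion`, `suggest_edits` and `collaborate` are hypothetical names, and `suggest_edits` is a stand-in for the trained empathic-rewriting language model, here returning the canned example quoted above.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    kind: str    # "replace" or "insert"
    target: str  # phrase to replace (unused for insertions)
    text: str    # suggested empathic wording

def suggest_edits(draft: str) -> list[Suggestion]:
    """Stand-in for the empathic-rewriting language model.
    A real system would query a trained model; this toy version
    only mirrors the single example described in the article."""
    suggestions = []
    if "Don't worry" in draft:
        suggestions.append(
            Suggestion("replace", "Don't worry", "It must be a real struggle")
        )
    return suggestions

def collaborate(draft: str, accept) -> str:
    """AI-in-the-loop step: the human supporter keeps final say.
    `accept` is a callback returning True/False per suggestion."""
    for s in suggest_edits(draft):
        if not accept(s):
            continue  # supporter rejects; the draft is untouched
        if s.kind == "replace":
            draft = draft.replace(s.target, s.text)
        else:  # "insert": append the suggested phrase
            draft = f"{draft} {s.text}"
    return draft

# Example: a supporter who accepts every suggestion.
print(collaborate("Don't worry, it will pass.", accept=lambda s: True))
# -> "It must be a real struggle, it will pass."
```

The design point this sketch tries to capture, echoed in the study’s own safety discussion, is that the model only proposes edits; the human supporter decides whether each one makes it into the final reply.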

The authors found that this collaborative approach between human supporters and AI led to a 19.6% increase in conversational empathy, as evaluated by a previously validated AI model.

The increase in conversational empathy was much higher, at 38.9%, among peer supporters who, the authors write, “self-identify as experiencing difficulty providing support.”

Their analysis shows that “peer supporters are able to use the AI feedback both directly and indirectly without becoming overly reliant on AI, while reporting improved self-efficacy post-feedback. Our findings demonstrate the potential of feedback-driven, AI-in-the-loop writing systems to empower humans in open-ended, social and high-stakes tasks such as empathic conversations.”


However, the authors note that further research is needed to ensure the safety of such AI tools “in high-stakes settings such as mental health care” due to considerations around “safety, privacy and bias.”

“There is a risk that, in attempting to help, AI may have the opposite effect on the potentially vulnerable support seeker or peer supporter. The present study included several measures to reduce risks and unintended consequences. First, our collaborative, AI-in-the-loop writing approach ensured that the primary conversation remains between two humans, with AI offering feedback only when it appears useful, and allowing the human supporter to accept or reject it. Providing such human agency is safer than relying solely on AI.”
