Evaluating of BERT-based and Large Language Models for Suicide Detection, Prevention, and Risk Assessment: A Systematic Review
2024

Using AI to Detect and Prevent Suicide

Sample size: 29 publications | Evidence: moderate

Author Information

Author(s): Inbar Levkovich, Mahmud Omar

Primary Institution: Tel-Hai Academic College

Hypothesis

Can large language models improve the detection, prevention, and risk assessment of suicide?

Conclusion

Large language models show significant potential for detecting suicidal ideation and behaviors, in some studies outperforming mental health professionals.

Supporting Evidence

  • Most studies found that large language models are highly effective at detecting suicidal ideation.
  • LLMs often outperformed mental health professionals in early detection and risk prediction.
  • Ethical concerns regarding the use of AI in mental health need to be addressed.

Takeaway

This review examines how AI language models can help identify people at risk of suicide and support efforts to prevent it.

Methodology

The review systematically searched seven databases for studies published from January 1, 2018, to April 2024, focusing on the use of large language models in suicide prevention.

Potential Biases

Some studies exhibited cultural and gender biases, affecting the applicability of the results.

Limitations

Many studies relied on synthetic data and had small sample sizes, which may limit the generalizability of the findings.

Participant Demographics

The studies included various demographics, primarily focusing on social media users and clinical populations.

Statistical Information

P-Value

p < 0.001 (statistically significant)

Digital Object Identifier (DOI)

10.1007/s10916-024-02134-3
