AI Explainability in Long-Term Care: A Sociological Inquiry
2024

Understanding AI Explainability in Long-Term Care

Sample size: 30 interviews

Author Information

Author(s): Vera Gallistl

Primary Institution: Karl Landsteiner University of Health Sciences, Krems, Niederösterreich, Austria

Research Question

How do technology developers, care staff, and older adults conceptualize AI explainability in long-term care settings?

Conclusion

The study found that explainability needs in long-term care extend beyond technical aspects, centering instead on trust and transparency.

Supporting Evidence

  • The study highlights diverse understandings of explainability among stakeholders in long-term care.
  • It emphasizes the importance of trust and transparency in AI applications for older adults.

Takeaway

This study examines how technology developers, care staff, and older adults each understand AI explainability in long-term care, and why trust and transparency matter to all of them.

Methodology

The study drew on 30 qualitative interviews and 50 hours of participant observation.

Participant Demographics

Participants included technology developers, care staff, and older adults.

Digital Object Identifier (DOI)

10.1093/geroni/igae098.1115
