A Cross-Sectional Study Comparing Patient Information Guides Generated by ChatGPT and Google Gemini for Common Radiological Procedures
2024

Comparing AI-Generated Patient Information Guides for Radiological Procedures

Publication Evidence: Moderate

Author Information

Author(s): Muacevic Alexander, Adler John R, Phillips Vidith, Rao Nidhi L, Sanghvi Yashasvi H, Nizam Maryam

Primary Institution: Johns Hopkins University, School of Medicine

Research Question

How reliable and understandable are the patient education materials generated by ChatGPT and Google Gemini?

Conclusion

Both ChatGPT and Google Gemini can generate consistent patient education materials, but ChatGPT's output showed significantly higher word counts and reading grade levels, differences that need to be addressed before such guides reach patients.

Supporting Evidence

  • ChatGPT produced material with a significantly higher word count and reading grade level than Google Gemini.
  • Both AI tools generated content with similar readability and reliability scores.
  • The study highlights the need for AI-generated content to be tailored to different literacy levels.

Takeaway

This study shows that AI can help create easy-to-read guides for patients about medical procedures, but some AI tools make them harder to understand than others.

Methodology

A cross-sectional study evaluated the quality of patient information brochures produced by ChatGPT and Google Gemini using various readability metrics.
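The specific readability metrics are not listed here, so as an illustrative assumption, the kind of assessment described above can be sketched with a Flesch-Kincaid grade-level calculation; the syllable-counting heuristic below is a rough approximation, not the method used in the study.

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count runs of consecutive vowels, dropping a silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Hypothetical patient-guide sentence, for illustration only.
sample = ("An MRI scan uses a strong magnet to take pictures of the "
          "inside of your body. It does not hurt.")
print(round(flesch_kincaid_grade(sample), 1))
```

A lower grade level indicates text accessible to readers with less formal education, which is the dimension on which the two chatbots differed in this study.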

Potential Biases

Biases inherent in the AI models, combined with reliance on automated readability assessments rather than direct patient feedback, mean the results may not accurately reflect how patients actually receive the material.

Limitations

The study was limited to a one-week period and only included two AI tools, which may not represent the full range of AI capabilities.

Statistical Information

P-Value

p=0.0409 for word count, p=0.0482 for grade level

Statistical Significance

Significance threshold: p<0.05

Digital Object Identifier (DOI)

10.7759/cureus.74876
