Me-LLaMA: Medical Foundation Large Language Models for Comprehensive Text Analysis and Beyond
2024

Author Information

Author(s): Qianqian Xie, Qingyu Chen, Aokun Chen, Cheng Peng, Yan Hu, Fongci Lin, Xueqing Peng, Jimin Huang, Jeffrey Zhang, Vipina Keloth, Xinyu Zhou, Lingfei Qian, Huan He, Dennis Shung, Lucila Ohno-Machado, Yonghui Wu, Hua Xu, Jiang Bian

Hypothesis

Can integrating domain-specific knowledge with instruction-following capabilities improve the performance of large language models in medical applications?

Conclusion

The Me-LLaMA models significantly enhance performance in medical text analysis tasks compared to existing models.

Supporting Evidence

  • Me-LLaMA models were trained on one of the largest medical datasets to date, comprising 129B pretraining tokens.
  • The models outperformed existing open-source medical LLMs in various text analysis tasks.
  • Me-LLaMA surpassed ChatGPT on 7 out of 8 datasets and GPT-4 on 5 out of 8 datasets.
  • The study emphasizes the importance of combining domain-specific continual pretraining with instruction tuning (an illustrative instruction-data format follows this list).
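
To make the instruction-tuning half of that recipe concrete, here is a minimal sketch of how such training samples are commonly structured and flattened into a single sequence for a causal language model. The field names and prompt template are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical instruction-tuning sample; the field names and the prompt
# template below are illustrative assumptions, not the paper's actual schema.
sample = {
    "instruction": "Summarize the key finding of the following clinical note.",
    "input": "Patient presents with chest pain radiating to the left arm...",
    "output": "Findings are consistent with possible acute coronary syndrome.",
}

def format_sample(s: dict) -> str:
    """Flatten an (instruction, input, output) triple into one training string."""
    # During training, the loss is typically masked on everything before the
    # response, so the model only learns to generate the answer.
    return (
        f"### Instruction:\n{s['instruction']}\n\n"
        f"### Input:\n{s['input']}\n\n"
        f"### Response:\n{s['output']}"
    )

print(format_sample(sample))
```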

Takeaway

This study created medical foundation language models that understand medical text more accurately, helping clinicians and researchers analyze biomedical literature and clinical notes more effectively.

Methodology

Developed the Me-LLaMA models through continual pretraining and instruction tuning of LLaMA 2, using extensive biomedical literature and clinical notes.
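
For a concrete picture of what continual pretraining involves, the sketch below keeps optimizing LLaMA 2's next-token objective on a medical text corpus using the Hugging Face Trainer. The corpus file, model size, and hyperparameters are illustrative placeholders, not the paper's actual training configuration.

```python
# Illustrative sketch of continual pretraining; paths, model size, and
# hyperparameters are placeholders, not the paper's actual configuration.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

BASE_MODEL = "meta-llama/Llama-2-13b-hf"  # Me-LLaMA starts from LLaMA 2 weights

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical medical corpus: one free-text document per line.
corpus = load_dataset("text", data_files={"train": "medical_corpus.txt"})["train"]

def tokenize(batch):
    # Truncate documents into fixed-length chunks for causal language modeling.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="me-llama-cpt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,  # small LR: adapt to the domain without erasing general knowledge
        num_train_epochs=1,
        bf16=True,  # assumes recent GPU hardware; drop if unsupported
    ),
    train_dataset=tokenized,
    # mlm=False selects the next-token (causal) objective, as in LLaMA pretraining.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Instruction tuning reuses the same training loop, but on sequences built from (instruction, response) pairs like the one sketched under Supporting Evidence above.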

Digital Object Identifier (DOI)

10.21203/rs.3.rs-5456223
