Assessing Agreement between Multiple Raters in Breast Cancer Tumour Grading
Author Information
Author(s): Thomas R. Fanshawe, Andrew G. Lynch, Ian O. Ellis, Andrew R. Green, Rudolf Hanka
Hypothesis
Can inter-rater agreement among pathologists grading breast cancer tumours be assessed accurately despite missing rating data?
Conclusion
Pathologists vary substantially in how they grade breast cancer tumours, which creates uncertainty about any tumour's 'true' grade.
Supporting Evidence
- The study analyzed 24,177 grades provided by 732 pathologists.
- Raters differed systematically in their grading behaviour.
- A Bayesian latent trait model helps correct the biases that missing ratings introduce into raw agreement scores.
Takeaway
Different pathologists often assign different grades to the same breast cancer tumour, so a single reported grade should not be treated as definitive.
Methodology
The study analyzed a large dataset of breast cancer grades from multiple pathologists using Bayesian latent trait models and agreement scores, while accounting for missing ratings.
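As a simple illustration of the kind of agreement score involved, the sketch below computes raw pairwise agreement between raters when some ratings are missing, counting only samples that both raters in a pair graded. This is a hypothetical minimal example, not the authors' Bayesian latent trait model; the function name and the example ratings are invented for illustration.

```python
# Hypothetical sketch: raw pairwise agreement with missing ratings.
# Not the paper's Bayesian latent trait model.
from itertools import combinations

def pairwise_agreement(ratings):
    """ratings: one list of grades per rater; None marks a missing grade.
    Returns the proportion of matching grades over all rater pairs,
    counted only on samples that both raters graded."""
    matches = total = 0
    for a, b in combinations(ratings, 2):
        for ga, gb in zip(a, b):
            if ga is not None and gb is not None:
                total += 1
                matches += (ga == gb)
    return matches / total if total else float("nan")

# Three hypothetical raters grading five tumour samples (grades 1-3).
raters = [
    [1, 2, 3, None, 2],
    [1, 3, 3, 2, None],
    [None, 2, 3, 2, 2],
]
print(round(pairwise_agreement(raters), 3))  # → 0.778
```

A raw score like this is biased when ratings are missing non-randomly (e.g. harder cases graded by fewer pathologists), which is the kind of bias the paper's Bayesian model is designed to address.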
Potential Biases
Agreement estimates may be biased by systematic differences in grading behaviour among pathologists.
Limitations
The findings may not generalise to smaller datasets or to studies with fewer raters.
Participant Demographics
732 pathologists rated 52 breast cancer tumour samples.
Digital Object Identifier (DOI)