Beyond reliability: assessing rater competence when using a behavioural marker system
2024


Sample size: 22 | Evidence: moderate

Author Information

Author(s): Samantha Eve Smith, Scott McColgan-Smith, Fiona Stewart, Julie Mardon, Victoria Ruth Tallentire

Primary Institution: University of Dundee

Hypothesis

This study aimed to test the inter-rater reliability of a new behavioural marker system (PhaBS) with clinically experienced faculty raters and near-peer raters.

Conclusion

Experienced faculty raters showed acceptable inter-rater reliability when using PhaBS, but not all raters demonstrated competence.

Supporting Evidence

  • Inter-rater reliability for experienced faculty raters was acceptable at 0.60.
  • Inter-rater reliability for near-peer raters was poor at 0.38.
  • All 9 experienced faculty raters completed every assessment, compared with 6 of the 13 near-peer raters.

Takeaway

This study examined how well different raters could use a new behavioural marker system to assess pharmacists' skills, finding that experienced faculty rated more reliably than near-peer raters.

Methodology

Raters attended a 30-minute familiarisation session followed by a marking session in which they rated a trainee pharmacist's skills across three scripted scenarios.

Potential Biases

Familiarisation sessions were held separately for the experienced faculty and near-peer groups, which may have introduced differences in how the two groups were prepared.

Limitations

The study was insufficiently powered to detect differences in some competence attributes between the two groups.

Participant Demographics

Nine experienced faculty raters with an average of 18.4 years of clinical experience, and thirteen near-peer raters with an average of 2 years of experience.

Statistical Information

P-Value

0.0077 (statistically significant)

Confidence Interval

0.48–0.72

Digital Object Identifier (DOI)

10.1186/s41077-024-00329-9
