Expert Status and Performance
2011

Sample size: 123 participants
Evidence: moderate

Author Information

Author(s): Mark A. Burgman, Marissa McBride, Raquel Ashton, Andrew Speirs-Bridge, Louisa Flander, Bonnie Wintle, Fiona Fidler, Libby Rumpff, Charles Twardy

Primary Institution: Australian Centre of Excellence for Risk Analysis, School of Botany, University of Melbourne

Hypothesis

The social expectation hypothesis predicts strong positive correlations between self-assessments and peer assessments of expert performance.

Conclusion

Expert advice will be more accurate if technical decisions routinely draw on broadly defined expert groups, structured question protocols, and feedback.

Supporting Evidence

  • Experts generally believe that more experienced and better-credentialed individuals will perform better.
  • Peer assessments of expert status correlate with qualifications, track record, and experience.
  • The average performance of experts improves significantly when they discuss their judgments with one another.

Takeaway

Experts think they know a lot, but just because someone is an expert doesn't mean they always give the best answers. Talking with other experts can help everyone do better.

Methodology

The study involved structured elicitation exercises with experts from various life science fields to compare self-assessments and peer assessments of performance.

Potential Biases

Experts may be influenced by psychological biases, subjective values, and conflicts of interest.

Limitations

The study's findings may not apply to all expert judgments, especially in contexts with weak feedback or when experts face new situations.

Participant Demographics

Participants included 123 experts from life science fields such as medicine, epidemiology, veterinary science, ecology, and conservation biology.

Digital Object Identifier (DOI)

10.1371/journal.pone.0022998
