The American College of Radiology's RADPEER scoring system:
- Score 1 = concur with interpretation
- Score 2 = discrepancy in interpretation/not ordinarily expected to be made (understandable miss)
- Score 3 = discrepancy in interpretation/should be made most of the time
- Score 4 = discrepancy in interpretation/should be made almost every time - misinterpretation of finding
How Can These Scores Be Utilized?
- For individual radiologists: maintenance of certification, ongoing quality improvement in diagnostic accuracy, opportunity for education
- For institutions: monitoring radiologist performance individually and as a group, tracking data over time, monitoring trends, complying with the requirements of various regulatory agencies
Ideal Peer Review
- Reveals opportunity for quality improvement
- Ensures radiologist competence
- Improves individual radiologist outcomes
- Should be unbiased, fair, balanced, timely, ongoing, and nonpunitive
- Should allow easy participation
- Should have minimal effect on work flow
The most popular system currently in use is the American College of Radiology's eRADPEER.
References:
1. Mahgerefteh S, Kruskal JB, Yam CS, et al. Peer review in diagnostic radiology: current state and a vision for the future. Radiographics 2009;29:1221-1231.
2. Jackson VP, Cushing T, Abujudeh HH, et al. RADPEER scoring white paper. J Am Coll Radiol 2009;6:21-25.