Putting the kappa statistic to use

TR Nichols, PM Wisner, G Cripe, L Gulabchand. The Quality Assurance Journal, 2010. Wiley Online Library.
Abstract
Inter‐rater assessments of agreement are an essential criterion in the subjective evaluation of product quality. When assessments among raters show evidence of a lack of agreement (partial or total), the source of the disagreement must be identified. The objective is to reduce or mitigate the influence individual raters have on the assessment and to achieve consistency among raters. The less influence raters have on the assessment, the more confidence one can place in critical-to-quality decisions. However, situations do exist in which user perceptions are unreliable (not repeatable) and correlate poorly with engineered specifications, and quality management teams must be aware of this. When such situations exist, it is advisable to revisit the voice of the process as a reliable function of specification. Copyright © 2011 John Wiley & Sons, Ltd.
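The agreement statistic referenced in the title is Cohen's kappa, which compares observed agreement between two raters against the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). The sketch below is a minimal Python illustration of that calculation on hypothetical pass/fail inspection calls; it is not the authors' own procedure, and the inspector data are invented for the example.

```python
# Minimal sketch of Cohen's kappa for two raters on nominal categories.
# Illustrative only; the ratings below are hypothetical.
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be non-empty and equal in length")
    n = len(rater_a)

    # Observed agreement: proportion of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)

    if p_e == 1.0:
        return 1.0  # both raters use a single identical category
    return (p_o - p_e) / (1 - p_e)


if __name__ == "__main__":
    # Hypothetical pass/fail calls by two inspectors on ten units.
    a = ["pass", "pass", "fail", "pass", "fail",
         "pass", "pass", "fail", "pass", "pass"]
    b = ["pass", "fail", "fail", "pass", "fail",
         "pass", "pass", "pass", "pass", "pass"]
    print(f"kappa = {cohens_kappa(a, b):.3f}")  # ~0.474: moderate agreement
```

A kappa near 1 indicates agreement well beyond chance, values near 0 indicate chance-level agreement, and low or negative values are the kind of signal that, per the abstract, should prompt a search for the source of rater disagreement.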