
Assembling and Reporting Results

One issue the coding system must address is how to deal with ties and disagreement among coders. One option is to have any disputed codings re-coded by additional coders; however, this may significantly increase the number of coder-hours required for a project. Alternatively, ties can be broken at random. Before implementing either fix, it is essential to report intercoder reliability statistics clearly. The kappa statistic described in Kwon, Shulman & Hovy (2006) is helpful, but may not provide sufficient detail. Craggs and Wood (2005) offer an in-depth review of methods for reporting intercoder reliability; they suggest that "only chance-corrected measures that assume a common distribution of labels for all coders are suitable for measuring agreement in reliability studies" (Craggs & Wood, 2005). In our work, we find that the full intercoder reliability matrix, without summary, is useful, since it provides specific feedback about where the coding scheme can be improved.
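
As an illustration (not drawn from the original study), the full pairwise reliability matrix might be assembled along the following lines. The sketch uses Scott's pi, a chance-corrected agreement measure that pools the coders' label distributions, in the spirit of the Craggs and Wood recommendation; it is not necessarily the statistic used in the work described above, and the coder names and labels are hypothetical.

    # Minimal sketch: pairwise chance-corrected agreement (Scott's pi) between
    # every pair of coders, reported as a full matrix rather than a single summary.
    from itertools import combinations
    from collections import Counter

    def scotts_pi(labels_a, labels_b):
        """Chance-corrected agreement between two coders on the same items,
        using a pooled label distribution to estimate chance agreement."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        pooled = Counter(labels_a) + Counter(labels_b)
        expected = sum((count / (2 * n)) ** 2 for count in pooled.values())
        return (observed - expected) / (1 - expected)

    def reliability_matrix(codings):
        """codings: dict mapping coder name -> list of labels (one per document).
        Returns agreement for every pair of coders."""
        return {(a, b): scotts_pi(codings[a], codings[b])
                for a, b in combinations(sorted(codings), 2)}

    if __name__ == "__main__":
        # Hypothetical codings of six documents by three coders.
        codings = {
            "coder1": ["pro", "con", "con", "pro", "neutral", "pro"],
            "coder2": ["pro", "con", "pro", "pro", "neutral", "con"],
            "coder3": ["pro", "pro", "con", "pro", "neutral", "pro"],
        }
        for (a, b), pi in reliability_matrix(codings).items():
            print(f"{a} vs {b}: pi = {pi:.2f}")

Inspecting the matrix pair by pair, rather than a single averaged statistic, makes it easier to see which coders (or which parts of the coding scheme) are the source of disagreement.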


