Tutorial 3 – inter-coder reliability testing

A short demonstration of how to apply inter-coder reliability testing in NVivo 8

 

Summary

The video demonstrates how to test inter-coder reliability in NVivo 9 for a research project, using a project in Tanzania as the worked example. It explains how the coders coded the same data set and how their coding was compared to ensure consistency and reliability.

Highlights

  • [⚙️] Tutorial on applying inter-rater reliability testing in NVivo 9 for research projects.
  • [📊] Example of a project in Tanzania where coders worked on the same data set to test coding consistency.
  • [👥] Process involved giving each coder a master file to code and merging the projects for reliability testing.

Key Moments

    • Introduction (00:02)
      • The video aims to demonstrate how to apply inter-rater (inter-coder) reliability testing within a coding team in a research project using NVivo 9.
      • The project showcased is an EU-funded research project on motivation in three different countries; the example shown is the Tanzania component, coded by a team at the University of Heidelberg.
      • Each country's data was coded separately to test the data collection instruments, and inter-coder reliability testing was then used to ensure consistency across coders.
    • Coding Process (01:27)
      • Coders were assigned to code the same data set so that the team would develop a broader, shared understanding of the data being produced.
      • The coding involved creating subcategories and cross-coding to ensure comprehensive coverage of the data.
      • Each coder was given a copy of a master file to code; the individual projects were then merged back into a single file for analysis and checked for agreement among the coders.
    • Testing Reliability (02:55)
      • After the files were merged, the reliability of the coding was tested node by node by comparing the coding done by the different coders.
      • Node labels and their rules for inclusion were examined to ensure they had been applied consistently.
      • Agreement levels between coders were checked for each coded segment of text to assess the reliability of the data (the arithmetic behind these figures is sketched after this list).
    • Kappa Coefficient Analysis (05:01)
      • A coding comparison query in NVivo was used to compare the levels of agreement between two groups of coders.
      • The Kappa coefficient, a statistical measure of agreement calculated from the coded text (see the formula sketched after this list), was used to assess the reliability of the coding.
      • The analysis showed high levels of agreement between the two groups of coders, exceeding the benchmark of 70% agreement set for the project.
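
For reference, Cohen's Kappa adjusts the observed agreement for the agreement that would be expected by chance alone. This is the standard textbook definition rather than anything shown on screen in the video:

    \kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement between the coders and p_e is the proportion of agreement expected by chance, given how much of the material each coder coded overall. A Kappa of 1 indicates perfect agreement and a Kappa of 0 indicates agreement no better than chance.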
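
NVivo's coding comparison query reports the percentage agreement and the Kappa coefficient automatically, so nothing below needs to be run to follow the tutorial. The short Python sketch is only there to illustrate the arithmetic at the character level, using invented character ranges for two hypothetical coders marking one node in one source:

    # Illustrative sketch only: NVivo computes these figures internally.
    # The coders, character ranges and source length are hypothetical.

    def coded_mask(ranges, length):
        """Per-character flags: True where this coder applied the node."""
        mask = [False] * length
        for start, end in ranges:              # end index is exclusive
            for i in range(start, end):
                mask[i] = True
        return mask

    def agreement_and_kappa(ranges_a, ranges_b, length):
        a = coded_mask(ranges_a, length)
        b = coded_mask(ranges_b, length)

        # Observed agreement: share of characters both coders treated alike
        p_o = sum(x == y for x, y in zip(a, b)) / length

        # Chance agreement for Cohen's Kappa: both code, or both leave uncoded
        p_a, p_b = sum(a) / length, sum(b) / length
        p_e = p_a * p_b + (1 - p_a) * (1 - p_b)

        kappa = 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
        return p_o, kappa

    # Two hypothetical coders marking one node in a 1000-character source
    coder_a = [(0, 200), (500, 700)]
    coder_b = [(0, 180), (520, 700), (900, 950)]
    p_o, kappa = agreement_and_kappa(coder_a, coder_b, 1000)
    print(f"Agreement: {p_o:.1%}  Kappa: {kappa:.2f}")   # Agreement: 91.0%  Kappa: 0.81

In this made-up example the 91% agreement would clear the project's 70% benchmark, and a Kappa above 0.8 is commonly treated as strong agreement.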
