Reply To: Interrater Reliability Testing for Team Projects


Thank you for your reply! Right now the plan in our lab is to have three different coders code the same media/transcripts into the nodes. Then we'd like to compare the three versions of the same material and calculate a percentage agreement to see whether reliability is low or high.
The problem is that this is my first time using NVivo 8, and I have no idea how, in our case, I should compare, calculate, and analyze the reliability of our coding.
I searched online and found out about Merge for NVivo, but I have also seen people discussing certain flaws in Merge that have led some researchers back to the most basic comparison: printing out the codes, comparing them all one by one, and calculating reliability by hand that way.
I was hoping to find a way to calculate our ICR without having to go through it the traditional way. Is Merge actually designed to calculate ICR? Or are there other approaches you could guide me through to make this ICR calculation possible?
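To clarify what I mean by the manual calculation: a rough sketch of it (in Python, with made-up example data, purely to illustrate the idea of pairwise percentage agreement across three coders) might look like this:

```python
from itertools import combinations

# Hypothetical coding decisions: for each text segment, whether each coder
# applied a given node (1 = coded, 0 = not coded). The data is made up.
coders = {
    "A": [1, 0, 1, 1, 0, 1],
    "B": [1, 0, 0, 1, 0, 1],
    "C": [1, 1, 1, 1, 0, 1],
}

def pct_agreement(x, y):
    """Percentage of segments on which two coders made the same decision."""
    matches = sum(1 for a, b in zip(x, y) if a == b)
    return 100.0 * matches / len(x)

# Agreement for each pair of coders, then the average across all pairs.
pairwise = {
    (p, q): pct_agreement(coders[p], coders[q])
    for p, q in combinations(coders, 2)
}
average = sum(pairwise.values()) / len(pairwise)

for pair, pct in pairwise.items():
    print(pair, round(pct, 1))
print("average", round(average, 1))
```

This is essentially the "print it out and compare one by one" approach automated; what I'd like to know is whether Merge (or some other NVivo feature) can do this comparison for us.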
I hope my question and concern make sense; thank you very much for your help.