I just completed reviewing all of my coding data, which includes nearly 10,000 codes across over 2,500 documents! My research assistant and I were very careful in our original coding of these texts and conducted intensive inter-rater reliability assessments throughout the coding process, but I wanted to double-check nevertheless. We had both coded the texts of approximately 10% of the 245 cases, and we reviewed, compared, and corrected our results for those cases as we completed them. I went back over those codes and identified the ones with the greatest discordance between us. I then reviewed all of the texts tagged with those codes across all 245 cases to ensure they had been coded consistently.
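For readers curious how "greatest discordance" can be quantified, a common approach is to compute an agreement statistic such as Cohen's kappa for each code and rank codes from lowest to highest. The sketch below is purely illustrative, assuming two coders' binary decisions per text segment; the code names and data are hypothetical, and MaxQDA provides its own built-in intercoder agreement tools.

```python
# Illustrative sketch: ranking codes by inter-rater discordance with
# Cohen's kappa. Data and code names are hypothetical examples.
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two aligned lists of coding decisions."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: fraction of segments both coders decided alike.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each coder's marginal frequencies.
    pa, pb = Counter(a), Counter(b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(a) | set(b))
    if expected == 1:
        return 1.0  # degenerate case: both coders were constant
    return (observed - expected) / (1 - expected)

# Each entry: (coder 1's decisions, coder 2's decisions) per segment,
# 1 = code applied, 0 = not applied.
codings = {
    "barrier_cost":   ([1, 1, 0, 1, 0, 1, 1, 0], [1, 0, 0, 1, 1, 1, 0, 0]),
    "family_support": ([1, 0, 1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 1, 1, 0, 0]),
}

# Most discordant codes first (lowest kappa), flagging them for review.
ranked = sorted(codings, key=lambda c: cohens_kappa(*codings[c]))
for code in ranked:
    print(f"{code}: kappa = {cohens_kappa(*codings[code]):.2f}")
```

In this toy data, `barrier_cost` surfaces first (kappa 0.25) while `family_support` shows perfect agreement, so review effort would concentrate on the former.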
I did the same through a large number of random spot-checks of all the other codes, comparing them against similar codes for potential mistakes and overlap. I also reviewed all of the comments and memos we had generated about these codes in our coding software, MaxQDA, during the coding process, and used them to further standardize the identified codes. This rigorous review, which took the better part of a month, gives me increased confidence in the reliability and validity of my data and of all the analyses that will draw on them.