Reliability in Coding Social Interaction: A study of Confirmation
Document Type
Article
Publication Date
1990
Digital Object Identifier (DOI)
https://doi.org/10.1080/08934219009367505
Abstract
This study sought (a) to determine, under relatively ideal conditions, whether confirmation and disconfirmation can be coded reliably by trained observers, and (b) to provide a tutorial on coding reliability using data from the study of reliability in confirmation. After discussing five problems in determining and reporting the reliability of systems for coding social interaction, including confirmation and disconfirmation, the findings from the analysis of ten videotaped interviews are reported. The tapes were scored by two experienced and trained coders using the Confirmation Disconfirmation Rating Instrument. Unitizing reliability was good, and global reliability was excellent. Category-by-category reliability, both inter-rater and test-retest, was acceptable for the two major categories of confirmation and disconfirmation but unacceptable for the six subcategories. The implications section draws particular attention to the necessity, in all studies involving the coding of social interaction, of (a) using appropriate statistics to calculate reliability and (b) determining category-by-category reliability coefficients. Implications for future studies of confirmation are also discussed.
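The abstract does not name the specific reliability indices the authors used. As a purely illustrative sketch, the snippet below computes Cohen's kappa, a standard chance-corrected index of inter-rater agreement often used for category-by-category coding reliability; both the choice of kappa and the example codings are assumptions for illustration, not data from the article.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    # Observed agreement: proportion of units both coders placed in the same category.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal category proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codings of ten interaction units into the two major categories.
coder_1 = ["confirm", "confirm", "disconfirm", "confirm", "disconfirm",
           "confirm", "confirm", "disconfirm", "confirm", "confirm"]
coder_2 = ["confirm", "confirm", "disconfirm", "disconfirm", "disconfirm",
           "confirm", "confirm", "disconfirm", "confirm", "confirm"]

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.76 for this toy data
```

Percent agreement alone overstates reliability when one category dominates; a chance-corrected coefficient of this kind is what the abstract's call for "appropriate statistics" is generally understood to mean.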
Was this content written or created while at USF?
Yes
Citation / Publisher Attribution
Communication Reports, v. 3, issue 2, p. 58-69
Scholar Commons Citation
Cissna, Kenneth N.; Garvin, Bonnie J.; and Kennedy, Carol W., "Reliability in Coding Social Interaction: A study of Confirmation" (1990). Communication Faculty Publications. 556.
https://digitalcommons.usf.edu/spe_facpub/556