Discourse analysis procedures: Reliability issues

Karen Hux, Dixie Sanger, Robert Reid, Amy Maschka

Research output: Contribution to journal › Article › peer-review


Abstract

Performing discourse analyses to supplement assessment procedures and facilitate intervention planning is only valuable if the observations are reliable. The purpose of the present study was to evaluate and compare four methods of assessing reliability on one discourse analysis procedure: a modified version of Damico's Clinical Discourse Analysis (1985a, 1985b, 1992). The selected methods were: (a) Pearson product-moment correlations, (b) interobserver agreement, (c) Cohen's kappa, and (d) generalizability coefficients. Results showed high correlation coefficients and high percentages of interobserver agreement when error type was not taken into account. However, interobserver agreement percentages obtained solely for target behavior occurrences and Cohen's kappa revealed that much of the agreement between raters was due to chance and the high frequency of target behavior non-occurrence. Generalizability coefficients revealed that the procedure was fair to good for discriminating among persons with differing levels of language competency for some aspects of communication performance but was less than desirable for others; the aggregate score was below recommended standards for differentiating among people for diagnostic purposes.
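The contrast drawn in the abstract (high raw interobserver agreement, yet low chance-corrected agreement when target behaviors are rare) can be made concrete with a small sketch. The example below uses entirely hypothetical rating counts, not data from the study, and simply shows how raw percent agreement, occurrence-only agreement, and Cohen's kappa can diverge when non-occurrence dominates the codes.

```python
# Hypothetical illustration (not data from the study): two raters code 100
# utterances for occurrence (1) or non-occurrence (0) of a target behavior.
# Because non-occurrence dominates, raw percent agreement looks high even
# though the raters rarely agree on actual occurrences.

from collections import Counter

rater_a = [1, 1] + [1] * 6 + [0] * 6 + [0] * 86   # 8 occurrences coded
rater_b = [1, 1] + [0] * 6 + [1] * 6 + [0] * 86   # 8 occurrences, only 2 shared

n = len(rater_a)

# Raw interobserver agreement (proportion of utterances coded identically)
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance-expected agreement from each rater's marginal proportions
count_a, count_b = Counter(rater_a), Counter(rater_b)
p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in (0, 1))

# Cohen's kappa: agreement corrected for chance
kappa = (p_o - p_e) / (1 - p_e)

# Agreement restricted to utterances where either rater coded an occurrence
occ = [(a, b) for a, b in zip(rater_a, rater_b) if a == 1 or b == 1]
p_occ = sum(a == b for a, b in occ) / len(occ)

print(f"Raw agreement:             {p_o:.2f}")    # 0.88 -> looks strong
print(f"Occurrence-only agreement: {p_occ:.2f}")  # ~0.14 -> poor
print(f"Cohen's kappa:             {kappa:.2f}")  # ~0.18 -> largely chance
```

With these made-up counts, 88% overall agreement collapses to roughly 0.18 once chance agreement on the abundant non-occurrences is removed, which is the kind of discrepancy the abstract describes.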

Original language: English (US)
Pages (from-to): 133-150
Number of pages: 18
Journal: Journal of Communication Disorders
Volume: 30
Issue number: 2
DOIs
State: Published - 1997

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Linguistics and Language
  • Cognitive Neuroscience
  • LPN and LVN
  • Speech and Hearing
