Abstract
Expert reviews are frequently used as a questionnaire evaluation method but have received little empirical attention. Questions from two surveys were evaluated by six expert reviewers using a standardized evaluation form. Each of the questions has validation data available from records. Large inconsistencies in ratings across the six experts are found. Despite this lack of reliability, the average expert ratings successfully identify questions that had higher item nonresponse rates and higher levels of inaccurate reporting. This article provides empirical evidence that experts are able to discern questions that manifest data quality problems, even if individual experts vary in what they rate as problematic. Compared to a publicly available computerized question evaluation tool, ratings by the human experts consistently and positively predict questions with data quality problems, whereas the computerized tool is less consistent in identifying these questions. These results indicate that expert reviews have value in identifying question problems that result in lower survey data quality.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 295-318 |
| Number of pages | 24 |
| Journal | Field Methods |
| Volume | 22 |
| Issue number | 4 |
| DOIs | |
| State | Published - Nov 2010 |
Keywords
- expert reviewers
- measurement error
- pretesting
- questionnaire design
ASJC Scopus subject areas
- Anthropology