
Questionnaire Development and Testing




Presentations


Results of a Mixed-Methods Approach to Understanding the Performance of Self-Rated Health Questions in the United States
Kristen Cibelli, US National Center for Health Statistics


Abstract: Self-rated health (SRH) is a useful measure of respondents’ subjective evaluation of their health status and has been shown to be an important predictor of mortality and morbidity. It is frequently included in major surveys in the United States and around the world and is often used to compare groups defined by key respondent characteristics (e.g., race, ethnicity, education) within and across countries. However, differences in reference levels of health and in how response scales are interpreted and used across key demographic characteristics (e.g., age, education), cultures, and countries are known challenges. This presentation describes a new mixed-methods approach to understanding the performance of SRH questions. We draw on data collected in the third round of the Research and Development Survey (RANDS), a recruited, probability-based web panel used by the United States National Center for Health Statistics (NCHS) to evaluate survey questions, sources of measurement error, and comparability across important respondent subgroups and question versions. The study includes a split-ballot experiment in which respondents were randomly assigned either to an unbalanced response scale for SRH (“Would you say your health in general is excellent, very good, good, fair, or poor?”), as the question typically appears in major US surveys, or to a balanced scale (“Would you say your health in general is very good, good, fair, bad, or very bad?”), which is more commonly used in Europe. The instrument also included a series of closed-ended probes developed on the basis of previous qualitative research. We examine the constructs captured by SRH based on responses to the closed-ended probes and possible differences by education, race and ethnicity, and type of response scale. We also explore the possible impact of these differences on the validity of SRH by comparing SRH against other measures of health (e.g., reported health conditions and behaviors).


Survey Question Order Influences Respondent Engagement
Luke Plutowski, LAPOP, Vanderbilt University
Liz Zechmeister, LAPOP & Department of Political Science, Vanderbilt University


Abstract: How does question ordering affect motivation among survey participants? In this paper, we suggest that placing more salient issue topics at the beginning of a questionnaire can improve respondent engagement with the survey. We explore this idea through an original CATI survey conducted in Haiti in March-April 2020, near the beginning of the COVID-19 pandemic. We randomly vary whether respondents receive questions about the virus at the beginning or the end of the survey. The results show that those who answer the coronavirus questions first drop out of the interview less often, answer more questions before dropping out, and give fewer non-responses. In line with our hypothesis, these differences are especially pronounced among those who think the coronavirus is a serious issue. The findings have implications for researchers who implement surveys during periods in which a single issue is highly salient.


Quality Indicators of Web Probing Responses and Bias in Cross-Cultural Web Surveys
Dörte Naber, GESIS – Leibniz Institute for the Social Sciences
José-Luis Padilla, Universidad de Granada


Abstract: The growing use of web surveys is a challenge for traditional methods of pretesting survey questions in 3MC contexts. Qualitative methods like “web probing” allow researchers to obtain evidence of respondents’ response processes while they are answering web survey questions and to integrate such evidence with quantitative data: survey question responses, demographics, etc. We will present results of an experiment aimed at testing two different sequences of three commonly used probes (category-selection probe, specific probe, and comprehension probe) crossed with country groups. A total of 1,114 web panel participants, 559 from Germany and 555 from Spain, were randomly assigned to one of the two probe sequences: one starting with the category-selection probe, followed by the specific and the comprehension probe, and the other starting with the comprehension probe, followed by the specific and the category-selection probe. Before these probes, all participants responded to the German or the Spanish version, according to their country, of a target survey question on subjective wellbeing (C1: Taking all things together, how happy would you say you are?), as well as to a proxy survey question about their interest in the topic of wellbeing (In your daily life, how often do you think about how to “be happy”?). In this presentation, we will focus on the data quality of the answers to the three probes as measured by the “nonresponse”, “mismatching answers”, and “motivational loss” quality indicators, following the analytic strategy proposed by Meitinger, Braun, and Behr (2018). In particular, we will analyze the relationship between these quality indicators and the responses to the target survey question, taking into account respondents’ interest in the topic of wellbeing. Finally, we will also discuss the differences between the German and Spanish samples and how the findings can contribute to a more comprehensive understanding of sources of bias in cross-cultural web survey research.