by Frauke Becker
Dissemination of findings is essential for every research project. The more valid and robust results are, the more likely they are to apply to a wider context or audience. To evaluate validity, we usually rely on statistical significance. However, this only allows us to make inferences about the sample population: only those individuals whose information is collected provide data that can be analysed. But how do we know that the information used for the analysis is representative of the population of interest? How can we make sure that the results are valid beyond the sample and generalisable to the whole population?
The answer is: with limited resources, we cannot. What we can do is test for a number of biases when we are aware of potential issues. From an econometric point of view, sample selection bias occurs when the characteristics of the sample differ from those of the whole population. Similarly, for any type of survey data, non-response bias occurs when the characteristics of respondents differ from those of non-respondents. In those cases, the results are still valid findings, but only for the specific sample population, and they are not necessarily generalisable to the whole population.
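As a concrete illustration, one common check is to compare respondents and non-respondents on a characteristic that is known for everyone in the sampling frame (for example, age from an administrative register). The sketch below is purely hypothetical: the data and variable names are invented, and a two-sample t-test is just one of several possible tests.

```python
# Hypothetical sketch: checking for non-response bias by comparing a
# frame characteristic (here: age) between respondents and non-respondents.
# All data and variable names are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Ages known for everyone in the sampling frame (e.g. from a register)
age_respondents = rng.normal(48, 12, size=320)     # returned the survey
age_nonrespondents = rng.normal(43, 12, size=180)  # did not respond

# Welch's two-sample t-test: do the group means differ?
t_stat, p_value = stats.ttest_ind(
    age_respondents, age_nonrespondents, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests respondents differ systematically from
# non-respondents on this characteristic: potential non-response bias.
```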
One way to minimise non-response error in survey data is to maximise response rates. If the initial sample that receives the survey is indeed a random, representative sample of the whole population, sufficiently high response rates will reduce the risk of non-response bias. Whether or not participants are willing to complete a survey depends on a number of factors, including the survey design, how easy they find the survey to understand, and how quickly it can be completed.
In the context of discrete choice experiments (DCEs), where surveys are used to identify individual preferences, our research finds that the burden on respondents can be affected by:
- number of attributes describing a scenario,
- number of alternatives to choose from,
- whether or not an opt-out option is presented,
- number of choice sets to be answered,
- whether or not a risk attribute is included,
- how relevant the survey topic is to participants,
- whether or not reminders are sent.
Consistent with a social exchange theory of survey response, cognitive burden reduces response rates, while perceived benefits (measured by the relevance of the survey topic to respondents) increase them. However, the survey design and all associated factors are correlated with survey quality, which is bound to differ between studies. Researchers who are aware of more challenging aspects of a survey may compensate by reducing the cognitive burden in other parts of the questionnaire. Controlling for study quality in a systematic way may prove difficult, since detailed information on study development and design is not yet routinely reported for DCE studies. Until then, we suggest minimising the cognitive burden on participants through careful consideration of which attributes are relevant and required in the DCE choice sets.
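To make the relationship above concrete, here is a minimal sketch of how study-level response rates might be regressed on DCE design factors, assuming a fractional logit model in Python's statsmodels. The data, variable names, and coefficients are all invented for illustration; this is not the model estimated in the publication cited below.

```python
# Hypothetical sketch: relating study-level DCE response rates to design
# factors via a fractional logit (GLM with a binomial family). All data
# and variable names are invented; this is not the cited paper's model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_studies = 60

df = pd.DataFrame({
    "n_attributes": rng.integers(3, 10, n_studies),
    "n_choice_sets": rng.integers(6, 25, n_studies),
    "opt_out": rng.integers(0, 2, n_studies),         # 1 = opt-out offered
    "risk_attribute": rng.integers(0, 2, n_studies),  # 1 = risk included
    "relevant_topic": rng.integers(0, 2, n_studies),  # 1 = highly relevant
})

# Simulate rates where burden lowers response and relevance raises it
linpred = (1.0 - 0.08 * df["n_attributes"] - 0.03 * df["n_choice_sets"]
           + 0.5 * df["relevant_topic"] + rng.normal(0, 0.3, n_studies))
df["response_rate"] = 1 / (1 + np.exp(-linpred))

# Fractional logit keeps fitted response rates inside (0, 1)
model = smf.glm(
    "response_rate ~ n_attributes + n_choice_sets + opt_out"
    " + risk_attribute + relevant_topic",
    data=df, family=sm.families.Binomial(),
).fit()
print(model.summary())
```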
For more information on improving response rates to DCEs, follow the link to read the full publication: Watson, Becker, and de Bekker-Grob (2016).