Dear HR Division Members,
Please find below an invitation to participate in a brief survey on the perceived prevalence of different types of 'bad faith' survey responses (e.g., insufficient effort, fraud, survey bots), as well as strategies for preventing and detecting such responses. The results of the survey will be disseminated to the field at conferences and in publications. Please email Barbara Larson (firstname.lastname@example.org) directly if you have questions.
We're undertaking a study to better define the prevalence and impact of bad-faith survey responding in management research. We define bad-faith survey responding as a) intentionally careless responses (e.g., participants responding randomly or without reading items), b) fraudulent participants (e.g., individuals lying about who they are or about meeting the study inclusion criteria), c) the same person completing multiple surveys, and d) bots (computer programs) completing multiple surveys or submitting fake data.
If you are a faculty member or postdoctoral researcher, your contribution to this study is really important. We are conducting a survey that takes approximately 15 minutes to complete, with questions about your view of bad-faith respondents in management research generally, as well as about your own experience with bad-faith responses.
In recognition of the value of your time (especially during spring semester!), we are offering everyone who qualifies for and completes the survey the opportunity to enter a drawing in which there will be TWO winners who will each receive a $100 Amazon gift card.
In order to get as much detail as possible without professional risk or discomfort, we are running the survey as follows to ensure anonymity of responses:
CLICK HERE (or below) to be taken to a brief qualifying survey, which will help us verify that you are eligible to participate.
The more participation we get in this survey, the more we can help inform the field about current experiences with bad-faith respondents, as well as best practices for avoiding these actors. We know your time is valuable, and we plan to make this worth your while, so please do participate. Thank you!
Click here to participate in the study
Barbara Larson (email@example.com)
Erin Makarius (firstname.lastname@example.org)
James Diefendorff (email@example.com)
Erin, Barbara, & Jim,
What a fascinating and timely topic! Directly related to your survey, please see the following open-access article, which I hope you will find useful:
Again, I hope you will find this useful.
All the best,
A challenge for leadership and health/well-being research and applications relying on web-based data collection is false identities: cases where participants are not members of the targeted population. To address this challenge, we investigated the effectiveness of a new approach consisting of using Internet Protocol (IP) address analysis to enhance the validity of web-based research involving constructs relevant in leadership and health/well-being research (e.g., leader-member exchange, physical [health] symptoms, job satisfaction, workplace stressors, task performance). Specifically, we used study participants' IP addresses to gather information on their IP threat scores and Internet Service Providers (ISPs). We then used IP threat scores and ISPs to distinguish between two types of respondents: (a) targeted and (b) non-targeted. Results of an empirical study involving nearly 1,000 participants showed that using information obtained from IP addresses to distinguish targeted from non-targeted participants resulted in data with fewer missed instructed-response items, higher within-person reliability, and a higher completion rate of open-ended questions. Comparing the entire sample against targeted participants showed different mean scores, factor structures, scale reliability estimates, and estimated size of substantive relationships among constructs. Differences in scale reliability and construct mean scores remained even after implementing existing procedures typically used to compare web-based and non-web-based respondents, providing evidence that our proposed approach offers clear benefits not found in data-cleaning methodologies currently in use. Finally, we offer best-practice recommendations in the form of a decision-making tree for improving the validity of future web-based surveys and research in leadership and health/well-being and other domains.
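For readers curious how such IP-based screening might look in practice, here is a minimal sketch. It is NOT the authors' actual procedure: the threat-score cutoff, the field names, and the example ISP lists below are all illustrative assumptions; in a real study the threat score and ISP would come from a commercial IP-reputation service.

```python
# Illustrative sketch of screening survey respondents by IP metadata,
# loosely following the approach the abstract describes. All cutoffs,
# field names, and ISP categories are hypothetical assumptions.

# Assumed example sets; a real study would use an IP-intelligence database.
RESIDENTIAL_ISPS = {"comcast", "verizon", "spectrum"}
HOSTING_ISPS = {"aws", "digitalocean", "ovh"}  # data centers, server farms

def classify_respondent(ip_threat_score: int, isp: str,
                        threat_cutoff: int = 75) -> str:
    """Label a respondent 'targeted' or 'non-targeted'.

    ip_threat_score: 0-100 risk score from an IP-reputation service
        (higher values suggest proxies, VPNs, or automated traffic).
    isp: lowercase name of the Internet Service Provider for the IP.
    """
    if ip_threat_score >= threat_cutoff:
        return "non-targeted"   # high-risk IP: likely proxy/VPN/bot
    if isp in HOSTING_ISPS:
        return "non-targeted"   # data-center ISP, not a home connection
    return "targeted"

# Toy data illustrating the screen.
respondents = [
    {"id": 1, "ip_threat_score": 5,  "isp": "comcast"},
    {"id": 2, "ip_threat_score": 90, "isp": "comcast"},
    {"id": 3, "ip_threat_score": 10, "isp": "aws"},
]

targeted = [
    r for r in respondents
    if classify_respondent(r["ip_threat_score"], r["isp"]) == "targeted"
]
# Only respondent 1 survives the screen in this toy example.
```

The analyses in the article would then be run twice, once on the full sample and once on the `targeted` subset, to compare means, reliabilities, and relationships among constructs.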