We want to believe that all online survey respondents have good intentions and give the survey their full attention. In reality, a market research company has to put precautions and checks in place to ensure quality data.
Though the vast majority of respondents are reliable, it would be unwise to accept every online survey completion that rolls in without a quality check first. Fortunately, there are several checkpoints that weed out disinterested respondents, manipulative respondents, and even bots from the data before any analysis is performed.
There will always be bad eggs when it comes to online survey respondents. The good news is they are rare and easy to identify with the right measures.
Poor Quality Open-ended Responses
There is perhaps no quicker way to identify a questionable respondent than reviewing open-ended responses. You will usually see two types of red flags here: (1) those who attempt to dodge the question and (2) those who provide a nonsensical answer.
Dodgers will input anything to move past the question, such as a space, random characters, or "N/A." You might even see the word "good" or "okay" repeated for every question, regardless of context. These respondents probably don't have bad intentions, but you can't help but question whether they are honestly answering the other questions throughout the survey, or whether they are even a real person and not a bot.
The other red flag, the nonsensical answer, can take a more careful review to spot. These respondents often digress from the question asked, sometimes commenting on the survey itself. In rare cases, an individual may put in the effort to type a lengthy, irrelevant answer into the text box. Fortunately, these troublemakers are few and far between, and a review of the open ends will catch those who ought to be flagged for poor quality.
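If your survey platform exports results to a spreadsheet, a short script can handle the first pass of this review. The sketch below is a minimal example in Python with pandas; the respondent_id and open_end column names, the filler-word list, and the length cutoff are all placeholders to adapt to your own export, and gibberish or off-topic answers will still need a human read.

import pandas as pd

# Hypothetical export: one row per respondent, one open-ended column.
df = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "open_end": [
        "The checkout process was confusing on my phone.",
        "good",
        "N/A",
        "This survey is too long.",
    ],
})

# Stock answers commonly typed just to get past the question.
THROWAWAY = {"n/a", "na", "none", "good", "ok", "okay", "nothing", "."}

def looks_like_a_dodge(text: str) -> bool:
    """Flag blanks, stock filler words, and answers too short to carry feedback."""
    cleaned = str(text).strip().lower()
    return cleaned in THROWAWAY or len(cleaned) < 5

# Off-topic answers (like the comment about the survey itself above) are not
# caught here; the script only surfaces the obvious dodges for review.
df["open_end_flag"] = df["open_end"].apply(looks_like_a_dodge)
print(df[df["open_end_flag"]])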
Straightlining
Another indicator of a respondent who may have lost focus during the survey is straightlining. Market research companies check for this pattern when a survey includes a scale question with an extended list of items to rate. If an individual selects the same rating for every item, it could be a sign of bad data.
However, keep in mind that the longer the lists, or the more of them a survey contains, the stronger an indicator of poor quality straightlining becomes. Lots of scale questions can lead to disengagement, which results in low effort and straightlining because the respondent does not want to read each line.
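As a rough illustration, here is how a straightline check might look once the data is exported, again sketched in Python with pandas. The grid column names (q5_1 through q5_5) are made up for the example; swap in the columns that make up each of your own rating grids.

import pandas as pd

# Hypothetical export: q5_1 through q5_5 form one rating grid on a 1-5 scale.
df = pd.DataFrame({
    "respondent_id": [201, 202, 203],
    "q5_1": [4, 3, 5],
    "q5_2": [4, 5, 5],
    "q5_3": [4, 2, 5],
    "q5_4": [4, 4, 5],
    "q5_5": [4, 3, 5],
})

grid_cols = ["q5_1", "q5_2", "q5_3", "q5_4", "q5_5"]

# A respondent straightlines the grid when every item receives the identical rating.
df["straightline_flag"] = df[grid_cols].nunique(axis=1).eq(1)
print(df[["respondent_id", "straightline_flag"]])

If the survey contains several grids, the same check can be run per grid and the flags summed, so a respondent is penalized for straightlining most or all of them rather than just one.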
Quality Assurance Question Fails
To weed out respondents who aren't reading the survey, there is value in adding a quality assurance (QA) question (also known as a red herring question) or two with instructions to pick a particular answer. If the respondent selects the answer as instructed, you can infer that they are more likely to be paying attention and understanding the survey.
A guilty respondent can be terminated on the spot in the survey, or at least flagged in the data for a later review. Checking for a QA fail is one of the easiest and most efficient ways to flag a respondent of questionable quality.
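Checking this flag in an exported data file is equally simple. In the sketch below, qa_check is a hypothetical column holding the answer to a red herring item, and INSTRUCTED_ANSWER is whatever that question told respondents to select.

import pandas as pd

# Hypothetical export: qa_check holds the answer to an item whose instructions
# read, for example, "Please select 'Strongly agree' for this statement."
df = pd.DataFrame({
    "respondent_id": [301, 302, 303],
    "qa_check": ["Strongly agree", "Neither agree nor disagree", "Strongly agree"],
})

INSTRUCTED_ANSWER = "Strongly agree"

# Anyone who did not follow the instruction gets flagged for review.
df["qa_fail_flag"] = df["qa_check"].ne(INSTRUCTED_ANSWER)
print(df[df["qa_fail_flag"]])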
Speeding
Speeders are to be expected with any survey in the field. Some respondents just want to receive their reward as soon as possible. Others naturally answer questions quickly and don't linger over the questions.
When trying to pick out the respondents who zip through carelessly, the culprits are typically those who complete the survey in a time well below the average. The fear here is that the respondent is survey satisficing: taking shortcuts to complete the survey faster and clicking answers with little or no thought just to finish quickly.
A great way to confirm that a speeder is actually poor quality is to also check their record for unsatisfactory open-ended responses, straightlining, and QA fails.
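One way to put a number on "well below the average" is to benchmark against the median completion time and flag anything faster than a fraction of it. The sketch below assumes a seconds_to_complete column in the export; the one-third cutoff is only a starting point, not an industry standard.

import pandas as pd

# Hypothetical export: completion time in seconds for each completed survey.
df = pd.DataFrame({
    "respondent_id": [401, 402, 403, 404, 405],
    "seconds_to_complete": [612, 540, 95, 701, 488],
})

# The median is used instead of the mean so a few very slow respondents
# (who left the tab open) do not drag the benchmark upward.
median_time = df["seconds_to_complete"].median()
SPEED_CUTOFF = 1 / 3  # flag anyone faster than a third of the typical respondent

df["speeder_flag"] = df["seconds_to_complete"] < median_time * SPEED_CUTOFF
print(df[df["speeder_flag"]])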
So What?
Top market research companies use all of these poor quality data checks, or a combination of them, to protect the integrity of survey data. However, less experienced or do-it-yourself (DIY) survey writers may not think twice about a quality check. It is imperative to set aside time with every online survey project to catch and remove unreliable data.
Data cleaning is the step where poor quality data is rightfully removed. No matter what your threshold is for a record to be considered poor quality, just be sure one is in place and applied consistently. In the end, data analysis and reporting will be more accurate and trustworthy.
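To make that threshold concrete, the flags from the individual checks can be combined into a single count per respondent. The sketch below assumes the four flag columns from the earlier examples already exist in one file; the removal threshold of two flags is only an illustration of picking a rule and sticking to it.

import pandas as pd

# Hypothetical file where the four checks above have already been run.
df = pd.DataFrame({
    "respondent_id": [501, 502, 503],
    "open_end_flag": [True, False, False],
    "straightline_flag": [True, False, True],
    "qa_fail_flag": [False, False, False],
    "speeder_flag": [True, False, False],
})

flag_cols = ["open_end_flag", "straightline_flag", "qa_fail_flag", "speeder_flag"]
REMOVAL_THRESHOLD = 2  # your threshold may differ; the point is to pick one and apply it

# Count how many checks each respondent failed, then keep only the clean records.
df["flag_count"] = df[flag_cols].sum(axis=1)
clean_df = df[df["flag_count"] < REMOVAL_THRESHOLD].drop(columns="flag_count")
print(clean_df)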
Contact Our Online Survey Company
Drive Research is an online survey company in NY. We care about our data and implement all relevant quality checks for online surveys.
Questions about creating an online survey? Contact us below.
Message us on our website
Email us at [email protected]
Call us at 888-725-DATA
Text us at 315-303-2040