I’m sure that, like me, you have seen the good, the bad, and the ugly when it comes to online surveys. We live in a world where brands love to collect customer feedback after an interaction, whether it’s on a receipt, after a call, or sent via email. Some surveys are really fun to take, while others are a complete drag, which can happen for a variety of reasons (note the foreshadowing here).
At Drive Research, we love creating surveys that people enjoy taking. In my experience, higher data quality comes from highly engaged survey respondents. Issues launching an online survey can have several consequences, including (gasp) drop out, which means that after answering a few questions the respondent says, “Sayonara!” and does not complete the remaining survey questions.
In this post, we will reveal some of the most common issues online survey creators face when launching a survey and how to fix them.
Looking for solutions to online survey issues? You've found them.
Issue #1: Not doing a test drive
Conducting a survey test drive means that you begin fieldwork slowly. For example, if you are sending a survey out via email you will randomly select a small portion of the sample, let’s say 100 contacts, and review the initial completes as they are submitted.
If you are spreading awareness through advertising or social media, this means you limit your spend initially to review the first few completes.
Depending on your delivery method, you may opt to ask several different team members or colleagues to take the survey prior to launching it.
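As a rough illustration, the soft-launch selection described above can be sketched in Python. The contact list, sample size, and function name here are hypothetical; any survey platform's own sampling tools would work just as well.

```python
import random

def select_soft_launch(contacts, n=100, seed=42):
    """Randomly select a small soft-launch batch from the full contact list."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    if n >= len(contacts):
        return list(contacts)
    return rng.sample(contacts, n)  # sample without replacement

# Hypothetical contact list of 2,500 email addresses
contacts = [f"respondent_{i}@example.com" for i in range(2500)]
batch = select_soft_launch(contacts, n=100)
print(len(batch))  # 100
```

Because the draw is random rather than, say, the first 100 rows of the database, the soft-launch group is more likely to resemble the full sample.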
What issues do test drives detect?
These issues range from routing errors and question interpretation to quota requirements, drop out, software problems, and poor response rates. By catching these issues during a survey test drive, you can eliminate them and begin full data collection with confidence.
Let’s briefly cover each of these issues that can be uncovered in a test drive.
Routing – These issues stem from survey question logic. If your survey has question logic, carefully review initial survey completes to ensure each respondent was taken through the correct path. Take extra caution when the routing is complicated.
Question interpretation – Respondents not understanding what is being asked is a huge issue. When reviewing initial responses, pay close attention to how respondents are answering. You may need to add cues or notes to questions (e.g., “Select one”, “Select all that apply”, “Please enter 5 digits”) or rewrite your question for clarity.
Quota issues – These stem from not being able to reach a specific audience to take your survey. Before launching your survey, ensure your delivery method can produce the expected results for your end client. If there are several audience segments and one takes off and receives a lot of responses, you will need to prioritize ways to reach under-quota audience segments.
Drop out – As discussed at the beginning of the post, drop out is when a respondent completes a portion of the survey, closes out of it, and does not submit a response. Some ways to battle this include allowing respondents to skip questions, utilizing software that collects partial completes, reconsidering the survey’s length and incentive(s), and reviewing questions with high drop out rates, which could point to a larger issue.
Software issues – It happens. Unexpected issues can always pop up, even if your survey language and programming are perfect. To defeat the unexpected, carefully review data after a test drive to seek these issues out before beginning full data collection.
Response rates – If you expect a 10% response rate but your soft launch of 100 invites returns only 1 completed survey, you may have a problem. Maybe you need to adjust the email subject line or change the body of the email. Whatever it might be, this is an issue you can learn about early with a test drive.
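The response-rate check at the end of that list is simple arithmetic, and can be sketched as a small Python helper. The flagging threshold here (half of the expected rate) is purely illustrative, not an industry rule:

```python
def response_rate(completes, invites):
    """Return completes / invites as a fraction; 0.0 when no invites were sent."""
    return completes / invites if invites else 0.0

def flag_low_response(completes, invites, expected=0.10, floor=0.5):
    """Flag the soft launch when the observed rate falls below a fraction
    (floor) of the expected rate -- both defaults are hypothetical."""
    return response_rate(completes, invites) < expected * floor

print(response_rate(1, 100))      # 0.01
print(flag_low_response(1, 100))  # True -- well below the expected 10%
```

Running this after each soft-launch batch gives an early, objective signal that the subject line, email body, or incentive may need rework before full fieldwork.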
Issue #2: Creating false expectations
This one is pretty simple. The message used to encourage respondents to complete your survey should be clear and truthful. Exaggerating the length of time the survey takes to complete, what the data will be used for, whether or not responses are anonymous, or the rewards being offered are big offenses.
For example, if you are offering a $100 Visa gift card to a random survey respondent, don’t say, “You will be offered a $100 Visa gift card for completing this survey.” Instead ask, “Would you like to be entered into a sweepstakes to win a $100 Visa gift card?” Additionally, if you claim the survey takes only a few minutes, but in reality there are 60+ questions, chances are you will have a high drop out rate and unhappy respondents.
As always, honesty is the best policy.
High quality surveys = High quality data
Issue #3: Neglecting appearance and language
You want respondents to stick around, so give them a reason to. Surveys that look nice, read well, and are, dare I say, fun can result in highly engaged respondents, which creates higher quality data.
Common examples of these issues include confusing question language or a survey that is not optimized for multiple devices. The ability to have desktop, laptop, tablet, and mobile friendly surveys is huge.
If you are sending an online survey via email to general consumers, you can bet that many will be completing the survey on their smartphone, and if the survey is not mobile friendly many will find it difficult to complete.
Online surveys can also be fun! Want to see an example and participate in future research with us? Check out the Drive Research panel sign up survey (shameless plug).
Issue #4: Not capitalizing on what you know
If a survey is being sent to a database of collected email addresses, chances are we know a few things about each person (e.g., first name, last name, etc.). Depending on the database, we may know a lot about them, ranging from household size to job title, household income, and more.
The moral of the story is that all of this information can be used for analysis. The catch is that this data has to be matched up with each respondent’s unique identifier or uploaded to the survey software being used.
By capitalizing on what you already know, you can have a deeper analysis without adding more questions and length to your survey.
Why increase survey fatigue and drop out if you already know the information? There are a few instances where it’s worth it to confirm or update specific information, but it’s best practice to capitalize on all reliable data collected.
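The matching step described above is essentially a join on the respondent's unique ID. Here is a minimal Python sketch of that idea; the IDs, field names, and sample records are all hypothetical, and a real project would use your survey platform's upload tools or a data library instead:

```python
# Known database (e.g., CRM) fields, keyed by each respondent's unique ID
crm = {
    "R001": {"first_name": "Ana", "household_income": "50-75k"},
    "R002": {"first_name": "Ben", "household_income": "75-100k"},
}

# Raw survey responses, each carrying the same unique ID
responses = [
    {"respondent_id": "R001", "q1_satisfaction": 9},
    {"respondent_id": "R002", "q1_satisfaction": 6},
    {"respondent_id": "R003", "q1_satisfaction": 4},  # no database match
]

def enrich(responses, crm):
    """Attach known fields to each response; unmatched IDs gain nothing."""
    return [{**r, **crm.get(r["respondent_id"], {})} for r in responses]

enriched = enrich(responses, crm)
print(enriched[0]["household_income"])  # 50-75k
```

Each enriched record now carries demographics for analysis without a single extra survey question, which is exactly the fatigue-saving trade the section describes.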
Want more? Contact Drive Research.
If you can't tell from our frequent posts on our market research blog, we not only work in market research, but we enjoy writing about it too!
Have questions? All you have to do is ask!
Call us at 315-303-2040 or send us an email at firstname.lastname@example.org.