Be Wary of the Echo Chamber of Multi-year Surveys

If you conduct a periodic survey, such as a morale and engagement survey every six months, you can reasonably expect that what people read in a report (assuming you publish the results) will influence their answers in the next round. Perhaps only slightly, but respondents may recall the previous results, or be swayed by a latent memory of them, when they answer the next iteration of the survey. This expectation presumes that you keep the same wording of the question and the same instructions for how to answer.

State Respondents per Question with N =

You can’t make every question required, or respondents will likely feel bullied, vexed, and tempted to drop out. People leave answers blank for many reasons: they don’t want to give that piece of data, can’t track it down quickly, only want to spend a few minutes on the survey, don’t have an opinion, don’t know the answer, never finished the questionnaire, or are not allowed to share the information. Because the number of answers therefore varies from question to question, reports should state the number of respondents for each question, customarily written as “N =” followed by the count.
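A short sketch of how you might tally the N for each question, assuming the responses sit in a pandas DataFrame with one column per question and blanks recorded as missing values (the column names and data here are hypothetical):

```python
import pandas as pd

# Hypothetical survey responses; None/NaN marks a question left blank.
responses = pd.DataFrame({
    "Q1_morale":   [4, 5, None, 3, 4],
    "Q2_tenure":   [2, None, None, 7, 1],
    "Q3_comments": ["ok", None, "fine", None, None],
})

# N per question = count of non-blank answers in each column.
n_per_question = responses.count()
print(n_per_question)
# Report the count alongside each result, e.g., "Average morale: 4.0 (N = 4)".
```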

Weight Response Data to Better Match the Population

Survey sponsors sometimes weight their data to make the findings more representative of the underlying population. After all, a single survey provides only a partial sample of the entire group (the population), and it may have “holes” or “bulges”: too few respondents of one kind or too many of another. This point comes through in an article on political polls in the New York Times, July 23, 2015, at 83.
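Here is a minimal sketch of one common weighting approach, post-stratification on a single characteristic, assuming you know the population share of each group; the roles, shares, and scores below are invented for illustration:

```python
import pandas as pd

# Hypothetical respondent data and assumed population shares by role.
respondents = pd.DataFrame({
    "role":  ["associate"] * 60 + ["partner"] * 40,
    "score": [4] * 60 + [3] * 40,
})
population_share = {"associate": 0.75, "partner": 0.25}

# Weight each respondent by population share / sample share for their group,
# so over-represented groups count less and under-represented groups count more.
sample_share = respondents["role"].value_counts(normalize=True)
respondents["weight"] = respondents["role"].map(
    lambda r: population_share[r] / sample_share[r]
)

unweighted = respondents["score"].mean()
weighted = (respondents["score"] * respondents["weight"]).sum() / respondents["weight"].sum()
print(f"Unweighted mean: {unweighted:.2f}, weighted mean: {weighted:.2f}")
```

In this made-up example, partners are over-represented in the sample (40% versus 25% of the population), so weighting nudges the average score from 3.60 up to 3.75.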

Create Drop-Down Selection Lists

When you ask a question on a survey and give respondents several answers to select from, you can present those selections in varied formats. The format options stay the same whether the question allows only a single choice or multiple choices (e.g., “Choose all that apply”). If you present survey-takers with only a few choices, you might spread them horizontally across the questionnaire page, or you might list them vertically.

Extract Topics of Comments or Free Text with LDA

How can you gain insights from written answers or comments in surveys? Among the many tools available, let’s focus on a popular one for ferreting out elusive themes in text: topic modeling. Latent Dirichlet Allocation (LDA) is an unsupervised^[This term means that no pre-existing classifications exist; the software does all the work.] machine learning algorithm for topic modeling: identifying sets of words that characterize topics that might not be obvious, which is why they are called “latent”.
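A minimal sketch of fitting LDA to survey comments with scikit-learn, assuming the free-text answers have already been collected into a Python list; the comments and the choice of three topics are illustrative only:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical free-text survey answers.
comments = [
    "Morale is good but workload keeps growing",
    "Too many hours, not enough staff on matters",
    "Training programs have improved this year",
    "Would like more training and mentoring",
    "Workload and hours are the main concern",
]

# Turn the comments into a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(comments)

# Fit LDA; n_components is the number of latent topics to look for.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(dtm)

# Print the top words that characterize each topic.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top_words)}")
```

The top words per topic are only clusters of co-occurring terms; it is up to the analyst to read them and decide what theme, if any, each topic represents.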