The data analyzed in a survey project falls into three kinds: attitudes and answers that only the respondents know; extensions of that data, such as index variables, categorized continuous variables, coded comments, or calculated variables (such as fully loaded internal cost per hour); and third, supplemental information that the sponsor stirs into the mix. In short: human data, software data (so to speak), and external data.
Like attached branches that pad out a Christmas tree, supplemental data fills out the analyses.
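To illustrate the second kind, here is a minimal sketch in Python of how a calculated variable such as fully loaded internal cost per hour might be derived. The column names and figures are hypothetical, invented for the example:

```python
import pandas as pd

# Hypothetical respondent data; column names and values are illustrative assumptions.
df = pd.DataFrame({
    "salary": [180_000, 220_000],        # annual base salary
    "benefits_pct": [0.30, 0.28],        # benefits load as a fraction of salary
    "overhead": [45_000, 50_000],        # allocated overhead per lawyer
    "chargeable_hours": [1_600, 1_750],  # hours worked per year
})

# Calculated variable: fully loaded internal cost per hour.
df["cost_per_hour"] = (
    df["salary"] * (1 + df["benefits_pct"]) + df["overhead"]
) / df["chargeable_hours"]

print(df["cost_per_hour"].round(2))
```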
For a flourishing survey project, assemble a team whose members contribute balanced and complementary talents. It doesn’t take a village, but it does take at least a handful of contributors. Those contributors, drawn from the legal team or law firm, should bring different viewpoints and skills and be committed to the success of the project.
Remember, the information you aim to collect from the online survey needs to be generally acknowledged as objective (not biased toward or against pre-ordained conclusions) and comprehensive (covering the topics that most recipients of the report would like to see explored).
An index variable (sometimes called a “synthetic variable”) takes related questions on a survey and weaves their answers into a new, consolidated variable. Just as the Consumer Price Index combines the prices of a range of goods and services, a survey index variable amalgamates the answers to several related questions.
What good is an index variable? One benefit is that it consolidates data from multiple questions that would otherwise be difficult to picture and comprehend as a whole.
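To make the construction concrete, here is a minimal sketch in Python assuming three hypothetical 1-to-5 Likert questions: standardize each question’s answers, then average them into the index. The column names and the averaging convention are assumptions for illustration; weighted sums, in the spirit of the Consumer Price Index, work as well.

```python
import pandas as pd

# Hypothetical answers to three related 1-to-5 Likert questions;
# the column names are illustrative assumptions, not from a real survey.
df = pd.DataFrame({
    "q1_workload":  [4, 2, 5, 3],
    "q2_resources": [3, 2, 4, 4],
    "q3_support":   [5, 1, 4, 3],
})

items = ["q1_workload", "q2_resources", "q3_support"]

# Standardize each question (mean 0, std 1) so no single question
# dominates the index merely because of its scale or spread.
z = (df[items] - df[items].mean()) / df[items].std()

# The index variable: each respondent's average standardized answer.
df["satisfaction_index"] = z.mean(axis=1)

print(df)
```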
Law firms and law departments periodically conduct surveys of their members or clients to learn about a topic, e.g., engagement, work-from-home policies, client satisfaction, or use of AI software. Questions on the survey may invite respondents to write as much as they want for an answer. For example, “How have you encountered and dealt with supply-chain obstructions?” I will call them “text questions.”
The old-fashioned way to identify and classify ideas from free-text responses to text questions has been to read and code them by hand.
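For contrast, here is a minimal sketch of the simplest software-assisted alternative: keyword-based coding. The codebook and responses below are invented for illustration, and a crude matcher like this is no substitute for careful human reading:

```python
import pandas as pd

# Hypothetical free-text answers; the codebook is invented for illustration.
responses = pd.Series([
    "Shipping delays forced us to renegotiate vendor contracts.",
    "We diversified suppliers and built up inventory buffers.",
    "Customs paperwork slowed everything down.",
])

codebook = {
    "delays":    ["delay", "slowed", "late"],
    "suppliers": ["vendor", "supplier"],
    "inventory": ["inventory", "stockpile", "buffer"],
}

def code_response(text: str) -> list[str]:
    """Return every code whose keywords appear in the response."""
    lowered = text.lower()
    return [code for code, words in codebook.items()
            if any(w in lowered for w in words)]

coded = responses.apply(code_response)
print(coded)
```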
When you create a multiple-choice question on a survey, whether it is single-choice style or check-all-that-apply, take time to order the available items sensibly. This suggestion, which should be a command, applies both to radio-button designs and to drop-down menus; both list items, and the arrangement of those items influences respondents. A poorly ordered list spawns all manner of pernicious, subtle distortions; here are a handful.
• Inferred priority: By “inferred priority,” I mean that some respondents might interpret the first item or two as deliberately elevated (more socially desirable, presumed correct, or the expected choice) and the later items as demoted (less desirable, wrong, or relegated to the end).