10 Tips for Using MaxDiff in a Survey

If you are ever looking for a compelling way to prioritize items in a list, consider using a MaxDiff analysis with an online survey.

Also known as Maximum Difference or Best-Worst Scaling, MaxDiff is an advanced approach for understanding how respondents rank a list of items by making trade-offs (not to be confused with conjoint analysis).

In a typical MaxDiff question or “set,” respondents are asked to review a handful of items known as “attributes” and make two selections: best and worst. The scale on which best and worst sit depends on the research; common scales include level of agreement, appeal, or motivation. Attributes are usually items like product features, messages, or statements.

Tackling a MaxDiff analysis for the first time can be intimidating, so we put together ten tips to help you make it a success.


Tip #1: Make the list of attributes mutually exclusive 

It is important to minimize overlap within your list of attributes. Attributes that are too similar run the risk of a high positive correlation, which muddies your data. The same is true for attributes that are exact opposites of each other (negative correlation). Keeping attributes as unique as possible will support greater differentiation in the end results.


Tip #2: Try to limit the number of tested attributes to 30 or fewer

How many attributes you need to test varies with each study, but we suggest capping the total at 30 attributes if you can. This guideline will help maintain both a manageable analysis for researchers and a reasonable number of questions for respondents. The more attributes you include in MaxDiff, the longer the survey will need to be.


Tip #3: Aim to show each attribute 3 to 5 times to each respondent

From a data quality perspective, we advise that a MaxDiff design show every respondent each attribute at least three times (and ideally no more than five) over the course of the questions. This standard provides confidence that each attribute was shown in several different contexts, yielding better-quality data.
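As a quick sanity check, the number of times each respondent sees a given attribute is roughly the number of sets times the attributes per set, divided by the total number of attributes. A minimal sketch with illustrative numbers (not from any particular study):

```python
# Rough check of how often each respondent sees each attribute.
# All values below are illustrative; substitute your own design.
total_attributes = 25
sets_per_respondent = 20
attributes_per_set = 5

appearances = sets_per_respondent * attributes_per_set / total_attributes
print(f"Each attribute appears about {appearances:.1f} times per respondent")
# 20 sets x 5 attributes / 25 attributes = 4 appearances, inside the 3-5 target.
```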


Tip #4: Analyze the results using a Top 3 attribute or Top 5 attribute lens 

The low-hanging fruit with MaxDiff results is looking at the likelihood an attribute was ranked first by respondents. For a more holistic view, consider analyzing the results with a Top 3 or Top 5 ranking lens. These additional views may show how close or far apart the attributes really are from one another. There is no single perfect way to interpret the data, but having multiple views like this will help you make the best decision.
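To make the idea concrete, here is a minimal sketch of a Top 3 tally, assuming you have already derived a best-to-worst ranking for each respondent (for example, from their MaxDiff utility scores); the attribute names are made up:

```python
from collections import Counter

# Hypothetical per-respondent rankings, best to worst.
rankings = [
    ["Price", "Quality", "Support", "Speed", "Design"],
    ["Quality", "Price", "Speed", "Support", "Design"],
    ["Quality", "Support", "Price", "Design", "Speed"],
]

# Count how often each attribute lands in a respondent's Top 3.
top3 = Counter(attr for ranking in rankings for attr in ranking[:3])
for attr, count in top3.most_common():
    print(f"{attr}: Top 3 for {count / len(rankings):.0%} of respondents")
```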


Tip #5: Avoid going beyond 20 sets to reduce respondent fatigue

Each set in a MaxDiff survey requires respondents to stop and think for a moment about what is best and worst. This required level of engagement from respondents is likely to fall off if the sets drag on too long. We recommend limiting the design to 20 sets or fewer per respondent to minimize disengagement or frustration.


Tip #6: Give respondents a heads up about the repetitive nature of the questions

Because the MaxDiff sets will appear as the same question over and over, it is wise to include an introduction for respondents. This could be as simple as a couple of sentences before the first set telling respondents they will answer the same type of question with different mixes of attributes. They may also notice the same attributes reappearing across sets, which is an intentional part of the design.


Tip #7: Plan a large enough sample size for each segment

If you plan to analyze the MaxDiff results by subgroups within your total sample, be sure to collect enough responses for each segment to be reliable on its own. To confidently compare segments to one another, we typically recommend 100 to 200 responses for each segment you plan to analyze. For example, you might aim for a total sample of at least 500 if you have five subgroups of interest. At that size, each segment carries a margin of error no greater than about +/- 10%.
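For reference, the +/- 10% figure follows from the standard margin-of-error formula at 95% confidence with the most conservative assumption (a 50/50 split). A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error at ~95% confidence; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 500):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 100 yields roughly +/- 9.8%, just inside the +/- 10% guideline.
```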


Tip #8: Create shortened codes to analyze your attributes if they are on the longer side

Trying to fit 30 attributes onto a single chart or table can be a challenge if your attributes are lengthy statements. Our advice in this case is to create a code name for each attribute, no more than a few words, to make your analysis easier. To prevent misinterpretation, also be sure to create a code bank that matches the original wording of each attribute to its shortened code.
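In practice, the code bank can be as simple as a two-column lookup. A minimal sketch with invented attributes:

```python
# Hypothetical code bank: short codes mapped to full attribute wording.
code_bank = {
    "FAST_SHIP": "Orders arrive within two business days of purchase",
    "EASY_RET": "Returns can be started online with a prepaid label",
    "LIVE_CHAT": "Customer support is available 24/7 via live chat",
}

# Use the short codes on charts and tables; keep this bank in your
# report appendix so readers can recover the original wording.
for code, full_text in code_bank.items():
    print(f"{code}: {full_text}")
```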


Tip #9: Stick to 5 attributes or fewer per set to avoid overwhelming respondents

Just as you should avoid showing respondents too many sets, it is also important not to overload each set with too many attributes. We strongly suggest displaying no more than five attributes per set to keep the decision-making process reasonably simple for respondents. Reading through five attributes on each screen instead of six can make a big difference when respondents are reviewing up to 20 sets.


Tip #10: Use a randomness threshold to help interpret the significance of results

When you reach the analysis stage of the MaxDiff study, referencing the randomness threshold may add important context to the results. A randomness threshold is a value that represents the likelihood an attribute would be selected if chosen at random. When looking at top attribute ranking, the randomness threshold is calculated by dividing 100% by the number of attributes in the study. Attributes with percentages well above or below the randomness threshold are noteworthy as strong or weak, respectively.
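A minimal sketch of that calculation, using invented results:

```python
# Randomness threshold: the share expected if selections were random.
num_attributes = 25
threshold = 1 / num_attributes  # 4% with 25 attributes

# Hypothetical share of respondents ranking each attribute first.
top_rank_share = {"Price": 0.12, "Quality": 0.04, "Speed": 0.01}

for attr, share in top_rank_share.items():
    position = "above" if share > threshold else "at or below"
    print(f"{attr}: {share:.0%} vs. {threshold:.0%} threshold ({position})")
```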


Contact Drive Research

Drive Research is a full-service market research company specializing in unique analysis techniques such as MaxDiff. Our team has the expertise and experience to assist with your next study, should it be the right fit for your business.

Interested in learning more about our market research services? Contact us today.

  1. Message us on our website
  2. Email us at [email protected]
  3. Call us at 888-725-DATA
  4. Text us at 315-303-2040


Tim Gell

As a Senior Research Analyst, Tim is involved in every stage of a market research project for our clients. He first developed an interest in market research while studying at Binghamton University, drawn to its marriage of business, statistics, and psychology.

Learn more about Tim here.

