MaxDiff is a survey methodology that allows researchers to compare items utilising a best-worst scale. MaxDiff makes it possible to understand the preferences and attitudes of the survey population and produces a ranking of the items as an output. This can be useful for Councils and Government Agencies seeking to establish preferences for existing services, inform policy development, and support public consultation.
When would you use MaxDiff?
MaxDiff is used instead of the standard rating scale, which has limitations in terms of analysis and reporting. A standard rating scale can lead you to believe that all items are equally important without differentiating between them, whereas MaxDiff forces respondents to choose between items and shows the importance of each item relative to the others, providing more actionable data. People are better able to judge items at opposite extremes, as in the example questions, whereas they struggle to judge items in the middle ground.
MaxDiff also allows us to randomise the items the respondent is evaluating. An item can be shown several times, but the respondent is not evaluating the same combination of items repeatedly, which provides more robust data.
How does MaxDiff work?
MaxDiff asks respondents to indicate their preference among a range of items, as in the question below:
MaxDiff allows the researcher to compare a number of items or attributes. In the question above there were 10 items, including business, retail, outdoor concerts, recreational activities, a regional attraction and an icon for the region, among others. The question asks the respondent to indicate which item from the list they consider a low priority and which a high priority. The respondent is asked a series of these best-worst questions, known as sets, so the full range of items can be prioritised against each other. In the image above, you can see that the survey question is set 1 of 5.
Best practice is to list a maximum of five randomised items per set. Listing five items increases the likelihood that an item will appear in a set alongside each other item, while the randomisation means the respondent is not always evaluating the same combinations, resulting in more robust data. The more items and sets you have, the more data your survey will collect; the trade-off is a longer survey. On average a respondent can answer five closed questions a minute, and it is recommended that a survey take no longer than five minutes to complete, including all demographic questions and instructions.
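As a rough illustration, the set-building logic described above can be sketched in Python. This is a simplified, hypothetical design of our own (commercial MaxDiff tools use statistically balanced experimental designs): it draws five items per set, favouring the least-shown items so appearances stay roughly even, and shuffles the display order within each set.

```python
import random

def build_maxdiff_sets(items, n_sets=5, items_per_set=5, seed=None):
    """Sketch of a simple randomised MaxDiff design (illustrative only).

    Each set shows `items_per_set` distinct items. Items are chosen
    least-shown-first so every item appears a similar number of times
    across the sets, with ties broken randomly.
    """
    rng = random.Random(seed)
    counts = {item: 0 for item in items}
    sets = []
    for _ in range(n_sets):
        # Prefer the least-shown items; break ties randomly.
        ordered = sorted(items, key=lambda it: (counts[it], rng.random()))
        chosen = ordered[:items_per_set]
        rng.shuffle(chosen)  # randomise display order within the set
        for it in chosen:
            counts[it] += 1
        sets.append(chosen)
    return sets

# With 10 items and 5 sets of 5, every item appears two or three times,
# and no item repeats within a set.
sets = build_maxdiff_sets([f"item {i}" for i in range(1, 11)], seed=42)
```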
MaxDiff Reporting
One of the output options from MaxDiff is a horizontal bar graph, as below:
The chart ranks the priorities in order: respondents viewed sports fixtures as the number one priority, with 68% of respondents saying it was a high priority, followed by retail, outdoor events, and housing. At the other end of the list, a regional attraction is ranked as a low priority, followed by eateries.
To the right of the bar chart are two columns, headed Rank and Score. Rank is self-explanatory: it is where the item ranked relative to the other items. The score is more complex but can provide some useful information: it tells us how appealing an item was, and is worked out via a formula:
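The article's exact formula is not reproduced here, but a common count-based MaxDiff score, which is consistent with the values discussed below, divides the net best-minus-worst picks by the number of times the item was shown, giving a value between -1 and +1. A minimal sketch with hypothetical counts:

```python
def maxdiff_score(best_count, worst_count, times_shown):
    """Common count-based MaxDiff score (an assumption; the exact
    formula used in the article may differ): net best-minus-worst
    picks as a share of the times the item was shown.
    Ranges from -1 (always picked worst) to +1 (always picked best)."""
    return (best_count - worst_count) / times_shown

# Hypothetical counts: an item picked as high priority 135 times and
# low priority 65 times, across 200 appearances:
score = maxdiff_score(135, 65, 200)  # 0.35
```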
The higher the score, the more often the item was selected as appealing rather than not, and vice versa for a negative score. This allows us to compare the appeal of one item with another, or in the current example how much of a priority an item is. Sports fixtures have a score of 0.35 while industrial has a score of 0.13, meaning the net proportion of respondents prioritising sports fixtures was over two and a half times that for industrial. Among the low-priority items, there is quite a large gap in the score between business (-0.03) and an icon for the city (-0.19), and another large gap down to the two lowest priorities, a regional attraction (-0.47) and eateries (-0.54). Using these scores, a City may decide not to pursue building an icon for the city, developing a regional attraction, or encouraging new eateries, given that respondents identified these as low priorities.