Conjoint analysis is a statistical technique typically used by market researchers to quantify the impact of various factors on consumers’ buying behavior.
In other words, it’s a way to figure out exactly what makes people choose one thing over another.
There are several different types of conjoint analysis that researchers can draw on, but the most commonly used variation is known as Choice-Based Conjoint, or CBC. Because it presents combinations of attributes simultaneously and asks respondents which they prefer, CBC most closely mirrors real-world buying behavior.
Conjoint analysis in general, and CBC in particular, are enormously powerful tools for anyone trying to figure out their next moves in a competitive market. But if set up incorrectly, they can yield inaccurate data that may lead you to make unfounded decisions.
To go along with our newly overhauled Conjoint Question Type, we’ve pulled together this introductory guide to help you create a conjoint study that takes advantage of this statistical powerhouse and the amazing insight it can provide.
Conjoint Terms Defined
Before we dive into the specifics of Choice-Based Conjoint, we need to start off by defining some terms that you may not be familiar with. These will come up frequently in the following paragraphs (as well as in our SurveyGizmo documentation if you’re already a user), so it’s worth taking the time to get some shared definitions in place.
We’ve labeled each of these components in the example below so you can see how they all combine within a conjoint analysis survey.
Attribute: a characteristic of a product. Sometimes referred to as a “factor.” In our example, Appearance, Features, Brand, and Price are our attributes.
Level: The different measurements being used for each attribute. Above, $999, $550, and $1200 are the levels of price we are testing.
Card: A combination of multiple attributes into a fictional product. The intelligent robot from Botpro who can compute logarithms and costs $999 is one card in our example.
Set: A group of cards presented simultaneously to a respondent, who is asked to make a choice from among the set. All three robots, plus the “none” option, make up our example set.
Part-Worth Utilities: One of the most valuable outcomes of a Conjoint Analysis survey. This data reveals the extent to which each level contributes to the whole utility of a product. We’ll dive into this more extensively when it’s time to talk about reporting.
Prohibited Pairs: Two levels that should never appear on a card together, e.g., a 120-inch TV shown with a $40 price. If you’re new to conjoint, we recommend avoiding prohibited pairs, as they can skew data when not used correctly.
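To make the part-worth idea concrete before we go further: a card’s total utility is simply the sum of the part-worths of the levels it contains. Here’s a minimal sketch using the robot example, with invented utility values (a real study estimates these from respondents’ answers):

```python
# Hypothetical part-worth utilities for a subset of the robot example's
# attributes. These numbers are invented purely for illustration.
part_worths = {
    "Features": {"computes logarithms": 0.8, "washes dishes": 0.2},
    "Brand": {"Botpro": 0.1, "Gizmobot": 0.05, "Robopal": 0.0},
    "Price": {"$550": 0.9, "$999": 0.4, "$1200": 0.0},
}

def card_utility(card):
    """Total utility of a card = sum of the part-worths of its levels."""
    return sum(part_worths[attr][level] for attr, level in card.items())

# The $999 logarithm-computing Botpro card from the example:
card = {"Features": "computes logarithms", "Brand": "Botpro", "Price": "$999"}
print(round(card_utility(card), 2))  # 0.8 + 0.1 + 0.4 = 1.3
```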
Why It’s Called Conjoint Analysis
One of the most amazing things about conjoint analysis is that you don’t have to ask each respondent to rank every single combination of attributes and levels to calculate what their preferences are overall. Through the power of mathematical analysis, their answers to questions they never saw can be inferred based on the answers they did provide.
This means that everyone’s individual answers are combined (conjoined) to produce the overall rankings.
The precise mathematical formulae that provide these outputs are beyond the scope of this introduction, but you can find out more about them in our detailed documentation.
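While the estimation itself is out of scope, the model most commonly used for CBC data, multinomial logit, is easy to sketch: the probability that a respondent picks a card from a set is proportional to the exponential of that card’s total utility. The utilities below are made-up numbers for illustration:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(choose card i) = exp(U_i) / sum_j exp(U_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three cards in one set, with hypothetical total utilities:
probs = choice_probabilities([1.3, 0.7, 0.2])
print([round(p, 2) for p in probs])  # [0.53, 0.29, 0.18]
```

Fitting this model in reverse, from observed choices back to utilities, is what lets conjoint infer answers to questions a respondent never saw.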
Best Practices for Designing a Choice-Based Conjoint Analysis
Now that we’ve got definitions ironed out, let’s discuss how to actually create your survey.
Keep in mind that, while very powerful, conjoint questions are also highly fatiguing, meaning they mentally tax your respondents. Even though you don’t have to show every single permutation to each respondent, it’s tempting to show as many as possible. Resist the urge! Respondents will stop giving you reliable data if you bombard them with huge lists of attributes or dozens of product sets.
The tips and best practices outlined in this post are designed to help you get the data you need without risking its integrity by overwhelming your audience.
When deciding which attributes to include, your goal is to ensure you’re testing all the ones that contribute to a buyer’s decision-making process. In other words, you want to test a full profile of relevant attributes, not just a sample. Otherwise you might unknowingly miss something that has a major impact on buyer behavior.
Ideally, keep your attributes down to three or four. If you must go higher, six should be your upper limit. Any more than that becomes seriously taxing for a respondent to handle.
If you’re not sure which attributes actually contribute to decisions about your product, we recommend that you run a pre-study using ranking questions to help narrow down your attribute list.
Picking the Right Number of Levels
When choosing your levels, select options that most differentiate the attributes. For example, if you’re testing TVs, don’t select 60 inches, 61 inches, and 62 inches as your levels for screen size. Choose 60 inches, 70 inches, and 100 inches.
These levels make sense to consumers and allow them to make meaningful choices.
As with attributes, you don’t want your number of levels climbing too high; this increases the possible combinations that you need to test. We recommend choosing six or fewer levels for each attribute.
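The math behind this recommendation is simple: the number of distinct cards is the product of the level counts across attributes, so it grows multiplicatively. A quick sketch (the attribute names and level counts here are just for illustration):

```python
from math import prod

# Hypothetical level counts for each attribute:
levels_per_attribute = {"Appearance": 3, "Features": 4, "Brand": 3, "Price": 3}

# Every possible card is one level picked from each attribute,
# so the total is the product of the level counts.
total_cards = prod(levels_per_attribute.values())
print(total_cards)  # 3 * 4 * 3 * 3 = 108 possible cards
```

Adding a single extra level to one attribute multiplies, rather than adds to, the design space, which is why trimming levels pays off so quickly.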
If you find it challenging to describe each level, you can use images instead.
Choose Your Words Carefully
When writing out your attributes and levels, it’s important to keep your language neutral to avoid introducing bias into your responses.
For example, provide exact pricing rather than choosing “cheap” and “expensive” as your levels.
Similarly, make sure that your descriptions of each attribute keep them independent of one another. Don’t talk about a “red hybrid vehicle” in a single attribute if you’re hoping to determine the impact of color and fuel efficiency separately.
Getting the Right Number of Responses
In the previous sections we encouraged you to limit your attributes and levels out of courtesy to your respondents, but it’s a good practice for you as a survey creator as well. The more combinations you have, the more responses you’ll have to get, and that can drastically increase the cost and time required for a project.
Our documentation includes a calculator so you can determine the exact number of responses required for the attributes and levels you choose.
For example, if you have 4 sets, with 3 cards per set, and no more than 4 levels per attribute, you need 334 responses.
But remember, if you plan to segment your data you’ll need to collect the same number of responses for each segment (e.g. 334 male and 334 female).
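If you’d like a rough sanity check without opening the calculator, a widely cited rule of thumb (often attributed to Johnson) is n ≥ 1000 × c / (t × a), where c is the largest number of levels on any attribute, t is the number of sets, and a is the number of cards per set. To be clear, this heuristic is an assumption on our part and not necessarily the exact formula behind our calculator, but it does reproduce the 334 figure above:

```python
import math

def min_responses(sets, cards_per_set, max_levels, threshold=1000):
    """Rule-of-thumb minimum sample size: n >= threshold * c / (t * a).

    A common heuristic (often attributed to Johnson); treat it as a
    ballpark figure, not a substitute for a proper design calculator.
    """
    return math.ceil(threshold * max_levels / (sets * cards_per_set))

# 4 sets, 3 cards per set, at most 4 levels per attribute:
print(min_responses(sets=4, cards_per_set=3, max_levels=4))  # 334
```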
Scoring Your Conjoint Analysis
As you’re working on your survey design, you should also consider how you want to score the results. There are four common ways to set things up.
- Single Choice: A respondent chooses the one product they would purchase out of the set. The card they choose receives a score of 100, while all unselected cards get a 0.
- Single Choice with None: The setup is the same as above, except there is also the option for a respondent to indicate they wouldn’t choose any of the cards shown. When “None” is chosen, all the cards get a score of 0.
- Best and Worst: Respondents choose which card they feel is the best and which is the worst, and they must make both selections. Cards chosen as the best receive a score of 100, those labeled the worst receive a 0, and those left unselected receive a 50.
- Continuous Sum: In this case a respondent enters a score for each card based on a set amount of points or money. For example, you could ask a respondent how they would spread $100 across the provided cards.
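The first three schemes boil down to simple mappings from a respondent’s selections to per-card scores. Here’s a rough sketch in Python (the card names are invented; the score values come straight from the descriptions above):

```python
def score_single_choice(cards, chosen, none_chosen=False):
    """Single Choice / Single Choice with None: the chosen card scores 100,
    all others 0; if the respondent picks "None", every card scores 0."""
    if none_chosen:
        return {card: 0 for card in cards}
    return {card: 100 if card == chosen else 0 for card in cards}

def score_best_worst(cards, best, worst):
    """Best and Worst: best scores 100, worst 0, unselected cards 50."""
    return {card: 100 if card == best else 0 if card == worst else 50
            for card in cards}

cards = ["Robot A", "Robot B", "Robot C"]  # hypothetical card names
print(score_single_choice(cards, chosen="Robot A"))
# {'Robot A': 100, 'Robot B': 0, 'Robot C': 0}
print(score_best_worst(cards, best="Robot A", worst="Robot C"))
# {'Robot A': 100, 'Robot B': 50, 'Robot C': 0}
```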
Conjoint’s Ultimate Goal: Market Simulation
When you’ve collected your CBC data, you’ll find a chart like this one in SurveyGizmo:
What it shows you is the relative impact that each of your attributes has on someone’s buying decision. It’s vital to note that comparisons are valid only within attributes; you can’t compare across attributes.
So, in our example, there’s a major difference between how a robot that can compute logarithms and one that can wash dishes affect buyer preferences.
The brand of the robot (Gizmobot, Robopal, or Botpro) has very little impact at all. We can tell because of the small relative difference among those levels in our report.
These data points are interesting, but the ultimate goal is to create a market simulation: a data-driven tool that shows how changing the levels of different attributes is likely to affect our market share.
Through the magic of VLOOKUP and other Excel formulas, we can adjust the levels on each attribute and see how our market share would change. (For a detailed walkthrough on setting this up, including a downloadable Excel template, please see our documentation.)
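If you’d rather script the simulation than build it in a spreadsheet, the same first-choice logic can be sketched in a few lines of Python. Everything below is hypothetical: the respondent part-worths are invented numbers, and a real simulator would use the utilities estimated from your study:

```python
from collections import Counter

# Hypothetical per-respondent part-worth utilities (invented numbers).
respondents = [
    {"Brand": {"Botpro": 0.2, "Robopal": 0.1}, "Price": {"$999": 0.5, "$550": 0.9}},
    {"Brand": {"Botpro": 0.0, "Robopal": 0.3}, "Price": {"$999": 0.2, "$550": 0.8}},
    {"Brand": {"Botpro": 0.4, "Robopal": 0.1}, "Price": {"$999": 0.6, "$550": 0.7}},
]

# Two competing product configurations to simulate:
products = {
    "Ours": {"Brand": "Botpro", "Price": "$999"},
    "Rival": {"Brand": "Robopal", "Price": "$550"},
}

def first_choice_shares(respondents, products):
    """First-choice rule: each respondent 'buys' the product whose summed
    part-worths are highest; share = fraction of respondents won."""
    wins = Counter()
    for pw in respondents:
        utilities = {name: sum(pw[attr][level] for attr, level in spec.items())
                     for name, spec in products.items()}
        wins[max(utilities, key=utilities.get)] += 1
    return {name: wins[name] / len(respondents) for name in products}

print(first_choice_shares(respondents, products))
```

Changing a level in `products` (say, moving "Ours" to "$550") and re-running shows how the simulated share shifts, which is the same what-if exercise the spreadsheet version performs.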
These simulators let product and market research teams identify, with a high level of confidence, what product configurations are likely to succeed without ever creating them. This is the power of Choice-Based Conjoint, and it’s why we’re so excited to roll out this new question offering to our Team Edition users!
If you want to add the power of CBC to your SurveyGizmo account, please contact an account representative.
Andrea Fryrear is the chief content officer for Fox Content, where she uses agile content marketing principles to drive content strategy and implementation for her clients. She also writes for and edits The Agile Marketer, a community of marketers on the front lines of the agile marketing transformation. She geeks out on all things agile and content on LinkedIn and Twitter.