Three Methods to Measure What Customers Really Want
To understand market research methodologies, let’s consider a made-up dating website that I will call Go Love Go.
Go Love Go has faced complaints that some users are unhappy with their matches, so its team is reevaluating the data used for its matching algorithm. Luckily for the Go Love Go team, market research methods can solve their problems. This is because choosing a romantic partner is a lot like choosing a product or service.
If we’re looking for a romantic partner, we have a mental list of the traits we desire, such as sharing our hobbies or having a great sense of humor. While we know what we want and what we don’t want, once we’re out there in the dating world, we often need to make trade-offs based on what matters most to us. Maybe we’re okay with having different hobbies if the person really makes us laugh. When we find someone who meets enough of our wants and needs, we commit to them. We’ve chosen our product after evaluating our options.
What’s at the heart of much of market research is understanding what is truly important to people. In business, we can apply different methods to understand our customers, to increase the value of our products and services, and ultimately drive growth and innovation. Such methods are useful for uncovering importance in the context of brand preferences, customer satisfaction, and segmentation studies, as well as product development research. So how do we capture the right information? This endeavor is a challenge because there are several ways to do this in market research and they often lead us to obtain different types of information.
Read on for three methods to better understand what’s important to customers.
Method 1: Stated Importance
The simplest and perhaps most well-known way of capturing what's important to people is simply to ask them: have respondents directly evaluate attributes in terms of their importance via a rating scale. These ratings are simple to gather, take up little survey real estate, and don't require advanced statistical knowledge to interpret. However, there are some drawbacks.
Imagine Go Love Go uses this method. Its users are asked to fill out a questionnaire that helps identify which attributes of a romantic partner are most important to them. Users may be presented with a list of attributes, including attraction, common interests, political alignment, etc., to be rated on a 1 to 5 scale (1 = Not at all important; 5 = Very important).
The issue with this method is that people tend to rate all items as highly important. In practice, users might be fine dating someone who doesn't share their politics if that person has a great sense of humor, but without being forced to make that compromise, they never reveal that willingness.
When it comes to evaluating the importance of product features, people tend to have a bias toward high ratings. This is because they fear that giving lower-importance ratings means they’re going to lose that feature altogether. If they have the option of having a laptop with the best resolution for the lowest price, why would they say otherwise? But when they actually go to buy a laptop, they might be willing to make those trade-offs because the options available make it so that they have to.
With such little variability, these scores don’t tell you much about any given user. We can’t really decipher what’s truly important. We also don’t see much differentiation at the group level. In the case of Go Love Go, this lack of differentiation leads to a lack of predictive ability, so the app can’t improve its algorithms.
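To make the variability problem concrete, here is a minimal sketch with hypothetical rating data (the attribute names follow the Go Love Go example; the numbers are invented for illustration). Notice how little the means and standard deviations separate the attributes:

```python
import statistics

# Hypothetical 1-5 importance ratings from five users for three attributes.
# Illustrative only: stated-importance scales tend to cluster near the top,
# leaving almost no variance to distinguish what truly matters.
ratings = {
    "political_alignment": [5, 5, 4, 5, 5],
    "education_level":     [4, 5, 5, 5, 4],
    "sense_of_humor":      [5, 4, 5, 5, 5],
}

for attribute, scores in ratings.items():
    mean = statistics.mean(scores)
    spread = statistics.pstdev(scores)
    print(f"{attribute}: mean={mean:.2f}, stdev={spread:.2f}")
```

With every mean sitting between 4.6 and 4.8, ranking the attributes from these scores alone is essentially guesswork.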
If political alignment, educational level, and sense of humor are all equally important, then they would be treated as equally important in determining romantic matches. This means that people would get less customized, less targeted matches. We can apply the same concept to product development. If all the laptop features are equally important, then how does a business prioritize which features to invest in or which to market? If everything is important, then really nothing is important.
Method 2: MaxDiff
A market research favorite is MaxDiff, an experimentally designed exercise in which respondents must make trade-offs among attributes. In a typical MaxDiff, respondents are shown a series of subsets drawn from a larger set of items. For example, they might be shown three attributes out of seven at a time and asked to indicate which they feel is the most important and which is the least important. They complete enough of these smaller sets that we can extract utility scores, which provide a relative importance score for each attribute.
If Go Love Go used a MaxDiff, it might first ask respondents to indicate which of these three attributes — attractiveness, political alignment, and shared values — are least and most important. Let’s say an individual says that out of these three, attractiveness is the least important attribute when choosing a romantic partner and political alignment is the most. In the next set, they are provided financial status, attractiveness, and education level.
If they indicate here that financial status is the least important and education level is the most, we know now that financial status is the least important attribute out of the ones that have been shown. After all, they first said that attractiveness was least important and now they’re saying financial status is even less important. Values are then quantified to show the relative attribute importance as perceived by respondents. A higher score indicates higher importance.
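Production MaxDiff studies typically estimate utilities with multinomial logit or hierarchical Bayes models, but the logic can be sketched with a simple count-based approximation (best picks minus worst picks, divided by times shown). The two tasks below mirror the Go Love Go example; the data and scoring rule are illustrative assumptions, not the article's actual method:

```python
from collections import defaultdict

# Hypothetical responses: each task shows three attributes; the respondent
# picks the most and least important. Counting best-minus-worst choices
# gives a rough stand-in for MaxDiff utility scores.
tasks = [
    {"shown": ["attractiveness", "political_alignment", "shared_values"],
     "best": "political_alignment", "worst": "attractiveness"},
    {"shown": ["financial_status", "attractiveness", "education_level"],
     "best": "education_level", "worst": "financial_status"},
]

counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
for task in tasks:
    for attr in task["shown"]:
        counts[attr]["shown"] += 1
    counts[task["best"]]["best"] += 1
    counts[task["worst"]]["worst"] += 1

# Score each attribute: (times chosen best - times chosen worst) / times shown.
scores = {attr: (c["best"] - c["worst"]) / c["shown"]
          for attr, c in counts.items()}
for attr, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {score:+.2f}")
```

Even in this toy version, attractiveness (picked as least important once out of two appearances) lands below financial status only because the respondent demoted financial status in a head-to-head set, which is exactly the trade-off information a flat rating scale never captures.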
MaxDiff results are relatively easy to interpret. MaxDiff gives us differentiated responses and versatile outputs, but it might not paint the full picture of what is truly important to people.
Why? Part of what is important to people may also be shown through their behavior or actions, things that may not have been verbalized or even acknowledged explicitly. When you are using data to make business decisions, you want a comprehensive understanding of importance, both what people explicitly say — what’s obvious — and what is implicitly revealed to us through their actions and behaviors. These might be hidden at first, but they are necessary to truly have actionable input about what is important.
Method 3: Derived Importance
We can paint a fuller picture by capturing what's called derived importance. This method estimates implicit importance by statistically teasing out the relationship between the presence of an attribute and the actual performance of the product (e.g., how people feel about it, how satisfied they are, whether they're likely to recommend it). Derived importance can give us differentiation between attributes, and it removes the explicit bias that we sometimes find in self-reports.
In the context of Go Love Go, to obtain derived importance, the team would ask people who had been matched previously to 1) indicate whether their partner has each of the qualities they previously rated for importance and 2) indicate how satisfied they are with their current romantic partner.
One way to calculate derived importance is to run a correlation analysis, where a simple correlation coefficient describes the observed relationship between two variables, such as one of our attributes and satisfaction. The challenge with correlations is that they don't control for the contributions of all the other attributes to satisfaction. In real life, attributes don't exist in a vacuum.
A better method is to run a regression analysis where satisfaction is the outcome variable and the attributes are the independent variables. A regression analysis gives us the unique contribution of each attribute. The regression output explains both the magnitude of the relationship between each attribute and the outcome variable and its direction (i.e., positive or negative). From this output we can calculate a relative importance score (sometimes referred to as a Shapley value), which gives us a ranking of the attributes' importance. The relative importance score is scaled from 0 to 100%, so we don't need to worry about negative values, which can be more challenging to explain and interpret. This relative importance calculation serves as our derived importance.
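The regression step can be sketched as follows. The data are invented, and the normalization shown (each coefficient's share of the total absolute effect, scaled to 100%) is a simplified stand-in for the article's relative importance calculation; true Shapley-value regression averages each attribute's contribution across all possible sub-models:

```python
import numpy as np

# Hypothetical data for eight users: three binary attribute-match indicators
# (columns: attraction, politics, education) and a satisfaction score.
X = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 0],
], dtype=float)
satisfaction = np.array([9, 8, 4, 6, 7, 7, 7, 3], dtype=float)

# Ordinary least squares: satisfaction ~ intercept + attributes.
design = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(design, satisfaction, rcond=None)
attr_coefs = coefs[1:]  # drop the intercept

# Simplified relative importance: each coefficient's share of the total
# absolute effect, scaled so the scores sum to 100%.
relative = 100 * np.abs(attr_coefs) / np.abs(attr_coefs).sum()
for name, pct in zip(["attraction", "politics", "education"], relative):
    print(f"{name}: {pct:.1f}%")
```

Because the scores are shares of a whole, they sum to 100% and are always non-negative, which makes them straightforward to present to stakeholders.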
Putting It Together
If Go Love Go were to change its match algorithm based on those original stated-importance ratings alone, political alignment would be the strongest match indicator, followed by education. Attraction would be the least important. If it were to change its match algorithm based on MaxDiff utility scores, then attraction would instead be the strongest match indicator, and education would no longer be weighted heavily. If Go Love Go were to change its match algorithm based on derived importance alone, attraction and education would be at the top.
Go Love Go can optimize and prioritize by plotting both the MaxDiff and derived relative importance on a quadrant map. As shown in the figure below, education was of low importance in the MaxDiff task but of high importance in the derived score, making it a “hidden gem.” Similar education may be an attribute that leads to satisfaction with a partner, but it is not something people realize is important when they are dating.
The app could increase the weight of education in determining matches and ultimately improve satisfaction with matches in the long term. In contrast, political alignment came out as highly important in the MaxDiff task but as unimportant in the derived calculation. These results mean that people think politics are important to romantic relationships, but similar politics doesn't necessarily lead to happier relationships. The algorithm weight for politics can be lowered because it was revealed to be a false signal.
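The quadrant-map logic above can be sketched as a simple classification. "Hidden gem" and "false signal" are the article's labels; "key driver" and "low priority" are assumed labels for the remaining two quadrants, and the scores and 50-point cutoff are invented for illustration:

```python
# Hypothetical stated (MaxDiff) vs. derived importance scores, scaled 0-100.
maxdiff = {"attraction": 80, "politics": 70, "education": 30, "humor": 40}
derived = {"attraction": 75, "politics": 25, "education": 70, "humor": 45}

def quadrant(stated, implied, cutoff=50):
    """Classify an attribute on a stated-vs-derived importance quadrant map."""
    if stated >= cutoff and implied >= cutoff:
        return "key driver"    # important both ways: protect and promote
    if stated < cutoff and implied >= cutoff:
        return "hidden gem"    # underrated by users, but drives satisfaction
    if stated >= cutoff and implied < cutoff:
        return "false signal"  # talked up, but doesn't drive satisfaction
    return "low priority"      # unimportant on both measures

for attr in maxdiff:
    print(f"{attr}: {quadrant(maxdiff[attr], derived[attr])}")
```

Under these toy scores, education lands in the "hidden gem" quadrant and politics in "false signal," matching the Go Love Go story above.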
MaxDiff provides rich information that gets more nuanced results compared with basic rating tasks, which often lead to little differentiation between responses. However, MaxDiff can reveal only what people are able to state via self-report. Derived importance can reveal hidden preferences that folks weren’t necessarily aware of. Together, MaxDiff and derived importance paint a fuller picture of what is truly important to people. These methods can help us identify focus areas for higher impact.
[All data depicted in this article is for illustrative purposes only.]
About Christina Tworek
Dr. Christina Tworek is Director of Advanced Analytics at GLG, leading the team responsible for executing quantitative methods. She holds a PhD in psychology with expertise in research methods, survey design, and statistics.