Creating a Kano Model Survey

This is part two of a three-part series exploring the Kano Model and how to go about uncovering where your users would place your product’s existing and considered features. For a better understanding of the Kano Model, start with part one.

Now that we understand what the Kano Model is attempting to explain – and assuming we agree – how do we go about determining which features are 😍 delighters, 😐 must-haves, etc.?

Obviously, the answer will come by “getting out of the building” and asking your customers and potential customers what they think.

What to Ask

In chapter 4 of her book, Lean Customer Development, Cindy Alvarez offers some good advice: focus on potential customers’ actual current behavior rather than their aspirational future behavior. It’s pretty well established that asking people, “How likely would you be to use ___?” will lead to a disproportionate number of false positives and wasted development cycles building features people don’t adopt.

During qualitative, face-to-face interviews, we can ask questions framed as, “Tell me about the last time you ___,” to ensure we’re getting at actual behavior and aren’t leading the witness.

So, with a Kano-based survey, we guard against aspirational “Oh, sure, I’d totally use that!” responses by asking both a positive and a negative question about the same feature or requirement.

For example:

“If you are able to sort your search results alphabetically, how do you feel?”

  • I like it!
  • I expect it.
  • I’m neutral.
  • I can live with it.
  • I dislike it.

and then:

“If you are not able to sort your search results alphabetically, how do you feel?”

  • I like it!
  • I expect it.
  • I’m neutral.
  • I can live with it.
  • I dislike it.
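
If you’re generating the survey programmatically, the question pair is easy to template. Here’s a minimal sketch – the function name and wording below are just illustrative, not part of any standard:

```python
# The five standard Kano answer options, offered for both questions.
ANSWER_OPTIONS = [
    "I like it!",
    "I expect it.",
    "I'm neutral.",
    "I can live with it.",
    "I dislike it.",
]

def question_pair(feature: str):
    """Return the positive and negative Kano questions for one feature."""
    return (
        f"If you are able to {feature}, how do you feel?",
        f"If you are not able to {feature}, how do you feel?",
    )

positive, negative = question_pair("sort your search results alphabetically")
print(positive)  # If you are able to sort your search results alphabetically, how do you feel?
print(negative)  # If you are not able to sort your search results alphabetically, how do you feel?
```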

Interpret the Responses

We then use both responses together to identify the feature type according to that user, looking up the positive/negative answer pair in the Kano evaluation table.

So, in the case where a respondent answered “I expect it” to the positive question and “I dislike it” to the negative, we can see that for this particular user, that feature is a 😐 Must-Have.


Quick aside: you’ll notice that four of those spaces have a new category: 😳. This indicates a Questionable response, because a respondent shouldn’t like it both when a feature exists and when it doesn’t. We won’t necessarily say they’re lying, but it certainly leads us to believe they may not have been paying close attention to the question 😉
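
In code, the evaluation table is just a lookup. Here’s a sketch using one common variant of the table – exact cell assignments (including which cells count as Questionable) vary a bit between practitioners, so adjust it to match the table you’re working from:

```python
# Answer options in a fixed order, used to index into the table.
ANSWERS = ["like", "expect", "neutral", "live with", "dislike"]

# Rows are keyed by the answer to the positive ("if you ARE able to...")
# question; columns follow ANSWERS order for the negative question.
# One common variant of the Kano evaluation table -- adjust to taste.
TABLE = {
    "like":      ["Questionable", "Delighter", "Delighter", "Delighter", "One-Dimensional"],
    "expect":    ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-Have"],
    "neutral":   ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-Have"],
    "live with": ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-Have"],
    "dislike":   ["Reverse", "Reverse", "Reverse", "Reverse", "Questionable"],
}

def classify(positive: str, negative: str) -> str:
    """Return the Kano category for one respondent's answer pair."""
    return TABLE[positive][ANSWERS.index(negative)]

print(classify("expect", "dislike"))  # Must-Have -- matches the example above
print(classify("like", "like"))      # Questionable
```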

Record the Responses

We’ll record that response in our handy-dandy response tracker table – also known as a spreadsheet. Feature A now has one response of “Must-Have.”

Then, we repeat that two-part question for each of our features with a batch of customers and let them tell us where each feature belongs.


In this hypothetical example, Feature A received:

  • 13 Must-Haves
  • 2 One-Dimensionals
  • 1 Delighter
  • 4 Indifferent
  • 0 Reverse
  • 0 Questionable

So, it’s a Must-Have! Build that thing!
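
The tallying step can be sketched the same way. The counts below are the hypothetical Feature A numbers from above:

```python
from collections import Counter

# Hypothetical per-respondent categorizations for Feature A,
# matching the counts listed above (20 responses total).
feature_a = (
    ["Must-Have"] * 13
    + ["One-Dimensional"] * 2
    + ["Delighter"] * 1
    + ["Indifferent"] * 4
)

tally = Counter(feature_a)
category, votes = tally.most_common(1)[0]
print(f"Feature A: {category} ({votes} of {len(feature_a)} responses)")
# Feature A: Must-Have (13 of 20 responses)
```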

How Many Responses Do I Need?

When it comes to qualitative interviews, it’s easier to go on feel and base it on something like, “When we’re no longer hearing new things that surprise us, we’ve probably got enough actionable info from this batch of interviews.”

But with this quantitative measure, we want to be sure we’re not drawing conclusions from too small a sample size. I haven’t come across a well-documented standard for a minimum number of responses, but several trustworthy practitioners have suggested that 15 to 20 responses usually starts to reveal some truth.

And use your judgement, of course. In the example above, we can be pretty certain that Feature A is a Must-Have (13, 2, 1, 4), but Feature E isn’t as clear (8, 2, 6, 4). If you see that kind of “close call” pattern emerging, consider digging in on the use case of that feature in your next few customer interviews and perhaps you’ll have a better sense for where it may belong.
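
One way to surface those close calls automatically is to flag any feature whose top two categories are within a few votes of each other. The margin of 3 below is an arbitrary threshold for illustration, not part of the Kano method:

```python
from collections import Counter

def is_close_call(categories, margin=3):
    """True if the top two categories are within `margin` votes of each other."""
    top = Counter(categories).most_common(2)
    if len(top) < 2:
        return False
    return top[0][1] - top[1][1] < margin

# Hypothetical Feature E responses from above: 8 / 2 / 6 / 4.
feature_e = (
    ["Must-Have"] * 8
    + ["One-Dimensional"] * 2
    + ["Delighter"] * 6
    + ["Indifferent"] * 4
)

print(is_close_call(feature_e))  # True -- 8 vs. 6 is too close to call
```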

Get Started!

Alright, that’s all you really need to know to get going! There’s also a part three where I go a little deeper into charting your results, but it’s entirely optional.

For now, identify your feature set, create a survey, gather responses, and be open to what the data tells you!

If you’ve learned something from these posts and put this into practice, please drop me a line! I’d love to hear from you, learn from your experience, and update these posts to be more useful to future readers.