There’s a huge amount of discussion about the importance of hitting product/market fit for what you’re building. Interestingly, there isn’t nearly as much about how to measure whether you actually have product/market fit for what you’ve built.

Part of the reason is that it’s a question you shouldn’t have to ask:

“If you have to ask whether you have Product/Market Fit, the answer is simple: you don’t.” – Eric Ries

And I can see where Eric is coming from with this. Sometimes we lose track of our customers and instead focus on terms such as “product/market fit”; if we were in closer touch with our customers, we would know whether we have it or not. Granted, that’s just my own personal interpretation of his quote.

The product market fit survey and how to use it properly

What questions should I ask my users/customers?

The best way I’ve found to measure product market fit is with a survey that Sean Ellis developed, which asks a number of questions, with the most important one being “How disappointed would you be if you could no longer use this product?”:

[Screenshot: the survey’s key question, “How disappointed would you be if you could no longer use this product?”, with its answer options]

He even built a tool for this called survey.io that you can use to easily run the survey on your customer base.  It features a few questions, but none are as helpful as the “how would you feel?” question.
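The scoring behind this question is simple arithmetic: the commonly cited benchmark from Sean Ellis’s work is that if at least 40% of respondents answer “Very disappointed,” you likely have product/market fit. A minimal sketch of the tally, using made-up responses:

```python
from collections import Counter

def pmf_score(responses):
    """Return the share of respondents who answered 'Very disappointed'
    (the key metric of the Sean Ellis test)."""
    counts = Counter(responses)
    return counts["Very disappointed"] / len(responses)

# Hypothetical responses from 50 surveyed users
responses = (
    ["Very disappointed"] * 26
    + ["Somewhat disappointed"] * 18
    + ["Not disappointed"] * 6
)

score = pmf_score(responses)
print(f"{score:.0%} very disappointed")  # 52% very disappointed
# 40% is the commonly cited benchmark for product/market fit
print("Likely product/market fit!" if score >= 0.4 else "Keep iterating.")
```

The response labels and sample counts here are invented for illustration; the only thing taken from the methodology is the 40% threshold.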

How many people do I need for conclusive results?

Hiten Shah helped us a huge amount in figuring this out, and his best piece of advice was that you don’t need more than 40–50 responses for your results to carry significance. Of course, the more you get, the better, but a sample of 40–50 is usually enough.
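As a rough sanity check on why 40–50 responses can be enough (this reasoning is mine, not from the article): the uncertainty of a sample proportion shrinks with the square root of the sample size, so at n = 50 a measured 40% share already carries a margin of error of roughly ±14 percentage points at 95% confidence — coarse, but enough to tell a clear pass from a clear miss:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed: 40% of respondents answered "Very disappointed"
for n in (40, 50, 200):
    print(f"n={n}: 40% ± {margin_of_error(0.4, n):.1%}")
```

This uses the standard normal approximation for a binomial proportion; it is only a back-of-the-envelope justification, not anything Hiten prescribed.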

Who should I send the survey to so as to not skew results?

We’re currently building something new at Buffer that we call the “Power Scheduler.” It has been in beta for a few months now, and while our usage metrics for it generally looked good, we couldn’t really tell how useful it was. So I picked out the top 200 people currently using the Power Scheduler, and this was the result:

[Screenshot: survey results from the top 200 Power Scheduler users]

Now, of course, sending something to your most engaged audience gives you a very skewed result, and a few people on our team questioned whether that was the right approach. With the help of Sean Ellis himself (who also built Qualaroo, another very helpful surveying tool that we use a lot here at Buffer), we refined who we would survey. He suggested this:

I generally recommend to survey the following:

– People that have experienced the core of your product offering

– People that have used your product at least twice

– People that have used your product in the last two weeks
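Sean Ellis’s three criteria above translate directly into a user filter. A sketch, assuming hypothetical usage records with a use count, a last-used timestamp, and a flag for having tried the core feature (none of these field names come from the article):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class User:
    email: str
    uses: int                # total number of times they used the product
    last_used: datetime      # most recent activity
    used_core_feature: bool  # has experienced the core product offering

def survey_candidates(users, now=None):
    """Apply the three criteria for who to survey."""
    now = now or datetime.now()
    two_weeks_ago = now - timedelta(weeks=2)
    return [
        u for u in users
        if u.used_core_feature             # experienced the core offering
        and u.uses >= 2                    # used the product at least twice
        and u.last_used >= two_weeks_ago   # active in the last two weeks
    ]
```

The point of the filter is to exclude both tire-kickers (one visit, never saw the core feature) and long-lapsed users whose answers would reflect a stale impression of the product.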

When we ran the survey again with help from Sean Ellis and Hiten (Hiten built Crazy Egg and KISSmetrics and learned a ton about this from scaling both companies), we still got a very promising result, though less stark than the one above:

[Screenshot: results from the refined survey audience]

How can I use this with features instead of products?

I’m a big fan of using this for new features you’re building when your main product already has product/market fit. The best way I’ve found to make the context clear is to include an image of the feature you’re asking about in the survey, like in this example:

[Screenshot: a feature survey that includes an image of the feature being asked about]

Why this survey might get even more important as your product grows

One interesting problem we ran into at Buffer – which is definitely an incredible problem to have! – is the following: it almost didn’t matter what new feature or product we put live, we could get press and buzz around it. It can feel like everything you’re building is a success when it really isn’t, because you’re not monitoring your key metrics closely.

We had this happen with our Daily app launch, for example. We saw great PR, tons of votes on Product Hunt, and much encouragement from our community. What we didn’t look at was whether people were coming back to use the product (they weren’t!), so we fooled ourselves for quite some time before we realized what was happening.

So being laser-focused on whether you’re actually building something useful, after you already have product/market fit, is incredibly important. We now try to separate the marketing launch from the product launch as much as we can, keeping a new feature or product in beta for as long as it takes to know it is truly providing value to our customers.

That’s where the product market fit survey comes in. We’ve started to use it on multiple occasions, like in the example above, to know whether something is working, before the general public even knows about it.

Staying disciplined as you grow and not getting carried away by the buzz (something I’ve fallen prey to many times!) is, I believe, the key to keep building great things for your customers.

Image source: Unsplash

Written by Leo Widrich

Co-founder and COO at Buffer. I enjoy writing about Buffer’s lessons learnt, social media tips and updates to Buffer. For some more personal posts, check out leostartsup.

  • Alvaro Martin

    Great post Leo.
    I was just wondering how you would separate what people like (BUZZZ) from what people need. What could be an objective criterion beyond pure usage (or usage by qualified users)? Basically, do you track any other indicators to measure effective “need” (i.e. increase in usage of the feature = dependence, variations in the overall usage of the app, etc.)?

    • LeoWid

      Hi Alvaro,

      Great question. I think revenue is another great indicator for determining product/market fit; nothing quite measures it like revenue. And if people keep coming back and daily active users grow over time, that’s another great signal!

  • Hey Leo, quick question. What tracking services do you guys use?
    Mixpanel, for example, lets you do cohort analysis pretty easily :) And it’s a great and easy way to figure out if people are coming back or not. I remember having to do it in Excel, and that is NOT fun :p

  • Thanks for sharing this, Leo! I’m surprised to learn that people weren’t coming back to use the Buffer Daily app on a regular basis. The second I tried it out I was hooked and still use it on a (oh no, not cheesy puns…) daily basis!

    Silliness aside, I was very impressed with the simplicity of Daily from the start. Thanks again for sharing, I always seem to learn something reading the Buffer blog.


  • Iain Acton

    Thanks for sharing, Leo. I would be interested to know how you are segmenting for new feature introductions? This is critical for understanding how new features or product packages relate to unmet needs. Do you segment via needs or via other segment groupings?

  • Really enjoyed this post, thanks Leo.

    Out of interest, what are the main reasons why you separate product and marketing launches? What benefits do you find it provides, other than the extra time to iterate in beta?

  • Paul Gonzalez

    Leo, great stuff! I’ve used the Sean Ellis question previously and agree with you on its power for finding the truth about the value of your feature/product. One question I have for you, which I always struggle with, is what are your thresholds on the results? Clearly in this example it jumps off the page. But if it’s 50% Very Disappointed, what’s next?

  • Suzie Prince

    Hey Leo. I have started to use a similar survey in our product. I have added a “Not disappointed” option. Is there any reason why yours didn’t include that option?
