The information you’re waiting for doesn’t exist yet. The market is holding it for you.
There’s a version of market research that never ends because the questions keep shifting. You answer one thing and two more appear. You validate the concept and now you need to validate the price. You validate the price and now you need to validate the channel. At some point you realise you’re not getting closer to a decision. You’re just getting more comfortable with the uncertainty.
That’s a reasonable response to an unreasonable situation. Because the thing nobody says out loud is this: some of the information you’re looking for doesn’t exist yet.
Customers are remarkably good at describing their current frustrations. Ask someone what’s broken about the thing they use today and they’ll tell you in detail, with feeling, and probably with a few suggestions thrown in. That part of market research works well.
The part that doesn’t work, the part that can’t work, structurally, is asking people to evaluate something they’ve never seen. Not because they’re being difficult or unimaginative. But because human beings respond to things that exist, not to descriptions of things that might. What someone tells you they want in a concept test and what they actually do when the product is in front of them are related but not identical. Sometimes they’re not even close.
This isn’t a flaw in your research methodology. It’s a feature of how things work. People don’t know what they want until they see it done right. And “done right” requires someone to go first.
But the opposite error is just as expensive.
Moving too fast, before you understand the problem well enough, before you know who you’re solving it for, before you’ve pressure-tested the core assumption, doesn’t produce speed. It produces rework. The team that ships in week three and rebuilds from scratch in month six hasn’t moved faster than the team that spent an extra month getting it right. They’ve just distributed the cost differently, and less visibly.
The research phase exists for a reason. It’s where you find out you were solving the wrong problem. It’s where a customer says something that reframes the whole question. It’s where you discover that what you thought was a product decision is actually a pricing decision, or a distribution decision, or a trust problem you hadn’t accounted for.
Skipping that is not boldness. It’s expensive guessing with better branding.
“Move fast and break things” was someone’s answer to the same problem. It has the virtue of simplicity and the disadvantage of assuming someone else cleans up. And I’m fairly sure most people reading this are the someone else.
So where does that leave the rest of us? Somewhere between the quarterly research cycle and the “ship it and see” school of thought. Which is, admittedly, an uncomfortable place to make careful, evidence-based decisions.
The evidence you most need, whether this works, at this price, for these people, in this context, is only available on the other side of the decision. You can get close. You can reduce the uncertainty substantially with good research. But you cannot eliminate it from a distance. The last stretch of market understanding is always empirical. Always live. Always on the other side of shipping something.
This is true for a product launch. It’s true for a pricing change. It’s true for entering a new market or repositioning in an existing one. The research gets you to a high-quality guess. The market tells you if you were right.
So what do you do with that?
The first thing is to stop treating the remaining uncertainty as a research problem. If you’ve done serious work, talked to real customers, tested the core assumptions, stress-tested the economics, and you’re still not sure, more research is unlikely to resolve it. You’re not uncertain because you lack information. You’re uncertain because the answer isn’t knowable yet from where you’re standing.
The second thing is to treat the first version as a question, not an answer. Move with what you have, but instrument it properly. Decide in advance what you’re trying to learn and how you’ll know if you’re wrong. Build the feedback loop before you launch, not after. The goal isn’t to be right. The goal is to find out quickly.
The third thing, and this is the one that takes the longest to internalise, is to recognise that waiting is also a decision. Every week you spend gathering more information is a week the market spends without your input. Someone else may be asking the market the same question. And they may be less careful than you, less thorough, less rigorous. They may also be first.
There’s a version of rigour that serves the work. And there’s a version that serves the feeling of having done everything right before anything goes wrong. They look identical from the outside. The difference is what you’re actually trying to protect.
The market will give you the rest of the answer. But only if you show up with a version of the question.
Further reading:
- Clayton Christensen — Competing Against Luck.
- Eric Ries — The Lean Startup.