# Prove Demand Before You Commission the Build
**Date:** 2026-04-27
**Author:** Dan Maby
**Categories:** Marketing & Business, Development
> Internal enthusiasm, user interviews and competitor research don't prove anyone will pay. Before we write a line of code, we push clients to answer the harder question: will people part with money, and what does that tell us about what to build?
---
## The wound that kills the build

When a custom software project fails, the autopsy usually lists the same cause of death: the money ran out. That's almost never the real story. CB Insights' most recent analysis of 431 VC-backed shutdowns since 2023 makes the point bluntly: capital running out is where these stories end, but the more telling causes - poor product-market fit (43%), bad timing (29%) and unsustainable unit economics (19%) - reveal why the capital dried up in the first place. The original 2021 CB Insights post-mortem study had already put "no market need" at the top of the list, and the pattern has not shifted.

We see the same thing from the other side of the table. Founders and product leads come to us with slide decks, Figma files and a list of competitors they want to out-build. They've done user interviews. They've run a survey. Someone influential in their network has said "I'd absolutely use that." And they want a quote to build it.

Our position, stated plainly: none of that is proof of demand. Before we commission an engineering team against a roadmap, we want to see evidence that somebody, somewhere, will hand over money. That is a different conversation, and it is the most valuable one a product-minded consultancy can have with a client.

## Interest is not demand, and feedback is not money

The hardest habit to break is treating enthusiasm as a leading indicator. It isn't. Rob Fitzpatrick's [The Mom Test](https://www.momtestbook.com) is now more than a decade old and still the clearest writing on this subject. Fitzpatrick warns that the world's most deadly fluff is "I would definitely buy that". It sounds concrete, and as a founder you desperately want to believe it's money in the bank. But people are wildly optimistic about their own future behaviour: always more positive, excited and willing to pay in the imagined future than they are once it arrives.

That is the interview trap in a sentence. It gets worse when the feedback comes from people who like you: colleagues, advisors, friendly users in a Slack community. They will tell you the idea is great. They will not tell you whether they'd pay for it, because you didn't really ask, and they didn't really have to answer.

Jason Cohen has written about the next stage of the same disease, which he calls [product purgatory](https://longform.asmartbear.com/purgatory/): users adopt the thing, love the thing, and the organisation still can't turn that love into revenue. By the time you're here, you've already built the software. The question of demand has been answered - just not in the direction you hoped.

## What actually counts as proof

If compliments and interest don't count, what does? We work from a short list, roughly in order of strength:

1. **Money on the table before the build.** Pre-sales, deposits, signed letters of intent with a price attached, paid pilots. Not discounts for future use - actual cash or a legally meaningful commitment.
2. **Active searching.** Prospects who are already paying for a worse alternative, or hacking together spreadsheets and Zapier flows to solve the problem. If nobody is spending time or money on the problem today, a slicker solution rarely changes that.
3. **Repeatable acquisition at a sensible cost.** A landing page or small ad campaign that converts cold traffic into qualified signups at a rate and cost that could, in principle, sustain a business.
4. **Qualitative depth from the right segment.** Interviews that surface specific past behaviour, not hypothetical future behaviour, from a tightly defined customer type.

Note what's absent: feature requests, survey scores, competitor comparison spreadsheets, and enthusiasm from the founder's own network. Those are inputs to a hypothesis. They are not a result.
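The "sensible cost" in item 3 can be made concrete with arithmetic. A minimal sketch in Python - the figures, the naive lifetime-value formula and the 3:1 LTV-to-CAC threshold are all illustrative assumptions, not rules from this post:

```python
def acquisition_check(ad_spend: float, paying_signups: int,
                      monthly_price: float, expected_months: int,
                      min_ltv_to_cac: float = 3.0) -> dict:
    """Rough test of whether a small paid campaign could, in principle,
    sustain a business. Every threshold here is illustrative."""
    cac = ad_spend / paying_signups          # cost to acquire one customer
    ltv = monthly_price * expected_months    # naive lifetime value
    return {
        "cac": round(cac, 2),
        "ltv": round(ltv, 2),
        "ratio": round(ltv / cac, 2),
        "sustainable": ltv / cac >= min_ltv_to_cac,
    }

# Example: £500 of ads producing 10 paying signups at £30/month for ~12 months
result = acquisition_check(500, 10, 30, 12)
# cac = 50.0, ltv = 360.0, ratio = 7.2 → sustainable under these assumptions
```

The point of the sketch is the shape of the question, not the numbers: a campaign that converts cold traffic only at a cost far above what a customer could ever be worth is a demand signal too, just a negative one.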

The landing-page test deserves a specific word of caution. Benchmarks matter here. First Page Sage's 2026 analysis across 80+ clients defines conversions as contact form fills, demo sign-ups, appointment bookings and other MQL-generating actions rather than micro-conversions like newsletter sign-ups, and puts legal services at 7.4%, eCommerce around 4.3%, and healthcare between 3.0 and 4.2%. A 2% email-capture rate on a pre-launch page tells you almost nothing about willingness to pay. A 2% conversion on a page that takes a deposit tells you a great deal.
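Part of why a small pre-launch test "tells you almost nothing" is sample size. A minimal sketch using the standard Wilson score interval (pure Python; the traffic figures are made up for illustration):

```python
import math

def wilson_interval(conversions: int, visitors: int, z: float = 1.96):
    """95% Wilson score confidence interval for a conversion rate.
    Shows how wide the uncertainty is on small pre-launch traffic."""
    if visitors == 0:
        return (0.0, 0.0)
    p = conversions / visitors
    denom = 1 + z**2 / visitors
    centre = (p + z**2 / (2 * visitors)) / denom
    margin = (z * math.sqrt(p * (1 - p) / visitors
                            + z**2 / (4 * visitors**2))) / denom
    return (centre - margin, centre + margin)

# The same 2% conversion rate on very different traffic volumes:
lo_small, hi_small = wilson_interval(2, 100)    # roughly 0.6%–7.0%
lo_big, hi_big = wilson_interval(100, 5000)     # roughly 1.6%–2.4%
```

Two conversions out of 100 visitors is consistent with anything from a dead page to a strong one; the same rate on 5,000 visitors is a result you can plan against.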

## The Stripe link is the cheapest piece of software we can recommend

The single best pre-build experiment we've seen, repeatedly, is a page that tries to take money. Not a waitlist. Not a "notify me when it's ready" form. A page with a price, a buy button, and a Stripe link behind it. If the charge goes through, you refund it and have a genuine signal. If nobody clicks, you have a different - and cheaper - signal.

This isn't a new idea. RevenueCat's writeup on [testing customer demand for subscription apps](https://www.revenuecat.com/blog/growth/customer-validation-subscription-app/) walks through several teams who used landing pages to pre-sell before writing production code, and makes the point that the real validation test is whether people actually pay - which is exactly why you test demand before launch day rather than on it. The Mom Test frames it more sharply still: think in terms of currency - what are they giving up for you? A compliment costs nothing, so it's worth nothing to you and carries no data. The major currencies are time, reputation risk and cash.
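The post doesn't prescribe tooling, but as one illustration, Stripe's hosted Payment Links can be driven from a few lines of their official Python library. This sketch only assembles the request payloads so it runs without an API key; the product name, price and commented-out lines are placeholders showing where the live calls (`stripe.Price.create`, `stripe.PaymentLink.create`) would go:

```python
# Sketch of a "take money, then refund" demand test via Stripe Payment Links.
# Builds the API payloads only; the commented lines are the live calls from
# Stripe's official Python library (pip install stripe).
def demand_test_payloads(product_name: str, amount_pence: int,
                         currency: str = "gbp") -> dict:
    price_payload = {
        "currency": currency,
        "unit_amount": amount_pence,              # price in the smallest unit
        "product_data": {"name": product_name},
    }
    # price = stripe.Price.create(**price_payload)
    link_payload = {
        "line_items": [{"price": "<price.id>", "quantity": 1}],
    }
    # link = stripe.PaymentLink.create(**link_payload)
    # Put link.url behind the buy button; refund any charges that land.
    return {"price": price_payload, "payment_link": link_payload}

payloads = demand_test_payloads("Acme Pro (pre-order)", 4900)
```

The whole experiment is a static page plus a hosted checkout: no backend, no account system, nothing you'll be sad to throw away if nobody clicks.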

We've had the opposite conversation too, and we think it's worth naming. Some categories don't permit a pre-sale. Regulated markets, procurement-heavy B2B, networked products where value depends on liquidity. In those cases, the substitute for cash is a signed commitment from a named buyer with a price and a date, not a friendly exploratory call.

## Our take

We are a software development and digital consultancy. We make our living building things. It would be commercially convenient for us to say "yes, let's scope the MVP" to every prospect who walks in with an idea. We don't, because the clients who succeed are the ones who put the demand question before the build question. The clients who struggle are the ones who treated validation as a formality and engineering as the main event.

The framing we use with prospects is this: the cost of being wrong about demand is not the cost of the MVP. It is the cost of the MVP, plus the opportunity cost of the year you spent building and selling it, plus the emotional cost of admitting you were wrong after you've told your board, your team and your LinkedIn followers that this is the thing. A two-week demand experiment costs a tiny fraction of that and answers the only question that matters.

When we've done this well - with the team behind [All Counseling](/portfolio/all-counseling), or when advising earlier-stage founders - the conversation about what to build gets genuinely easier. You're no longer arguing about features in the abstract. You're looking at what the buyers who already paid actually expected, and scoping against that. The build becomes smaller, sharper and more defensible, because it's serving a demand signal you can point to.

## If you're about to commission a build

Before the Figma files get handed to an engineering team, we'd rather have a conversation about what you've proven. Not what you believe. Not what your advisors said. What the market did when you asked it for money.

If that sounds like the conversation you need, [get in touch](/contact). If we think you should spend six more weeks testing demand before writing a line of code, we'll tell you - and we'll help you design the test.
