The MVP is broken: It's time to restore the minimum viable product

The minimum viable product (MVP) is one of the most abused terms in the modern lexicon of software development. I sometimes wish our industry would drop it from our collective vocabulary.

When I hear others using the term, I ask dozens of questions. How complete should the product be? What is your goal with this MVP? Are you trying to test the market, or are you just asking for the alpha version to be built? I recommend narrowing the focus of your MVP down to three possible types:  

  • A product that has only the features necessary to test the market
  • A product with its must-have features
  • A product proof-of-concept

It sounds simple, right? But the MVP has fallen far from what it was supposed to be. Here's how we got to this point, and how to make MVPs useful again.


The Lean Startup and origin of the MVP craze

The MVP craze started about six years ago with the publication of Eric Ries' book The Lean Startup. It was full of stories about companies that started with an MVP and used it to accelerate their path to a great product. Companies that skipped building an MVP built a full version of the product, only to discover that nobody wanted it. To Ries, an MVP wasn't necessarily an application prototype; it was a market test.

One food delivery company tested its product idea without a single line of code. It planned on creating an app that would help a household write its weekly meal menu, and then have the company deliver the ingredients for that menu to the household.

The company's low-tech version of an MVP was to send one of its product managers to find prospective customers, interview them about their meal preferences, take that information back to a chef who would build their menu for the week, and then go to the local grocery store to buy and deliver the food for all of the menu’s recipes.

Was this slow? Yes. Expensive? Yes. Scalable? No. But it provided data that proved there was a market for this kind of service, and gave the product team the information it needed to build the product.

How the MVP was ruined

Today everyone wants to build an MVP, to build less with the hope of saving time and money. Managers may say it’s for testing the market, but I’ve seen many startups and enterprises build an “MVP” just so they can meet a deadline or try to make their developers work faster with more focus (or with less attention to quality).

Some project managers tell the team, or perhaps merely imply, "This is a minimal product, so quality isn't important." I've always been stunned that there are still managers who subtly push developers to sacrifice quality and good design as a strategy for rapid delivery. Presumably, such people don't want bridge builders to waste their time on structural engineering, either.

However, I’ve also seen developers use the MVP designation to justify a buggy or underperforming product. Both sides should hold each other accountable for meeting the MVP’s initial goals—nothing more and nothing less.

Another common way managers have ruined MVPs is by treating them as the M in MoSCoW rules. For those who haven't heard of them before, these rules are a longstanding technique for prioritizing requirements. Each requirement is tagged as one of the following:

  • Must-have
  • Should-have
  • Could-have
  • Would like to have/won't-have

"Must-haves" are the essentials, "should-haves" are really important, "could-haves" might be sacrificed, and "would-likes" probably aren't going to happen.

The problem with MoSCoW rules is that, in my experience, at least 60% of any requirements list ends up classified as "musts." Stakeholders insist that their requests are "musts" and fight like wild dogs to avoid "could" or "would" status.

This creates a vicious circle: once stakeholders realize that nothing except "musts" will get done, everything becomes a "must." And once more than 60% of the requirements are "musts," some of those "musts" won't get done either. In some organizations, a stakeholder stampede is triggered every time someone says "MVP," leading to a bloated first release unless someone steps in to put stricter limits on the requirements.
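As a rough illustration of the stampede problem, here is a minimal sketch (with hypothetical requirement names and a 60% threshold as an assumed rule of thumb) that tags a backlog MoSCoW-style and flags when "musts" dominate the list:

```python
from collections import Counter

# Hypothetical backlog: each requirement tagged with a MoSCoW priority.
backlog = {
    "user login": "must",
    "checkout flow": "must",
    "payment processing": "must",
    "order history": "must",
    "email receipts": "must",
    "dark mode": "should",
    "export to CSV": "could",
    "social sharing": "wont",
}

counts = Counter(backlog.values())
must_share = counts["must"] / len(backlog)

# Flag a "stakeholder stampede": most of the list is marked must-have.
if must_share > 0.6:
    print(f"Warning: {must_share:.0%} of requirements are musts; re-prioritize.")
```

The point of a check like this isn't the code itself, of course; it's that the must-have share is a number you can track and argue about, rather than letting every stakeholder win the "must" fight by default.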

It’s time to differentiate MVPs

I don’t think the abuse of the term “MVP” will end anytime soon, but let's move the conversation back toward Ries' definition and other beneficial definitions. Three types of MVPs make sense for product managers to use and distinguish from one another:

MVP-M: An MVP focused on marketing. This type aligns with the Ries definition: It's a minimal product to test the market and see if there is demand for what you’re offering.

MVP-T: An MVP that serves as a technical demonstration—also called a prototype or proof-of-concept. This is only necessary if you need to explore designs for the software and prove that it will work the way you want it to technically.

MVP-L: An MVP where only the most important features are included. This is the M-in-MoSCoW style I criticized earlier. This approach might make sense if the product managers can prevent a stakeholder stampede for feature priority. But even building just a few must-level features can take a long time, so only use this approach if you can build your “musts” in a few months.

The MVP is still incredibly important

Don't give up on the idea of an MVP just because so many managers are misusing the concept. The idea of a minimal market test is still one of the most important components of product creation.

It protects teams from leaders who blindly believe that their product will succeed. If you don’t have the expertise, resources, or courage to perform a reasonably comprehensive market test, you shouldn’t be making products.

Today's software tools are more powerful than they were 10 years ago. It shouldn’t be prohibitively expensive to build something small to test customer reaction.

The best advice for getting MVPs right is to stop using them as prototypes or foundations for the final product. Build the MVP as a learning tool, and once you’ve learned what you need, scrap it.

So when someone asks for an MVP, ask yourself, "What 10% segment of this request could we build to discover whether it makes sense to build the other 90%?" And make sure that segment would take less than 10% of the time needed to build the entire product.
