We’ve all been there. The team worked hard for several months on a new ground-breaking product, feature, marketing campaign, pricing model, or {enter your keyword here}. Everyone is almost absolutely sure it’s going to take off, so D-day comes and the project is launched with a big red button.

Fast-forward a few weeks later into the management meeting room. The data guy walks in with red graphs under his arm, and in a few minutes, despair settles over the room. The market did not respond to the change as it should have. A blame game begins, fingers are pointed, and everyone claims to know what will make things right.

Best case scenario – a company loses some money, but after a couple of months, market feedback helps to either get things right or pivot to some other big-ass, gut-powered idea.

Worst case, business becomes unsustainable, capable people flee faster than Twitter’s board of directors after Musk’s takeover, and founders have to sell their Teslas with seats covered in dried tears (rough, I know).

Don’t focus on growth; focus on learning

The experimentation process is not about eliminating your gut. It’s about proving it right or wrong as quickly and cheaply as possible. Remember, instincts are experiments; data is proof.

There’s a direct correlation between the number of experiments a company conducts each year and its revenue. Netflix conducts around a thousand experiments annually, Amazon 2,000, Google 7,000. Even in an industry as conservative as consumer goods, Procter & Gamble runs over 7,000 experiments a year.

Were they all successful? Absolutely not. The vast majority did not prove right. But the purpose of experiments is not to deliver growth; it’s to deliver knowledge. Each loop validates or rejects the hypothesis and takes you closer to success.

Building a full-scale product, feature, marketing strategy, or anything else takes time. Experimentation-led growth breaks big projects into smaller pieces and tests them in iterations. The purpose is to avoid giant leaps of faith called “projects” and test hypotheses on a much smaller scale to minimize the cost of failure.

Source: Growth Tribe

The experimentation framework

There are many frameworks for the experimentation process. I’m a big fan of the GROWS framework that I adjusted a little bit, and it looks like this:

Let’s break this down in more detail.

1. Understand

Unless you have a solid understanding of your market, landing a successful idea is like trying to juggle baseball bats on a unicycle while blindfolded. It can be done, but it’s going to cost you a lot of time and head bumps.

According to Donald Rumsfeld, there are 4 types of knowledge: known knowns, known unknowns, unknown knowns, and unknown unknowns.

Unraveling these knowns and unknowns is fundamental to any growth strategy.

Discovery research explores the things we don’t know we don’t know. Those “aha!” moments when you realize something works entirely differently than you’d expect. Build a research framework to look for these insights, as they give you unfair advantages over your competition.

There are many ways to tap into this:

  • Customer and user research – by far the best exploration method out there. Talk regularly to your customers, users, and prospects to discover their deep motivations, needs, frustrations, perceptions, and opinions.
  • Customer-facing staff knowledge sharing – people on the front line of contact, such as customer care and sales teams, know the customers very well. Build a workflow in which this knowledge is regularly shared between teams.
  • Market research – there’s a plethora of information about your target audience already out there. All you need to do is to sit down and find it. Discussion forums, review sites, conferences, articles, books, podcasts, industry reports, or interviews with well-respected professionals are excellent intelligence sources.

There is a variety of frameworks for putting this knowledge together and sharing it at the company level, such as customer personas, user journeys, jobs-to-be-done, or user stories.

Even a perfect understanding of your customers’ problems won’t make your experiments 100% successful. But it will dramatically reduce their failure rate. There are no shortcuts to this.

2. Ideate

Your first experiment ideas will come easily. They’re the things people have always wanted to try but never had time for. You can reduce their effort by breaking them down into smaller experiments.

Once all the low-hanging fruits are harvested, however, you might find it increasingly difficult to develop new ideas. Generally, most experiments will fall into one of the following:

  • Reducing friction – what can we do to make the customer experience smoother and increase conversion rates?
  • Adding motivation – what can we do to persuade and nudge people to move through the funnel faster and in higher numbers?
  • Personalization – how can we better personalize the experiences for specific personas, industries, or intents?
  • New tools and channels – what new tool or activity can we introduce to boost growth?
  • Best practices – what proven methods can we repeat from others?
  • Scaling – how can we scale what already worked before?

A great source of these ideas is the web. If your goal is to optimize conversion rates, Convertize published over 250 A/B test results from a variety of successful companies. Ladder.io has a playbook with over 800 growth tactics. Robbie Richards wrote an amazing article with his learnings from a study of 77 hyper-growth companies. If you haven’t read the classic book Traction: How Any Startup Can Achieve Explosive Customer Growth, you should do it right now. And there’s much, much more. Google is your friend here!

The power of brainstorming

Fortunately, you’re not alone in this. You can (and should!) leverage your team in a brainstorming session. Nobody will be against it, trust me. People love brainstorming!

Throughout the many sessions I’ve attended and facilitated, I’ve learned a few tips that make brainstorming sessions much more efficient:

Prepare

Make sure everyone on the team comes prepared. Send all vital information upfront. Explain the topic of the brainstorming and ask participants to go over personas, journeys, jobs-to-be-done, or any other documents that help them empathize with customers.

It also helps to ask everyone to prepare at least one idea upfront and send them resources where they can find inspiration.

Build on each other’s ideas

The biggest mistake teams make during brainstorming sessions can be summed up into two words – “yes, but...”. We tend to criticize before we even try to understand.

Premature evaluation kills not just enthusiasm but also creativity. “But…” is a restricted word. Whoever says it goes out for 5 minutes!

Instead of saying “yes, but…”, say “yes, and…” and try to build on the idea, however ridiculous it sounds. What sounds ridiculous at first can actually be a great foundation for build-ups that make more sense.

Brainstorm in small and diverse teams

Usually, 5–7 people, including the facilitator, is the perfect size. Make sure to pick people from diverse backgrounds. Even if you’re brainstorming a marketing strategy, for example, invite someone from the customer care, data, or product team. They will bring a different point of view that might take the ideation process into new areas.

Also, mix people with different seniority levels and time in the company. I’ve learned that people who joined the company only recently are the greatest source of ideas. They will often ask those “stupid questions” that have obvious answers, but nobody can justify why. These people might challenge your existing beliefs and make you wonder whether you really have enough evidence for them. Hint: oftentimes, you don’t.

Defining hypotheses

Part of turning ideas into actionable next steps is defining the hypotheses behind them. These hypotheses will be the very basis of your potential experiment.

Make sure to clearly define each idea in a sentence like this:

WE BELIEVE THAT <a feature, product, idea…> WILL <measurable business goal> FOR <target audience> BECAUSE <why we think they will act this way>.

The last part might be the most challenging one to define. It’s the reason why your target audience should act the way you wish. If you can’t figure out any reason for this, it’s a good sign that you should take a few steps back and improve your market understanding.
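For illustration, a (purely hypothetical) hypothesis could read: “We believe that a one-click PayPal checkout will increase trial-to-paid conversion by 10% for small e-commerce owners because our interviews showed they hesitate to enter card details on smaller sites.”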

Customer or market research will often validate or reject your hypothesis before any testing is even needed. It will also help you develop better ideas that are actually aimed at customers, not just the business.

3. Evaluate

Once you fill your idea pool to the brim, it’s time to become critical. I use the ICE Scoring Model popularized by Sean Ellis and used in companies such as Dropbox and Lyft. It evaluates ideas on these three attributes:

  • Impact – if our hypothesis is true, what impact can we expect on the observed business metric?
  • Confidence – how sure we are that the hypothesis is true, based on facts and evidence we already have
  • Ease – how few resources it takes to validate the hypothesis (the cheaper and faster, the higher the score)

Quantify these on a scale from 1 to 5 or 1 to 10. Make sure to standardize this by creating a table where you describe what exactly needs to happen to score a specific number. Here’s an example:

Once you rate your experiment on Impact, Confidence, and Ease, simply multiply them like this:

Impact (4) x Confidence (4) x Ease (3) = ICE score of 48

This number will allow you to compare ideas against each other and prioritize what to execute next. Your estimations may be a bit off at first, but over time, you’ll develop a sense that will make them more accurate.
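If you keep your backlog in a spreadsheet or a simple script rather than a dedicated tool, the scoring itself is trivial to automate. Here’s a minimal Python sketch (the idea names and scores are made up for illustration):

```python
# Minimal ICE-scoring sketch: rate each idea on a 1-5 scale and
# sort the backlog by Impact x Confidence x Ease, highest first.
ideas = [
    {"name": "Exit-intent discount popup", "impact": 4, "confidence": 4, "ease": 3},
    {"name": "Onboarding checklist email", "impact": 3, "confidence": 5, "ease": 4},
    {"name": "Annual pricing fake door test", "impact": 5, "confidence": 2, "ease": 2},
]

for idea in ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

# The top of the sorted list is what to experiment on next
for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f"{idea['ice']:>3}  {idea['name']}")
```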

You can adjust this framework in any way you need. For example, you might end up in a situation where development or sales are too busy, so you prioritize experiments that do not require their input.

Once you score all your ideas, your backlog should look like this. I set this up as a project in Asana:

Experiment prioritization made in Asana

4. Prototype

However deep your customer understanding and however smart your experiment ideas, many of them will still fail. The idea of a Minimum Viable Product is to reduce the time and resources needed to test the hypothesis so that if it fails, you can pivot to a different one as quickly and cheaply as possible.

When I say time and resources, I mean especially the fixed costs that occur before the experiment is launched. These include things such as product development or the rigorous preparation of a marketing strategy.

Many MVP methods require high variable costs to keep things going, such as third-party tool subscriptions or manual human effort that stands in for costly automation. But that’s OK. You only test this way for a short time, and once the experiment is validated and you have enough proof that it’s worth it, you can invest in further development.

If the hypothesis proves wrong, however, you save time and money by not building something that won’t be serving any purpose.

What is minimal enough?

That is the question this whole website was developed to answer. Zappos is one of the finest examples of how even an entire business model can be boiled down to a simple experiment.

Once you have your hypothesis defined, browse through different MVP methods on this website until you find what can be used to answer your question.

Combine and sequence the MVP methods

Usually, your experiment will require multiple MVP methods for its execution. You might start with a simple Validation survey to increase your understanding, continue by building a landing page and launch a Fake door test to validate your target audience’s interest in a potential new product, test different ads to find the best incentives, and use an Explainer video to better convey the potential value—all in one experiment.

Once you validate this way, you might build a prototype of your product using third-party tools (i.e., the Frankenstein MVP), test visual prototypes, or develop a Single-feature MVP.

Each loop should validate a new hypothesis or give you some additional information. A great example is the story of how Buffer emerged from a fake landing page.

Your users might be interested in something, but are they willing to pay for it? If so, how much? Will the product have a growth loop potential to grow organically? If not, how expensive will it be to acquire a single customer? Is there a positive ROI on acquiring new customers, and is it sustainable? What other verticals are there to explore?

These are all questions that experiments can answer. Just make sure not to put too much time into building MVPs with unnecessarily high fidelity. It’s about learning fast, not putting faith in months of development of something you’re only 20% confident will work.

5. Measure

Put on your glasses; we’re talking numbers now. You can have the best understanding of your market, ideas that make tremendous sense, and minimal prototypes that look almost like the real thing. And yet, you can ruin the whole experiment and lose weeks or months of data-gathering time if you don’t measure the results correctly.

There is no better book for this than Lean Analytics: Use Data to Build a Better Startup Faster. If you want a quick sneak peek, here are my key takeaways:

Use impact metrics instead of vanity metrics

Vanity metrics make you feel good but are disconnected from the business or can’t change the way you act. Let’s take a look at some examples:

  • Website sessions, clicks, page views – unless your main revenue stream is ad space, these metrics are quite irrelevant.
  • Bounce rate, time on page – there are far better engagement metrics than these
  • Number of followers/friends/likes – unless you’re an Instagram or TikTok influencer, there is no direct connection to your revenue
  • Number of clicks, impressions, price per click, CPM – although it’s good to observe these metrics for optimization purposes, they are a far cry from actual business metrics such as CAC (Customer Acquisition Cost) or ROAS (Return On Advertising Spend)
  • Leads – this is tricky. Based on the source of traffic or messaging on your landing pages, you can easily fool yourself that a lot of leads means good results. But what about their quality? Rather, observe SQLs or PQLs.

Use impact metrics that are as close to the actual business as possible. For each step of your funnel, you should define at least one metric that will be key for experiments in this area. Here are a few examples:

  • Paying users: increase in actively subscribed users or accounts
  • Revenue growth: 10% monthly increase in acquired revenue
  • ARPA growth: increase in the average revenue generated per account
  • MQLs: Marketing Qualified Leads who showed interest in your product presentation
  • PQLs: Product Qualified Leads who show high engagement in your app
  • SQLs: Sales Qualified Leads (usually used in B2B) who fit your Ideal Customer Profile
  • Active users: increase in monthly, weekly, or daily active users
  • Engaged visitors: visitors who viewed the pricing or watched a product video
  • Page load time: <5 seconds
  • Monthly churn: <2%
  • ROAS: Return On Advertising Spend – the revenue generated by acquired customers (ideally their lifetime value) relative to the cost of acquiring them
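To make that last one concrete with made-up numbers: if you spend $1,000 on ads to acquire customers who go on to generate $3,000 over their lifetime, your ROAS is 3 (or 300%); a channel that stays persistently below 1 is paying more for customers than they bring back.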

Compare against control groups

If your experiment aims to change or enhance something already existing, such as design, copy, pricing, special offer, or a product feature, always try to test its impact against a control group.

Imagine an experiment – an email with a special discount offer was sent to 500 non-reactive trial users, and 20 of them converted. Was it a success?

We don’t know. There is no evidence that these 20 conversions were incremental – that is, that they wouldn’t have happened without the promotion. If those users would have converted anyway, we just threw away money on unnecessary discounts, and if we deem this experiment valid, we will throw away even more in the future.

You can observe incrementality either by split testing or cohort analysis:

Split testing splits the users into two or more groups. The first group gets the original experience while the others get variations of it, and you observe the differences in conversion rates towards your impact metric. The benefit is that you remove the influence of time, whether that’s seasonal and macroeconomic trends or other changes in your product and marketing strategy over the course of the experiment. The downside is that it can sometimes be challenging to get your sample right.
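To make the discount-email example concrete, here’s a rough sketch of what the setup and readout of such a split test could look like; the 50/50 split, the hashing trick for stable group assignment, and the numbers are all illustrative assumptions:

```python
import hashlib

# Deterministically assign each trial user to control (no email) or
# variant (discount email) by hashing their ID, so the same user
# always lands in the same group.
def assign_group(user_id: str) -> str:
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "variant" if bucket < 50 else "control"

# Illustrative results once the test has run its course
results = {
    "control": {"users": 500, "conversions": 12},
    "variant": {"users": 500, "conversions": 20},
}

control_rate = results["control"]["conversions"] / results["control"]["users"]
variant_rate = results["variant"]["conversions"] / results["variant"]["users"]
print(f"control: {control_rate:.1%}, variant: {variant_rate:.1%}")
print(f"relative lift of the discount email: {variant_rate / control_rate - 1:.0%}")
```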

Cohort analysis compares similar groups over a specific time period. For example, compare one month of data after the change against one month of data before it. Cohort analysis gathers data quicker than split testing because 100% of your audience receives the new experience (vs. 50% or less if you use multiple variations). However, this type of testing is susceptible to all kinds of macroeconomic influences and nuances that occur over time.

Draw a line in the sand

You need to pick a number, set it as the target, and have enough confidence that if you hit it, you consider it a success. And if you don’t hit the target, you need to go back to the drawing board and try again.

Most of the time, experiments end up right in the big fat middle. There was some success, but it wasn’t out of this world. Was it enough success to keep going, or do you have to go back and run some new experiments? That’s the trickiest spot to be in.

There are two ways to set the goal:

From your business model – if you know that you need 10% of your users to sign up for the paid version of your site to meet your business targets, then that’s your number.

Industry benchmarks – what is the average across companies with similar aims? There are plenty of resources to learn this. Here are a few:

Achieve statistical significance

For evaluating experiments, always look at the difference in conversion rate towards your impact metric and the statistical significance of the result.

Fortunately, you don’t have to calculate this yourself. Tools such as the Bayesian A/B Testing Calculator will do the work for you.

Generally, you aim for at least 95% confidence in your result. In reality, though, this might take a lot of time to achieve. You might settle for 90% or even 80% in some cases, but don’t stop the experiment the moment you hit that number. Only stop it if the result holds over a longer period of time, and take it with a grain of salt. If you decide to validate the hypothesis with less statistical significance, make sure to get more in the next loop.
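If you’re curious what such a calculator roughly does under the hood, here’s a sketch of one common Bayesian approach (not necessarily the exact method that particular tool uses): model each group’s conversion rate as a Beta distribution and estimate the probability that the variant genuinely beats the control.

```python
import random

# Rough Bayesian A/B sketch: sample conversion rates from each
# group's Beta posterior and count how often the variant wins.
def prob_variant_beats_control(conv_a, n_a, conv_b, n_b, samples=100_000):
    wins = 0
    for _ in range(samples):
        rate_a = random.betavariate(conv_a + 1, n_a - conv_a + 1)  # control posterior
        rate_b = random.betavariate(conv_b + 1, n_b - conv_b + 1)  # variant posterior
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# Control: 12/500 converted, variant: 20/500 (illustrative numbers)
p = prob_variant_beats_control(12, 500, 20, 500)
print(f"Probability the variant really is better: {p:.1%}")
```

The “+1” terms simply encode a uniform prior, i.e., we assume nothing about the conversion rate before the test starts.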

Putting it all together

Now that you’ve prepared your experiment, it’s time to put it all together. If you use Asana or a similar project management tool, you can use that to create a task with custom fields where you define this:

  • What we already know – all the relevant knowledge and data gathered from the understanding stage
  • Hypothesis – what we believe we can achieve
  • ICE score – the product of the quantified Impact, Confidence, and Ease
  • Impact metric – what metric should this experiment directly influence
  • How to test it – description of your MVP
  • How to measure it – description of the details of your test
  • We are right if… – set the expected goal
  • Details – anything to add, such as expected data collection time, any affiliated costs, used third-party tools, etc.
A sample Asana task describing the key details of an experiment
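Filled in, such a task might read something like this (a purely hypothetical example):

  • What we already know – 40% of trial users never invite a teammate; interviews suggest they don’t see the value of the collaboration features
  • Hypothesis – we believe that an in-app “invite your team” prompt will increase trial-to-paid conversion for team accounts because collaboration is what keeps our customers around
  • ICE score – 4 x 3 x 4 = 48
  • Impact metric – trial-to-paid conversion rate
  • How to test it – a simple modal shown to half of new trials (a Fake door if the invite flow isn’t built yet)
  • How to measure it – split test against a control group over four weeks
  • We are right if… – the variant group converts at least 10% better with ≥90% significance
  • Details – no development beyond the modal; roughly two days of design and copy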

And there you have it! By defining all your experiments with this framework, you can make sure you’ll move forward more smoothly and get results more quickly.

Check out different MVP examples in our library, and let me know your experiences with the experimentation process in the comments below or by email.

Happy experimenting!

Share your experience

Have you tested an interesting experiment that you’d like to share? Publish it on the largest online directory for experimentation-led growth ideas! Influence thousands of growth experts and get recognition for you and your company.
