How to run growth experiments (that get results)

The scientific approach to growth


Startups can't afford to spend weeks on campaigns that have zero impact.

The best teams spend days on tests that increase conversion and get insights that move their business forward.

You grow faster when you learn:

  • What works

  • What doesn’t work

for your audience and then make changes based on the insights.

And the more experiments you run, the quicker you learn.

So when I advise teams, I walk them through a simple framework where we break down the test they want to run into 6 steps.

Today, I’m going to walk you through the same process and share with you the framework and the experiment doc I use to make sure:

  1. Your tests are less likely to fail

  2. You always learn something from the experiment

This newsletter is kindly sponsored - a big thanks to them.

My friends launched a massive guide about onboarding called the Ultimate Guide to Product Onboarding.

For all my onboarding and product growth enthusiasts, it's packed with tips, examples, and best practices that are really well done.

I’ve already added lots of examples to my swipe file after reading it.

So when you want to run an experiment, I suggest you break down the test into 6 steps. Use this experiment doc for each test you run.


  1. Set an objective

  2. Create a hypothesis

  3. Outline the method

  4. Record the results

  5. Digest the learnings

  6. Have clear next steps
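If you like keeping these docs somewhere scriptable, the six steps map naturally onto a small record type. A minimal sketch (the field names and example values are my own, not from the experiment doc itself):

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentDoc:
    """One record per test, mirroring the six steps of the framework."""
    objective: str                  # 1. the metric you want to move
    hypothesis: str                 # 2. "I believe X because Y"
    method: str                     # 3. how you'll run the minimum viable test
    results: str = ""               # 4. filled in after the test
    learnings: str = ""             # 5. what the results taught you
    next_steps: list = field(default_factory=list)  # 6. follow-up tests

# Example entry, filled in before the test runs
doc = ExperimentDoc(
    objective="Increase sign-up page conversion",
    hypothesis="Less friction on signup will lift conversion by 3% "
               "because session recordings showed confusion there",
    method="Remove every field but email for 50% of visitors",
)
```

The point of the structure is that steps 4-6 start empty: the doc forces you to write down the objective, hypothesis, and method before the test, and come back for the rest after.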


Steps 1-3: Before the test

Steps 4-6: After the test

Growth Experiment Cheat Sheet

Let’s get into detail for each of these steps:

Before the test:

1) Set an objective

Ask yourself:

What are you trying to achieve and learn?

What metric do you want to move?

Your objective should be focused on one area and be core to your business.

This needs to be:

  • Clear

  • Measurable

  • High impact

For example: "I want to increase the conversion rate on the sign-up page."

2) Create a hypothesis

This is your belief as to why the test will be successful.

Just like in science class, we use hypotheses to make sure we get a definitive answer to our tests. No sitting on the fence.

You also want to use data/user feedback to make your hypothesis stronger.

"I believe by

having less friction on the signup page,

conversion will increase by 3%

because video recordings showed major confusion there"
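One sanity check worth doing on a hypothesis like this: estimate how many visitors you need before a 3-point lift would even be detectable. A minimal sketch using the standard two-proportion sample-size formula (the 8% baseline conversion rate is an illustrative number I've assumed, not from the example above):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_base, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect an absolute
    lift in conversion rate with a two-sided two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # confidence threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    p1, p2 = p_base, p_base + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

# Baseline 8% conversion, hoping for a 3-point lift to 11%
n = sample_size_per_arm(0.08, 0.03)   # roughly 1,500 visitors per variant
```

If your signup page only sees a few hundred visitors a week, that number tells you up front the test will take a while, which may change which version of the test you run.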

Now it doesn’t matter whether your hypothesis is right or wrong.

This isn’t about how smart you are. It’s about getting the answer.

3) Outline the method

How will you run the test?

You want to validate as fast as possible. Don't waste time making the 'perfect' test. Your goal is to learn fast.

Quick and dirty is fine.

You want to run the Minimum Viable Test (MVT).

To do so, ask yourself questions like:

  • Is this the quickest way to get my answer?

  • Is this the most high-impact thing to learn?

  • What audience are we targeting?

  • Can we run this without code?

  • What do I need to know to successfully run this test?

Go right to what you want to learn and work backwards.

Think of all the things 'less friction' could mean:


  1. Remove the entire signup page

  2. Remove every field but the email field

  3. Remove the 'verify email' field

Rank in order of:

  • Highest impact

  • Quickest to test

Run the highest score first.
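That ranking can be as simple as scoring each idea on the two criteria and sorting. A quick sketch (the 1-5 scores are illustrative guesses, not real estimates):

```python
# Score each friction-reduction idea 1-5 on impact and speed-to-test,
# then run the idea with the highest combined score first.
ideas = [
    ("Remove the entire signup page",   {"impact": 5, "speed": 2}),
    ("Remove every field but email",    {"impact": 4, "speed": 4}),
    ("Remove the 'verify email' field", {"impact": 2, "speed": 5}),
]

ranked = sorted(
    ideas,
    key=lambda item: item[1]["impact"] + item[1]["speed"],
    reverse=True,
)
first_test = ranked[0][0]   # the test to run first
```

A spreadsheet does the same job; the only rule that matters is that the scoring happens before you pick, so you're not just testing whatever idea you like most.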

After the Test:

4) Record the results

Record the performance of the experiment.

Did the metric move closer to the goal or not?

Was the experiment a 'success' or 'failure'?

Spend 5-10 minutes getting all the important data.
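When the metric is a conversion rate, part of 'getting the important data' is checking whether the movement is real or just noise. A minimal two-proportion z-test sketch (the visitor and signup counts are made up for illustration):

```python
from math import sqrt

def conversion_z_score(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates
    (pooled two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 120 signups from 1,500 visitors (8%)
# Variant: 165 signups from 1,500 visitors (11%)
z = conversion_z_score(120, 1500, 165, 1500)
significant = abs(z) > 1.96   # ~95% confidence, two-sided
# → z ≈ 2.8, so this lift clears the bar
```

If the z-score doesn't clear the threshold, record that too: "no detectable difference" is a result, not a failure to get one.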

5) Digest the learnings

This is the most critical area.

Spend time digesting the results. Analyse by asking yourself questions like:

  • What went well or didn't?

  • Did you get your answer?

  • Do you have enough data?

  • Why do you think the results happened the way they did?

This is where your insights come from. Don’t skip this step.

6) Have clear next steps

We want to use what we learned to gain momentum.

And inform our roadmap.

  • Based on your learnings, what would you test next?

  • What would you change next time?

  • Should we do more tests in the same area?

This means we are always moving forward.

As any good growth marketer will tell you, most of your experiments will fail. That's fine; it means you're taking big swings.

But running an experiment that fails and teaches you nothing valuable is the real waste.

This framework helps you avoid spending time on small experiments that don't deliver insights.

Here’s a link to the simple experiment doc I use with clients.

When you’re ready, here’s how I can help you grow:

If you run an early-stage startup and want me to help your team run more effective experiments, you can book a call here

Get the Startup Growth Roadmap - my playbook of 25+ templates that's helped 300+ founders and marketers scale their startups.

What did you think of this post?

Your feedback helps me create better posts for you
