Reflections on refining an impact-driven design process

The following is open to further refinement. I see a lot of value in having a hypothesis that explains the predicted outcome early on in execution.

It incorporates what I've learnt from CXL Academy's Conversion Optimisation Mini Degree while leading a 4-month redesign project at Enrola.

These steps are normally shared across a few roles, such as a product manager and a product designer, so the list may look unusual if you're coming from a larger organisation:

  1. Articulate the problem and define the metric that matters most

  2. Is this a relatively small design update (like adding a card or changing copy)?

    • Yes, small update: Check the volume of conversions to determine if an experiment is warranted
      • If over 1,000 conversions per month: proceed with plans for a split test later.
      • If fewer than 1,000 conversions per month: do not plan any split tests, as they are unlikely to reach statistical significance within a useful timeframe (see the sample-size sketch after this list).
    • No, big changes: Experiments on big changes are fine for validating the overall direction. They can offer assurance that the new direction is safe, but they won't pinpoint why it performed differently, since many things change at once.
    • If there are fewer than 1,000 conversions per month, taking big bets is the way to go.
  3. Know your target audience

    • Who is the primary persona? What are their motivations, goals, needs and concerns in adopting a new solution? What is their context of use?
  4. Get started on content early

    • I cannot stress enough how helpful this is. The content doesn't have to be fully formed from the start, but an early idea of the web content helps with accessible design and development.
    • This ties back to knowing your target audience: determine the key messages you want to convey.
  5. Sketch possible solutions and agree on functionality

  6. Formulate a hypothesis and experiment plan

    • What is the predicted outcome? Where and what will you be changing? How many variations are you testing?
    • Estimate the expected monetary value earned once the minimum threshold for statistical significance is met.
    • Plan experiment details in advance, such as the number of conversions needed and the duration of the test (see the sample-size sketch after this list).
  7. Check for other experiments that might interact with your plans

    • Are there other tests being run at the same time that could affect the viability of the experiment results or complicate analysis?
    • Running split tests simultaneously is possible as long as there is little interaction between them.
  8. Design the solution, ask for feedback and iterate

  9. Test solutions with users

    • If it's a novel and innovative solution, definitely test with users.
    • Otherwise, testing of conventional web patterns can be skipped when an experienced designer is involved.
    • Test internally at the minimum.
  10. Release feature, launch experiment

  11. View experiment results on the predetermined end date (usually two weeks after launch)

  12. Analyse results and decide whether to release the feature (see the analysis sketch below)
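
To make steps 2 and 6 more concrete, here is a minimal planning sketch in Python. The numbers (visitor volume, baseline conversion rate, minimum detectable lift, value per conversion) are hypothetical placeholders rather than figures from the Enrola project; it uses the standard two-proportion sample-size formula to estimate how many visitors a split test needs, roughly how long it would run, and what the uplift could be worth.

```python
from statistics import NormalDist


def sample_size_per_variant(baseline_rate, relative_uplift, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1


# Hypothetical traffic: 10,000 visitors at a 3% conversion rate,
# i.e. ~300 conversions per month -- well under the 1,000 threshold.
visitors_per_month = 10_000
baseline_rate = 0.03
relative_uplift = 0.20          # the smallest lift worth detecting

n_per_variant = sample_size_per_variant(baseline_rate, relative_uplift)
weeks_to_run = (2 * n_per_variant) / visitors_per_month * 4.33  # ~4.33 weeks per month

print(f"Visitors needed per variant: {n_per_variant:,}")
print(f"Estimated test duration: {weeks_to_run:.1f} weeks")

# Expected monetary value if the uplift is real (hypothetical value per conversion).
value_per_conversion = 150
extra_conversions_per_month = visitors_per_month * baseline_rate * relative_uplift
print(f"Expected extra value per month: ${extra_conversions_per_month * value_per_conversion:,.0f}")
```

With these made-up numbers the test needs roughly 14,000 visitors per variant and about 12 weeks to detect a 20% lift, which illustrates why, below roughly 1,000 conversions per month, a split test stops being a practical way to validate a small change.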
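
And for steps 11 and 12, a minimal analysis sketch, again with made-up conversion counts: a two-proportion z-test on the control and variant results from the predetermined end date, used to decide whether the difference is significant enough to release the feature.

```python
from statistics import NormalDist


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) comparing conversion rates A vs B."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical results read off the dashboard on the end date:
z, p = two_proportion_z_test(conv_a=480, n_a=14_000,   # control
                             conv_b=560, n_b=14_000)   # variant
print(f"z = {z:.2f}, p = {p:.3f}")
print("Release the variant" if p < 0.05 else "Keep the control, or keep gathering data")
```

If the p-value clears the significance threshold agreed in the experiment plan, the variant ships; if not, the result is inconclusive rather than proof that the change failed.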

The focus of a growth designer when there are fewer than 1,000 conversions per month isn't that different. Rather, a sharper focus on the unique value proposition, customer needs and the competitive landscape is essential for taking better bets in the absence of a high volume of conversions.