Reflections on refining an impact-driven design process
The following is open to further refinement. It incorporates what I've learnt from CXL Academy's Conversion Optimisation Mini Degree while leading a 4-month redesign project at Enrola. I see a lot of value in having a hypothesis that explains the predicted outcome early on in execution.
These steps are normally shared across a few roles, such as product manager and product designer, so the breadth may look odd if you're coming from a larger organisation:
- Articulate the problem and define the metric that matters most
- Is this a relatively small design update (like adding a card or changing copy)?
    - Yes, small update: check the volume of conversions to determine whether an experiment is warranted.
        - Over 1,000 conversions per month: proceed with plans for a split test later.
        - Fewer than 1,000 conversions per month: do not plan any split tests, as the results will not be reliable at that volume.
    - No, big changes: an experiment on a big change is fine as validation. It can offer assurance that the new direction is safe, but it won't pinpoint which individual change drove the result.
        - With fewer than 1,000 conversions per month, taking big bets is the way to go.
- Know your target audience
    - Who is the primary persona? What are their motivations, goals, needs and concerns in adopting a new solution? What is their context of use?
- Get started on content early
    - I cannot stress enough how helpful this is. The content doesn't have to be fully formed from the start, but even a rough idea of the web content helps with accessible design and development.
    - This ties back to knowing your target audience: determine the key messages you want to convey.
- Sketch possible solutions and agree on functionality
- Formulate a hypothesis and experiment plan
    - What is the predicted outcome? Where and what will you be changing? How many variations are you testing?
    - Calculate the expected monetary value earned once the minimum threshold for statistical significance is met.
    - Plan experiment details in advance, such as the number of conversions needed and the duration of the test (see the planning sketch after this list).
- Check for other experiments that might interact with your plans
    - Are there other tests running at the same time that could affect the validity of the experiment results or complicate analysis?
    - Simultaneous split tests are workable as long as there is little interaction between them.
- Design the solution, ask for feedback and iterate
- Test solutions with users
    - If it's a novel, innovative solution, definitely test it with users.
    - Otherwise, testing of conventional web patterns can be skipped when an experienced designer is involved.
    - Test internally at a minimum.
- Release feature, launch experiment
- View experiment results on the predetermined end date (usually two weeks after launch)
- Analyse the results and decide whether to release the feature (see the analysis sketch after this list)
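
To make the experiment-planning step more concrete, here is a minimal sketch of how the conversions needed, the test duration and the expected monetary value might be estimated. It assumes a two-variant split test evaluated with a two-proportion z-test; the baseline rate, uplift, traffic and value-per-conversion figures are illustrative placeholders, not numbers from the Enrola project.

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline_rate, min_detectable_effect,
                         alpha=0.05, power=0.80):
    """Visitors needed in each variant of a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Illustrative funnel numbers -- replace with your own.
monthly_visitors = 25_000      # visitors reaching the step being changed
baseline_rate = 0.04           # 4% convert today (~1,000 conversions/month)
uplift = 0.25                  # smallest relative lift worth detecting
value_per_conversion = 120     # assumed average value of one conversion

n = visitors_per_variant(baseline_rate, uplift)
weeks = 2 * n / (monthly_visitors / 4.33)            # control + variant traffic
extra_value = monthly_visitors * baseline_rate * uplift * value_per_conversion

print(f"Visitors needed per variant: {n:,}")
print(f"Estimated test duration: {weeks:.1f} weeks")
print(f"Expected value if the uplift holds: ${extra_value:,.0f}/month")
```

With these placeholder inputs the test needs roughly 6,700 visitors per variant, which at this traffic level works out to a little over two weeks, consistent with the two-week window above.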
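For the final analysis step, here is a minimal sketch of the significance check run on the predetermined end date. The same two-proportion z-test framing is assumed, and the observed counts below are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Absolute lift and two-sided p-value for variant B versus control A."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Made-up results at the end of the test window.
lift, p_value = two_proportion_z_test(270, 6800, 330, 6750)
release = p_value < 0.05 and lift > 0
print(f"Absolute lift: {lift:.2%}, p-value: {p_value:.3f}, release: {release}")
```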
The focus of a growth designer when there are fewer than 1,000 conversions per month isn't that different. Rather, a sharper focus on the unique value proposition, customer needs and the competitive landscape is essential for taking better bets in the absence of conversion volume.