This is a set of defaults for teams to use, but is not mandatory if teams have good reason to do something different (see What this is — and is not).
This section covers how to validate designs, whether that's a mockup of a proposition, a prototype or a technical diagram.
Validation broadly refers to testing your design so that you can confidently implement it, iterate on it, or discard it and focus efforts elsewhere. Feedback loops are critical to rapid and successful product development: the shorter the loop and the more representative the feedback, the better.
The most basic and quickest form of validation is to test your design with your team. By working in the open, regularly sharing progress and eliciting feedback through structured sessions such as design critiques and show-and-tells, you can gain valuable insights and direction. But, while internal validation is quick to do and benefits from the experience of a range of disciplines, it is no substitute for user or market validation and should predominantly be used to shape early iterations.
Stakeholder validation and engagement is vital to ensuring any proposed designs meet wider business needs and objectives. Stakeholders are often best placed to provide deep subject matter expertise and historical context which is hard to substitute within the team. Working in the open, broadcasting progress and actively seeking input across organisational boundaries helps promote alignment and collaboration. And by making stakeholders active participants in the design process you can reduce the friction often associated with organisational change.
Testing with real users is one of the most effective ways of understanding whether the solution you are designing is desirable and usable, and therefore likely to achieve your goals. Make the test as realistic as possible: the more representative the participants are of your target audience, the more your design or prototype looks like a real product or service, and the more natural the environment you run the test in, the more reliable your feedback will be.
To maximise the return on your investment it is important to plan your test carefully. Stating what you are trying to learn upfront by framing your test as a hypothesis will ensure your research stays focused. Once you know what you're trying to learn, you can then choose the most appropriate testing method.
Always try to gather feedback from existing or target users of your proposition and ensure you select a representative sample to test with. Choosing an appropriate sample size depends on a number of factors including your chosen research method, time, budget and risk. Often it's better to stay lean and keep your sample size small. There is no substitute for releasing a working product and seeing how it performs in the market, so large sample sizes and protracted research cycles are often wasteful. A usability testing rule-of-thumb is that testing with just five participants will uncover the majority of usability problems, provided those five are representative of your target user base.
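The five-participant rule of thumb rests on a commonly cited problem-discovery model (Nielsen and Landauer), in which each participant independently uncovers any given usability problem with some fixed probability. A minimal sketch, assuming that model and its often-quoted average detection rate of roughly 0.31 per participant (an empirical average, not a guarantee for any particular product):

```python
# Problem-discovery model behind the "five users" rule of thumb.
# p is the probability that one participant surfaces a given problem;
# 0.31 is the average reported by Nielsen and Landauer, used here as
# an illustrative assumption.

def problems_found(n_participants: int, p: float = 0.31) -> float:
    """Expected proportion of usability problems uncovered by n participants."""
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
```

With these assumptions, five participants uncover roughly 85% of problems, which is why additional participants quickly yield diminishing returns.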
The most unbiased form of testing is unmoderated testing, where no facilitator is involved and the participant is asked to perform a task or provide feedback in their natural environment. However, it is often easier to gain deeper insights by questioning the participant during the testing session, and the design may need additional explanation or guidance while the user is testing it, especially if it is an early-stage concept. This is where moderated testing can be beneficial. In moderated testing, a moderator facilitates the test, either in person (in a lab or work environment, where the moderator can observe how users interact with the product) or remotely (via a video-conference call during which the moderator asks participants to share their screens). The drawbacks of moderated testing are that it is often more expensive and time-consuming, and there is a greater chance of introducing bias through prompting, asking leading questions or staging the test in an unnatural environment.
Research materials such as notes, contact details and recordings should be stored responsibly and in line with the General Data Protection Regulation (GDPR). Always validate your approach to storing user research with your client and avoid using tools that have not been subject to a Data Protection Impact Assessment (DPIA). If you are unsure, speak to your local Technical Services Lead.
Market validation takes user validation one step further by releasing a prototype, proof-of-concept or Minimum Viable Product (MVP) to a subset of users in order to gather their feedback. For example, you might launch a landing page for a new service that captures data, or A/B test a part-functioning feature to gauge interest. Market validation will give you a much better understanding of whether the thing you've designed is likely to succeed. However, these types of tests can take a long time to orchestrate successfully, and they risk frustrating users if the experience feels sub-par.
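To make the A/B testing step concrete, the sketch below compares conversion rates between two variants with a two-proportion z-test. This is one common way to analyse such a test, not the only one; the function name and the traffic figures are illustrative assumptions, not from this guide.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference in conversion rates between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: 120/2000 sign-ups on variant A vs 156/2000 on B.
z = two_proportion_z(120, 2000, 156, 2000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 95% level
```

In practice you would decide the sample size and significance threshold before running the test, for the same reason the section recommends framing research as a hypothesis upfront: it keeps you from stopping as soon as the numbers look favourable.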
Now that you've gathered feedback on your design, the next step is to decide what to do with it. This is best done as a team. Does the team have the confidence to implement the design? If so, progress to implementation planning. Is further iteration and refinement needed? If so, repeat the discovery loop until you've gained more confidence. Or has the feedback provided an alternative perspective that warrants a completely different approach? If that's the case, discard the design and move on to the next highest-priority task.