Delivery Quality reviews
Introduction
Delivery Quality reviews are an important part of how we support engagements to ensure they sustainably deliver good outcomes.
These reviews are a facilitated, structured self-assessment by people on an engagement to identify where to focus any improvement efforts. They aim to build shared understanding and drive action.
Reviews are done using a set of questions that cover all aspects of technical delivery and operations.
Results are captured in a Google Sheet. See How do you do reviews?
Who is involved in the review?
- Delivery Lead, Tech Lead and Account Lead.
- Whoever else from the engagement is needed to do a full and accurate review.
- It is good to include people who are not in a leadership position, to ensure a spread of perspectives is represented.
- A facilitator.
What happens if we don't pass the review?
There is no concept of "passing" or "failing" a review and there are no minimum acceptable scores.
The review is a tool to help engagements identify areas they want to improve and get any help they need to do that — see What does this mean for me as a team member?
For those who support engagements, these reviews help them do so more effectively by giving them better insight — see What does this mean for me as someone supporting the team?
It is not a quality gate and it does not have any implication outside of these contexts.
What's the schedule?
- Initial review at the end of month one of a new engagement.
- Refresh review every three months.
- Or earlier if there is a significant change such as a new stakeholder, a shift in product direction or an architecture or technology change.
Regular monthly checkpoint
In between reviews, engagements should set aside time each month to select top actions from the last review and pull them into their delivery backlog for implementation. Book in a recurring session to make sure this happens.
How do you do reviews?
Reviews using this framework are done by the team, either as a whole or by a smaller set of representatives. As a guide, 5–8 is a good number of people to have in a session. The scope of the review should be small enough that a single set of scores and actions appropriately represents the situation for most areas. If you need to split some areas and score separately for different parts of the system, that's OK.
Facilitator
Good facilitation helps teams get the most out of the review, and it is recommended that all reviews include an outside facilitator experienced with the process. Because of the breadth and depth of the review, facilitation is best done by someone who has a broad background in both technical and delivery aspects. This helps them clarify and explain aspects of the review for the team, and allows them to delve into areas in more detail when required.
Capturing results
Capture results in the Delivery Quality Detail Google Sheet by making a copy of the TEMPLATE tab for your engagement. (Note this spreadsheet is only accessible to staff.)
| Field | Description |
|---|---|
| Last reviewed date | See What's the schedule? |
| Business context | Brief description of the product or service and how it fits into the customer organisation and their other products and services. |
| Tech | List of key tools, technologies and techniques in use. |
| Team structure | Brief information to set the context for questions relating to team relationships, the flow of work, roles and responsibilities, autonomy and which aspects of the solution the team have direct control over. |
| Top actions | To be completed at the end of the review and refreshed in each monthly checkpoint to highlight the top few actions to focus on. |
| Scoring key | See scoring. |
| Review: Area | Corresponds to the areas in the quality questions. |
| Review: Score | See scoring. If the scores are very different for different parts of the engagement then duplicate the row in the sheet and score separately for each area, capturing notes and actions specific to each area individually. |
| Review: Notes | Explanation of the score for future reference, calling out good points and issues. This is especially useful when re-reviewing to help identify the direction of travel relative to the last time the review was done. |
| Review: Actions | Actions that could drive improvement. Recording an action here is not a commitment: actions are prioritised and selected for focus in the Top actions section. |
Partially-complete example sheet
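If it helps to picture the sheet's structure, the sketch below mirrors the fields above as simple Python records. The names and types are illustrative only; the TEMPLATE tab in the Google Sheet is the actual source of truth.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewRow:
    """One scored area in the review sheet (field names are illustrative)."""
    area: str                                         # Review: Area
    score: Optional[int] = None                       # Review: Score, 1-5 per the scoring key
    notes: str = ""                                   # Review: Notes
    actions: list[str] = field(default_factory=list)  # Review: Actions (candidate improvements)

@dataclass
class DeliveryQualityReview:
    """Top-of-sheet context plus the scored rows (illustrative only)."""
    last_reviewed: str                                    # see "What's the schedule?"
    business_context: str                                 # the product/service and where it fits
    tech: str                                             # key tools, technologies and techniques
    team_structure: str                                   # context for team-related questions
    top_actions: list[str] = field(default_factory=list)  # refreshed at each monthly checkpoint
    rows: list[ReviewRow] = field(default_factory=list)   # one row per area (or per split area)
```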
Facilitator responsibilities
- Ensure the session has the right people in it.
- Ensure all participants understand the purpose of the review.
- Ensure a full and accurate review is done, considering all aspects.
- Ensure actions are identified and recorded.
- Keep the reviews focused and ensure the conversation stays relevant.
Preparation
- Recommended group size is 5–8 team members for each session.
- For a first review, recommended duration is 3–4 hours, split into sessions of no more than 1.5 hours. Refresh reviews typically take 1.5 hours, depending on how much has changed.
- Prepare for the session by having the review sheet open to record notes and scores, and the questions visible alongside it, so that both can easily be seen by participants when you share your screen. (Note this spreadsheet is only accessible to staff.)
Facilitator tips
- Set the scene.
- It is important to set the right tone. Some teams may understandably be wary of "being assessed", particularly because the process includes an outside facilitator. It is essential that they feel safe to make an honest appraisal. Emphasise that this tool is just a way of helping teams identify how to best drive continuous improvement — a bit like a "structured retrospective".
- Remember (and remind the team) that this question set is continually evolving and "open source". Encourage them to suggest ways it can be improved and raise pull requests.
- Know the framework.
- Be intimately familiar with the sections of the review. If a point is mentioned which relates to a future section then briefly note it against that section and suggest further discussion be picked up when you get to that section.
- Keep it focused.
- Work through the review section by section. For each, briefly outline the scope and pick out a few key points from the list of questions that indicate what "good" looks like, then invite the group to describe how things work for them. Keep the conversation and questioning open to start with and let the conversation be led by the team. Ask specific questions to fill in any gaps based on the points under each section. Identify any actions which come up and record these. Try to keep the conversation relevant and focused — there is a lot to go through.
- Guide the scoring.
- Once the team has discussed the points for the section, it is time to score. A good way to do this is using "planning poker" style blind voting, with each team member holding up 1 to 5 fingers. Ask them to reveal their votes simultaneously on the count of three.
- Discuss any differences in the score and agree on a single score to record. If one score cannot adequately represent the situation, duplicate the row to represent different parts of the project or system and score each separately. (One way to tally such a vote is sketched after these tips.)
- Refer to the definitions of each score on the review sheet. While emphasising that the score is not the most important part of the process, advise the team when it feels like they are being too harsh or too soft on scoring compared to the positive and negative points they raised and to scores for other sections. Accurate scoring will more clearly focus attention on the right areas.
- Focus on actions.
- Encourage the team to identify what actions could improve the score (especially if it is 3 or lower) and record these. Make sure the team is happy with the wording you use for the action. See identifying actions. For example:
- "You mentioned your stories often take a week or more to complete; would there be any benefit in trying to split stories up to make this shorter? How could you do that?"
- "You scored a 3. Are there any other things which you'd like to try to change to raise that score by the next review?"
- Cue next steps.
- Remind the team to set up regular monthly checkpoint sessions to ensure the reviews provide value.
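As an illustration of the blind-voting step described in the scoring tips above, here is a minimal Python sketch of one way to tally the votes and decide whether an area needs further discussion before a score is agreed. The function name and spread threshold are hypothetical; in practice this is simply done with fingers and conversation.

```python
from typing import Optional

def tally_blind_vote(votes: dict[str, int], max_spread: int = 1) -> tuple[Optional[int], bool]:
    """Summarise a planning-poker style blind vote for one review area.

    votes maps each participant to their 1-5 score. Returns (score, needs_discussion):
    a score is only suggested when everyone is within max_spread of each other;
    otherwise the facilitator should lead a discussion and agree a score with the team.
    """
    if not votes:
        return None, True
    if any(v < 1 or v > 5 for v in votes.values()):
        raise ValueError("scores must be between 1 and 5")
    if max(votes.values()) - min(votes.values()) > max_spread:
        return None, True  # views diverge: discuss before recording a score
    # Votes are close: suggest the most common value as a starting point for agreement.
    values = list(votes.values())
    return max(set(values), key=values.count), False

# Example: a wide spread of votes flags the area for discussion.
print(tally_blind_vote({"DL": 3, "TL": 3, "Dev A": 5, "Dev B": 2}))  # (None, True)
```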
What does this mean for me as a team member?
These reviews help you think more deeply as a team across all angles of what you do. For many teams, this will be the first time you have shared your thoughts and experiences about some of these things. Engagements typically find this a very positive experience which gives them more clarity on what they want to focus on next to ensure quality.
What does this mean for me as someone supporting the team?
These reviews give you a great insight into what is holding the engagement back and where you can provide support. The findings can also be a powerful tool in helping to make a case for change with customer stakeholders.
Identifying actions
The point of Delivery Quality reviews is to drive improvement action, but that can be easier said than done! Here are some hints on how to do that.
- Be very clear about what the problem is. For example:
- "The data access layer code in the foobar service is poorly structured and has too much duplication"
- "We don't record our decisions"
- "We have too much work to do before the deadline"
- Try to identify why this is happening, the cause. This will guide what action to take. See How to tackle common causes for problems.
- Be clear exactly what the action is, who will do it and when (see the sketch after this list).
- e.g. if a team identifies that they don't record decisions, then an appropriate action might be "Account Lead to create a decision log and establish two weekly reviews with relevant stakeholders, to be in place by DATE"
- Record that in the relevant Delivery Quality sheet, but more importantly record it in whatever system you use to track normal activity or improvement work — in your Jira, RAID log or whatever. That is what will actually help you get it done.
- Use regular reviews with someone from outside the engagement (e.g. DQ Buddies or Client Practice Lead) to help ensure actions are carried out.
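To make the "what, who and when" point above concrete, the sketch below shows the shape of a well-formed action record. The structure and field names are illustrative; the real record lives in the Delivery Quality sheet and in whatever tracker the engagement uses.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImprovementAction:
    """A well-formed improvement action: what, who and when (names are illustrative)."""
    problem: str     # the clearly stated problem it addresses
    action: str      # exactly what will be done
    owner: str       # who will do it
    due: date        # when it should be in place
    tracked_in: str  # where it is tracked day to day, e.g. "Jira" or "RAID log"

# Example shaped like the decision-log action above (owner and date are placeholders).
example = ImprovementAction(
    problem="We don't record our decisions",
    action="Create a decision log and establish regular reviews with relevant stakeholders",
    owner="Account Lead",
    due=date(2025, 6, 30),
    tracked_in="Jira",
)
```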
How to tackle common causes for problems
- We just haven't done it, but it's easy to do.
- Make it part of your usual work: add it to your backlog, prioritise it and do it.
- The team is inexperienced.
- Do some team members have the skills or experience needed, but not all? If so, then work out ways to spread that knowledge, e.g. pairing, show and tell sessions, training. Also, find ways to let team members work safely and with confidence while they gain experience, e.g. automated tests, code reviews.
- If this is a skill that is relatively low throughout the team (e.g. writing user stories, load testing) then who could help? If it's a relatively easy skill to gain then you may be able to find someone from the relevant Community of Practice to share some time. Otherwise your Account Lead, Client Practice Lead, Engineering Practice Lead or Tech Director may be able to help directly or help find someone who can.
- It is not considered a priority.
- This is a tricky one. Sometimes, it just genuinely isn't a priority and that's fine. But if you have low scores then maybe it should be a priority. How can you make a compelling case for change to your customer Product Owner or other stakeholders? Do you need any help from outside the account? Don't be afraid to lean on your DQ buddies, Client Practice Lead, Engineering Practice Lead or Tech Director for help in getting your message across. That's a large part of what they are there for! See influencing for more.