You have done the hard work to convince your management that your organization will benefit from adopting Agile practices. You had multiple conversations explaining why Agile will achieve faster time-to-market, improve product quality, and increase customer satisfaction. You skillfully obtained approval to assemble a team to demonstrate that this “new” way of planning and executing a project actually works for your organization.
Now that you have a team, you have provided them with some training and coaching, and they are off and running. Your leadership is asking how things are going. Your team seems to be working hard, making a few mistakes, but overall learning quickly and delivering working capabilities to customers. You know intuitively that these types of efforts may not pay off immediately, but you need a way to manage expectations tactfully.
What can you do to communicate appropriately with the leadership team that sponsored your initiative but does not necessarily understand all the mechanics of this new Agile team?
In this article, I will provide a few ideas on ways to measure and track the performance of your team so that you can have the necessary information to communicate to the sponsors and stakeholders.
Metric #1 – User Stories Completed vs. Planned
If your Agile team has been working together for a few sprints, it is very likely that they are already watching this metric. Evaluating how the team did at the end of a sprint (the Sprint Review) typically involves comparing how many User Stories were completed against the plan the team committed to during Sprint Planning.
As team capacity moves up and down with the availability of team members, the amount of planned work will also go up or down. This is not necessarily a bad thing, but if you look only at the raw number of Stories completed from sprint to sprint, you may not get a good gauge of whether team performance is improving or degrading. This is why it is beneficial to look at a percentage rather than a discrete number: by tracking the percent of planned work actually completed, you gain a better understanding of how well the team estimates its work, and you can identify areas of improvement that enhance predictability in the long run.
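The percentage idea above can be sketched in a few lines. This is a minimal illustration with made-up sprint numbers; the field names and values are hypothetical, not pulled from any real tracking tool.

```python
# Sketch: completion rate per sprint (hypothetical data).
sprints = [
    {"name": "Sprint 1", "planned": 10, "completed": 7},
    {"name": "Sprint 2", "planned": 12, "completed": 10},
    {"name": "Sprint 3", "planned": 8, "completed": 8},
]

for sprint in sprints:
    # Percent of committed stories actually finished this sprint.
    rate = sprint["completed"] / sprint["planned"] * 100
    print(f"{sprint['name']}: {rate:.0f}% of planned stories completed")
```

Even though the raw completed counts (7, 10, 8) bounce around with capacity, the rising percentages show the team's estimation improving.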
Metric #2 – Technical Debt
Some Agile teams feel that if they can produce working software at the end of a sprint, they are successful. My opinion is that while being able to deliver working product is important and valuable, it is also important to ensure sound engineering practices are being utilized. The metric of Technical Debt is one measure of the quality of software engineering for the team.
I have personally encountered situations where teams overcommitted to the amount of work for a sprint but decided to try to complete the work regardless by sacrificing quality and/or taking shortcuts. This behavior may lead to short-term gains in terms of working product, but it generates Technical Debt that is likely to build over time, impede the team’s ability to accurately forecast future work, and degrade team performance.
Using tools such as SonarQube, Agile teams can regularly identify such issues and correct them as needed. The team can also monitor trends over multiple sprints to see whether the amount of Technical Debt is increasing or decreasing.
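The trend check can be as simple as comparing one sprint's reading with the next. The sketch below assumes a hypothetical per-sprint series of debt readings (e.g., remediation effort in hours, which is one way tools like SonarQube express debt); the numbers are illustrative.

```python
# Sketch: trend of Technical Debt across sprints (hypothetical values).
debt_hours = [40, 46, 44, 52, 58]  # one reading per sprint

# Compare each sprint with the previous one to see the direction of change.
deltas = [b - a for a, b in zip(debt_hours, debt_hours[1:])]
trend = "increasing" if sum(deltas) > 0 else "decreasing or flat"
print(f"Debt change per sprint: {deltas} -> overall {trend}")
```

A single noisy sprint matters less than the overall direction; here the net change across the series is what signals a growing problem.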
Metric #3 – Requirements Volatility
The concept of Requirements Volatility is defined in different ways depending on the organization and the context of the product being built. For the purposes of this article, I define Requirements Volatility as change in the Sprint Backlog: for example, User Stories being added to or removed from the Sprint Backlog during the Sprint, after Sprint Planning has been completed.
The amount of change to the Sprint Backlog can have a significant impact on your team’s overall productivity. Whenever a new User Story is added to the Sprint for any reason, the team is usually interrupted and must invest time and effort to evaluate the Story and decide how to address it. This interruption, however small, breaks the team’s momentum and flow, which will likely reduce output for that Sprint.
By tracking these types of activities over several sprints, you will be able to see whether there is a bigger issue to address; if the Sprint Backlog continues to change frequently, the team may need to explore process improvements such as strengthening Sprint Planning, improving the quality of User Stories and Acceptance Criteria, or shortening the Sprint length.
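Tracking this can start with nothing more than a log of mid-sprint backlog changes and a per-sprint count. The event structure below is a hypothetical illustration, not a format from any particular tool.

```python
from collections import Counter

# Sketch: measuring Sprint Backlog churn (hypothetical event log).
# Each event records a story added to or removed from the Sprint Backlog
# after Sprint Planning was completed.
events = [
    {"sprint": "Sprint 4", "action": "added"},
    {"sprint": "Sprint 4", "action": "removed"},
    {"sprint": "Sprint 5", "action": "added"},
    {"sprint": "Sprint 5", "action": "added"},
    {"sprint": "Sprint 5", "action": "added"},
]

# Count all mid-sprint changes per sprint, regardless of direction.
churn = Counter(e["sprint"] for e in events)
for sprint, changes in sorted(churn.items()):
    print(f"{sprint}: {changes} mid-sprint backlog changes")
```

A churn count that climbs sprint over sprint is the signal to revisit Sprint Planning or Story quality, as discussed above.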
Metric #4 – Defect Rate
The number of defects created by the Agile team is an important way to evaluate the quality of the work produced. Tracking the defect rate over time allows the team to see whether code quality is improving or worsening.
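Because sprints vary in size, normalizing defects by work completed (similar to the percentage idea in Metric #1) makes sprints comparable. The sketch below uses hypothetical numbers and divides defects by completed stories; other denominators (story points, lines of code) would work the same way.

```python
# Sketch: defect rate per sprint (hypothetical numbers).
history = [
    {"sprint": "Sprint 1", "defects": 6, "stories_completed": 7},
    {"sprint": "Sprint 2", "defects": 5, "stories_completed": 10},
    {"sprint": "Sprint 3", "defects": 3, "stories_completed": 8},
]

for h in history:
    # Defects normalized by output, so sprints of different sizes compare fairly.
    rate = h["defects"] / h["stories_completed"]
    print(f"{h['sprint']}: {rate:.2f} defects per completed story")
```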
Metric #5 – Retrospective Process Improvement Effectiveness
An Agile team must seek relentless improvement in order to keep innovating. In my experience, many Agile teams fail to reap the benefits of Retrospectives because they commit to making too many changes, or too few; both scenarios should be addressed, but they must first be identified.
By tracking how well your team implements desired process changes and follows through to completion, you gain a view into how adaptive your Agile team is. If the team consistently discusses the same problems but does not take action to resolve them, you may need to take a closer look at how to inspire follow-through.
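Both signals described above (overall follow-through and recurring unresolved items) can be computed from a simple action-item log. The data below is hypothetical; the item names are made up for illustration.

```python
from collections import Counter

# Sketch: follow-through on Retrospective action items (hypothetical log).
actions = [
    {"sprint": "Sprint 1", "item": "Automate smoke tests", "done": True},
    {"sprint": "Sprint 1", "item": "Refine Definition of Done", "done": False},
    {"sprint": "Sprint 2", "item": "Refine Definition of Done", "done": False},
    {"sprint": "Sprint 2", "item": "Limit WIP to 3 items", "done": True},
]

# Share of committed improvements actually completed.
follow_through = sum(a["done"] for a in actions) / len(actions) * 100
print(f"Retrospective follow-through: {follow_through:.0f}%")

# Items that recur across sprints without being completed may point to
# deeper impediments the team keeps discussing but never resolves.
repeats = Counter(a["item"] for a in actions if not a["done"])
stuck = [item for item, count in repeats.items() if count > 1]
print("Recurring unresolved items:", stuck)
```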
In summary, this short list of metrics is just a small set of the possible dimensions to consider when evaluating the maturity of your Agile teams. It may be a good place to start, but you might also challenge your team to decide how they would like to measure success.