Enterprise DevOps: 13 Principles of Meaningful Metrics Measurement, Part 1

Written by: Nigel Willie

8 min read

This is the ninth in a series of blogs about various enterprise DevOps topics. The series is co-authored by Nigel Willie, DevOps practitioner, and Sacha Labourey, CEO of CloudBees.

We shall start this blog by stating that we are firm supporters of the idea that the key metrics of IT performance are:

  • Lead time for changes

  • Release frequency

  • Time to restore service

  • Change fail rate

We also recognize that when running a program across a large enterprise, you will be subject to requests for other metrics from your key sponsors. These may include speed of adoption, impact of changes, cost of the program, any associated savings and so on. The key metrics are outcome metrics; beneath them sits a set of lower-level metrics that drive those outcomes.
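As a rough illustration of how these four outcome metrics can be derived, here is a minimal sketch that computes them from a list of change records. The record fields, values and the idea of a single shared change log are our own assumptions for the example, not a prescribed schema:

```python
from datetime import datetime
from statistics import median

# Illustrative change records; the field names and values are assumptions, not a standard schema.
changes = [
    {"committed": datetime(2018, 5, 1, 9, 0), "deployed": datetime(2018, 5, 2, 14, 0),
     "failed": False, "restored": None},
    {"committed": datetime(2018, 5, 3, 10, 0), "deployed": datetime(2018, 5, 3, 16, 0),
     "failed": True, "restored": datetime(2018, 5, 3, 17, 30)},
]

# Lead time for changes: from commit to running in production.
lead_times = [c["deployed"] - c["committed"] for c in changes]
print("Median lead time:", median(lead_times))

# Release frequency: deployments per day over the observed window.
window = max(c["deployed"] for c in changes) - min(c["deployed"] for c in changes)
print("Releases per day:", len(changes) / max(window.days, 1))

# Time to restore service: from the failed release to service restored.
restore_times = [c["restored"] - c["deployed"] for c in changes if c["failed"]]
print("Median time to restore:", median(restore_times) if restore_times else "n/a")

# Change fail rate: the proportion of changes that caused a failure in production.
print("Change fail rate:", sum(c["failed"] for c in changes) / len(changes))
```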

In this article, we are going to start by trying to define a set of principles around metrics. We shall not attempt to define a comprehensive set of lower-level metrics, as many of these are specific to a particular circumstance or organization. Rather, we trust that the principles will act as a guide, helping you decide which metrics are meaningful and how to obtain them.

Principles of metrics

1. Only collect actionable metrics – Ask yourself: what will you do if your proposed metric changes? If you can’t articulate an action, you probably don’t need to collect the information.

2. Focus on the value to the business, not the level of work to IT – The fundamental rationale of DevOps is increasing business value. Your metrics should reflect this. Please note that IT performance measurements are valid if they affect business value and meet principle one above.

3. Collect a few simple-to-understand metrics – Don’t turn metrics collection into an industry.

4. Metrics should be role-based – Business value metrics to the business, program-based metrics to program leads, technical debt metrics to technicians – understand the target community and rationale and don’t inundate people with metrics. We like the description that metrics should be offered as a “smorgasbord, where consumers select those which are pertinent to them for their personal dashboard.”

5. All metrics collection and collation should be automated – This should be self-evident to the DevOps practitioner, who is looking to automate the delivery of technical capability. First, manual collection of metrics is time-consuming and runs counter to the program you are delivering. Second, you cannot obtain real-time information manually; as a result, you drift away from a proactive health check and toward a post-mortem.
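One way to keep collection automated is to have the pipeline itself emit an event for every deployment. The sketch below assumes a simple local event log and an illustrative BUILD_NUMBER pipeline variable; both are stand-ins for whatever store and identifiers your own pipeline provides:

```python
import json
import os
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical local event log; in practice this would be your metrics store or a queue.
EVENTS_FILE = Path("deploy-events.jsonl")

def record_deployment(app: str, environment: str, succeeded: bool) -> None:
    """Append one deployment event; invoked by the pipeline itself, never filled in by hand."""
    event = {
        "app": app,
        "environment": environment,
        "succeeded": succeeded,
        # The timestamp comes from the machine, so the data is real-time and consistent.
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # BUILD_NUMBER is an illustrative pipeline variable; substitute your own identifier.
        "build": os.environ.get("BUILD_NUMBER", "unknown"),
    }
    with EVENTS_FILE.open("a") as handle:
        handle.write(json.dumps(event) + "\n")

if __name__ == "__main__":
    record_deployment(app="payments-service", environment="production", succeeded=True)
```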

6. All metrics should be displayed on a unified dashboard – Don’t expect people to hunt for them. This unified dashboard can be personalized for the consumer and the team, as per point four. The key consideration is that the consumer should be able to find all the metrics they want in one place.

7. Prioritize raw numbers over ratios – With the exception of change fail rate, which is correctly a ratio, we recommend the collection and display of raw data. This is particularly pertinent for metrics aimed at technicians. It both promotes a holistic application of these numbers by the technical specialists within the team and reduces the risk of team-level metrics being used to compare performance across teams without contextual understanding.

Because this is an important point, we will explain further. In most large enterprises, there is an annual discussion around levels of reward and bonus. Anything that is collected as a ratio can be seized upon by enterprise management to argue for their team. Nigel has been involved in many of these meetings over the years (far too many), and managers always fight for their teams to be highly rewarded when the organization starts fitting teams across a bell curve of performance. With ratios, you end up with conversations that go like this: “My team has a ratio of 75% and your team’s ratio is 71%, which supports my argument for a higher reward.”

90% of these metrics are meaningless at a comparative level. By using raw numbers that the team itself can use to amend its behavior, you meet the primary need and reduce the risk of numbers being taken out of context and used for meaningless comparison. Of course, all meetings leak like sieves, and the teams soon get to hear, rightly or wrongly, that their bonus was affected by, for example, the ratio of new lines of code compared to amended lines. They then amend their behavior to improve their bonus, not to meet the organization’s needs.
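To make the point concrete, the numbers below are invented, but they mirror the conversation above:

```python
# Two teams with superficially comparable success ratios but very different contexts.
team_a = {"deploys": 400, "failed": 100}  # high-volume product under constant change
team_b = {"deploys": 7, "failed": 2}      # small internal tool, rarely touched

for name, team in (("Team A", team_a), ("Team B", team_b)):
    ratio = 1 - team["failed"] / team["deploys"]
    print(f"{name}: {team['deploys']} deploys, {team['failed']} failures, "
          f"success ratio {ratio:.0%}")

# 75% versus 71%: the ratios invite a meaningless comparison, while the raw numbers
# make it obvious the two teams are not doing the same job.
```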

8. Use the correct metric for the use case – Different use cases demand different metrics – for some products, it is velocity; for others, it is stability and availability. This principle is for the consumer of the metrics rather than the supplier, but it is critical. Our blog on context should make it clear that the primary success indicator for each product may not be the same.

9. Focus on team, not individual, metrics – DevOps looks to drive a culture of cooperation and teamwork to deliver success. As your culture starts to change, you should see a greater focus on the recognition of teams rather than individuals. To support this shift, your metrics should focus on teams too.

10. Don’t compare teams, compare trends – If we accept point 8, that different teams have different primary metrics, we should also accept that each team will have different goals. Additionally, if raw data is used for many metrics, it makes little sense to compare teams. Rather, product teams, business units and key sponsors should compare trends within their own teams and units.
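As a minimal sketch of what comparing trends can look like in practice (the numbers and the four-week window are illustrative), a team might track a rolling average of its own lead time:

```python
from statistics import mean

# Weekly median lead times in hours for a single team; the values are illustrative.
weekly_lead_time = [52, 49, 47, 50, 44, 41, 38, 36]

def rolling_mean(values, window=4):
    """A team's trend against itself, rather than a comparison against other teams."""
    return [mean(values[i:i + window]) for i in range(len(values) - window + 1)]

print(rolling_mean(weekly_lead_time))  # a steadily falling trend is the signal that matters
```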

11. Look for outliers – While avoiding direct comparisons between teams, it is still sensible to look for outliers. Where these are identified, look for clues as to why certain teams are significantly overperforming or underperforming their peers. They can often provide valuable learning points for others.
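A simple way to surface outliers without turning the exercise into a league table is a basic statistical screen. The sketch below flags any team more than two standard deviations from the mean; the team names, counts and threshold are invented for illustration:

```python
from statistics import mean, stdev

# Monthly production deployments per team; names and counts are illustrative.
deploys = {"alpha": 42, "bravo": 38, "charlie": 45, "delta": 40,
           "echo": 44, "foxtrot": 39, "golf": 3}

mu, sigma = mean(deploys.values()), stdev(deploys.values())
outliers = {team: count for team, count in deploys.items() if abs(count - mu) > 2 * sigma}
print(outliers)  # a team far from its peers is worth a conversation, not a league table
```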

12. Lead time is time to production, not time to completion – This is a fundamental principle. It is repeated here because, in our experience, the initial stages of adoption are often accompanied by a focus on reducing the time to production readiness. The final step of continuous delivery, the release itself, often lags behind, and it is critical that lead time measures time to production and nothing else. You should also be wary of soft launches if the formal production release to market does not follow closely.
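The difference is easy to see with two timestamps. In this illustrative example (the dates are invented), measuring to “production ready” would flatter the team by a week:

```python
from datetime import datetime

# Illustrative milestones for a single change; only the production timestamp stops the clock.
change = {
    "committed":        datetime(2018, 5, 1, 9, 0),
    "ready_for_prod":   datetime(2018, 5, 2, 11, 0),   # "done" in the backlog tool
    "deployed_to_prod": datetime(2018, 5, 9, 15, 0),   # actually running for users
}

# The tempting number (commit to readiness) hides a week of waiting.
print("Time to readiness:", change["ready_for_prod"] - change["committed"])
# The principle: lead time ends only when the change reaches production.
print("Lead time:", change["deployed_to_prod"] - change["committed"])
```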

13. Use secondary metrics to mitigate unintended consequences – For example, a focus on time to market could negatively impact quality, which is why the key metrics of IT performance contain both. If you focus on a specific metric, you should ask what the negative impacts of that focus could be and monitor trends in this space. This applies even if you have taken the conscious decision that you are happy to suffer these consequences.

In our next blog, 4 Further Considerations for Metrics Measurement, we will take the opportunity to suggest a few areas that we feel are worthy of further consideration but are not regularly discussed in organizations.


Follow the Enterprise DevOps blog series from Sacha and Nigel

  1. Enterprise DevOps: An Introduction

  2. Enterprise DevOps: I Wouldn't Start from Here: Understand Your DevOps Starting Point

  3. Enterprise DevOps: Context is King

  4. Enterprise DevOps: Creating a Service Line

  5. Enterprise DevOps: On Governance

  6. Enterprise DevOps: The Spine is Critical

  7. Enterprise DevOps: Move to Self-Service

  8. Enterprise DevOps: The People's Front of Judea, or Don't Sweat the Small Stuff

  9. Enterprise DevOps: 13 Principles of Meaningful Metrics Measurement, Part 1 (this post)

  10. Enterprise DevOps: 4 Further Considerations for Metrics Measurement, Part 2


About the Authors

Nigel Willie

With over 20 years’ experience working in IT for one of the world’s largest financial institutions, Nigel has experience managing and delivering global transformation programs. Starting his career as a developer, Nigel’s most recent role was to deliver cross-platform DevOps automation capabilities to the enterprise. From his experience, he understands many of the challenges and mistakes involved in a DevOps transformation; indeed, he claims to have made most of the mistakes himself.

Nigel has also had the good fortune to work with a lot of highly skilled individuals, both as colleagues and across the industry. He is attempting to share some of his personal observations and thoughts in the hope they will be of value to others. Nigel is a great believer that every initiative is individual and any observations he makes are intended as principles or guidelines, not rules.

Sacha Labourey

Sacha is a native of Switzerland and graduated in 1999 from EPFL. While at EPFL, he started his first consulting business - Cogito Informatique. In 2001, he joined Marc Fleury’s JBoss project as a core contributor and implemented JBoss’ original clustering features. Sacha went on to become GM for JBoss Europe, leading the strategy and helping to recruit the partners that fueled the company’s growth in the region. In 2005, he was appointed CTO, overseeing all of JBoss engineering.

In June 2006, JBoss was acquired by Red Hat (NYSE: RHT). As CTO, Sacha played a crucial role in integrating and productizing the JBoss software with Red Hat offerings. In 2007, Sacha became co-General Manager of Red Hat’s middleware division. He left Red Hat in 2009 and founded CloudBees in March 2010. Follow Sacha on Twitter.
