Improve software development efficiency using the CloudBees software delivery activity report
Software delivery activity is a cornerstone analytics report in the CloudBees platform. It gives users detailed insight into the performance and health of their engineering processes. The report helps software development teams and engineering managers optimize their workflows, identify and troubleshoot issues quickly, and improve the quality of their software.
Data presented in the software delivery activity report is pulled from a connected GitHub repository or an equivalent code repository. The data is then aggregated at the component, organization, and sub-organization level to present a bird's-eye view of engineering work.
Before diving into each widget, let’s discuss some common UI elements:
Filtering: Use the filters to choose the component, duration, and org level. Please note that weeks run from Monday to Sunday.
Hovering: Each report has a tooltip explaining what it covers. You can hover over the data for each graph type to get a breakdown.
Drill downs: You can click any data point shown in bold blue font for a deeper dive.
Environments: Anywhere environments are shown will populate with your data (staging, production, QA, etc.). Throughout this tutorial, our data focuses only on staging.
Viewing: All CloudBees platform pages are available in either light or dark mode. For this blog, we use dark mode.
Components and workflows
Our first row of widgets shows the number of applicable components, workflows, and workflow runs for the selected duration. Further, this information distinguishes between active and inactive components. We define active components as those with at least one workflow executed in any branch during the selected time frame.
We can interpret the chart above as follows. There are 423 components spanning 411 repos, with only 188 components running an active workflow. These components contain 2,654 workflows, of which 1,406 are active. The workflows span 10,210 runs, of which 6,292 are successful. For further analysis, users can click to drill down into the numbers. For example, if users are curious about the 3,918 failures (the 10,210 total runs minus the 6,292 successful ones), they can click that number to see a list of the failed workflows.
Commits, pull requests, and code churn trends
In the next set of widgets, we dive into commits, pull request trends, and code churn. For engineering managers, these widgets provide an overall picture of throughput that helps with resourcing decisions.
Commits trend
The commits trend shows how many total commits have been created for the selected components within the chosen time frame. As you can see, there are 9,383 commits by 109 active developers, yielding 86.1 weekly commits/active devs. Active developers are those who have made at least one commit to any of these components during the selected duration.
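As a quick sanity check on that headline number: 9,383 commits ÷ 109 active developers ≈ 86.1, so the figure is simply total commits divided by the number of active developers for the selected duration.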
Pull request trends
Pull request trends break down these commits into pull requests. In our example, there are 1,266 pull requests. You can see each pull request’s status: approved, changes requested, open, or rejected. Open indicates pull requests that are still actively being worked on or in review.
Code churn
Code churn tells you how many lines of code were added or deleted over time. It is a useful sanity check because the trend should match your internal mental model of what normal looks like. In our case, the number of additions is slightly higher than the number of deletions, which is typical for our product because we are actively adding new features, so the chart matches our mental model. A deviation from this pattern would warrant further investigation.
Code progression, successful build duration, and deployment overview
The next series of widgets explores how code progresses from commits to deployment, both from a timing and a success rate perspective. Using these widgets requires users to define the step kind in their workflow .yaml files so that the system can accurately distinguish between build jobs, scans, and deployment jobs, as sketched below.
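To illustrate, a workflow might tag its steps along these lines. Treat this as a minimal sketch rather than the exact CloudBees schema: the job names, container images, and commands are hypothetical, and the detail that matters for analytics is the kind value (build, scan, or deploy) attached to each step. Consult the CloudBees workflow documentation for the precise syntax.

apiVersion: automation.cloudbees.io/v1alpha1
kind: workflow
name: reports-service-ci
on:
  push:
    branches:
      - main
jobs:
  build:
    steps:
      - name: compile-and-test
        kind: build               # tells analytics this step is a build job
        uses: docker://golang:1.22
        run: |
          go build ./...
          go test ./...
  deploy:
    needs: build
    steps:
      - name: deploy-to-staging
        kind: deploy              # tells analytics this step is a deployment
        uses: docker://alpine:3.20
        run: ./scripts/deploy.sh staging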
Code progression snapshot
The code progression snapshot highlights the coding journey from run-initiating commits through builds to deployments, providing a view into overall efficiency.
In this example, you can see 10,210 run-initiating commits, meaning each of these commits triggered some kind of workflow run. If users have defined multiple environments, they will also see a breakdown of where each run-initiating commit sits.
Progressing through the chart, the 10,210 run-initiating commits generated 753 builds, of which 705 succeeded (a 94% success rate), resulting in 606 successful deployments, all to staging. If you have multiple environments (pre-prod, production, QA), they will also appear here.
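As a quick check on the funnel math: 705 ÷ 753 ≈ 93.6%, which the widget rounds to 94%, and 753 − 705 leaves the 48 failed builds discussed next.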
You will notice there were 48 failed builds. For triage purposes, you could drill down to see the run ID, status, start time, duration, and component name of each failed build. From there, clicking a run ID takes you to the details of why that run failed.
Successful build duration
Moving on to the next row, users can see information about builds and deployments. The successful build duration widget provides a box plot of build durations for each component. Hover over a box plot to see the minimum, median, and maximum durations, and use the arrows to scroll through components and see how their build durations vary over time.
In this example, the reports-service component is interesting because it shows a minimum build time of 26 seconds but a maximum of three minutes. Results like these should make engineering managers curious about why some builds take three minutes instead of a few seconds, and they warrant further investigation to improve efficiency.
Deployment overview
Deployment overview tracks successful and unsuccessful deployments in each environment over time. In our example, we have a 98% success rate of deployments to staging, which is reasonable. Furthermore, we provide weekly breakdowns of successful and failed deployments.
Measuring your development cycle times
Our final set of widgets explores the development cycle time and average deployment time.
Development cycle time
The development cycle time widget is a personal favorite. It breaks down your full development cycle time into stages to show your efficiency. Engineering managers can view the data at multiple levels (org, sub-org, or component) to optimize processes. This granularity allows root cause analysis to identify the most impactful bottlenecks and offers a blueprint for where fixes must occur. We recommend starting small and measuring the impact on the broader system.
Our example shows that, on average, the development cycle time is two days, six hours, 44 minutes, and 49 seconds; this meets our expectations because we have a globally distributed engineering team.
The average development cycle time comprises three stages: coding, pickup, and review.
Coding time spans from when an individual contributor starts proposing changes to the codebase until the pull request (PR) is created.
Pickup time is calculated from when the PR is created to when the review begins. It represents the first handoff that requires another person to push it forward. Often, this is the biggest bottleneck.
Review time is calculated from the start of the review to when the code is merged; it covers everything necessary to promote code into production. Depending on the review, this stage can restart the process if a complete code rewrite is required.
Average deployment time
Our final widget tracks the average deployment time for the selected duration across environments and builds upon the development cycle time widget.
We only have a staging environment defined, and it takes three minutes and 57 seconds to deploy. If we had additional environments, such as production or QA, they would also appear here. Similar to the deployment overview widget, average deployment time requires you to assign the deploy kind to the relevant steps in your workflow (as in the sketch earlier) so the data can be pulled in.
Putting it all together
As a platform engineering team or individual engineering team manager, you can’t improve processes without visibility into what’s occurring. CloudBees provides visibility into your software delivery activity levels in multiple ways (component, organization, or sub-organization). When bottlenecks or issues are identified, the ability to drill down to the source helps quickly resolve issues.
With this foundation in place, you can create the necessary benchmarks to measure and improve efficiency. Understanding where your teams are strongest also helps with resource allocation, ensuring organizations can meet growing customer demand. Ultimately, the goal is to create a great developer experience with as little friction as possible. By taking a proactive approach to efficiency, you are equipped to increase the amount of time your development teams spend writing code.
Use Case #1 - Engineering manager sees high cycle time
John, an engineering manager at a big tech firm, sees a very high cycle time in the development cycle time widget (one day, 16 hours, and 12 minutes), which does not match his intuition. The average cycle time further breaks down into coding, pickup, and review times.
While the coding time of 16 hours and the review time of five hours and 31 minutes meet expectations, the code pickup time of 18 hours and 38 minutes seems very high. It implies that after coding is finished (the PR is created), PRs wait nearly a day, on average, to be picked up for review.
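The stage numbers also reconcile with the headline figure: 16 hours + 18 hours 38 minutes + 5 hours 31 minutes comes to roughly 40 hours, matching the reported one day, 16 hours, and 12 minutes once per-stage rounding is accounted for.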
John reviews the agile process within his team and raises the high pickup time with them. The team points out various reasons why PRs sit in a pickup state, such as missed PR notifications and the frequency of scrum calls. Based on the discussion, the team optimizes the process, reducing pickup time to a few hours and, thereby, reducing the development cycle time to about a day.
Next steps
The software delivery activity report is a great source of information for engineering managers looking to optimize their team’s software delivery performance. Most of the data in these reports appears automatically once your GitHub or other repository is connected. Users can then aggregate data at the org, sub-org, or component level for further analysis.
Now that you understand this report better, it’s time to implement that knowledge. Try the CloudBees platform for free. For complete documentation on software delivery activity, click here.
To learn more about additional CloudBees analytics reports, visit the documentation below.