Monitor progress using standards and facts

admin May 26, 2023

I speak to many CIOs and IT managers, and I notice that many of them are struggling in this ‘Agile’ era. They often have many (self-managing) agile teams, and they lack control. When it comes to software development, for example, they often have no idea which functionality will be ready at what time, how much it really costs, what the quality is, and what risks the organization runs with each new delivery of software in the portfolio.

The larger the number of teams and the more self-managing a team is, the more difficult it becomes to maintain an overview. Teams often estimate with self-defined Story Points, which is very useful at the team level, but not for management purposes.

Real Agilists may say, “That’s part of it, trust the teams,” but the “real world” has principles like accountability. There is a lot of money involved in software development and maintenance teams, and there are people responsible for spending those budgets correctly and for what is delivered in return. In fact, being able to deliver new functionality quickly and at a lower cost than the competition can mean the difference between a successful business and a company going bankrupt. Management needs solid management information, based on standards, to gain insight and to make the right decisions.

To compare application development activities, it is always necessary to determine the size of an application or project. For IT benchmarking and advisory firms like IDC Metri, this is crucial: because we must be able to compare organizations with one another, we can only use standards for this. Users are normally interested only in the functionality offered to them, not in how that functionality is delivered (agile or traditional) or which technology the developers use (Java, .NET, etc.). We therefore use methods that measure the amount of functionality offered to the user in ISO/IEC standardized measures for functional size.

Standardized measures for functional size were first developed in the late 1970s at IBM, with the aim of measuring the productivity of development teams. The latest standard in this family, on automated measurement, dates from 2019. Because only the functional user requirements are measured, i.e., what the software should do for its users (not how or why), these standardized methods are independent of the technology used and of the way in which the software is developed. They can be considered the square meter of the software industry. A brief analogy: whether a wall of 20 square meters is built with large bricks, small pebbles, or in glass, aluminium, or another material, the size in square meters remains the same. The same holds for the standardized functional size of a given set of functional requirements. The costs are determined by the price per square meter, which depends, for example, on the productivity of the masons and grouters and on the cost of materials.
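
To make the cost side of the analogy concrete, here is a minimal sketch in Python. The functional size, delivery rates, and hourly rates are invented for illustration; they are not IDC Metri benchmark figures.

```python
# The "square meter" analogy in code: the functional size stays fixed,
# while the cost depends on the price per unit of size
# (productivity x hourly rate). All numbers are invented examples.

functional_size_fp = 200  # functional size in function points

# Two hypothetical technology stacks with different "prices per FP"
scenarios = {
    "stack_a": {"hours_per_fp": 8.0, "hourly_rate": 90.0},
    "stack_b": {"hours_per_fp": 12.0, "hourly_rate": 70.0},
}

for name, s in scenarios.items():
    cost = functional_size_fp * s["hours_per_fp"] * s["hourly_rate"]
    print(f"{name}: size = {functional_size_fp} FP, estimated cost = {cost:,.0f}")
```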

These standardized measures are perfectly suited to act as management information on the progress of the functionality that a project is expected to deliver. The standard is agnostic to the technology solution and to the type of project used to deliver it. So, there is no reason why standardized functional size cannot be used in the agile world. In fact, user stories can easily and accurately be measured with standardized measures. This is particularly important when it comes to the estimation and monitoring of software development. In the agile world, too, many software projects go wrong: the Minimum Viable Product (MVP) is not delivered within time and budget, and/or the technical quality is so poor that a catch-up effort is needed afterwards to reduce the technical debt and remove the outstanding defects.

The figure on the right shows why many agile projects go wrong when only story point metrics are used. Besides the fact that story point metrics cannot be used across teams, they are a relative measure of effort. This means that everything that costs effort most likely also gets a story point estimate. Story points therefore give a distorted picture when you want to monitor project progress.

The project needs to deliver a certain number of green blocks on a certain date within a certain budget, but all the red blocks (and to some extent the blue and orange blocks) take effort hours away from the team. That effort could have been spent on green blocks instead. Of course, there should be a healthy balance, as technical debt management is an important part of the work, but the figure shows exactly the trap you can walk into when measuring velocity and progress using story point metrics only. We regularly see very low predictability in agile teams that use story points only; see, for instance, the figure on the left from one of our studies.
Standardized functional size measures only functionality (new, modified, and removed), so progress can be monitored much more accurately using, for instance, productivity (hours per size unit).
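
As a minimal sketch of what such productivity-based monitoring could look like, assuming hypothetical sprint data (the function point and hour figures below are invented, not taken from a real project):

```python
# Sketch: derive realized productivity (hours per function point) and
# functional progress from sprint data. All figures are hypothetical.

sprints = [
    # (function points delivered, effort hours spent)
    (12, 110),
    (10, 105),
    (14, 120),
]
total_fp_planned = 120  # functional size of the planned scope (assumed)

fp_delivered = sum(fp for fp, _ in sprints)
hours_spent = sum(hours for _, hours in sprints)

hours_per_fp = hours_spent / fp_delivered        # realized productivity
progress = fp_delivered / total_fp_planned       # functional progress (0..1)
remaining_hours = (total_fp_planned - fp_delivered) * hours_per_fp

print(f"Realized productivity: {hours_per_fp:.1f} hours per FP")
print(f"Functional progress:   {progress:.0%}")
print(f"Forecast effort left:  {remaining_hours:.0f} hours at current productivity")
```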

Standardized functional size is also a very suitable method to make an accurate estimate of the costs and lead time of developing the Minimum Viable Product (MVP) early in the Project (or Product) Lifecycle. To do this, the size of the product backlog can be measured using high-level sizing methods. In addition, industry data is used to determine the minimum, likely, and maximum productivity for the project, in order to arrive at an accurate and well-founded estimate. IDC Metri uses its database of over 15,000 data points to select comparable projects for these estimates.
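
The calculation behind such a three-point estimate can be sketched as follows. The backlog size, productivity range, hourly rate, and team capacity below are illustrative assumptions, not values from IDC Metri's database.

```python
# Sketch of an early three-point estimate from backlog size and a
# productivity range taken from comparable projects. All inputs are
# illustrative assumptions.

backlog_size_fp = 350                  # high-level sizing of the product backlog
hourly_rate = 85.0                     # blended hourly rate (assumed)
team_capacity_hours_per_month = 600.0  # assumed team capacity

productivity_hours_per_fp = {          # min / likely / max productivity (assumed)
    "min": 6.0,
    "likely": 9.0,
    "max": 14.0,
}

for scenario, hpf in productivity_hours_per_fp.items():
    effort_hours = backlog_size_fp * hpf
    cost = effort_hours * hourly_rate
    lead_time_months = effort_hours / team_capacity_hours_per_month
    print(f"{scenario:>6}: {effort_hours:,.0f} hours, cost ~ {cost:,.0f}, "
          f"lead time ~ {lead_time_months:.1f} months")
```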

For project managers, this is great news: it becomes possible to monitor project progress again, based on functionality delivered instead of person-hours burnt. Every sprint is a data point at which you can check how many function points should be ready and compare the actuals to the plan. This makes it possible to pick up early warning signs and steer in time. The figure on the right shows how easy it is to track progress and to build scenarios based on actual productivity and functional size delivered.
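
To make the sprint-by-sprint comparison concrete, here is a small hypothetical sketch that flags an early warning when cumulative delivered function points fall too far behind plan. The plan, actuals, and threshold are invented for illustration.

```python
# Hypothetical sketch: compare cumulative planned vs. delivered function
# points per sprint and flag an early warning when the gap grows too large.

planned_fp_per_sprint = [10, 10, 12, 12, 14, 14]  # plan (assumed)
actual_fp_per_sprint = [9, 8, 10, 9]              # actuals so far (assumed)
warning_threshold = 0.15                          # warn if more than 15% behind

cum_planned = 0
cum_actual = 0
for sprint, planned in enumerate(planned_fp_per_sprint, start=1):
    cum_planned += planned
    if sprint > len(actual_fp_per_sprint):
        break  # no actuals yet for future sprints
    cum_actual += actual_fp_per_sprint[sprint - 1]
    lag = (cum_planned - cum_actual) / cum_planned
    status = "EARLY WARNING" if lag > warning_threshold else "on track"
    print(f"Sprint {sprint}: planned {cum_planned} FP, delivered {cum_actual} FP "
          f"({lag:.0%} behind) -> {status}")
```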
