Tracking Individual Performance Has a Negative Impact on Your Outcomes
A manager I was consulting for, back in the day, wanted to improve the quality of their deliverables. They decided that if the number of defects assigned to a team member went above a certain threshold, it would be an accurate indicator that the team was cutting corners and that the results didn't meet the quality level their customers expected. The product owner prioritized the issues and assigned them to the responsible team members, so at first glance the metric felt reasonable.
They created a dashboard that proudly displayed the number of defects currently assigned to each individual, sorted highest to lowest. Think about this for a second: your name, listed in red at the top of the dashboard because you have twenty defects assigned to you. The day after, you get an email from the technical director saying it's unacceptable that you have so many open “problems”.
This approach was supposed to improve quality, so what makes it such a bad idea? The list is quite long.
We want defects reported; we don't want them hidden. Are you more or less likely to report bugs in your management tool if doing so puts your name in red at the top of the list? Less likely. Chances are, you'll end up with an empty list of issues while defects still gradually find their way into production.
Let’s dig deeper here. Does it matter how many defects are detected if the implementation isn’t going live anytime soon? If the solution has to be perfect before release, should you test it more? For 2 more weeks, 6 more weeks, 10 more weeks? If your customers can’t provide early feedback, how would you find the rest of the defects or UX flaws in a timely manner?
Are all defects equal? Would you hold back a fix for a significant feature because of some small layout defects on one resolution/browser combination? Quality is relative, not absolute. Withholding a release over a minor defect reduces quality as a whole, because customers keep waiting for the fix that actually matters to them.
Does it matter who is responsible for resolving the defects? Would just knowing there is a spike in defects be enough?
The mistake this organization made was to show data without context: no priority context and no customer-impact context.
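To make that concrete, here is a minimal sketch, in Python, of what the same data could look like with context attached. The Defect fields (severity, affected_users, area) are hypothetical names invented for illustration; the point is that the aggregation is by priority and customer impact at the team level, with no per-person attribution.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    severity: str        # e.g. "critical", "major", "minor" (hypothetical field)
    affected_users: int  # rough count of customers hit by the defect (hypothetical)
    area: str            # feature area, deliberately not an individual's name

def quality_snapshot(defects: list[Defect]) -> dict:
    """Aggregate open defects by severity and by customer impact per area.

    No per-person attribution: the goal is to see where quality hurts
    customers, not to rank individuals.
    """
    by_severity = Counter(d.severity for d in defects)
    impact_by_area = Counter()
    for d in defects:
        impact_by_area[d.area] += d.affected_users
    return {
        "open_by_severity": dict(by_severity),
        "customer_impact_by_area": dict(impact_by_area),
    }

# Example: three open defects, attributed to product areas, not people
defects = [
    Defect("critical", affected_users=1200, area="checkout"),
    Defect("minor", affected_users=3, area="settings"),
    Defect("major", affected_users=150, area="checkout"),
]
print(quality_snapshot(defects))
# {'open_by_severity': {'critical': 1, 'minor': 1, 'major': 1},
#  'customer_impact_by_area': {'checkout': 1350, 'settings': 3}}
```

A view like this still surfaces a spike in defects, but it points the conversation at priorities and affected customers rather than at the person whose name happens to sit at the top of a list.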
The result of this initiative was that the teams adapted by hiding data related to defects: even though the defect counts appeared to go down, customer feedback remained negative.
Strive not to put people in a position where they have to make choices that are best for themselves but hurt the whole. Every time you put a metric in place, ask yourself, “Does this metric make my team feel threatened?” and “How can we use this metric to make things better, and what actions could it provoke that would hurt our performance?”
There is very little upside to tracking at an individual level, and plenty of downsides that leave your data needlessly incomplete or irrelevant. Your efforts only ever make sense if they translate into outcomes for your organization. Keep your metrics aligned with that goal.