The recent #NoEstimates movement has brought increased visibility to an issue that many find contentious. Some assert that estimates are required for any professional software project. Others declare that estimates are harmful enough that we must find a way to reduce or eliminate them altogether. Personally, I’ve found the discussions about #NoEstimates valuable enough that I decided to do some research and then run some experiments of my own in order to move in that direction. Here are the results of two of those experiments:
Experiment #1: Relative Points
For my first experiment, I decided to look at our data. We had already been using planning poker for some time, and we were working with a client who had requested that we track our actual hours per user story. (We don’t always do this for various reasons that are outside the scope of this article.) The question I was trying to answer when looking at the data was this:
Will our actuals hold true to the relative sizes we had assigned them?
That is, if a story worth 1 point takes an average of 10 hours of actual effort, will a story worth 2 points take an average of 20 hours, and so on? For this project, we used point values of 0.5, 1, 2, 3, 5, 8, 13, and 20. The “Actual” hours in the graph below are divided by a constant so that they share a similar Y-axis scale with the points. Here are the results:
[Graph: Project 1]
So, there it is – we were almost perfect at relative estimating for that project! That belief held firm right up until the next project:
[Graph: Project 2]
That’s right: in “project 2”, our 1-point stories took longer on average than our 8-point stories. Still, the relative actuals for stories of 2 through 8 points weren’t too far off, and we held on to some belief that the 1-pointers may have been an aberration.
Enter “project 3”:
[Graph: Project 3]
“Project 3” took away any remaining belief in the accuracy of our point estimates by giving us what looked like completely random results. Roll the dice.
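For anyone who wants to run the same check against their own numbers, the calculation behind these graphs is straightforward. The sketch below is a hypothetical illustration rather than our actual tooling: it groups completed stories by point value, averages their actual hours, and expresses each average as a multiple of the 1-point average so it can be compared against the points directly.

```python
# A minimal sketch of the experiment 1 check, assuming each completed
# story is recorded as a (points, actual_hours) pair. Data and names
# here are hypothetical, not our actual tooling.
from collections import defaultdict


def average_hours_by_points(stories):
    """Group stories by point value and average their actual hours."""
    buckets = defaultdict(list)
    for points, hours in stories:
        buckets[points].append(hours)
    return {p: sum(h) / len(h) for p, h in sorted(buckets.items())}


def relative_ratios(averages, baseline=1):
    """Express each average as a multiple of the per-point cost of the
    baseline size, so it can be compared directly to the point value."""
    per_point = averages[baseline] / baseline
    return {p: round(avg / per_point, 2) for p, avg in averages.items()}


# Hypothetical data: (points, actual hours) for each completed story.
stories = [(1, 9), (1, 11), (2, 22), (3, 28), (5, 48), (8, 85)]
averages = average_hours_by_points(stories)
print(relative_ratios(averages))
# Perfect relative estimating would print values equal to the point sizes;
# here it prints {1: 1.0, 2: 2.2, 3: 2.8, 5: 4.8, 8: 8.5}
```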
Experiment #2: Points to Count
However, this wasn’t enough to convince me to run away from the data. By this time, there were multiple reports of people comparing the sum of their iteration velocities to the count of stories that were being completed. That seemed like a reasonable experiment, so once again I turned to the data. This time, the question I was trying to answer when looking at the data was:
Is summing the number of points per period just as useful for planning
as counting the number of completed stories per period?
At this point we already had data for 3 projects, so we could start graphing right away. Since the initial results were favourable, the graph below shows data from 8 different projects over the course of more than a year:
This graph clearly illustrated to us that we could stop estimating in points and instead just count the number of ‘done’ user stories for planning and forecasting purposes. It has allowed us to move closer to #NoEstimates by removing one more estimating step. We no longer need to use planning poker to agree upon a number – instead we keep slicing stories until they are ‘small enough’.
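If you want to try the same comparison on your own data, one quick way to check it is sketched below. It is a hypothetical illustration, not our actual analysis: it pairs each iteration’s total points with its story count and measures how closely the two track each other. If they move together, counting completed stories carries essentially the same planning information as summing points.

```python
# A minimal sketch of the experiment 2 comparison, assuming each
# iteration is recorded as the list of point values of its completed
# stories. Data and names here are hypothetical, not our actual tooling.

def points_vs_count(iterations):
    """Pair each iteration's summed points with its story count."""
    return [(sum(points), len(points)) for points in iterations]


def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical data: point values of the stories finished in each iteration.
iterations = [[1, 2, 2, 3], [2, 3, 3, 5, 5], [1, 1, 2], [2, 2, 3, 5, 5, 8]]
velocities, counts = zip(*points_vs_count(iterations))
print(list(zip(velocities, counts)))          # [(8, 4), (18, 5), (4, 3), (25, 6)]
print(round(pearson(velocities, counts), 2))  # close to 1.0 here (~0.99)
```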
Summary
In summary, these two experiments didn’t lead us to stop estimating altogether, but they have definitely moved us in that direction. We can still use data to help us forecast and plan, but we are less dependent on estimates to do so. This helps us reduce some of the dangerous effects of estimates, with the additional benefit of getting to spend more time delivering value.
As a bonus, these two experiments nudge us towards continued process experimentation, something I'm happy to endorse.