Agile estimations

- And how long do you guys think it will take us to develop this user story?

(The little yellow post-it boasted the proud title "Synchronization of labile subkernel optimizers".)

- 5 hours.

- No, 7! 12! 3! 48! 20!

After about 15 minutes of heated discussion, the estimate field bore a fresh inscription: “16 hrs.” The two-week plan contained 8 similar tasks, and each of the 5 developers was sure the team would meet the deadline.

- All tasks? By October 18? Consider it done.

Naturally, two weeks later it turned out that the project was nowhere near done. The code lacked both "lability" and "subkernelness". There were "optimizers", at times even "synchronized" ones, but the overall picture was grim: the plan had fallen through, the story was not ready, the product could not be released. Other stories did not fare much better. The Product Owner plunged into silent panic, vividly picturing the visit to the Big Boss and the report on why the deadline had come and gone ("who the heck knows", "our estimates are crappy") and how many people would be needed for the fine-tuning ("who the heck knows", "ask for more than you need and you'll get just about enough").

Does this seem familiar? I suppose everybody who deals with software development recognizes the pattern. Similar problems were described by Fred Brooks in his immortal "The Mythical Man-Month" and have been described by our contemporaries since. Yet, with the persistence of a zombie, the problem rises from its grave again and again. Reading Brooks' masterpiece does not help, software estimation trainings are powerless, and even Agile, the much-hoped-for "silver bullet", seems to be of little use.

The reason is that the picture above is full of traps that inexperienced but energetic and adventurous Product Owners easily fall into.

Their first and gravest mistake is the belief that developers can produce a more or less accurate time estimate for any given task. This is fundamentally wrong. Ashmanov, for instance, recommends multiplying a developer's estimate by Pi (3.14159265), and it certainly sounds reasonable. All developers are optimists by nature. That optimism is their source of both inspiration and passion for the work itself. Developers depend on it; it is the driving force that lets them move forward. Relying on it when planning, however, is fundamentally wrong. Only very bad managers do so. (Alas, in this industry I have come across too many "universal" managers who believe that the domain does not matter and that technical specifics can safely be overlooked.)

What to do, then? The easiest thing is to abandon all hope of getting accurate time estimates. It may sound insane, but let's face it: accurate time estimates are a myth in 99.99% of cases. There is no point in turning your plan into a mythological saga that has nothing in common with reality. How, then, do you calculate timeframes? There is no getting away from that question.

By using Story Points and Velocity. Their main advantage is their abstractness. A developer, or a whole team, may talk their heads off giving you wildly optimistic hour estimates, but comparable workloads will still end up with roughly the same estimate in points.

Yes, “approximately”. You heard me correctly. I realize that any manager is opposed to approximations by the very nature of the job. Everybody needs estimates that are accurate to the last cent and hour, and everybody needs them yesterday. The sad thing is, it does not work that way. A software task is not unlike Schrödinger's cat, which remains both dead and alive until the box is opened (in our case, until someone starts writing the code). Until then, all we have are probabilistic methods, and Agile in its numerous variations is currently one of the most accurate of them. Its accuracy rests on the law of large numbers: put enough sufficiently small stories into the iteration backlog and their estimation errors start to cancel each other out, and a few iterations bring you to an average development velocity.
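
To see how this averaging works in practice, here is a minimal sketch in Python; the team, its "true" velocity of 20 points per sprint, and the noise level are all invented for the sake of illustration. Individual sprints bounce around, but the running average settles down after a handful of iterations.

    import random

    random.seed(1)

    # A purely hypothetical team whose "true" velocity is 20 points per sprint,
    # observed with a fair amount of sprint-to-sprint noise.
    TRUE_VELOCITY = 20

    history = []
    for sprint in range(1, 11):
        observed = max(0, random.gauss(TRUE_VELOCITY, 6))  # points actually completed
        history.append(observed)
        running_average = sum(history) / len(history)
        print(f"sprint {sprint:2d}: completed {observed:5.1f} pts, "
              f"running average {running_average:5.1f} pts")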

OK, we have accepted the idea. Now we need to create user stories of sensible size, estimate them in story points, and measure the real velocity. Sounds easy? Many people share that impression. And this is where the traps lie in wait.

First, to get comparable workloads, the tasks and stories have to be split properly. The bigger a story, the bigger its potential estimation error, and vice versa. Given a sufficient number of tasks, an error of one or two points on task A will be offset by an error of minus one or two points on task B. Splitting the work correctly, however, requires the cooperation of the team, the Product Owner, and even the client's representatives. Agile is not a magic wand that makes your planning adequate on its own.
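
A quick, purely hypothetical simulation of that trade-off, assuming for illustration that the estimation error grows roughly in proportion to story size: one 40-point monolith ends up much further off the mark, on average, than the same work split into eight 5-point stories, because the independent small errors partially cancel.

    import random
    import statistics

    random.seed(7)

    def actual_effort(points):
        """Hypothetical 'true' effort: the estimate plus noise that grows with story size."""
        return max(0.5, points + random.gauss(0, 0.4 * points))

    def avg_abs_error(story_sizes, trials=10_000):
        """Average absolute error of the total estimate over many simulated runs."""
        total = sum(story_sizes)
        return statistics.mean(
            abs(sum(actual_effort(p) for p in story_sizes) - total) for _ in range(trials)
        )

    # One monolithic 40-point story vs. the same work split into eight 5-point stories.
    print(f"monolith (1 x 40): off by {avg_abs_error([40]):.1f} points on average")
    print(f"split    (8 x 5):  off by {avg_abs_error([5] * 8):.1f} points on average")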

Second, Velocity is not an absolute reference value that you can look up in a book or ask colleagues about. It is a purely empirical figure. It varies from team to team and, quite possibly, from project to project. Only after several iterations with a particular team do you accumulate enough data for further planning. If management is not prepared to work this way, adopting Agile may prove extremely painful.
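
As a back-of-the-envelope illustration (the sprint history and backlog size below are made up), once a few iterations are behind you, the planning arithmetic is trivial: measure the average velocity and divide the remaining backlog by it.

    # Story points completed in each finished sprint by one particular team
    # (hypothetical numbers; every real team will have its own history).
    completed_points = [21, 18, 25, 19, 22]

    velocity = sum(completed_points) / len(completed_points)  # empirical average

    remaining_backlog = 130  # story points left in the backlog
    sprints_left = remaining_backlog / velocity

    print(f"average velocity: {velocity:.1f} points per sprint")
    print(f"forecast: about {sprints_left:.1f} sprints to finish the backlog")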

And third, even the best user stories with the best estimates guarantee nothing. True, we now have a list of stories and a sane, realistic estimate of the workload, but success still depends on a whole set of conditions and requirements, each of which deserves a story of its own.

Sadly, there is no silver bullet. But if you summon up the courage to set time estimates aside in favour of story points, estimating timeframes and taking on commitments (realistic ones, as opposed to utopian ones) becomes possible. And that is reassuring.
