The Date-Driven Project: Cracking the Agile Paradox – Part 2 of 4

In Part 1, I discussed some of the realities of schedule-driven software development. We looked at both the practical business need for deadlines and the pitfalls that have tended to polarize engineering teams and their stakeholders.

We also explored the two project management extremes:

  1. The waterfall method, in which we set a deadline and then identify, analyze, and plan every application detail in advance.
  2. An Agile technique, such as Scrum, in which we iterate through a backlog and declare the project complete when we’ve built an application with acceptable utility.

Programmers are accustomed to thinking in black-and-white terms. Consider the evolution of software engineering practices. Once you’ve mastered OOP, you don’t write a mixture of object-oriented and procedural code (notwithstanding static methods and some singletons). Before OOP was widely adopted, the debate raged over the benefits of procedural code, then called “Structured Programming,” versus the prevailing non-technique known as “spaghetti” code. Today, practitioners of rigorous, proven techniques rarely break ranks.

But project management is not procedural, not deterministic, and not reducible to a set of rules that guarantee success. Project teams establish guidelines that reduce risk, with sufficient wiggle room to tune their best practices according to the specific challenges of the project, feature, or task at hand.

Working Schedules

Let’s start with a simple fact: a deadline is a schedule. It just happens to be a schedule at the lowest possible resolution, representing the single time interval from start date to ship date; call it the Master Timeline, or MT.

Now, let’s state the obvious. To increase the resolution of the single date schedule, we need to add more time increments, which take two basic forms:

  • Intermediate milestones, which, like the final completion date, mark time intervals.
  • Work estimates, which define the amount of time, in person-hours, required to complete discrete tasks.

The only way to validate the achievability of the intermediate milestones and the final deadline is to chop the anticipated work into smaller pieces and tile them into the Master Timeline. Estimating is a complex subject, which I’ll address in another post. For now, if you have faith in your Velocity, you can plan with Points; if you prefer to flatten the abstraction, use hours. It makes no difference. The point is to think about the schedule from end to end, not just one Sprint at a time.
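To make the arithmetic concrete, here’s a minimal sketch of tiling a Points-based backlog into the Master Timeline. The backlog size, Velocity, and sprint length are all hypothetical numbers chosen for illustration, not figures from any real project:

```python
from datetime import date, timedelta

def projected_completion(backlog_points, velocity_per_sprint,
                         sprint_length_days, start):
    """Tile the estimated work into the Master Timeline:
    sprints needed = ceiling(backlog points / velocity)."""
    sprints = -(-backlog_points // velocity_per_sprint)  # ceiling division
    return start + timedelta(days=sprints * sprint_length_days)

# Hypothetical inputs: a 240-point backlog, a Velocity of 30 points
# per two-week sprint, and an arbitrary start date.
finish = projected_completion(240, 30, 14, date(2014, 3, 3))
print(finish)  # 2014-06-23, i.e. 8 sprints of 14 days after the start
```

Swap person-hours for Points and the calculation is the same. What matters is that the entire backlog, not just the next Sprint, is laid against the timeline.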

Progressive Scheduling

The precision (or uncertainty) of a schedule is proportional to the degree of detail in the specifications and estimates. You may assume that a low-precision schedule has no more value than no schedule at all. Not so. Precision delimits a band of time within which a project is likely to complete. A schedule can be derived from imperfect estimates, as long as those estimates are accompanied by a reasonable meta-estimate of uncertainty, and that uncertainty is communicated to the stakeholders. Some estimation methods incorporate uncertainty intrinsically (again, a subject for another day, or see Steve McConnell’s excellent book, Software Estimation: Demystifying the Black Art, Microsoft Press, 2006).
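One common way to make that meta-estimate of uncertainty explicit is a three-point (PERT-style) estimate per task. The sketch below, using hypothetical task numbers, rolls individual estimates up into a completion band; it’s one illustrative method, not the only one:

```python
def pert(optimistic, likely, pessimistic):
    """Three-point estimate for one task, in person-hours:
    a weighted mean and an approximate standard deviation."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

# Hypothetical (optimistic, likely, pessimistic) estimates in hours.
tasks = [(4, 8, 20), (10, 16, 40), (2, 3, 8)]

total_mean = sum(pert(*t)[0] for t in tasks)
# Assuming independent tasks, variances add, so stdevs
# combine in quadrature rather than by simple addition.
total_var = sum(pert(*t)[1] ** 2 for t in tasks)
band = total_var ** 0.5

print(f"{total_mean:.0f}h +/- {band:.0f}h")  # 32h +/- 6h
```

The band, not just the mean, is what gets communicated to stakeholders: the project is likely to land within it, and a wider band honestly reflects shallower specifications.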

Software development is often a research and development process, with many unknowns at the outset of a project. It’s rarely possible to estimate an entire project of significant scale with high fidelity. That’s why Steve McConnell popularized the “Cone of Uncertainty.” The recognition of uncertainty has somehow pushed us to the unconstructive conclusion that any attempt at estimating and scheduling software development is delusional and a waste of time and energy. That wasn’t McConnell’s intention at all.

We need to manage uncertainty. But what does this mean in practice? We can manage uncertainty with design and planning together, which scale along two dimensions:

  • X axis: How much of the application (e.g. the number or percentage of total planned features) has been designed and estimated.
  • Y axis: The granularity of design details and estimates for each planned feature.

It should be clear by now that, for a date-driven project, I’m suggesting an end-to-end plan, fully populating the X dimension. What I’m not suggesting is that each feature or component must be fully designed and estimated (the Y axis) at the project’s outset. That would be waterfall in the extreme.

For dimension Y, the question is how much detail is sufficient, and how it correlates to schedule uncertainty. This is also less complex than you might imagine. Here’s an example of a top-down hierarchy of specification detail:

[Figure: spec hierarchy block diagram]

Why is this approach variable and not prescriptive? Because all of this depends on where you are in your product’s lifecycle. For mature products, you may need far less detail in your use cases or design stories because you can follow established patterns. For new products, you may need far more detail in advance to make educated estimates. And then you need to combine those considerations with the firmness of your deadline. If your objective is to complete a project “sometime this year” or “by third quarter” you may need less detail than you would to commit to a hard date. Make informed decisions and calculate your risks accordingly.

Whichever way you choose to manage a date-driven project (Gantt and Scrum being the poster children of each camp), you need at least some of this information to make informed estimates. When you’re done, you’ll have either a groomed Product Backlog of estimated Stories or a schedule of estimated Tasks, along with an uncertainty estimate based on the depth of the design.

A high-precision schedule requires detailed estimates, which in turn require more detailed specifications; together they increase both the front-loaded and the total effort. Consider an ideal project, one in which the requirements and technology are well understood. It will still require some unknown amount of time, and we have two choices at the outset:

  1. Carefully design and plan in order to calculate the completion date with some degree of certainty.
  2. Execute entirely in iterations, working toward the same (but unknown) project duration.

All other things being equal, a planned project may take more time than an unplanned project due to the up-front design and planning effort itself. An iterative process will spread the design throughout the project, so it’s not absolutely true that planning increases total cost. In either case, adjustments required to compensate for invalid early assumptions may increase total cost. It’s a toss-up. For many projects, the duration will be the same whether or not it’s pre-planned. The only difference is whether you can predict the completion date. Therefore, a project should be date-driven and pre-planned only with well-defined requirements and when business success is date-driven.

In Part 3, we’ll examine end-to-end scheduling techniques, and practical ways to incorporate the best lessons of both traditional and Agile methods.