Straw-Man Agile Project Sizing: Part 2

In Part 1 we talked about the challenge of furnishing cost estimates for loosely defined software development projects. These are typically initiatives still in the proposal stages, for which business stakeholders need a cost figure before they can determine the potential return on investment and set a budget.

In this second of three parts I’ll suggest specific ways to sketch out a design in just enough detail to arrive at a low-fidelity estimate sufficient for business planning.

Tools

I like to begin an exploratory design with a visual “straw man” of the business process I’m modeling. A straw man is a rough proposal, a stand-in for a final design, simple enough to hack away at until we land on a workable approach. If everyone on your team is skilled with OO thinking, then you may choose to jump to Sequence Diagrams for your straw man designs. I prefer old-fashioned flow charts because they tend not to prematurely suggest implementation details, and it’s relatively easy for all stakeholders, both technical and non-technical, to follow the arrows connecting boxes representing actions and diamonds representing decisions. Sketch out as many of these as you need. In one design exercise involving twenty stakeholders from four countries I projected flowcharts on a whiteboard where the team could draw their revisions directly on the diagrams. We then snapped photos to capture the changes. It was a makeshift “smart board.” If you have more advanced technology in your meeting rooms, so much the better.

Along with the process flows, you should invest some time in business rules. Flowcharts show how things will work under normal circumstances, the “happy path.” Business rules fill in all the minutiae that won’t fit neatly into your diagrams. Don’t get lost in implementation details, such as data input validations, until after the project is approved and funded. The business rules you’re interested in here are usually common-sense business constraints such as “Each payment may be applied to only one invoice,” or “Accounts may be deactivated, but not deleted until seven years after the last activity.”
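If you want to keep those rules countable for the scoring step that comes later, a plain list is enough at this stage. Here’s a minimal sketch in Python; the rule IDs and entity tags are just an illustrative format, not a prescription:

    # A throwaway inventory of business rules gathered during the sizing exercise.
    # Each entry records the rule in plain language plus the entities it touches,
    # so the list stays countable without drifting into implementation detail.
    business_rules = [
        {
            "id": "BR-01",
            "statement": "Each payment may be applied to only one invoice.",
            "entities": ["Payment", "Invoice"],
        },
        {
            "id": "BR-02",
            "statement": "Accounts may be deactivated, but not deleted until seven years after the last activity.",
            "entities": ["Account"],
        },
    ]

    for rule in business_rules:
        print(f'{rule["id"]}: {rule["statement"]} (touches {", ".join(rule["entities"])})')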

The third step, which may be flipped with the first, is to inventory the data required to support the feature. Personally, I like to start with a data model. Incidentally, I used to believe this was the “right” way to design systems, but since I’ve declared myself anti-dogmatic I’m compelled to acknowledge that we each have our own way of breaking down a problem. Yet…

Lay of the Land

While sketching out an approximate data model you can skip the utility attributes, such as event timestamps and update counters, but you should be able to rapidly list all the relevant entities and attributes required to represent the static and transactional units acted upon by the anticipated processes. Capture the essential elements, such as Last Name, First Name, Address, Credit Limit, Terms (30, 60, or 90 days to pay), Transaction Amount, Transaction Date, Transaction Description, etc. The number of attributes, and the number of entities (tables or documents) into which they are organized, imply a degree of complexity.
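To make the scoring step that follows mechanical, it helps to capture the inventory in a uniform shape. Here’s a minimal sketch, assuming hypothetical Customer and Transaction entities built from the example attributes above; the only traits recorded per attribute are the two that will matter for complexity, whether it’s derived and whether it’s a foreign key:

    # A rough data-model inventory: entity name -> attributes, flagged with the two
    # traits that matter when scoring complexity (derived values and foreign keys).
    # Entity and attribute names are illustrative, not a real schema.
    data_model = {
        "Customer": {
            "LastName": {},
            "FirstName": {},
            "Address": {},
            "CreditLimit": {},
            "Terms": {},                   # 30, 60, or 90 days to pay
            "Balance": {"derived": True},  # summed from transaction amounts
        },
        "Transaction": {
            "CustomerId": {"foreign_key": True},
            "Amount": {},
            "Date": {},
            "Description": {},
        },
    }

    entity_count = len(data_model)
    attribute_count = sum(len(attributes) for attributes in data_model.values())
    print(f"{entity_count} entities, {attribute_count} attributes to consider")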

For example, you may find that your new feature requires three new data entities — tables in a SQL database or documents in a NoSQL database — comprising a total of forty-five new attributes. The complexity factor isn’t so much how many attributes you need to add as how many of them are computed, how many carry relationship constraints, and how many will be transformed in some way.

To score the complexity, you’ll need to establish some standard measures. For example, any new data entity usually implies CRUD interfaces, which contribute to the weight of the change. So a new table, regardless of the number of attributes, could represent an arbitrary baseline score of 20 points. That gives you a starting point: a value you establish, against which you’ll compare all other changes. Like card counting in Blackjack, you can then score up or down. A non-computational field, such as Street Address, would have a complexity value of zero. Each foreign key relationship may add +1. Fields that derive their values from processes applied to other entities or attributes, such as account balances summed from transaction amounts, may count as +2 or more.
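Put together, the scorecard can be as small as a spreadsheet column or a few lines of code. Here’s a minimal sketch that applies the example weights above to the inventory shape from the previous snippet; the point values are the arbitrary ones I just named, so calibrate them against your own team’s history:

    # Score a rough data-model inventory with the example weights described above:
    # 20 points baseline per new entity, +1 per foreign key, +2 per derived field,
    # and zero for plain, non-computational attributes.
    ENTITY_BASELINE = 20     # every new entity implies CRUD plumbing
    FOREIGN_KEY_WEIGHT = 1   # relationship constraints add a little
    DERIVED_WEIGHT = 2       # computed values add more

    def complexity_score(data_model):
        score = 0
        for entity, attributes in data_model.items():
            score += ENTITY_BASELINE
            for name, traits in attributes.items():
                if traits.get("foreign_key"):
                    score += FOREIGN_KEY_WEIGHT
                if traits.get("derived"):
                    score += DERIVED_WEIGHT
        return score

    # A cut-down version of the earlier inventory:
    # Customer = 20 + 2 (derived Balance), Transaction = 20 + 1 (CustomerId FK)
    example = {
        "Customer": {"Balance": {"derived": True}, "Address": {}},
        "Transaction": {"CustomerId": {"foreign_key": True}, "Amount": {}},
    }
    print(complexity_score(example))  # 43

For the earlier example of three new entities and forty-five attributes, the baseline alone would be 60 points before counting any foreign keys or derived values. The absolute numbers don’t matter much; it’s the relative scores across features that make the comparison useful.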

Once you’ve captured the business processes and a data model inventory, stop and determine whether you need any further detail to arrive at a complexity estimate. You may not. It depends on constraints like those I listed in Part 1: regulatory compliance, security, scalability, and other non-functional requirements that usually don’t appear in the business processes because they represent plumbing and aren’t especially interesting to business owners. Sorry, managers, but it’s true. That’s why you hire IT managers who do think about all that stuff that just needs to get done. It’s also why things take longer than you expect. Security concerns alone can multiply the complexity and cost of a project.

From here, if necessary, you can increase the accuracy of your guess by diving a little deeper. The next logical steps would be to sketch out the UX and identify backend service endpoints (API functions) that would likely be needed to support the business processes and business rules. Not detailed interfaces, just the most obvious use cases, such as the CRUD functions and essential business logic.
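The same countable-inventory trick works here. Here’s a minimal sketch that expands each entity into its obvious CRUD endpoints and appends whatever business operations fell out of the process flows and business rules; the entity and operation names are hypothetical:

    # Expand each entity into its obvious CRUD endpoints, then add the handful of
    # business-logic operations the process flows revealed. The result is a rough
    # count of back-end surface area, not an API specification.
    def endpoint_inventory(entities, business_operations):
        endpoints = []
        for entity in entities:
            endpoints += [f"{verb} {entity}" for verb in ("create", "read", "update", "delete")]
        return endpoints + list(business_operations)

    candidates = endpoint_inventory(
        ["Customer", "Transaction"],
        ["apply payment to invoice", "deactivate account"],
    )
    print(f"{len(candidates)} candidate endpoints")
    for endpoint in candidates:
        print(" -", endpoint)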

Just like an Agile Spike, time-box this sizing exercise to control your depth of detail. After two or three rounds on separate features, you’ll know how much time it takes to get to your desired precision. Spend a few hours or a couple of days, not weeks; beyond that it’s no longer a SWAG. And if you do have the luxury of completing a detailed design before providing sizing, take advantage of it, because it will save pain later.

In Part 3 we’ll wrap up, proceeding from complexity to approximate cost, and throw in a couple of caveats.