Agile Software Development – One Size Doesn’t Fit All

As I mentioned in part I of this series, Adoption of Agile at Caplin, when we first started following an agile methodology (in our case, Scrum) in February 2005, we were desperate to implement it successfully and wanted to adhere to all of the best practices.

We persevered with this approach for several months, repeatedly referring to the books on Scrum we had purchased and to the numerous websites on the subject that were cropping up at the time, to find ways of improving the process.

With hindsight this looks a little conservative; however, back then we were still getting to grips with agile and didn’t want to make any significant adjustments that hadn’t been tried and proven elsewhere.

Eventually we built up the confidence to decide on the sort of tweaks we could make to the process ourselves. Coupled with this, many of the new developers who joined our team at this stage brought with them their own experiences of agile development, which we were able to use to drive change.

Some of these changes were very successful and remain integral parts of our process today, whilst others lasted only a few sprints before being dropped in favour of alternative solutions.

So, without further ado, welcome to our Hall of Fame. And to prove that progress doesn’t come without its fair share of mistakes, our Hall of Shame follows.

Hall of Fame

Here are some of the changes that we made to the process which proved successful.

We found that the 4-week iteration cycle was too long and restricted our ability to respond to customer needs promptly, so we reduced our iterations to 2 weeks.

We stopped calculating the number of ideal hours the team could theoretically take on. This was quite time-consuming, and project velocity seems to do an equally good job.

We started to update our story board mid-sprint, adding in new tasks (not stories!) as they came up. Initially we were loath to change the tasks on the board, since this felt like some kind of scope creep. However, those cards reflected only what we thought we needed to do during the initial estimation meeting. Contact with the real code often highlighted things that we had forgotten or hadn’t even realised would need to be done, and the tasks on the board would no longer reflect what was really going on. A big personal thanks to Richard Chamberlain, who introduced this practice to me.

We now ensure that tasks are all relatively small. The state of a task card is binary: it’s either done or not done (including all the testing and documentation facets that might be associated with it). Monitoring how much work has been done, and how much is left to do, is hard if there are only a few task cards that will each take several days to complete. Breaking those larger tasks down into their constituent parts helps provide visibility of progress.

We adopted the use of Extreme Programming (XP) style user stories and story boards instead of a sprint backlog spreadsheet.

We split the development team out into separate scrums. The reasons for this were:

  • As the development team grew there were simply too many people to manage within a single scrum
  • A single scrum ended up containing stories that were deliverable for different customers or products. It was nearly impossible to decide which story to work on next during the sprint because they didn’t have comparable priorities. Was it more important to ensure all stories were completed for one customer, but none for another, or to ensure that some stories were completed for both?
  • Estimation was a serial process that took longer and longer, typically a whole afternoon, as the team grew and was able to take on more work items. Often only half of the developers would have any idea how long a particular task might take, since it was in a product and/or language they didn’t know. By the end of the meeting everyone was tired and desperate to escape, which was probably a significant factor in some inaccurate estimates.

We learnt how important it is to ensure the relative priorities are correct before determining which stories are taken into the sprint. After our estimation session had been completed, we reported to the stakeholders what work had been signed up to. On a few occasions the stakeholders had assumed that certain stories would be signed up to and were surprised to find that sometimes they weren’t. When this happened the real priorities became apparent, and the team had to go back and start another estimation phase to address this. I think this is best summarised by asking the stakeholders, “if we can only do one story, which must it be?”, then “if we can only do two stories, which should the second be?”, and so on.

We needed to focus the output of our retrospectives, rather than trying to take on too much change. The retrospectives themselves were very good; however, we had a tendency to take many actions out of them, most, if not all, of which never really got implemented. We now agree on the one or two most important actions from each retrospective and focus on actually carrying them out.

When we are unsure how much effort a new piece of work might need, we schedule a spike (coined by Kent Beck and Ward Cunningham, popularised within Caplin by Ivan Moore) or a tracer bullet (from The Pragmatic Programmer by Andrew Hunt and David Thomas, introduced to Caplin by Mike Cohn) within a sprint, to drive out the risk and help us come up with a more reasonable estimate of how long the real work will take (thanks to Alistair Cockburn for providing the insight into the originators of these terms).

We changed the way we wrote our stories to focus on delivering thin vertical slices of functionality. This way we are more likely to deliver demonstrable features at the end of a sprint, even if things prove more difficult than we first expected.

Hall of Shame

Here are the changes that we made to the process that turned out to be less than successful:

We attempted to improve estimates by tracking the actual time spent completing each task. At the end of the sprint we could compare the actuals with the estimates and analyse the causes of any large discrepancies. This turned out to be complete overkill: the stories that significantly overran were always identified in the retrospectives anyway, without any analysis, and there was a small but noticeable overhead in capturing the actuals.

We also went through a phase of trying to come up with better release-plan-level estimates by pre-planning a couple of iterations ahead. Unfortunately this broke the all-important just-in-time aspect of agile development. Several times the stories that we had pre-planned a few sprints ahead were de-prioritised as new customer requirements came in. These stories sat in the backlog, slowly decaying, and the time we had spent pre-planning them was wasted. Even when one of these stories did make it into a sprint (and some didn’t even get that far), we found that the code base had changed sufficiently that the story needed to be estimated again. Nowadays we rely on higher-level estimates combined with velocity to help with our release-level planning.
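The velocity arithmetic behind this kind of release-level planning can be sketched as follows. This is a minimal illustration, not Caplin's actual tooling, and the numbers and function names are invented for the example:

```python
import math

def average_velocity(completed_points):
    """Average story points completed per sprint over recent history."""
    return sum(completed_points) / len(completed_points)

def sprints_to_finish(backlog_points, completed_points):
    """Forecast how many sprints the remaining backlog will take,
    assuming the team keeps delivering at its recent average velocity."""
    return math.ceil(backlog_points / average_velocity(completed_points))

# Hypothetical example: the last four two-week sprints delivered
# 21, 18, 24 and 19 points, and 120 points of high-level estimates
# remain in the backlog.
recent = [21, 18, 24, 19]
print(sprints_to_finish(120, recent))  # 6 sprints, i.e. roughly 12 weeks
```

The appeal of this approach is that it needs no per-task actuals: the only inputs are the high-level story estimates and what the team actually completed each sprint.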

One Size Doesn’t Fit All

The evolution of our development process has led me to the conclusion that one size doesn’t fit all. The ridiculous thing is how long it took me to realise this, since the first line of the Agile Manifesto pretty much points to it:

Individuals and interactions over processes and tools

Ultimately, any successful agile development process that you read about in a book or a blog, such as this one, is a process that has been tailored to suit the team using it. Sometimes the author has experience of working with many agile teams, and their insights can be invaluable since they can provide generalised advice that has been proven to work in several different environments. However, they don’t have experience of working in your team, with its unique blend of skill sets.

Even if your team is working on a similar type of problem and/or with identical technologies to another team, there is no guarantee that adopting its process will lead to success. Instead it can act as a good template to base your process on, one that can be adapted over time to suit your team’s unique skill sets.

Retrospective Evolution

This doesn’t mean that you shouldn’t look around for ideas on how to improve your development process. On the contrary, there is no point reinventing the wheel. An idea from another agile team might work perfectly within your team, or might work after some adjustments.

Alternatively it may prove to be the seed that allows you to reach your own solution.

That said, the drive for improvements to the process should come from the retrospectives, and we should continually be striving to improve it.

For example:

  1. Could any of the problems that were experienced during the last sprint have been avoided if there was a suitable process in place?
  2. Alternatively are there any parts of the process that are no longer necessary or which can be streamlined?

The Saga Continues

Here at Caplin our development process continues to evolve. As mentioned before, we have been trialling Kanban for a few of our internal development projects. It promises many potential improvements to the development process; the question is when these changes will occur, and what they will be.

Afterthoughts

Having written this article, I have subsequently become aware that a whole track was dedicated to Agile Evolution at QCon London 2010, held between the 8th and 12th of March. Much of what was discussed there echoes the experiences that we have had at Caplin, and I would thoroughly recommend looking through the slides.

4 thoughts on “Agile Software Development – One Size Doesn’t Fit All”

    1. Hi Brenda,
      I’m glad that you have found our experiences to be a useful case study. My hope in writing this article was that other people would be able to benefit from them.
      I’m on holiday at the moment and haven’t had a chance to look through your Agile Information Development solution link in any detail yet. I’ll follow up on this next week when I am back in the office.

  1. Hi, Ian, … to help you with your crediting sources, it’s good if you can trace back to the real source, then they maybe feel validated for their work. …I’m referring here to “a spike (Ivan Moore’s terminology), or tracer bullet (Mike Cohn’s vernacular)” … “Spike” comes to us courtesy of the demon programming pair, Ward Cunningham and Kent Beck, in the 1980s – Ward kept asking “what is the least we can program to make sure we aren’t going down a dead end?”, and Kent came up with the killer term (as he usually does) of “spike”. “Tracer bullets” was introduced in the famous Pragmatic Programmer book by the pair Dave Thomas and Andy Hunt (now running Pragmatic Press). I’m glad Mike Cohn has done a good job popularizing their term – it’s even better if they get the credit they deserve. Cheers – Alistair Cockburn

    1. Hi Alistair,
      Thanks for pointing this out. I have amended the post to give credit to those it is certainly due to. When I originally wrote it, I was only focussing on the first hand sources that provided us at Caplin with the ideas and terms that we now use every day. It was certainly an unintentional mistake that I failed to give credit to the pioneers who originally identified those ideas, solidified them and gave them a name. It’s great that we now have a common vernacular for these concepts; everyone within Caplin understands what a developer means when they say they need to “spike” something.
      I’m pleased to see your comments on Platformability, and I hope you’ll visit again.
      Thanks,
      Ian
