
How I split stories

07 Nov 2019

Tags: nablopomo, work, story splitting

I’ve been trying to pin down what goes on in my head when I’m trying to break down a piece of work into manageable chunks.

It has been hard to a degree that I didn’t expect - as a colleague pointed out, a lot of these decisions are context-dependent and build on years of past experience.

Nevertheless, here’s a rough process I go through in my head.

Split on risky assumptions

I’m an anxious person by nature so large amounts of risk make me uncomfortable.

I try to front-load this risk where possible, and encourage my colleagues to do the same.

We had a bit of work last quarter to change our production smoke-tests from an in-datacentre service to a scheduled job in a Concourse deployment. Some of the risks here were:

  • Running the smoke-tests as a periodic job rather than a service.
  • A different way of loading configuration and key-material.
  • Connecting from a different source.

We ended up deploying a “hello world” connector for the task first, then iterated on generating the key material in the correct format and ensuring the app could load it. Finally, we established the connection to complete the feature.

Split on system boundaries

I build out stories that stub cross-system integration points with the intention of implementing them as a later story.

This is a form of front-loading risk. In my experience, the majority of problems with large cross-system stories occur at the system boundaries.

We had to pull some data from AWS RDS, munge it, and put it into S3 as part of another piece of work. There are boundaries between RDS and the app, and between the app and S3, so we split these into separate stories.

  • The first performed the RDS data pull and we stubbed the transport to log out the resulting metadata.
  • The second implemented an S3 transport which didn’t upload the full data-set but enough to test the connection.
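That stub-first split can be sketched as a seam in code. This is a minimal illustration rather than our actual implementation - the class names are invented:

```python
from abc import ABC, abstractmethod


class Transport(ABC):
    """The system boundary: where munged records leave the app."""

    @abstractmethod
    def send(self, records: list[dict]) -> None: ...


class LoggingTransport(Transport):
    """Story one: stub the boundary, logging metadata to show the pull ran."""

    def send(self, records: list[dict]) -> None:
        print(f"would upload {len(records)} records")


class S3Transport(Transport):
    """Story two: the real upload (a real version would use boto3)."""

    def __init__(self, bucket: str) -> None:
        self.bucket = bucket

    def send(self, records: list[dict]) -> None:
        raise NotImplementedError("implemented in the second story")
```

Because the app only ever depends on `Transport`, swapping the logging stub for the S3 implementation is a one-line change when the second story lands.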

Split on provable value

Change without purpose is chaos - I try to split work up such that each piece provably delivers value.

Defining value is the hard part, of course. I use “changes a measure of success” as a good rule-of-thumb.

If we can’t prove a change has happened, we can’t prove that we’ve actually delivered value. Even if we’re stubbing out a call across a system boundary, we should probably add a metric to prove that the call is happening.

I’ve encountered a number of bugs in my career where we failed to realise the wiring was missing until we tried to connect it all together.

I’m a big fan of the unfortunately-named strangler pattern and using branch-by-abstraction to incrementally roll out valuable changes.
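A minimal sketch of branch-by-abstraction, with all names invented for illustration: both code paths live behind one seam, and a flag decides which handles each call.

```python
def legacy_export(records):
    # The existing behaviour, kept working throughout the rollout.
    return f"legacy exported {len(records)} records"


def new_export(records):
    # The replacement, rolled out incrementally behind the seam.
    return f"newly exported {len(records)} records"


def export(records, use_new_path=False):
    """The abstraction seam: callers depend only on this function."""
    impl = new_export if use_new_path else legacy_export
    return impl(records)
```

Once metrics show the new path is healthy, the flag and the legacy function can be deleted - each step along the way is a small, valuable story.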

Build a dependency tree

I sort-of dislike the standard “list of lists” way of managing work visually. It is fine for visualising a queue of prioritised work making its way across the “wall”, but it loses the nuance of dependent or parallel tasks.

I try to build a dependency tree in my head, if not on paper.

The ideal world is a set of small, parallel stories that team members can pick up and deliver value from individually.
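One cheap way to find those parallel batches is a topological sort of the dependency tree. The story names below are invented for illustration, reusing the RDS-to-S3 example:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each story maps to the set of stories it depends on.
stories = {
    "rds-pull": set(),
    "s3-transport": {"rds-pull"},
    "wire-together": {"rds-pull", "s3-transport"},
}

ts = TopologicalSorter(stories)
ts.prepare()

batches = []
while ts.is_active():
    # Everything returned together here has no unfinished dependencies,
    # so each batch can be picked up in parallel.
    ready = sorted(ts.get_ready())
    batches.append(ready)
    ts.done(*ready)

print(batches)
# → [['rds-pull'], ['s3-transport'], ['wire-together']]
```

In this toy example every batch has one story, but with a wider tree each batch shows which stories the team can work on simultaneously.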

When to stop splitting

GeePawHill’s rule of thumb is that an ideal story should take “about a day”, and I think this is a good guideline.

Longer stories tend to carry more risk: more moving parts, and more chances for something to go wrong.

Sure, a large story can be delivered as multiple isolated deployments, but those deployments are very likely natural candidates for splitting the story itself!


November is National Blog Posting Month, or NaBloPoMo. I’ll be endeavouring to write one blog post per day in the month of November 2019 - some short and sweet, others long and boring.