Friends and family, we are gathered here today to mourn the passing of an old friend. She lived a long, full life and was a true hero. We will never forget the contribution that our friend, the Iteration, made to our lives.
In casual conversation over the last 18 months, I’ve had many chats with practitioners, vendors, and agile/lean strategists about the work I was doing around large-scale agile rollouts and product portfolio management, and lately around large-scale distributed agile, social’s impact on development, and the human cloud. A topic that always came up was how I would have done things differently in retrospect. One point that would always generate discussion or controversy was my statement that I would have done away with iterations. In late 2008, that was met with derisive responses, which typically caused the conversation to shift to sports or the weather. In 2009, those conversations spawned healthy dialogs about the value of iterations and the pitfalls of their removal. In 2010, particularly with the most innovative development teams, the response has been, “Yeah, we did that too.”
Why did we adopt the iteration?
The value of iterations has been very well documented (exhaustively, actually) in book after book. However, there are three particularly compelling drivers.
Iterations give a rhythm to the development process. Rhythm through repeated routine is a critical driver of productivity and consistency of output in any repetitive task. This was a core tenet of the industrial revolution: the assembly line is put in motion, and each step is measured, cookie-cutter. Inconsistency in the cadence of fabrication is investigated and remediated. More importantly, cadence is hard-wired into our brains and is fundamental to human performance.
The iteration, with its short duration of typically a week to a month, serves as a forcing function that drives features to conclusion. This forcing function also ensures that features are small and implementable in a reasonable amount of time. That, in turn, keeps increments of complexity small enough that the average developer has a reasonable chance of producing an acceptable design and implementation.
Most importantly, iterations form a natural synchronization point for stakeholders both upstream and downstream of the development process. The majority of development issues are requirements issues. The iteration ensures that those with the knowledge of what needs to be built actually look at and comment on what HAS been built at a regular interval. Waste is reduced and organizational value is increased. Likewise, short iterations reduce the risk of defects and environmental instability, and they shrink remediation times for defects by shortening the time between when a change is introduced and when it can be put into a production or simulated production environment.
So why are iterations going away?
Iterations are an unnatural concept
As the name implies, iterations are an arbitrary time slice…a line in the sand. This leads to all sorts of technical and logistical gymnastics that often have more drawbacks than benefits. While the drive to relentlessly divide features into their smallest possible unit of deliverable business value is important and valuable, artificial iteration boundaries tend to encourage bad habits like layer-cake division. More insidiously, developers almost always make the division decision. The more often that decision is not aligned with true business value, the less relevance stories have to business stakeholders. In my experience, as little as 10% “noise” in story value can cause a non-product-management stakeholder to tune out of a team’s story flow.
Iterations encourage the introduction of Technical Debt
The “unnatural” and arbitrary sprint-end deadline creates a few interesting behaviors in developers. While the forcing-function nature is great, it puts enormous pressure on developers to cut corners to meet the team commitment. While mature teams can often make the decision to push stories out, the predominant behavior is to compromise on quality. When “end of release” quality pressure occurs every week or two, technical debt, particularly the tougher-to-detect varieties, accumulates quickly.
Iterations hurt your developers’ self-esteem
The sustainability goals of Agile are well known and critical to the long-term productivity of a development organization. While the velocity metric provides a defense against burnout, the iteration introduces a couple of critical cancers to development health.
First, the tyranny of iteration end (which, for reasons that are baffling to me, most people put on a Friday) leads to Second Weekend Syndrome (SWS). Second Weekend Syndrome refers to the common practice of developers who fall behind in the burndown of stories in a sprint and, either because of internal pressure (self-worth, sense of commitment) or social pressure, use the second weekend of the now-ubiquitous two-week sprint to “catch up.” As Second Weekend Syndrome progresses, its effects are debilitating. I have personally seen a few developers burn out because they were secretly or silently suffering from it. The cultural impact of even one developer on the team suffering from SWS is infectious, reducing the productivity and morale of the entire team.
Second, iterations create a tension between stakeholders and development around prioritization. Requirements priority is a reflection of changes in understanding or circumstance in the business context of the product. Other than the cybernetic interplay between release and requirements understanding (seeing a working solution to a requirement), these changes in priority have little to do with development cadence (the iteration). A requirements change is more likely than not to arrive out of phase with the sprint planning cycle. While historically requirements were boxed up into MRDs and put in storage for a few months, that was an artifact of the product delivery process, not a natural reflection of requirements capture and understanding. This is poorly understood by the typical developer. The conflict arises when stakeholders begin to shrink the feedback loop in response to the consistent delivery of solutions to their problems. Eventually, the expected lead time between a realized requirements or priority change and a delivered response falls toward zero, based on the historical success of agile delivery (call it Erik’s Law of Lead Time). Unfortunately, because the iteration sets an arbitrary lower limit on lead time, conflict occurs when the lead time expectation becomes smaller than the iteration length. The very common byproduct is bitter developers who can’t understand why irrational stakeholders “can’t even wait two weeks for a change.” Developers dig in for a war against requirements instability, which starts a negative downward spiral.
Iterations increase cycle time and decrease predictability
There is an interesting cognitive dissonance between the unnatural, arbitrary length of a sprint, the “boxcarring” (delivery coupling) of often unrelated requirements, and the nature of requirements prioritization “in the real world” (Erik’s Law of Lead Time, discussed earlier). Because multiple developers’ capacity is pooled into a unit of productivity (the iteration) and that coarser-grained pool is then “filled up” with a set of high-priority requirements, there is, by definition, a coupling of less important features to more important ones. Also, by convention and best practice, those features are smaller than the unit of capacity. In other words, you put smaller items (a story that takes a few hours or days) into a larger iteration (some multiple greater than one of the length of the most important story). As discussed earlier with regard to technical debt, any volatility in estimation accuracy leads to end-of-sprint execution challenges. While the introduction of technical debt is probably the most common coping strategy in practice, a close follower (and the best practice) is to jettison the lowest-priority stories from the sprint. This jettisoning decreases the predictability of story delivery, and the lower-priority stories in a sprint suffer increased cycle time as well. Worse, while the best practice is to jettison low-priority stories, the more common casualty in practice is large stories, which are at increased risk of “slipping”: they are the most common things left undone as the sprint end approaches, due to team interdependencies and the closeness of a large story’s velocity-inferred completion time to the iteration length (ever heard “it’s done, but it isn’t tested yet” in the last stand-up of an iteration?).
The consequence of large stories more commonly suffering sprint decommit is that most teams organize their sprints around one or two important stories plus “filler.” The important stories, sadly, are usually the largest. The result, IN PRACTICE, is cycle time volatility correlated with the MOST IMPORTANT functionality on an iteration-based development team.
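The cycle time tax of boxcarring can be sketched with a toy simulation. Everything here is an illustrative assumption, not data from any real team: story sizes of half a day to four days, a single serial stream of work, and a ten-working-day sprint. The comparison is between a story shipping the moment it finishes versus waiting for the end of the iteration in which it finishes.

```python
import random

random.seed(7)

ITERATION_DAYS = 10  # assumed length of a two-week sprint, in working days

def simulate(num_stories=200):
    """Compare average cycle time for continuous vs. boxcarred delivery."""
    continuous, boxcarred = [], []
    clock = 0.0
    for _ in range(num_stories):
        size = random.uniform(0.5, 4.0)   # assumed story size in days
        clock += size                     # stories flow through one at a time
        continuous.append(size)           # ships as soon as it is done
        # Boxcarred: the finished story waits for the next iteration boundary.
        iteration_end = ((clock // ITERATION_DAYS) + 1) * ITERATION_DAYS
        boxcarred.append(size + (iteration_end - clock))
    return sum(continuous) / num_stories, sum(boxcarred) / num_stories

cont, boxed = simulate()
print(f"avg cycle time, continuous flow:  {cont:.1f} days")
print(f"avg cycle time, iteration boxcar: {boxed:.1f} days")
```

The boxcarred average carries roughly half an iteration of pure waiting on top of the actual work time, which is the tax the paragraph above describes.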
Benefits of Iterationless Development
While extolling the virtues of iterationless development is beyond the scope of this discussion, here is an overview of a few value points.
Iterationless development gets rid of the tyranny of the iteration end. This opens the door for developers (and the organization) to execute at the optimal sustainable cycle time.
Cycle time has the opportunity to be reduced by removing the cycle time tax incurred by the timeboxed “boxcar” of requirements.
Requirements and prioritization changes can take place at any time, without causing consternation with development. WIP is much smaller, limiting the likelihood of a requirement change impacting an active development activity.
It opens up opportunities for more diverse team organization models, like place-shifted (remote) and time-shifted (multiple time zones or work schedules) development.
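The small-WIP benefit above can be made concrete with a minimal sketch of a WIP-limited pull board (the class, story names, and limit are hypothetical illustrations, not anything from a real tool): priority changes only ever touch the backlog, so the handful of stories already in progress is never disrupted.

```python
WIP_LIMIT = 3  # assumed cap on concurrent stories for one team

class PullBoard:
    """Illustrative WIP-limited pull system for iterationless flow."""

    def __init__(self):
        self.backlog = []        # freely reorderable at any moment
        self.in_progress = []    # capped at WIP_LIMIT, never reshuffled

    def add(self, story, priority):
        self.backlog.append((priority, story))
        self.backlog.sort()      # lower number = higher priority

    def reprioritize(self, story, priority):
        # A priority change only rewrites the backlog, so it cannot
        # impact an active development activity.
        self.backlog = [(p, s) for p, s in self.backlog if s != story]
        self.backlog.append((priority, story))
        self.backlog.sort()

    def pull(self):
        # A developer pulls the current top priority when a slot frees up.
        if self.backlog and len(self.in_progress) < WIP_LIMIT:
            _, story = self.backlog.pop(0)
            self.in_progress.append(story)
            return story
        return None

board = PullBoard()
board.add("reporting", 2)
board.add("login", 1)
board.add("search", 3)
board.reprioritize("search", 0)   # stakeholder changes priority mid-stream
print(board.pull())               # → search
```

Because work is pulled one story at a time rather than committed a sprint at a time, the stakeholder’s reprioritization takes effect at the very next pull, with no planning-cycle lead time.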
What about cadence?
Cadence can (and should) be moved to a more natural concept. Some popular cadences form around builds and features.
What about forcing functions?
Dramatic advances in practitioner adoption of continuous integration and deployment, DevOps, and configuration management now allow the forcing function to occur more frequently and at a finer-grained level.
What about stakeholder synchronization?
The more frequent forcing functions provide greater upstream and downstream visibility and tighter feedback loops. WIP limits, dynamic portfolio allocation, and QoS schemes feed stakeholders only as much as they can readily consume, maximizing efficiency and organizational value delivery.
It’s a brave new world for development and I look forward to exploring it with everyone.