In many less progressive organizations and verticals, where IT is not a meaningful competitive advantage, I agree. In fact, in some verticals waterfall still has defenders who make legitimate points (I had a great conversation with a dev exec for an embedded systems team in the medical device field).
Refers to development by architectural layers instead of end user functionality. E.g.:
“this week we will build our data integration layer”
“next sprint we will write the business logic for the blue module”
“I think we should be ready to wire up the UI to the back end in next month’s iteration”
No question that fixed end dates are a reality in many projects. However, the operative question is how a team deals with the very real uncertainty of delivery (i.e. the iron triangle). Instead of iterations, cycle time and cumulative flow analysis (which also give you a great feel for the schedule volatility introduced by quality issues) give you finer-grained control, with equivalent protection against disruption to the development team and finer-grained prioritization.
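To make the flow metrics above concrete, both cycle time and the work-in-progress band of a cumulative flow diagram fall out of simple ticket timestamps. A minimal sketch (the ticket data is fabricated for illustration):

```python
from datetime import date

# Hypothetical tickets: (id, started, finished); finished is None if still in progress
tickets = [
    ("T-1", date(2011, 3, 1), date(2011, 3, 4)),
    ("T-2", date(2011, 3, 2), date(2011, 3, 9)),
    ("T-3", date(2011, 3, 3), None),
]

def cycle_times(tickets):
    """Days from start to finish for each completed ticket."""
    return [(end - start).days for _, start, end in tickets if end is not None]

def wip_on(tickets, day):
    """Tickets in progress on a given day: one band of a cumulative flow diagram."""
    return sum(1 for _, start, end in tickets
               if start <= day and (end is None or end > day))

print(cycle_times(tickets))               # [3, 7]
print(wip_on(tickets, date(2011, 3, 3)))  # 3
```

Trends in these two numbers (lengthening cycle times, a widening WIP band) are exactly the schedule-volatility signal described above.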
I had the pleasure of catching up with Jim Highsmith recently, and we had an interesting discussion on the topic of technical debt. We were talking about the non-obvious consequences of high technical debt loads. One particular insight that rang true from my own experience was the effect of high technical debt loads on the psychology and mindset of an organization, and in particular the tendency for an organization to underperform even beyond the level that might, on the surface, be dictated by the technical debt load itself.
After an organization experiences extended periods of high technical debt, the decision-making processes of the software’s stakeholders often begin to change. Faced with repeated instances where the software’s inflexibility, fragility, or slowness to augment or change has caused the organization to fail to capitalize on a new opportunity or respond to urgent change, stakeholders slowly start to form a mental model of the software’s limitations. Typically, this model is a darker, more pessimistic view than reality. They also typically have an inaccurate view of the actual limitations imposed by the debt load. These inaccuracies are anchored more to circumstances where failure has occurred than they are informed by the architectural or quality limitations of the code base (though there is, of course, a strong enough correlation to reinforce this model in the stakeholder’s mind).
Unfortunately, as the mental model solidifies, stakeholders consciously (or subconsciously) begin relying on their internal model of the software’s limitations and the organization’s ability to respond, instead of soliciting the development team’s opinion or engaging the organization’s formal product management process. Often, the product managers fall into the same pattern, no longer consulting the developers or architects for estimates.
The consequence of this behavior is systematic underperformance of the organization. Opportunities are overlooked or passed on, even when some may have been achievable despite the technical debt level of the software. This leads to an interesting dysfunction: a perception gap opens in which the organization underperforms even the current state of the software, and development teams begin to defend their code base, objecting to the business’s characterization. This incentivizes both sides not to invest in technical debt reduction: the development team loses credibility defending the perception gap, and the business refuses to support or fund technical debt reduction for lack of confidence in the organization’s ability to execute.
I had the honor and great privilege to be one of the 33 attendees at the 10 Years Agile gathering at Snowbird, Utah. The gathering’s goal was to celebrate our accomplishments in the Agile Community, reflect on what has transpired since the Manifesto’s signing, and dialog on what remains undone. Agile has, over the last 10 years, grown to be the dominant development methodology. It has tapped into a number of fundamental best practices for productivity and efficient, flexible value creation. It has matured and mainstreamed.
In the closing hours of the gathering, the team distilled the amazing dialog of the weekend into four areas of focus for the agile community going forward. Out of those, three dealt with the maturation of agile. Education, technical excellence, and leadership all focus on maximizing the value of the last ten years. The fourth stood out to me as the essence, and inevitable impact, of Agile on the future of the world of technology.
“Maximize Value Creation Across the Entire Process”
We are seeing agile principles move upstream and downstream in the value creation process to spawn or heavily influence entire movements like DevOps and Customer Development. The next frontier of value delivery for Agile will be the linkages between, and support of, holistic views of value creation. I’ll be interested to see how the community responds to this challenge.
I’ve wrestled with some hairy large-scale agile problems. I’ve found that the coordination, temporal synchronization, and management overhead represented dramatic waste. While many argue that this is simply the nature of scaling up, I have a hypothesis: a shift from a team-oriented organizational structure to a community-oriented organizational paradigm will result in significant efficiency improvements over highly synchronized, coordinated, and regimented teams. One of the many problems that large-scale organizations face in this regard is release planning.
Let’s unpack this practice to understand the challenges it represents, in practice, for most organizations.
Coordination: As the number of teams increases, it becomes challenging to coordinate work allocation between groups. Historically, I have created elaborate structures, tools, and rituals to try to make this more efficient. Traditional release planning activities create an unhealthy cycle of “point in time” moments of clarity and focus that then degrade over time, until the tactical decisions made daily by the teams drop to a sufficiently low quality to act as a forcing function for replanning. The productivity and budgetary costs of large-scale release planning act as a counterbalancing force that actively encourages stretching the use of poor information longer. This leads most organizations to reach equilibrium at quarterly (or annual!) release planning.
Because even a quarterly planning function is challenging to coordinate at large scale, most organizations will further compromise on other axes to work around the budgetary and logistical constraints. Typical suspects are:
Time boxing of the event, regardless of actual time required for the complexity and scope of the organization’s mandate
Participant constraints, where delegates, meant to represent all the knowledge and experience of a team or stakeholder, are selected to participate. This introduces another source of imperfect information into the planning process (further eroding the half-life of the plan’s usefulness) or leads to significant time delays, cutting into the already limited time allotted to the planning process.
Scope constraints, where the topics discussed are unnaturally limited to a list of predefined priorities or “strategic” initiatives, which often flies in the face of the organic interdependencies between otherwise tactical concerns and the topics of import. This leads to polite fictions where planning for significant work gets deferred under the banner of dependencies (or highbrow euphemisms like Architectural Runway and Technical Debt reduction). This directly and proportionally builds inaccuracies into the plan, which further reduces its useful life.
What should be done?
Organizations need to rearchitect their planning processes. Two key focus areas of a more organic and natural process are:
Align planning to areas of focus. Any product has a natural set of key drivers for change. Sometimes this is the implementation of a single business model (think Lean Startup MVP). Other times it has a natural set of key constituents that can be effectively segmented into a portfolio. Additionally, there are a set of concerns such as technical debt levels relative to the expected longevity and activity of the code base or operational and support concerns. By segmenting planning, the activities can be aligned to more natural events and cadences in the business (such as industry cycles and events, line of business calendars, etc). Planning can then be smoothed out into a finer grain process that more closely resembles a continual planning model (with budgetary alignment based on priority).
Move to a more fluid and fungible resource model. One of the biggest sources of impedance between stakeholder priorities and development capacity is over specialization of resources and teams.
There is a natural tendency for organizational stakeholders to want to codify priorities and portfolio allocation into the organizational structure. While it makes sense on the surface, and it is easy to align budget and consensus around the practice, it immediately cements a point-in-time set of priorities and organizational concerns into an org chart. This makes it incredibly hard to react to changes in stakeholder priorities and in the business. It also creates an unnecessary and destructive tendency to shrink and grow teams based not on the merits of the personnel but on team alignment to priorities.
There is a natural tendency for technical leaders and contributors to want to codify technology decisions and code bases into organizational structures. “This is the RoR team.” “This is the platform team.” “This is the blue widget team.” All of these represent technologists’ natural desire for efficiency. Because a team (or location) has a history with a particular code base or technology, most technologists will want to funnel all requirements involving that specialization through that team. This creates significant bottlenecks that are not well appreciated by stakeholders. Being told that they cannot have the thing that will deliver the most business value because it requires a specific team that is working on something else is not easy to swallow. Organizations should create development communities that transcend team affiliation so that work can more efficiently align to business value return instead of technical resource availability. While strengths and weaknesses will always exist, codifying cross-training into process and ensuring the model promotes wide geographic exposure maximizes return on development cost centers.
Development teams are becoming more virtual, transient, and distributed. The lack of colocation makes it challenging to experience much of the cohesion benefits that come from positive reinforcement in intrateam interactions. The increasing instability in team membership makes it difficult to establish and maintain a team culture, shared identity, and the expectations for individual practices and habits that make up the less talked about, but vitally important component of a development process. Trends like Kanban, Continuous Deployment, and DevOps are driving an intense focus on cycle time and are driving out traditional cushions in schedules and team performance variability. All of this makes it very challenging for team leaders and stakeholders to keep a pulse on how a project is progressing qualitatively.
Game Mechanics, while not a panacea, is a strong contender for filling the void being created by the structural changes taking place in development organizations. Let’s examine the applicability of some of the more common game mechanic constructs:
- Badges allow for feedback on behavior mastery and reinforce team culture and shared values.
- Voting with titles creates a durable system of record for accomplishments and contributions and externalizes inter-team dynamics.
- Leaderboards provide a consistent level of intensity and motivation and focus team members by personalizing their contributions to shared team goals and key metrics.
- Game results become an interesting form of project reporting that provides unique quantitative insight into a range of qualitative questions. A savvy development manager often gleans great insight from simply walking around and observing her team. Energy levels, momentum, engagement, and confidence in deliverables and dates are just a few of the qualitative insights this can yield. When your team is distributed, this key source of knowledge is gone. Game results and trends provide an interesting surrogate.
- Game results operate as a form of institutional memory providing consistency in the face of changing personalities on an unstable team. At a team level, they provide new members with a quick (but deep) understanding of acceptable performance levels and team values. It also acts as a form of personal reputation, allowing a new team member to quickly understand the strengths of her new coworkers.
- Game results provide a rapid feedback loop for an individual to understand how their recent performance and individual behaviors align to the ideal. This is perhaps the most powerful aspect of game mechanics. Reviews and other HR mechanisms rarely create desired changes because they occur too infrequently to be used in habit creation or behavior modification, and they carry too much overloaded importance for honest discourse (read: compensation and promotion). Code reviews and peer feedback, while valuable, are focused on the results of individual behavior (the code, defects, functional issues, etc.), not the actual behaviors, actions, and practices of the individuals themselves.
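Mechanically, badges and leaderboards like those above can be backed by nothing more than an event log of observed behaviors. A minimal sketch (the events, member names, and badge threshold are all hypothetical):

```python
from collections import Counter

# Hypothetical event log recorded by the team's tooling: (member, behavior) pairs
events = [
    ("ana", "code_review"), ("ana", "code_review"), ("ana", "code_review"),
    ("bo", "code_review"), ("bo", "test_added"),
]

# Hypothetical badge rule: repeating a behavior this many times earns its badge
BADGE_THRESHOLD = 3

def leaderboard(events):
    """Rank members by total recorded contributions, highest first."""
    counts = Counter(member for member, _ in events)
    return counts.most_common()

def badges(events):
    """Award a badge for each behavior a member has repeated enough times."""
    per_pair = Counter(events)  # counts each (member, behavior) pair
    return {pair for pair, n in per_pair.items() if n >= BADGE_THRESHOLD}

print(leaderboard(events))  # [('ana', 3), ('bo', 2)]
print(badges(events))       # {('ana', 'code_review')}
```

The interesting part is not the code but the data: the same event log, trended over time, doubles as the qualitative project-reporting surrogate described above.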
While game mechanics can provide some interesting benefits to agile teams, there are significant potential pitfalls. The value of game mechanics increases as a function of the quality of the specific mechanics defined. Poorly thought out game mechanics can have dramatically negative effects. There is a temptation to tie extrinsic rewards to game results. Anyone who thinks this is a good idea should read the great research and arguments in Dan Pink’s book Drive (in fact, anyone thinking of embarking on a game mechanics experiment should read this book). In the same vein, while the competitive effects of game mechanics can boost team performance for short periods, game mechanics should be positioned and nurtured as a mirror for intrinsic motivation, providing a rapid feedback loop for an individual to assess their behaviors and performance against team culture and goals.
A fascinating trend that I have noticed accelerating of late is the socialization of the development process. One of the more innovative corners of this trend is the introduction of social objects into the development world. A social object is a domain object in a business process which has been surfaced to interested parties independent of the normal process workflow and wrapped in social functionality to form a community and provide a system of record for the body of ad hoc processes that generally execute in parallel to the structured business process execution. An example of this is the work salesforce.com has done around sales processes with Chatter. Any opportunity, lead, contact, or other domain object has a full set of social/collaborative capabilities.
The typical set of features that make up a social object are the ability to follow and receive updates to the social experience around the specific social object, the ability to express affiliation or interest to others, the ability to comment or otherwise carry on a conversation around the object, and a nexus of information in the form of a profile page. Often the ability to create relationships to other social objects, orthogonal to the application data model, is also supported. Likewise, you often see collaborative content management around the social object: wiki-like content creation, photos, documents, or other rich media.
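That feature set maps naturally to a small data model. A minimal sketch of the idea, with illustrative names (this is not any vendor's API):

```python
from dataclasses import dataclass, field

@dataclass
class SocialObject:
    """A domain object wrapped in the social functionality described above."""
    name: str
    profile: dict = field(default_factory=dict)   # nexus of information (profile page)
    followers: set = field(default_factory=set)   # follow / expressed affiliation
    comments: list = field(default_factory=list)  # conversation around the object
    links: set = field(default_factory=set)       # ad hoc relationships to other objects

    def follow(self, user):
        self.followers.add(user)

    def comment(self, user, text):
        self.comments.append((user, text))

    def link_to(self, other):
        self.links.add(other.name)

# Hypothetical usage: a build surfaced as a social object
build = SocialObject("build-1042", profile={"branch": "main"})
build.follow("qa-lead")
build.comment("dev", "needs the new schema migration before testing")
print(len(build.followers), len(build.comments))  # 1 1
```

The point of the sketch is that the social wrapper is orthogonal to the domain object itself: the same handful of fields works for a requirement, a change set, or a build.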
In development organizations, we are seeing some very interesting social objects emerge.
The most common social object in development organizations is the requirement/story/feature. Most ALM tools are creating social experiences that allow an idea to form a community of stakeholders (customers, public, internal) and have that idea evolve into a fully fleshed out requirement or feature through dialog and social prioritization (idea networks and voting/ranking). Many organizations are extending this idea inside the feature implementation lifecycle with product managers, QA, developers, professional services, operations, and other constituents involved in the delivery of the feature to users. Less rework, better implementations, and shorter commercialization cycles are common benefits.
Beyond requirements, the code itself is becoming a social object. Historically, comments in code served as the primary repository of conversation between developers. Starting with the open source code repositories and extending to the commercial software tool vendors, code visualization systems layered on top of the source code repository are starting to provide conduits for conversation and communication. Wiki-like rich annotation as well as threaded conversations and commenting are becoming prevalent in tools like GitHub, Bitbucket, and FishEye. Additionally, as code is created, the interpersonal feedback processes between developers are becoming social as well. From change set comments to full-blown collaborative code review tools, the conversations that used to take place in casual conversation and formal documents or meetings are moving into a free-form, continuous conversation. The implications for knowledge capture and decision documentation are significant: many objectives that historically required onerous process become a byproduct of authentic, productivity-enhancing collaboration.
While discussion forums and public bug tracking databases are the elder statesmen of the social object world in development, these tools are starting to provide more accessible ways to create internal collaborative conversations between support, QA, developers, and product managers to streamline the defect resolution process. While few fundamental changes are taking place, the paradigm shift in design is making otherwise little-used features more interesting, with follow capabilities and links to actionable data like change sets for patches and links to test case repositories.
Visionary social development practitioners are also extending the social object strategy to more interesting and less obvious domain objects. For example, many are turning software builds into social objects so developers and QA professionals can collaborate on the data and configuration dependencies required to test new versions of their software. These are replacing a high-friction component of many development organizations with a natural collaborative alternative. Likewise, operations staff are exploring build-based social objects as an effective tool for satisfying ITIL and SAS 70 requirements without elaborate handover documents and forms. Additionally, DevOps organizations can enhance the development and operations communication process through build-based social collaboration.
It is still early, and vendors still have uneven support for the social object paradigm. However, the derivative benefits are profound. It will be an interesting few years for tool vendors as they ride a renaissance in their market fueled by social innovations.
Background and the Three Core Inputs
I’ve watched the lean startup movement gain momentum with great interest over the last few years. While I was no longer in a startup environment, I found that Steve Blank’s Customer Development process worked well for any new venture (new product, new market, or other significant shift or expansion of a component of your business model). While the MVP concept was well documented, I found an alarming lack of focus on the nuts and bolts of product management in this environment. Most descriptions of the customer development process juxtapose it with agile development, so it is interesting to me that more attention is not paid to the meshing of the gears between the two. While not a thorough treatment, I thought I would document the inputs into my backlog.
The Customer Development Process
The customer development process is, of course, the primary (almost exclusive) driver of functional requirements into my backlog. The lion’s share comes from MVP definition, where the business case is driven by the implementation of my business model. I have been heavily influenced by Dean Leffingwell’s excellent Big Picture Agile work. As such, I use a three-tier model for requirements tracking (epics, features, and stories). My epics correspond, generally, to the large-scale solution assumptions in my business model. I also have epics for the major funnels I am building (activation, revenue, and referral especially). Each epic has a few features, which represent the coarse-grained capabilities required. The features are typically sized at a granularity where anyone can understand them and their value. Each feature will have a number of stories, each at the smallest granularity that delivers business value. While that sounds like a lot of complexity, I will add to the “lean doesn’t mean
The second source of backlog stories comes from testing and refinement. Every day reveals more A/B test results, validated learnings, and assumptions disproven. With each of these, stories get added to make small tweaks and implement new tests to quickly iterate towards product/market fit. Occasionally, a new epic gets injected to represent a pivot in business model.
Many seem to gloss over architecture in their quest for simplicity and speed. I’ve found three critical areas that require explicit management of your architecture. The first source of stories is architectural spikes. When delving into the unknown, your technical team will still often need to try something out in order to find the right technical path. The second is architectural runway. Even in the leanest of organizations, there is often a need to set up a database, incorporate a new open source package, or otherwise fortify the underpinnings of the application. I have found that more spikes occur early, and more runway stories occur after some traction has been secured. Lastly, and most overlooked, is technical debt reduction. While beyond the scope of this discussion, it is critical that the amount of technical debt incurred be managed explicitly. When it gets too high, it needs to be paid down.
The guys in operations always draw the short stick when it comes to having a voice in product. Running a DevOps process keeps operations close to the product and, more importantly, keeps product management cognizant of the issues that need to be resolved. Two DevOps inputs stand out. The first is the output of monitoring (logs, environment, user behavior, and funnels) that indicates an operational issue is occurring. The second is the output of DevOps retrospectives (five whys meetings or equivalent). One of the most insidious forms of technical debt is operational debt, characterized by manual processes in operations that lead to outages, execution/scale constraints, and budget overruns.
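Taken together, the three-tier model and the input streams above suggest a simple backlog structure: epics map to business-model assumptions, features to coarse-grained capabilities, and each story carries a tag for the input stream it came from. A sketch under those assumptions (all names and source tags are illustrative):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:
    title: str
    source: str  # e.g. "mvp", "ab_test", "spike", "runway", "tech_debt", "devops"

@dataclass
class Feature:
    title: str
    stories: List[Story] = field(default_factory=list)

@dataclass
class Epic:
    title: str  # maps to a business-model assumption or funnel
    features: List[Feature] = field(default_factory=list)

backlog = [
    Epic("Activation funnel", [
        Feature("Signup flow", [
            Story("One-click signup", "mvp"),
            Story("Shorten form per A/B result", "ab_test"),
            Story("Automate deploy of signup service", "devops"),
        ]),
    ]),
]

def stories_by_source(backlog, source):
    """Slice the backlog by originating input stream."""
    return [s for epic in backlog for feat in epic.features
            for s in feat.stories if s.source == source]

print([s.title for s in stories_by_source(backlog, "ab_test")])
```

Tagging stories by source makes it easy to see whether any one input stream (say, technical debt or DevOps) is being starved during prioritization.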
Putting it all Together
A good product owner or team can take these inputs, weave them together and avoid many of the classic pitfalls of a new product.
When I first heard the story of John Henry as a very young child, I found it puzzling. I couldn’t understand how anyone could honestly believe that the human spirit could triumph over machinery at manual, repetitive tasks. Later, I came to understand that the story was an allegory for society’s struggle to accept the marginalization of the value of human labor brought on by the modernization of the industrial age. I didn’t understand the story as a child because the superiority of mechanized labor over human labor was well accepted and woven into the collective consciousness. Today, we are reliving the John Henry story. The new battleground? AI (primarily machine learning algorithms) vs crowdsourcing. The buzz around the Human Cloud is that we can harness the intuitive pattern recognition and judgement abilities of people that machines just can’t effectively replicate. Sadly, I think that in large part this is simply reliving the seductive, but ultimately hopeless, story of John Henry. We are already starting to see quantitative studies that show how this movie ends. That said, I’ve found some compelling niches for crowdsourcing that complement the use of AI:
- Crowd sourcing is very effective for ad hoc jobs where the cost to automate dramatically outweighs the cost of acquisition and QA on the crowd sourced model.
- Likewise, it is well suited where the domain contains a graduated value return as a function of quality.
- Crowd sourcing can also be effectively used as an audit function to discover holes and exploits of your automated algorithm (eg to discover that people are gaming your moderation system using #u€&!ng symbols….).
- Crowd sourcing is a great training source for machine learning algorithm based solutions. If you can’t beat the machines, at least you can work for them….
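The last point can be made concrete: redundant crowd judgments, aggregated by majority vote, become labeled training data for a machine learning model. A toy sketch (the items and labels are fabricated for illustration):

```python
from collections import Counter

# Hypothetical crowd labels: item -> redundant worker judgments
crowd_labels = {
    "msg-1": ["spam", "spam", "ok"],
    "msg-2": ["ok", "ok", "ok"],
    "msg-3": ["spam", "ok", "spam"],
}

def majority_vote(votes):
    """Collapse redundant crowd judgments into a single training label."""
    return Counter(votes).most_common(1)[0][0]

# The aggregated labels become the training set for a classifier
training_set = {item: majority_vote(votes) for item, votes in crowd_labels.items()}
print(training_set)  # {'msg-1': 'spam', 'msg-2': 'ok', 'msg-3': 'spam'}
```

In practice you would also weight workers by their historical agreement rate, but even simple majority voting illustrates the division of labor: people supply judgment, and the machine generalizes from it.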