Waterfall and why it’s not suitable for software development
By Noel Viehmeyer
1 October 2015 (Last updated 24 April 2020)
The controversy over Agile vs waterfall methodology has been covered in countless articles and blog posts. Yet I still spend a lot of time explaining why I think a waterfall approach is a poor fit for software development projects.
I’ll look at what the waterfall methodology is, why it delivers value only late in projects and how it limits adaptability. I’ll then consider what this means for risk and visibility, and compare it with the way Agile maximises visibility to minimise risk.
What is the waterfall methodology?
Waterfall is a sequential approach that separates a project into distinct phases. It originates in manufacturing and was later applied to software development. In software development projects, the phases typically look like this:
- requirements analysis
- design
- implementation
- testing
- deployment
- maintenance.
Every phase in a waterfall project aims to create a certain output or deliverable. Handing over this deliverable from one phase to the next marks a milestone in the project. Different teams or departments are responsible for each stage. The project manager is responsible for making sure that the different teams keep their timelines and the project stays on budget. The primary measure of progress is tracking milestones.
Having worked at a company in Berlin that builds both hardware and software, I’ve come to understand some very important differences between manufacturing and software development that undermine the value of applying a manufacturing methodology to software development projects.
An experienced software architect once said to me: ‘When you build a house, you cannot build the roof first’. That sounds about right for manufacturing – it’s really difficult to change the layout of a house once you’ve built the walls – but it’s not a good analogy for software development because software can be changed and refactored quite easily.
Value and adaptability
In waterfall projects, requirements are gathered during the analysis phase and documented in a specification. They are grouped into must-have, should-have and nice-to-have features. After the analysis stage is complete, the requirements are expected to remain stable. Only once the product is finished is the bulk of the value delivered to the customer.
In reality, most software development projects eventually have to change some of their requirements. In manufacturing, change is something to be avoided, and the waterfall process inherits that assumption: it is not built to cope with changing requirements. The further a project has progressed, the greater the effort required to change existing requirements or to add requirements that were missed during analysis.
You often hear people speak of scope creep. Instead of embracing change and identifying opportunities to add value to a product, project teams will usually try to avoid change and settle for a mediocre product, because going back to the analysis phase usually means delaying the whole project.
Specifying requirements up front means a lot of the requirements are based on assumptions. It is difficult to validate those assumptions, since the first builds are not available until late in the development phase. Once the first builds are available, it’s often too late to change requirements without delaying the project substantially. It’s hard to build the “right product” when you cannot validate your assumptions and requirements through working software.
A flexible release strategy, by contrast, opens up strategic opportunities: beating a competitor to market, releasing just a single new feature, or deferring should-have and nice-to-have features when there is more important work to do.
When it comes to deployment, the waterfall methodology works with fixed release dates, and all project and resource planning revolves around them. Bringing a release date forward will hurt overall product quality because waterfall’s sequential phasing leaves testing until the end, so project managers are forced to sacrifice testing time.
Risk and visibility
Before kicking off a waterfall project, project managers need to provide a set of documents, such as an overview of all the requirements, a budget plan, a risk register etc. This set of documents is the basis for getting the sign-off from project sponsors.
Project sponsors are mostly interested in the following three pieces of information:
- list of features
- project schedule (when the features will be ready for delivery)
- resources (staff and budget).
Estimating the effort for delivering features that require new, unproven technologies is an imprecise science. Because waterfall projects are broken into separate phases, it’s possible that the development team may discover some severe challenges weeks or even months after the project has kicked off.
Experienced project managers will try to mitigate this risk by talking to subject matter experts, breaking the features down into smaller tasks and making comparisons with past projects. In the end, the whole plan is still based on a lot of assumptions, which is why so many projects fail to deliver on time and overrun their budgets.
“On average, large IT projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted.” McKinsey survey, 2014
Surprisingly, project sponsors are still willing to approve budgets for projects planned on assumptions. This is despite the fact that if, at any point, the project runs out of budget and has to be cancelled, there is nothing to show for all that effort.
Another challenge worth mentioning is that testing is not possible until there is a stable build. Quality and security issues or integration problems with existing products are typically discovered very late in the process. Fixing these sorts of issues takes a lot of effort.
Finally, project managers often have a hard time maintaining their organisation’s engagement with their project. Senior management and stakeholders may take a lot of interest at kick-off and towards the final stage, but not so much in between. Since there is nothing to show other than some mock-ups, usually only budget and milestone updates are reported. As long as these look OK, management stays more or less out of the project, leaving it without the high-level support needed to remove roadblocks or resolve challenges met along the way.
Risk and visibility in Agile projects
In Agile, you mitigate risk by delivering the highest priority work in short iterations so you can get valuable working software in front of your customers as soon as possible. You base future work on how customers actually use the software, inspecting and improving your product, processes and progress with every iteration.
By basing your risk management on knowns rather than unknowns, you do only the planning and mitigation that’s required.