This is the second post in a series about getting distressed projects under control.
It seems strange to think that despite all the advancements in software development we need to keep re-emphasizing the importance of building software that actually works correctly. Yet this is one of the main areas where software projects continue to be challenged.
The development team in my previous post spent months trying to “nail down” what the client wanted and get “sign off”. This gave them (and the client) a false sense of security that they knew exactly what would be delivered in the end, which produced the following phenomena:
- The developers had to digest and interpret requirements spread across a two-to-three-hundred-page document.
- The client continued to make changes to the requirements even after the analysis document was “signed off” by both parties, which meant that the developers’ work was continuously being invalidated.
- This pushed development out beyond the originally planned completion dates.
- Testing happened at the very end after all the development was “done”. Since the original deadlines had passed, this meant that testing had to happen in a hurry.
- Quality was less a measure of whether the developed software worked correctly and more a measure of how frequently testers and developers interpreted the requirements the same way.
The ultimate result of all this was that many hundreds of defects escaped development and made their way in front of the client. Defects are the worst kind of waste because they represent requirements left unimplemented in software that has already been built. The client pays for the software to be developed, and then they (or the development team) have to eat the cost of making it work correctly.
Clearly the way the development team was working did not put quality first. In fact, it put quality dead last.
The most important decision we made to start getting control over quality was to require acceptance tests to be developed for work items before a developer could write code. This is called Acceptance Test Driven Development (ATDD). With ATDD, we literally put quality first.
The way we made ATDD work on this project was that we required a set of acceptance tests to be associated with each defect in the backlog. This effectively documented the requirements that were unsatisfied by the developed code. It also forced the tester and developer to get on the same page before coding started. The developers’ job became focused on making the failing acceptance tests pass.
This approach takes the guesswork out of whether the developed code works properly or not. The test either passes or fails. Yes or no. True or false. Since we were working one defect at a time, the number of acceptance tests the developer had to understand at any point in time stayed small, compared with having to understand the entire analysis document up front.
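The project described here was not tied to any particular stack, but the failing-test-first cycle can be sketched in Python. The defect scenario, the `apply_discount` function, and the discount thresholds below are all hypothetical; the point is that the acceptance criteria are written as executable assertions before the fix is coded, so "done" is a binary question.

```python
# Hypothetical defect: orders of $100 or more should receive a 10%
# discount, but the shipped code applied none. The tester and developer
# agree on the acceptance tests below *before* any fix is written;
# initially they fail, and the developer's job is to make them pass.

def apply_discount(order_total):
    """The fix, written only after the acceptance tests existed."""
    if order_total >= 100:
        return round(order_total * 0.90, 2)
    return order_total

def test_discount_applied_at_threshold():
    # Acceptance criterion agreed up front: $100 order costs $90.
    assert apply_discount(100.00) == 90.00

def test_no_discount_below_threshold():
    # Acceptance criterion: just under the threshold, no discount.
    assert apply_discount(99.99) == 99.99

if __name__ == "__main__":
    test_discount_applied_at_threshold()
    test_no_discount_below_threshold()
    print("all acceptance tests pass")
```

Run before the fix exists, both tests fail; once they pass, the defect's requirements are demonstrably satisfied rather than a matter of interpretation.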
The state transitions for an acceptance test are:
- Failed. The test initially fails because code has not been developed to make the test pass.
- Developed. The code to make the test pass has been developed. The developer makes this transition as they implement the code.
- Passed. The testers have verified that the developed code does in fact make the test pass.
Of all the changes we made, adopting ATDD had the most transformative effect on product quality.
In the first 8 weeks of using ATDD, we transformed the project from one that had no documented test coverage to one that had a library of tests and a documented history of passing tests that asserted the overall quality of the code being developed.
In the next post I will describe how we organized the team to gain control over how work was assigned and how it flowed from Pending to Accepted.