Agile With Development and Testing Phases
Occasionally I hear about or talk to people involved with projects where testers and developers are separated to the point that the developers deliver work in one iteration and the testers validate it in the next, either passing the features as working or raising defects and sending them back to the developers. These are projects that are otherwise described as “agile” or “Scrum”. Is it really possible to be an agile project with this phased setup and separation of responsibility?
I don’t think so. Despite the illusion of agility provided by standups, retrospectives or Kanban boards, nothing about this sounds conducive to technical excellence or to delivering working software frequently. In this situation you’ve got two major problems: divorcing the developers from the responsibility of making software right, and increasing the feedback time before knowing that a feature is implemented properly. These problems get in the way of moving on to the more valuable feedback you could be giving the team: whether the feature is actually beneficial to the user.
The first problem is going to lead to bigger problems later, as the development team determine whether they’ve finished work on a feature based on whether they’ve finished coding. In an agile team the developer shouldn’t just be responsible for coding; they should be responsible for completing the work required to implement a feature, and should be able to prove it. This includes writing tests for the feature, and could include other criteria like having a code review or having it demonstrated to a user representative of some kind for acceptance. Splitting development and testing will inevitably lead to bottlenecks - either on the testing side as untested work piles up, or on the development side as the testers sit around and twiddle their thumbs.
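To make “should be able to prove it” concrete, here is a minimal sketch of a developer-written automated test shipped alongside the feature. The `calculate_shipping` function and its rules are entirely invented for illustration; the point is only that the proof of “done” travels with the code:

```python
# Hypothetical feature: free shipping for orders of $50 or more,
# a flat $5 fee otherwise. Function and rules are invented for
# illustration only.

def calculate_shipping(order_total):
    """Return the shipping cost in dollars for a given order total."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0 if order_total >= 50 else 5


# Developer-written tests, runnable with pytest, that prove the
# feature works at the moment it is declared finished.
def test_orders_of_fifty_or_more_ship_free():
    assert calculate_shipping(50) == 0


def test_smaller_orders_pay_flat_fee():
    assert calculate_shipping(49.99) == 5
```

With tests like these in the build, “finished” means the feature demonstrably works, not merely that the typing stopped.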
The second problem is not being able to properly measure progress. On an agile project, working software is the primary measure of progress, so you can’t know you’ve made progress until you’ve proved it works with tests. If a developer writes a feature in one iteration and the test results only come out in another, it’s at least one iteration between the developer thinking they’ve finished coding and that feedback being acted upon (whether by raising a defect or by basing new work upon the feature). Features that are “dev complete” aren’t ready for release, can’t be integrated into downstream systems, and can’t have other developers build upon them, because they aren’t “done” (by which I mean “done done”). This prevents you from adhering to the principle of continually delivering valuable software.
I think this can also happen on projects where there is a strict separation between development and testing even when the two aren’t explicitly done in different iterations. If you have dedicated testers, and it is they rather than the developers who are responsible for proving the developers are done, they will end up getting a rush of work at the end of an iteration and become a bottleneck. You will also still suffer from developers not taking responsibility for ensuring their work is complete before context switching onto another piece of work. I’m not arguing against having dedicated testers; testers can be doing work more valuable than just asserting program correctness, and they shouldn’t be shouldered with the sole burden of making sure a developer has done her job. The tester should be working with the developer to help automate tests, clarify acceptance criteria, suggest edge cases and otherwise collaborate.
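As a sketch of what that collaboration might produce, suppose a tester reviewing the hypothetical shipping feature above suggests edge cases the developer hadn’t covered. Those suggestions can go straight into the automated suite as executable acceptance criteria, rather than into a defect report raised an iteration later (this assumes the earlier sketch is saved as `shipping.py`):

```python
import pytest

from shipping import calculate_shipping  # the hypothetical sketch above, saved as shipping.py


# Edge cases suggested by the tester while the feature was being
# developed, captured as tests instead of later defect reports.
@pytest.mark.parametrize("order_total, expected", [
    (0, 5),        # empty-cart edge: a zero-value order still pays the flat fee
    (49.99, 5),    # just below the free-shipping threshold
    (50, 0),       # exactly on the threshold
])
def test_shipping_edge_cases(order_total, expected):
    assert calculate_shipping(order_total) == expected


def test_negative_totals_are_rejected():
    with pytest.raises(ValueError):
        calculate_shipping(-1)
```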
In general I think that testing purely for program correctness verification is a low-value exercise. Having tests and being able to prove that a feature works only takes you to the point where you know the feature is done (and that nothing else has been broken). Testing at this basic a level gives you just enough confidence to know that you’ll find out if other work breaks the feature, and to let your developers move on to another task.
The more valuable exercise - and where the really interesting feedback, beyond “yeah, your code works”, lies - is in testing whether the feature you’ve built was actually useful to the customer or end user. Finding out whether what you’ve done actually enables them to meet some kind of goal is surely more interesting; we should be testing whether the requirements given were correct, rather than whether the code written correctly implements those requirements.
A long turnaround between writing a feature and finding out whether the code correctly implements it means that even in the best case, where there are no defects, there is still a big gap between the feature being developed and feedback on the feature’s actual value.
Projects that separate development and testing, with the two done by separate people at separate times, will suffer from longer feedback loops and from the bottlenecks and overheads that come with them, instead of having testing built in to the development of a feature.