Any Testing Will Do

I have worked on several different projects in the last few years. Often I encounter code bases with little or no automated testing in place. This usually goes hand in hand with regularly late delivery of somewhat buggy code, with the customer ending up as the real tester. My reaction has been that the right thing to do is to architect for testability: decouple dependencies from their implementations through dependency injection or a similar pattern so functionality can be tested discretely, and add unit testing. I recommend either test-driven development or unit testing during development. I cite studies and articles that back up my position (here are a few: http://devops.sys-con.com/node/2235139, http://collaboration.csc.ncsu.edu/laurie/Papers/Unit_testing_cameraReady.pdf, http://www.infoq.com/news/2009/03/TDD-Improves-Quality). I ask if anyone can produce empirical evidence that less testing is better.

I generally get agreement that unit testing should exist. However, a continuation of the exact same architectures and practices generally follows. What I have since come to realize is that it is not the how but the why that is the issue. I am making concrete recommendations for how to solve a problem in large code bases that already exist. I am also making recommendations that require developers to change their practices, which is hard, and often not something the developer wants. If the developers do not want to add unit testing, they won’t. Some refuse out of disagreement. The common arguments are “writing tests slows me down” or “I do not write buggy code, so I do not need to write tests.” There is some merit to these arguments, and even when there isn’t, the people making them often have no interest in hearing your counterpoints. Many developers also do not want to be told how to do something by other developers; they have their own approach, and it is good enough for them and their shop. Fair enough.

Where this led me is to take the emphasis off the how. First, I now make clear up front that existing code should be left alone. If it exists and doesn’t have any test coverage, leave it alone. The architecture is most likely not friendly to testing, and your customer, for better or for worse, is already testing it for you. The effort to alter the code to make it testable would most likely be a large, long, and, from a management perspective, non-value-add task. Instead I suggest adding minimally to current code bases: add new functionality as if it were green field and have your current code bases consume it, with as little coupling as possible.
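The green-field approach can be sketched as follows. This is a minimal illustration rather than a prescription; the names (`RateSource`, `TaxCalculator`, `FixedRates`) are hypothetical stand-ins for whatever new functionality is being added. The new code depends only on an abstraction, so it can be unit tested in isolation, while the legacy code base consumes it at a single call site.

```python
# A minimal sketch of adding new, testable functionality alongside legacy
# code. All names here are hypothetical examples.
from typing import Protocol


class RateSource(Protocol):
    """Abstraction the new code depends on, instead of a concrete service."""
    def rate_for(self, region: str) -> float: ...


class TaxCalculator:
    """New green-field code: the dependency is injected, not constructed."""
    def __init__(self, rates: RateSource) -> None:
        self._rates = rates

    def tax(self, amount: float, region: str) -> float:
        return amount * self._rates.rate_for(region)


# In a unit test, inject a stub instead of the real rate service:
class FixedRates:
    def rate_for(self, region: str) -> float:
        return 0.08


def test_tax_uses_injected_rate() -> None:
    calc = TaxCalculator(FixedRates())
    assert calc.tax(100.0, "NC") == 8.0


# The existing code base then consumes TaxCalculator at one call site,
# keeping coupling to the new module as small as possible.
```

Because the only seam is the `RateSource` abstraction, the legacy code never needs to be refactored to make the new functionality testable.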

When it comes to actual unit tests, I don’t care how testing occurs from an architecture or implementation perspective. I now try to get agreement that we should have some form of automated testing in place to ensure code quality. I then try to get agreement that, to do that, we should enforce a certain level of test coverage on new code in our builds (50%, maybe). If I can get that agreement, I quickly codify it in the build server so the build breaks if any new code has less than 50% test coverage. I leave it up to the developers to decide how to achieve this, but if they happen to achieve it by using a more decoupled architecture and dependency injection, I don’t complain. This way I am not trying to dictate how they should test, only enforcing what we all agreed on: automated testing of some form should exist.
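One way to codify such a gate in the build server can be sketched as below. This is a hypothetical illustration; in practice most coverage tools already provide a built-in threshold option (for example, coverage.py’s `coverage report --fail-under=50`), and the measured percentage would come from the coverage report rather than a hard-coded value.

```python
# Hypothetical build gate: fail the build when new-code coverage falls
# below the agreed threshold. The measured percentage would normally be
# parsed from a coverage tool's report; here it is just a parameter.
import sys

AGREED_THRESHOLD = 50.0  # percent, the level the team agreed to


def coverage_gate(new_code_coverage_pct: float,
                  threshold: float = AGREED_THRESHOLD) -> bool:
    """Return True when the agreed coverage bar is met.

    The build server treats False exactly as it would a failing
    compile or a failing test: the build is broken.
    """
    return new_code_coverage_pct >= threshold


if __name__ == "__main__":
    # In a real pipeline this value would be read from the coverage
    # report; 62.0 is only an example figure.
    measured = 62.0
    sys.exit(0 if coverage_gate(measured) else 1)
```

The point of keeping the rule this simple is that it enforces only what everyone agreed to, the threshold, and stays silent about how the developer reaches it.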