Test Coverage On CI

I’m a fan of, if not TDD, then TWD: testing while developing. I believe that decoupled code that can be easily tested and debugged at the method level stays healthy, readable, and robust, and that this naturally leads to a certain level of test coverage. In my current role I deal with a lot of untested code, as not everyone agrees with my TDD or TWD approach. Because I firmly believe some form of test coverage improves quality, I want to enact a policy on our CI builds that fails the build if a minimum level of test coverage is not met. This would be easy for me, but hard for those who do not write tests. From my perspective the tests can be integration or functional tests, but we need some level of coverage before code is considered complete. It is much easier to achieve that coverage with unit tests, of course.

Is this a reasonable approach to take? I believe so, since pushing for more test coverage lets us catch bugs much earlier and prevents many from ever existing. But is it reasonable to ask those who don’t agree to comply by breaking the CI build if they do not?

Decoupling For Unit Testing: Why And How

 

Abstracting Dependencies:

In order to allow code that has dependencies to be tested at the method level, the code must be architected so that those dependencies can be replaced and mocked during method-level testing.  One example is the class below, which has one method that needs to print something as part of its processing.  When testing the method we do not want to actually print anything, because that would require a printer to be set up and operating every time the method is tested.

The version of our example class below does not allow for dependency-free testing.  The two dependencies it relies on, the repository and the printer, are created inside the method itself, so we cannot exercise this code without accessing a real repository and a real printer.

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      PersonRepository repo = new PersonRepository();
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      Printer printer = new Printer();

      printer.Print(personStringToPrint);
   }
}

 

The above code cannot be tested unless the entire system can run end to end.  We should always create functional or integration tests to ensure the entire system works end to end, but the above method will not be under test until both the person repository and the printer exist and are configured to run end to end.  By definition, if the above code is written before either the printer or the person repository is finished, it cannot be tested at the time of its completion.  A significant amount of time could pass between when this code is completed and when it even becomes possible to test it with this approach.

In order to make the method code itself testable without relying on its concrete dependencies, we need to allow the dependencies to be mocked.  Two things must happen for this.  First, the dependencies have to be delivered as abstractions.

Delivering dependencies as abstractions means they must be provided to our method as either an interface or a base class whose relevant methods are virtual, so that a fake implementation can be created to stand in for the behavior we wish to replace during testing.  Interfaces guarantee that any fake dependency we provide is fully replaceable, whereas base classes can carry implementation that causes problems, so we prefer interfaces.
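
For reference, the abstractions used throughout these examples are never shown, so the sketch below is only an assumption of what they might look like, with member names inferred from how they are used:

using System.Text;

// Assumed shape of the abstractions and model used in the examples.
public interface IPersonRepository
{
   Person GetPerson(int personId);
}

public interface IPrinter
{
   void Print(StringBuilder personStringToPrint);
}

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public string Phone { get; set; }
   public string Address { get; set; }
}

With interfaces like these available, the example class can at least reference its dependencies through abstractions, even though it still creates them itself: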

public class PersonDetailManager
{    
   public void PrintPersonDetails(int personId)    
   {        
      IPersonRepository repo = new PersonRepository();        
      Person person = repo.GetPerson(personId);                    
     
      StringBuilder personStringToPrint = new StringBuilder();       
     
      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);     
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);      
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);       
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);       
     
      IPrinter printer = new Printer();   
      
      printer.Print(personStringToPrint);     
   }
}

 

The second thing that must happen for our method code to be tested on its own is that the dependencies must be provided in a way that lets fake dependencies be substituted during testing.  This means we cannot ‘new up’ a dependency inside our code, as we do in the first example, because that makes the dependency impossible to replace.

There are several means of delivering dependencies in a manner that allows them to be replaced.  The first is simply to pass them in to our method as arguments:

public class PersonDetailManager
{    
   public void PrintPersonDetails(int personId,
                                  IPersonRepository repo,
                                  IPrinter printer)    
   {       
      Person person = repo.GetPerson(personId);                    

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);
      
      printer.Print(personStringToPrint);     
   }
}

 

The above strategy lets us provide fake implementations of our dependencies at run time, so we can now test our method code regardless of the state of those dependencies.  If we do not want callers to have to know about the dependencies, we can add an overload that supplies the real ones:

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      PrintPersonDetails(personId, new PersonRepository(), new Printer());
   }

   public void PrintPersonDetails(int personId,
                                  IPersonRepository repo,
                                  IPrinter printer)
   {
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      printer.Print(personStringToPrint);
   }
}

 

The downside of the above is that we need to know what the concrete dependencies are when we write our code, in order to create the overload that supplies them.  The IPrinter interface must be defined for us to build our class, but there is no guarantee that any implementation of IPrinter exists or is known when we build our code.  To allow for this we can deliver our dependencies through a factory; then only the factory needs to exist at the time we are writing our code:

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      IPersonRepository repo = DependencyFactory.Get<IPersonRepository>();
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      IPrinter printer = DependencyFactory.Get<IPrinter>();

      printer.Print(personStringToPrint);
   }
}


 

As long as the factory exists when we are writing our method code, and can be changed during testing to return fake implementations of our dependencies, the factory pattern lets us deliver the implementations of our dependencies in a decoupled fashion.  This allows just our method code to be tested discretely.
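
The DependencyFactory used above is not shown in the post, so the following is only a sketch of what a hand-rolled static factory might look like: production startup code registers the real implementations, and test setup code can overwrite those registrations with fakes.

using System;
using System.Collections.Generic;

// Assumed sketch of a hand-rolled dependency factory.
public static class DependencyFactory
{
   private static readonly Dictionary<Type, Func<object>> _registrations =
      new Dictionary<Type, Func<object>>();

   // Register (or replace) the implementation to return for an abstraction.
   public static void Register<TAbstraction>(Func<TAbstraction> creator)
   {
      _registrations[typeof(TAbstraction)] = () => creator();
   }

   // Return an instance of whatever implementation is currently registered.
   public static T Get<T>()
   {
      return (T)_registrations[typeof(T)]();
   }
}

Application startup would register the real implementations, for example DependencyFactory.Register<IPrinter>(() => new Printer()), while a test fixture could instead register DependencyFactory.Register<IPrinter>(() => new FakePrinter()) before exercising the method.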

Delivering Dependencies:

Above we abstracted our dependencies and used a factory to deliver instances of them in a decoupled manner.  Our class never knows what the actual implementation of IPrinter is; it just knows the factory will provide one when needed.  To use a factory pattern, though, we have to build and maintain the factory: we need run-time mappings from abstractions to implementations so that the correct implementation can be delivered.

We can build such a factory by hand, but a number of third-party tools, called Inversion of Control containers (‘IoC containers’ for short), already do essentially this.  IoC containers let you register implementations of abstractions and then deliver the registered implementation whenever the abstraction is requested.  All that has to happen is for each implementation to be registered as the implementer of its abstraction.

Microsoft’s IoC container is called ‘Unity’, and to register implementations for our example we would run something like the following code during the startup of our application:

var container = new UnityContainer();
container.RegisterType<IPrinter, Printer>();
container.RegisterType<IPersonRepository, PersonRepository>();

 

The above code tells our inversion of control container, Unity in this case, that whenever an IPrinter is requested it should return a Printer instance to satisfy the request.  In our code, instead of our own factory, we can reference the container to resolve our dependencies.  Using the container directly in your code to resolve dependencies is called the service locator pattern, because you are locating the service you need at the time you need it.  One downside is that the dependency is not visible unless you look at the method’s implementation: from the class definition alone you would never know the class requires an IPrinter and an IPersonRepository to operate.  (In the example below the container configured at startup is assumed to be exposed through a shared ApplicationContainer.Instance property; resolving from a freshly created, empty container would fail because it has no registrations.)

 

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      // Assumes the container configured at startup is exposed somewhere shared
      // (here a hypothetical ApplicationContainer.Instance property).
      IUnityContainer container = ApplicationContainer.Instance;

      IPersonRepository repo = container.Resolve<IPersonRepository>();
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      IPrinter printer = container.Resolve<IPrinter>();
      printer.Print(personStringToPrint);
   }
}

 

We can write test code that registers fake implementations so our method code can be exercised without any real implementations yet existing.  We can create our own test classes that implement IPrinter and IPersonRepository and register them in our container for testing.  Our fakes might hard code return values and just save values sent to them for us to inspect in the asserts of our tests.  A sample test could look like the below:

[TestFixture]
public class PersonDetailManagerTests
{
    class FakePersonRepository : IPersonRepository
    {
        internal static Person FakePersonReturned;

        public Person GetPerson(int personId)
        {
            return FakePersonReturned;
        }
    }

    class FakePrinter : IPrinter
    {
        internal static StringBuilder BuilderSentToPrint;

        public void Print(StringBuilder personStringToPrint)
        {
            BuilderSentToPrint = personStringToPrint;
        }
    }

    [TestFixtureSetUp]
    public void SetUp()
    {
        // Register the fakes in the same shared container the class under test
        // resolves from (the assumed ApplicationContainer.Instance).
        IUnityContainer container = ApplicationContainer.Instance;
        container.RegisterType<IPrinter, FakePrinter>();
        container.RegisterType<IPersonRepository, FakePersonRepository>();
    }

    [Test]
    public void First_Name_First_Line_Of_Print_Test()
    {
        //--Arrange
        int personId = 33;
        FakePersonRepository.FakePersonReturned = new Person
            {
                FirstName = "firsty",
            };

        var itemToTest = new PersonDetailManager();

        //--Act
        itemToTest.PrintPersonDetails(personId);

        //--Assert
        Assert.IsTrue(FakePrinter.BuilderSentToPrint.ToString().StartsWith("First Name: " + FakePersonRepository.FakePersonReturned.FirstName));
    }
}



 

 

The service locator approach relies on our code using the container itself as a dependency to acquire the dependencies it needs.  This means our method needs an actual container to work, so we are coupled to the container, but at least only to that.  It is also why, in the test code above, we have to register our fakes with the container as a setup step: the code actually uses the container to get its dependencies.  And, as stated before, there is no way to know what this class depends on without reading through its methods.

To resolve both issues, coupling to the container and the lack of visibility into a class’s dependencies, most IoC containers support a feature called constructor injection.  Constructor injection lists a class’s dependencies explicitly in its constructor, removing the dependency on the container itself and making it clear to any user of the class what the class depends on.

 

Constructor Injection:

Instead of asking the container for our dependencies, we change our class so that its dependencies are taken in through its constructor and stored in private fields.  Our class would change to the below:

 

public class PersonDetailManager
{
   private readonly IPersonRepository _repository;
   private readonly IPrinter _printer;

   public PersonDetailManager(IPersonRepository repository,
                              IPrinter printer)
   {
      _repository = repository;
      _printer = printer;
   }

   public void PrintPersonDetails(int personId)
   {
      Person person = _repository.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      _printer.Print(personStringToPrint);
   }
}



 

Now we can give our dependencies to our class directly so our test code can be changed to eliminate the need to interact with the container at all in our tests:

[TestFixture]
public class PersonDetailManagerTests
{
    class FakePersonRepository : IPersonRepository
    {
        internal static Person FakePersonReturned;

        public Person GetPerson(int personId)
        {
            return FakePersonReturned;
        }
    }

    class FakePrinter : IPrinter
    {
        internal static StringBuilder BuilderSentToPrint;

        public void Print(StringBuilder personStringToPrint)
        {
            BuilderSentToPrint = personStringToPrint;
        }
    }

    [Test]
    public void First_Name_First_Line_Of_Print_Test()
    {
        //--Arrange
        int personId = 33;
        FakePersonRepository.FakePersonReturned = new Person
            {
                FirstName = "firsty",
            };

        var itemToTest = new PersonDetailManager(new FakePersonRepository(),
                                                 new FakePrinter());

        //--Act
        itemToTest.PrintPersonDetails(personId);

        //--Assert
        Assert.IsTrue(FakePrinter.BuilderSentToPrint.ToString().StartsWith("First Name: " + FakePersonRepository.FakePersonReturned.FirstName));
    }
}


The above tests and code work, but it would appear that anyone who wants to use our code has to pass in the actual implementations of our dependencies, as we do in the test.  Most IoC containers, however, can provide the constructor dependencies of the instances they produce, as long as those dependencies are registered with the container.  In other words, if I request a PersonDetailManager from the container and I have registered implementations for the dependencies PersonDetailManager needs, the container will automatically create them and pass them as constructor parameters when it builds the PersonDetailManager.

Practically this means that in an application I only need to request an instance of the object at the top of a dependency tree, and the IoC container will handle fulfilling all of the dependencies listed in the constructors of the instances it provides.  If at the start of my application I have the following registration code:

 

var container = new UnityContainer();
container.RegisterType<IPrinter, Printer>();
container.RegisterType<IPersonRepository, PersonRepository>();
container.RegisterType<PersonDetailManager, PersonDetailManager>();

 

Then later in code, when I request a PersonDetailManager from the container, the container will automatically deliver the registered instances of IPersonRepository and IPrinter that it needs to construct the PersonDetailManager:

 

var personDetailManager = container.Resolve<PersonDetailManager>();

 

The above means we only need to use the service locator pattern at the top of a dependency tree.  Examples of the top of a dependency tree are a ServiceHostFactory in WCF, a ControllerFactory in ASP.NET MVC, or the ObservableFactory in our MVVM framework.  Anything consumed by the top level, or further down the tree, just needs to list its dependencies as abstractions in its constructor.
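
As an illustration of wiring the container in at the top of a dependency tree, an ASP.NET MVC controller factory backed by Unity might look roughly like the sketch below.  This is an assumed example, not code from this project:

using System;
using System.Web.Mvc;
using System.Web.Routing;
using Microsoft.Practices.Unity;

// The one place the application talks to the container directly.
public class UnityControllerFactory : DefaultControllerFactory
{
   private readonly IUnityContainer _container;

   public UnityControllerFactory(IUnityContainer container)
   {
      _container = container;
   }

   protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
   {
      if (controllerType == null)
      {
         return base.GetControllerInstance(requestContext, controllerType);
      }

      // The container builds the controller and satisfies the constructor
      // dependencies registered at application startup.
      return (IController)_container.Resolve(controllerType);
   }
}

Registering it at startup with ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container)) would mean every controller, and everything below it, receives its dependencies through constructor injection.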

Constructor injection gives you a clear pattern for accessing dependencies during development: put each dependency, as an abstraction, in your constructor and let the container deliver it at runtime.  Developers can focus on building code because they have a known pattern for how to build it, and on testing because they have a known pattern for replacing dependencies in test code.  Constructor injection also lets developers build and test code without every dependency implementation being fully operational or even accessible at the time, and a class’s dependencies are stated explicitly because they are all listed in its constructor.

In the test examples above we create our own fakes for dependencies, but just as with IoC containers there are many mocking libraries that already provide rich mocking functionality, so we don’t have to roll our own.

 

Mock Objects:

There are many mocking libraries for .NET: NMock, Moq, and RhinoMocks, just to name a few.  These libraries let you quickly create a mock instance of an interface, program that instance to behave as you want in your test, and record the calls made against it so you can verify the dependency was used as you expect.  Each mock library has slightly different syntax, but each performs the same basic set of behaviors: programming a mock instance and interrogating how the mock instance was called.

Using NMock, the code below shows how we program a mock instance of IPersonRepository to return a predefined person, and how we check that our IPrinter mock instance was called the way we expect:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
   //--Arrange
   int personId = 33;
   var fakePerson = new Person
    {
        FirstName = "firsty",
    };

   var mockFactory = new MockFactory();

   Mock<IPersonRepository> mockPersonRepository = mockFactory.CreateMock<IPersonRepository>();
   Mock<IPrinter> mockPrinter = mockFactory.CreateMock<IPrinter>();
 
   //--program mock to return fake person if personId passed
   mockPersonRepository.Stub.Out.MethodWith(x => x.GetPerson(personId)).WillReturn(fakePerson);
            
   //--program mock to expect first name first line of stringbuilder passed in
   mockPrinter.Expects.AtMostOnce.Method(x => x.Print(new StringBuilder()))
	.With(NMock.Is.Match<StringBuilder>(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName)));
 
   var itemToTest = 
	new PersonDetailManager(mockPersonRepository.MockObject, mockPrinter.MockObject);
 
   //--Act
   itemToTest.PrintPersonDetails(personId);
 
   //--Assert
   //--make sure expectations met, will enforce expectations, but not stub calls.
   mockFactory.VerifyAllExpectationsHaveBeenMet();
}


 

The syntax would be similar for Moq or RhinoMocks and achieves the same purpose.  Below is the syntax for RhinoMocks.  What I like better about RhinoMocks and Moq is that I can make my verifications explicit and place them at the end, after I run the code under test, instead of setting them up beforehand as expectations.  This is called the Arrange, Act, Assert, or AAA, testing pattern:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
   //--Arrange
   int personId = 33;
   var fakePerson = new Person
       {
           FirstName = "firsty",
       };

   IPersonRepository mockPersonRepository = MockRepository.GenerateMock<IPersonRepository>();
   IPrinter mockPrinter = MockRepository.GenerateMock<IPrinter>();

   //--program mock to return fake person if personId passed
   mockPersonRepository.Expect(x => x.GetPerson(personId)).Return(fakePerson);

   var itemToTest = new PersonDetailManager(mockPersonRepository, mockPrinter);

   //--Act
   itemToTest.PrintPersonDetails(personId);

   //--Assert
   //--verify printer was called with first name on the first line of the stringbuilder
   mockPrinter.AssertWasCalled(x => x.Print(Arg<StringBuilder>.Matches(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}

You can also use AAA syntax in Moq as below:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
   //--Arrange
   int personId = 33;
   var fakePerson = new Person
       {
           FirstName = "firsty",
       };

   Mock<IPersonRepository> mockPersonRepository = new Mock<IPersonRepository>();
   Mock<IPrinter> mockPrinter = new Mock<IPrinter>();

   //--program mock to return fake person if personId passed
   mockPersonRepository.Setup(x => x.GetPerson(personId)).Returns(fakePerson);

   var itemToTest = new PersonDetailManager(mockPersonRepository.Object, mockPrinter.Object);

   //--Act
   itemToTest.PrintPersonDetails(personId);

   //--Assert
   //--verify printer was called with first name on the first line of the stringbuilder
   mockPrinter.Verify(x => x.Print(It.Is<StringBuilder>(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}


 

AutoMocking:

 

What you may have noticed in both my NMock and RhinoMocks tests is that I had to explicitly declare my mock objects.  This can get tedious if you have multiple dependencies and many classes under test.  Automocking is a feature some IoC containers provide that automatically creates mock instances of all of a class’s dependencies when you ask for the class under test.  This removes the need for the developer to write code creating each mock dependency the class under test requires.

Below is how our test would look using an IoC container named StructureMap with RhinoMocks and automocking.  Notice that the mocked dependencies now exist on the class under test by default:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
   //--Arrange
   int personId = 33;
   var fakePerson = new Person
       {
           FirstName = "firsty",
       };

   var autoMockedItem = new RhinoAutoMocker<PersonDetailManager>();

   //--program mock to return fake person if personId passed
   autoMockedItem.Get<IPersonRepository>().Expect(x => x.GetPerson(personId)).Return(fakePerson);

   //--Act
   autoMockedItem.ClassUnderTest.PrintPersonDetails(personId);

   //--Assert
   //--verify printer was called with first name on the first line of the stringbuilder
   autoMockedItem.Get<IPrinter>().AssertWasCalled(x => x.Print(Arg<StringBuilder>.Matches(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}


 

This is the same automocking feature using StructureMap and Moq:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
    //--Arrange
    int personId = 33;
    var fakePerson = new Person
    {
        FirstName = "firsty",
    };

    var autoMockedItem = new MoqAutoMocker<PersonDetailManager>();

    //--program mock to return fake person if personId passed
    Mock.Get(autoMockedItem.Get<IPersonRepository>()).Setup(x => x.GetPerson(personId)).Returns(fakePerson);

    //--Act
    autoMockedItem.ClassUnderTest.PrintPersonDetails(personId);

    //--Assert
    //--verify printer was called with first name on the first line of the stringbuilder
    Mock.Get(autoMockedItem.Get<IPrinter>()).Verify(x => x.Print(It.Is<StringBuilder>(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}


 

Automocking lowers the setup overhead of using multiple dependencies in a class.  It combines a container and a mock object library (above, StructureMap with RhinoMocks or Moq) so that dependencies are automatically fulfilled by mock objects, without having to define each mock and pass it to the constructor.  This is especially helpful when new dependencies are added: older tests do not have to be updated to pass the new dependency, because the automocker automatically supplies a non-programmed mock to any test that creates its class under test through it.

 

Creating Tests When Creating Code

 

Test Driven Development is the process of writing tests that fail first, then writing the code that makes them pass.  This ensures that tests exist, that they can fail when the condition they test is not met, and that the code written makes them pass.  It has been documented that this process significantly decreases defect rates without adding significant development time (http://www.infoq.com/news/2009/03/TDD-Improves-Quality).  Writing tests as code is written ensures both that tests exist and that the code is architected to be testable.  Whether the tests are strictly written first or written as you go along is not that important.  What is important is that the tests are not added later: that increases effort, because the test writer has to become familiar with code they did not just write, and if bugs are found, any code that has come to rely on the flawed class may also need to change to accommodate the fixes.  Bugs found long after the code was written are more costly to fix because other code may now depend on the flawed behavior.

Writing unit tests that exercise method code at the time the method code is written ensures the code does what the developer intends at the moment it is written.  As a process, developers should write method-level unit tests as they write their code, and should use constructor injection, mocking libraries, and automocking to establish efficient patterns that make this testing quick and easy to implement.  This architecture also lets developers write tests regardless of whether working, configured instances of their dependencies’ concrete implementations exist yet, so code and its dependencies can be developed concurrently: code can be written as long as the signature of a dependency is known, not its full implementation.  Testing at the time of development makes it possible to catch bugs at the earliest possible point, and perhaps to stop some from ever existing.

 

Automating Method Tests In Continuous Integration Builds

 

Once method-level tests exist, they show that the method code does what the developer intended.  Because they mock their dependencies they run quickly and require no other resources; method-level unit tests can run without any real dependencies existing or being reachable.  For this reason these are the tests that should run on Continuous Integration builds: builds stay fast, and the tests ensure code has not been broken at the method level.

Functional and integration tests should also be automated, but they should run separately and not block Continuous Integration builds.  They can be long running and brittle, and can remain broken for long periods; if they gate the CI build, a broken build is more likely to mean a brittle functional test than a real code issue, and the build ends up being ignored, making Continuous Integration far less effective.

Any Testing Will Do

I have worked on several different projects in the last few years. Often I would encounter code bases that had little or no automated testing in place. Usually this goes hand in hand with regularly late delivery of somewhat buggy code, with the customer ending up being the real tester. My reaction is that the right thing to do is to architect for testability, namely to decouple dependencies from implementations through dependency injection or a similar pattern so functionality can be tested discretely, and to add unit testing. I recommend either test driven development or unit testing during development. I cite studies and articles that back up my position (here are a few: http://devops.sys-con.com/node/2235139, http://collaboration.csc.ncsu.edu/laurie/Papers/Unit_testing_cameraReady.pdf, http://www.infoq.com/news/2009/03/TDD-Improves-Quality). I ask if anyone can produce empirical evidence that less testing is better.

I generally get agreement that unit testing should exist. However, a continuation of exactly the same architectures and practices generally follows. What I have since come to realize is that it is not the how but the why that is the issue. I am making concrete recommendations for how to solve a problem in large code bases that already exist, and recommendations that require developers to change their practices, which is hard and often not something the developer wants. If developers do not want to add unit testing, they won’t. Some refuse out of disagreement; the common arguments are that writing tests slows them down, or that they do not write buggy code and so do not need tests. There is some merit to these arguments, and even when there isn’t, the people making them often have no interest in hearing counterpoints. Many developers also do not want to be told how to do something by other developers; they have their own approach and it is good enough for them and their shop, fair enough.

Where this led me is to take the emphasis off the how. First, I now make clear up front that existing code should be left alone. If it exists and has no test coverage, leave it alone: the architecture is most likely not friendly to testing, and your customer, for better or worse, is already testing it for you. The effort to make the code testable would most likely be a large, long and, from a management perspective, non-value-add task. Instead I suggest adding to current code bases minimally: add new functionality as if it were green field and have the current code base consume it, with as little coupling as possible.

When it comes to the actual unit tests, I no longer care how testing happens from an architecture or implementation perspective. I first try to get agreement that we should have some form of automated testing in place to ensure code quality, and then that, to do that, we should enforce a certain level of test coverage on new code in our builds (50%, say). If I can get agreement on that, I quickly codify it in the build server so that any new code with less than 50% coverage breaks the build. I leave it up to the developer to decide how to achieve this, but if they happen to achieve it by using a more decoupled architecture and dependency injection, I don’t complain. This way I am not trying to dictate how they should test, only enforcing what we all agreed on: automated testing of some form should exist.

Unity and RhinoMocks, I Like

On my current project we have been using Unity as a Dependency Injection container. Prior to this project I had always created constructor overloads so that any dependencies could be injected for testing or replacement. Typically that looked something like this:

 public class MessageReceivingService : IMessageReceivingService
    {
        internal IMessageProcessor _MessageProcessor;
        internal IMessageRepository _MessageRepository;


        public MessageReceivingService()
            : this(new MessageProcessor(), new MessageRepository())
        { }

        public MessageReceivingService(IMessageProcessor messageProcessor, IMessageRepository messageRepository)
        {
            this._MessageProcessor = messageProcessor;
            this._MessageRepository = messageRepository;
        }

    }

The above allowed me to inject and change my dependencies. In a test I’d create mock objects for the dependent objects and call the constructor that takes the dependencies as parameters. The constructor with no parameters creates the default dependencies. A typical unit test looked like the below:

        private IMessageProcessor _MessageProcessorMock;
        private IMessageRepository _MessageRepositoryrMock;


        [TestInitialize]
        public void Setup()
        {
            this._MessageProcessorMock = MockRepository.GenerateMock<IMessageProcessor>();
            this._MessageRepositoryrMock = MockRepository.GenerateStub<IMessageRepository>();
        }


        [TestMethod]
        public void Checking_That_Some_Behavior_Works_Test()
        {
            //--Arrange
            MessageReceivingService serviceToTest =
                     new MessageReceivingService(this._MessageProcessorMock, this._MessageRepositoryrMock);

            Message testMessage = new Message();

            //--Act
            serviceToTest.ProcessMessages(testMessage);
            

            //--ASSERT
            this._MessageProcessorMock.AssertWasCalled(imd => imd.SaveMessage(Arg<PostOffice.Data.Message>.Is.Same(testMessage)));

        }

This allowed me to unit test and inject mocks. I was really only trying to allow myself the ability to inject mocks in unit testing. If it ever came up I could use the constructor overloads to override the default dependencies, but I was not that worried, that happened very infrequently.

What I did realize, and did not like, was that every time a dependency was added, I had to go change all my constructor calls in my unit test to allow for the new dependency. That was a pain, but not painful enough to make me change anything apparently.

Now that I have started using Unity for DI, the above class becomes this:

public class MessageReceivingService : IMessageReceivingService
    {
        internal IMessageProcessor _MessageProcessor;
        internal IMessageRepository _MessageRepository;
        internal IUnityContainer _Container;


        public MessageReceivingService()
            : this(new ConfiguredUnityContainer())
        { }

        public MessageReceivingService(IUnityContainer container)
        {
            this._Container = container;
            this._MessageProcessor = this._Container.Resolve<IMessageProcessor>();
            this._MessageRepository = this._Container.Resolve<IMessageRepository>();
        }

    }

ConfiguredUnityContainer is just a class extending UnityContainer that loads its type mappings from the default container in configuration. The above allows an existing container to be passed in, or a new one to be created and used. I wasn’t crazy about having a dependency on Unity in my constructor, but I consoled myself that the dependency was on the IUnityContainer interface and on a class I had derived from UnityContainer.
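
For reference, under the assumption that it uses Unity’s configuration extension, ConfiguredUnityContainer could be as simple as the sketch below (the real class isn’t shown in this post):

using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

// Assumed sketch: a container that loads its type mappings from the
// <unity> section of the application's config file.
public class ConfiguredUnityContainer : UnityContainer
{
    public ConfiguredUnityContainer()
    {
        // Reads the default container's registrations from app.config / web.config.
        this.LoadConfiguration();
    }
}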

It was in my unit tests that I realized how this was really going to help me. My unit test above now changed to this:


	private IMessageProcessor _MessageProcessorMock;
	private IMessageRepository _MessageRepositoryrMock;
	 
	[TestInitialize]
	public void Setup()
	{
	    this._MessageProcessorMock = MockRepository.GenerateMock<IMessageProcessor>();
	    this._MessageRepositoryrMock = MockRepository.GenerateStub<IMessageRepository>();
	}

[TestMethod]
        public void Checking_That_Some_Behavior_Works_Test()
        {
            //--Arrange
            IUnityContainer container = new UnityContainer();

            container.RegisterInstance<IMessageProcessor>(this._MessageProcessorMock);
            container.RegisterInstance<IMessageRepository>(this._MessageRepositoryrMock);

            MessageReceivingService serviceToTest =
                            new MessageReceivingService(container);

            Message testMessage = new Message();

            //--Act
            serviceToTest.ProcessMessages(testMessage);
            

            //--ASSERT
            this._MessageProcessorMock.AssertWasCalled(imd => imd.SaveMessage(Arg<PostOffice.Data.Message>.Is.Same(testMessage)));

        }

The first thing I realized was that I would no longer have to change my constructor if I added a dependency! Now my only constructor overload takes an IUnityContainer. I just set my dependencies in the test code, setting my mock objects as the served up dependencies. No matter how many dependencies I add or remove my constructors will never change.

As an added bonus, I can now replace dependencies on the fly. If I deliver a container that is mapped in a config file, I only have to change the config file to change served up dependencies. I was not really that worried about changing dependencies on the fly since it seemed to happen so rarely, but I certainly do not mind that it is now easy to do should the need arise.

I probably should have realized there was more value to a DI container than just serving up mocks in tests, but even in this context the DI container made my life easier. I have to say I really like the combination of Unity and RhinoMocks. I’m kicking myself for not looking into this before Unity was forced on me on a project!

Someone Recently Asked What Design Patterns I Used, Here’s What I Said.

There are a few design patterns I use frequently, mainly because in business software the problems they address come up often. The first is the Command pattern. It is demonstrably useful in UI work because it supports a generic mechanism for undo and redo: when UI operations are implemented as atomic Command classes, their undo and redo behavior is defined explicitly in a concrete object and can be invoked generically, through the base type, from the stack that manages executed commands. I have also found the Command pattern very useful in pure business logic. In environments involving distributed transactions it lets each atomic transaction be defined as a command in the business logic layer; the benefit is that commands can be chained, and composite transaction behavior can be defined once in the base Command class instead of by each operation itself.
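
A minimal sketch of the shape I mean is below (illustrative names, not code from a specific project): each operation knows how to execute and undo itself, and the invoker can undo generically because it only ever deals with the base type.

using System.Collections.Generic;

// Each operation is an atomic command that knows how to do and undo itself.
public abstract class Command
{
    public abstract void Execute();
    public abstract void Undo();
}

// The invoker works only against the base type, so undo is generic.
public class CommandInvoker
{
    private readonly Stack<Command> _executed = new Stack<Command>();

    public void Execute(Command command)
    {
        command.Execute();
        _executed.Push(command);   // remember it so it can be undone later
    }

    public void UndoLast()
    {
        if (_executed.Count > 0)
        {
            _executed.Pop().Undo();
        }
    }
}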

There are two other patterns I use frequently to make code less coupled and more testable. The first, the Factory pattern, was much more helpful before the wide acceptance of dependency injection frameworks. The Factory pattern abstracts the creation of classes away from their consumers: the consumed class can be an abstraction, an interface, and the management of its creation is no longer the consumer’s concern. From a testing perspective this allows mock objects to be injected when the consumer of an object is being unit tested and the consumed object has an outside dependency, such as a database connection or a network call. The Factory pattern also keeps the consumer from being directly coupled to the object it consumes, which makes changes to the consumed object easier to manage: if a change does not alter the interface the consumer uses, it has no effect on the consumer at all.

I use the Singleton pattern frequently to put an abstraction over objects that do not need to keep state. Static classes cannot implement interfaces, which makes them difficult to mock when testing their consumers. A singleton provides a way of applying an interface to what would otherwise be a static class. Using a singleton, instead of creating a separate instance for each call, also allows for more efficient memory management since there is only one instance to manage, while the interface allows the singleton to be mocked when necessary.
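
For example, a singleton exposed through an interface might look like the sketch below (an illustrative stand-in, not code from my projects); the interface is what lets consumers be handed a mock in tests:

using System;

// Consumers depend on the interface, which is what tests mock.
public interface IClock
{
    DateTime Now { get; }
}

public sealed class SystemClock : IClock
{
    private static readonly SystemClock _instance = new SystemClock();

    private SystemClock() { }

    // Single shared instance; no per-call allocation and no state to manage.
    public static IClock Instance
    {
        get { return _instance; }
    }

    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}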

Lately I have used the Mediator pattern. It became useful in my UI work in the context of event handling: I implemented a Mediator in the form of an event aggregator, so that event publishers and handlers register with, and raise events through, the aggregator instead of referencing each other directly. This helped minimize memory leaks caused by event references that were never released, and also reduced the coupling between classes that would otherwise need direct event relationships.
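
A stripped-down sketch of such an aggregator is below (illustrative only; a production version would typically also use weak references so subscribers are not kept alive by the aggregator):

using System;
using System.Collections.Generic;

// Publishers and subscribers only know the aggregator, never each other.
public class EventAggregator
{
    private readonly Dictionary<Type, List<Delegate>> _handlers =
        new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        List<Delegate> handlers;
        if (!_handlers.TryGetValue(typeof(TEvent), out handlers))
        {
            handlers = new List<Delegate>();
            _handlers[typeof(TEvent)] = handlers;
        }
        handlers.Add(handler);
    }

    public void Publish<TEvent>(TEvent message)
    {
        List<Delegate> handlers;
        if (_handlers.TryGetValue(typeof(TEvent), out handlers))
        {
            foreach (Action<TEvent> handler in handlers)
            {
                handler(message);   // deliver the event to each subscriber
            }
        }
    }
}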