Codemash 2018 – Crimson and Clover, Over and Over (DevOps Security)

Presenter: Josh Wallace

  • What does your security team think of devops?
    • They slow you down, right?
      • In my situation we have no security team, so a little different when dealing with super small team
      • How do you inject dynamic analysis into your pipeline if you are releasing every few minutes, as Amazon does?
  • Should you automate your processes if they are not good?
    • Is automation good on its face even if it perpetuates a bad practice underneath, especially from a security perspective?
  • Applications tend to be tested equally, but not well
    • functional testing of security requirements is usually not done, if security requirements exist at all
    • You do not have to apply the same level of testing and security scrutiny to all applications; level of risk should dictate how thoroughly an app gets beat up.
  • How do we fix the above situations?  Introducing a framework for continuous security!  (Crimson and Clover)
    • Define our requirements during planning and pre-planning phases
      • application inventory
      • apps ranked by risk
      • secure coding guidelines
      • threat modeling
      • required security controls based on risk
    • All security requirements should be tested
      • break the CI build so you get feedback immediately
    • Testable security requirements are needed
      • requirements need to be written in a manner that is testable (see the sketch after these notes)
        • written in dev speak, not security speak
        • train developers on security
    • Automate security testing and put it in the pipeline
    • Pipelines should be scalable and flexible, and few in number
      • One good pipeline with if/then logic is better than one per app
    • Don’t write your own crypto code, ever.
      • There are plenty of good, easy to use libraries that are essentially unbreakable
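
To make the testable-requirements idea concrete, below is a minimal sketch of what a security requirement expressed as an NUnit test might look like, running in a pipeline and breaking the build when a required control is absent. The URL and the choice of an HSTS header check are my own illustrative assumptions, not something prescribed in the session.

using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class SecurityRequirementTests
{
    // Hypothetical endpoint; in a real pipeline this would come from configuration.
    const string BaseUrl = "https://myapp.example.com";

    // Requirement written in dev speak: "every response carries a
    // Strict-Transport-Security header." If the header is missing, this test
    // fails and breaks the CI build, giving immediate feedback.
    [Test]
    public async Task Responses_Include_Strict_Transport_Security_Header()
    {
        using (var client = new HttpClient())
        {
            HttpResponseMessage response = await client.GetAsync(BaseUrl);

            Assert.IsTrue(response.Headers.Contains("Strict-Transport-Security"),
                "Security requirement violated: HSTS header missing.");
        }
    }
}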

Agile Is About Velocity, Not Standups

It seems to me that any good process should provide assistance in four basic areas.

  • Predictability
  • Quality
  • Throughput
  • Morale

Predictability is important because you want to be able to tell people approximately how long something will take.  Quality is important for obvious reasons.  Throughput, the amount of work you get done, should be positively impacted, and morale should go up if your process is a competent one.

Agile is handy for the above because a meaningful velocity serves all four areas.  If you can effectively break down and point stories and have a meaningful velocity, you can tell people more accurate information about when you will be done.  If you effectively break down and point stories and have a meaningful velocity, you can measure throughput and know whether changes you make increase or decrease it.  If you effectively break down and point stories with the input of test and business stakeholders, quality should theoretically improve, as should predictability, since stakeholders are more likely to get what they are asking for.  And if you effectively break down and point stories, and involve all team members as equals, morale is likely to be high.

Stand up meetings are nice, but they do not make for effective Agile.  What makes for effective Agile is tracking and maintaining a meaningful velocity.  To do so means you are effectively breaking down work into stories that testers and the business agree are of value and are done.  Agile is really organizing yourself so you can create and track a meaningful velocity; by doing this, good things happen collaterally.  If you have stand up meetings, and scrum masters, and task boards, and no velocity, you are doing something, but it isn’t Agile as far as I’m concerned.  Have only the meetings and artifacts required to track and maintain a meaningful velocity and the rest will take care of itself.

When a group says they embrace Agile, ask them what their team’s velocities are and how long they’ve tracked them.  If blank stares are the response, then run.

Backlogs, Backlogs, Backlogs

I have an interesting circumstance.  If you are using an Agile/Scrum approach, what I have seen historically is that each dev team has a backlog, and different projects funnel stories into that one team backlog.  This allows you to clearly see the amount of work the team is tasked with doing.  My current client insists on having a backlog for each project and sprint, so there is sort of a Cartesian product of backlogs.

I can understand how it may make project management activities easier, but it has only served to confuse me as far as understanding what work the team is tasked with now and in the future.  I mentioned my questions, but the client assures me they know Agile and this is how Agile is done.  It has been interesting to compare this approach to others I have seen.  So far it seems to muddle the concept of a sprint, as these sprints have no start and end date; they end when all the work in the project/sprint backlog is done.  If it works better than one team backlog I’ll be down with it; I’m waiting to see how it all turns out before making that determination.

 

Ideas For A More Reliable Process

Below is a document I wrote with suggestions for how we could change our development process to make it more reliable, higher quality, and friendlier. It is basically a derivative of Scrum. I thought I’d share it on my blog.

Enjoy!

———————————————————————————————–
The below is a set of suggestions for how we can change our development process to be more reliable, both in terms of estimating completion dates and in code quality.

Development Process

 

Desired Outcomes

 

There are three outcomes the below suggestions are intended to support.

1. Reliability of Delivery Estimates
2. Increased Code Quality
3. Increased Team Morale And Buy In

 

Reliability of Delivery Estimates:

First, we hope to change our process to make our delivery of work more predictable. We hope the below changes allow us to communicate to stakeholders, with more accuracy, how long a feature will take to complete and when it is possible for us to complete it given workload and priorities at the time the feature is requested.

 

Increasing Code Quality:

The changes suggested below are also intended to increase the quality of our work. By increasing code quality we mean decreasing the number of defects identified in our code, and identifying defects sooner in our process.

 

Increasing Team Morale and Buy In:

The suggestions below address increasing developer ownership and collaboration, with the hope of raising the morale of our work environment and increasing developers’ sense of ownership over the whole work product.

 

Suggestions

Split Group into Smaller Teams:

 

We should split our group into smaller teams whose members can work across the layers of the products the team owns. This means each team should have at least one person who is capable of doing SQL work if its products interact with a database, at least one person who can do MVVM work if its products have a web front end, and at least one person who can work on services if its products interact with services. Teams should not be too large, as large teams tend to break down into smaller informal teams anyway.

We are suggesting we create three development teams of five in the web group. Each team would consist of one senior level developer, two mid or above level developers, a resource to act as a proxy for a business analyst role, and a resource to act as a proxy quality analyst. One person on the team, either the senior developer, the business analyst proxy, or the quality proxy, will also act as the team administrator. The team administrator will be in charge of organizing meetings and managing the team’s actionable backlog.

The teams should decide on a name for their team; this aids in creating team buy-in.

 

Sprint And Story Management:

 

A story will only be worked on if it is in a sprint. Issue Manager should be changed so that sprints are their own field. When a story is scheduled to be released is important information and should be managed separately from when it is worked on. If a story is required in a certain release, then it should be included in a sprint that happens before or during that release.

Stories will be assigned only to a team. Developers will track their time in the story against whatever item they are working on, but actual task management will be up to the team itself. It will not be necessary that individual developers are fully loaded via stories, only that the team has taken on an appropriate amount of work for the sprint based on its velocity and an accommodation for expected support work.

To increase buy-in and participation from the group, sprints can be named differently than releases. The whole group can decide on a naming paradigm for sprints each year, and then the year can be cut into those sprints. For instance, the group could decide that cartoon characters will be the sprint names for 2015: the first sprint of the year would be a cartoon character that starts with A, the next B, and so on for the entire year.

 

Estimated Hours as Velocity: Break Estimates Out Into Initial Estimate and Actual Work Hours:

 

When a story is estimated and assigned for work, the original estimate should always remain unchanged. Allow the actual work to be the sum of all the developers’ hours logged to the story, but do not allow the original estimate to be updated.

Use the original estimate as a way of measuring feature throughput. Let the velocity of a team be the number of estimated hours it completed in a sprint. The estimated hours do not have to have any correlation to the actual amount of work done. The idea is to create a metric, estimated hours, that we can track and use as a means of understanding how much work a team can likely complete in a sprint.

Allowing estimated hours to remain unchanged and mark a team’s velocity will also allow us to better estimate how long features will take to complete: once we have reliable velocities in terms of estimated hours, it is easy to compare how many estimated hours we have queued up against those velocities to get a firmer idea of what we can likely accomplish in a sprint, a quarter, or even a year.

At the beginning of each sprint, in sprint planning, the teams can use a worksheet similar to the below to allocate stories from the actionable backlog. The team will select the next most important stories whose estimated hours add up to approximately their velocity in estimated hours, allowing for individual team availability.

In their sprint planning sessions, teams can then take the stories they have decided to work on in the sprint and break them down into actionable tasks that can be assigned to specific developers. The teams can decide what tool to use to do this; they can make child stories or use a third party tool like Jira, YouTrack, or Trello to manage work at the task level. All time would be recorded to the main story if a team does not use child stories to manage their task level work.
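
As a rough illustration of that selection step, here is a minimal sketch in C#. The Story and SprintPlanner types are hypothetical, not part of Issue Manager or any tool we use; the point is just the capacity logic: walk the backlog in priority order and stop when the original estimates fill velocity minus a support reserve.

using System.Collections.Generic;

public class Story
{
    public int EstimatedHours { get; set; }    // the original estimate, never updated
    public bool CompletedInSprint { get; set; }
}

public static class SprintPlanner
{
    // Walk the backlog in priority order, taking stories until their original
    // estimates roughly fill velocity minus the reserve for support work.
    public static List<Story> PlanSprint(IEnumerable<Story> backlogByPriority,
                                         int velocityInEstimatedHours,
                                         int supportReserveHours)
    {
        var planned = new List<Story>();
        int capacity = velocityInEstimatedHours - supportReserveHours;
        int allocated = 0;

        foreach (Story story in backlogByPriority)
        {
            if (allocated + story.EstimatedHours > capacity)
                break;

            planned.Add(story);
            allocated += story.EstimatedHours;
        }

        return planned;
    }
}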

 

Team Meetings and Sprint Size:

 

In order to foster a sense of collaboration and developer buy in we are suggesting the teams have the following meetings every sprint:

  • Sprint Planning Meeting
    • Happens first day of each sprint. The team reviews the stories assigned that sprint and breaks them down, if necessary, into smaller tasks they manage as child stories or in Trello, YouTrack, or some other mechanism of the team’s choosing. Teams take in stories from their actionable backlog to bring into the sprint. If a story is deemed not defined enough to be worked on, it can be kicked back at this point.
    • Goal of Meeting: To break down stories into discrete parts that team members can work on. To include team members in the planning process so their input is solicited prior to starting work.
    • Duration: 4 – 8 Hours
    • Attendees: The team and any stakeholder or manager that has an interest in providing input into how a particular story is implemented.
  • Sprint Review Meeting
    • Happens last day of each sprint. The team reviews how the sprint went. The team creates a list of things they did in the last sprint that they should stop doing, things they should continue doing, and things they should start doing that might help mitigate issues the team had in the sprint. Team members submit anonymous numbers between 1 and 10 grading how the sprint went; these numbers are used to gauge team morale. Finished features in the sprint are quickly demonstrated.
    • Goal of Meeting: To gather feedback from team on how process is functioning. To gather ideas for how process and team can function better from people on the team and to create an environment of inclusion.
    • Duration: Time boxed to 1 hour
    • Attendees: The team and any manager or stakeholder who is interested in hearing about the sprint.
  • Stand Up Meetings
    • Happens every day. Each team member quickly reviews what items they worked on yesterday, what they plan to work on today, and what roadblocks they have. This is not a status meeting and should be limited to only items that are in the sprint.
    • Goal of Meeting: To allow the team to assess whether it is on track and identify any sticking points as early as possible
    • Duration: Time boxed to a 10 minute maximum
    • Attendees: The team and any manager who is interested in hearing it.

In order to accommodate the number and duration of meetings, sprints will be lengthened to three weeks so that there is enough development time to accomplish a meaningful feature load.

Teams should also sit together. Cubicles should be rearranged into team pods to foster communication and collaboration amongst team members.

 

Backlog Management:

 

Each team will have an actionable backlog where stories that are believed to be defined enough are placed and ordered by priority. In each team’s sprint planning meeting, the team pulls in as many stories from its backlog as it believes it can accomplish; it should always pull in a number of stories whose estimated hours are close to what its velocity reflects it can handle. If a team needs to accommodate support work that comes in unannounced, then it should take in a smaller percentage of its estimated hour velocity to plan for the support stories expected to appear during the sprint.

The team’s administrator should work closely with group management and stakeholders to help make sure stories placed in the team’s backlog are actionable and can be worked on without too many questions needing to be answered.

To support this change, Issue Manager should have a backlog field. This field will allow a story to be added to one of the teams’ backlogs, and it is from these backlogs that the teams will pull when they do their sprint planning.

Management can plan work ahead by making sure stories are well defined and adding them with appropriate priority into the teams’ backlogs.

 

Velocity Tracking

 

Only stories that are entirely completed during a sprint are counted in the estimated hours that comprise that sprint’s velocity. All of a story’s estimated hours are counted in the sprint in which the story finishes. Velocity is a rolling average, so this will not impact the calculation of overall velocity, and it helps put the emphasis on completing a story entirely during a sprint.
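
A minimal sketch of that calculation, reusing the hypothetical Story class from the sprint planning sketch above: only fully completed stories contribute estimated hours to a sprint, and overall velocity is a rolling average over recent sprints.

using System;
using System.Collections.Generic;
using System.Linq;

public static class VelocityCalculator
{
    // Estimated hours a team completed in one sprint; only stories finished
    // entirely within the sprint count toward velocity.
    public static int CompletedEstimatedHours(IEnumerable<Story> sprintStories)
    {
        return sprintStories.Where(s => s.CompletedInSprint)
                            .Sum(s => s.EstimatedHours);
    }

    // Rolling average over the last few sprints, so one unusual sprint does
    // not skew planning numbers.
    public static double RollingVelocity(IList<int> hoursPerSprint, int window = 4)
    {
        if (hoursPerSprint.Count == 0)
            return 0;

        return hoursPerSprint.Skip(Math.Max(0, hoursPerSprint.Count - window))
                             .Average();
    }
}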

 

New Developer Onboarding:

When new developers start, especially junior developers, the team structure should assist in their onboarding. Since new developers will be free of having specific stories assigned to them (stories are assigned to the team), they can pair with more senior members and work on tasks that add value to the team while they learn about the environment and products. As they learn more and become more valuable, the amount of effective hours expected of them can be increased during sprint planning. Teams can also create their own onboarding steps particular to the technologies the team most frequently works with. For instance, each team might have a set of Pluralsight courses they expect a new hire to watch over the first 90 days of employment.

 

Build Changes:

To facilitate increased quality, our Continuous Integration builds will be changed to break if a certain level of test coverage is not met. For new projects this can be a higher number than for existing projects that have little or no existing testing.

Using dependency free unit testing, the team should be able to implement code that achieves desired test coverage targets without creating brittle and slow builds that depend on databases existing and having correct data to support testing.

Functional tests will be separated into their own solutions and run on their own builds so they do not block development and the fast feedback needed from Continuous Integration builds.

Decoupling For Unit Testing: Why And How

 

Abstracting Dependencies:

In order to allow code that has dependencies to be tested at the method level, the code must be architected in a manner that allows those dependencies to be replaced and mocked during method level testing.  One example of this is the below class, which has one method that needs to print something as part of its processing.  To test the method, we do not want to actually print anything, as that would mean having a printer set up and operating somewhere every time we tested the method.

The below version of our example class does not allow for dependency free testing.  The two dependencies it relies on, the repository and the printer, are created in the method itself, and we cannot exercise the code in this method without accessing a real repository and printer.

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      PersonRepository repo = new PersonRepository();
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      Printer printer = new Printer();

      printer.Print(personStringToPrint);
   }
}

 

The above code cannot be tested unless the entire system can run from end to end.  We should always create functional or integration tests to ensure our entire system functions correctly end to end, but the above method will not be under test until both the person repository and printer objects exist and are configured to run end to end.  By definition, this means that if the above code is written before either the printer or the person repository is finished, it cannot be tested at the time of its completion.  A significant amount of time could pass between when the above code is completed and when it is even possible to test it using this approach.

In order to make the method code itself testable without relying on its concrete dependencies, we need to allow the dependencies to be mocked.  To do this, two things must happen.  First, the dependencies have to be delivered as an abstraction.

Delivering dependencies as an abstraction means they must be provided to our method as either an interface or a base class that marks the methods we use as virtual, so that a fake implementation can be created to fake the behavior we wish to test.  Using interfaces makes sure that any fake dependencies we provide during testing will be replaceable; base classes can have implementations that cause problems, so we prefer using interfaces.
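
For reference, the examples in this post assume abstractions and a Person type along the following lines.  These exact shapes are inferred from how the code uses them rather than taken from a real codebase; the concrete PersonRepository and Printer are the real implementations that may not exist yet.

using System.Text;

public interface IPersonRepository
{
    Person GetPerson(int personId);
}

public interface IPrinter
{
    void Print(StringBuilder textToPrint);
}

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Phone { get; set; }
    public string Address { get; set; }
}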

public class PersonDetailManager
{    
   public void PrintPersonDetails(int personId)    
   {        
      IPersonRepository repo = new PersonRepository();        
      Person person = repo.GetPerson(personId);                    
     
      StringBuilder personStringToPrint = new StringBuilder();       
     
      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);     
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);      
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);       
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);       
     
      IPrinter printer = new Printer();   
      
      printer.Print(personStringToPrint);     
   }
}

 

The second thing that must happen for our method code to be tested on its own is that the dependencies must be provided in such a way that fake dependencies can be used when testing.  This means that we cannot ‘new up’ a dependency in our code, as we do in the examples above, because that makes the dependency impossible to replace.

There are several means of delivering dependencies in a manner that allows them to be replaced.  The first is simply to pass them in to our method as arguments:

public class PersonDetailManager
{    
   public void PrintPersonDetails(int personId,
                                  IPersonRepository repo,
                                  IPrinter printer)    
   {       
      Person person = repo.GetPerson(personId);                    

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);
      
      printer.Print(personStringToPrint);     
   }
}

 

The above strategy allows us to provide fake implementations of our dependencies at run time, so we can now test our method code regardless of the state of our dependencies.  We may not want our callers to have to know about the dependencies, so we can create an overload that provides the real ones and keeps callers unaware:

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      PrintPersonDetails(personId, new PersonRepository(), new Printer());
   }

   public void PrintPersonDetails(int personId,
                                  IPersonRepository repo,
                                  IPrinter printer)
   {
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      printer.Print(personStringToPrint);
   }
}

 

The downside to the above is that we need to know what the actual concrete dependencies are when we are building our code in order to create the overload method that provides them.  The IPrinter interface may be defined, as a matter of fact it needs to be for us to build our class, but there is no guarantee when we build our code that the implementations of IPrinter will exist or be known.  To allow for this we can deliver our dependencies through a factory; that way, only the factory needs to exist at the time we are writing our code:

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      IPersonRepository repo = DependencyFactory.Get<IPersonRepository>();
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      IPrinter printer = DependencyFactory.Get<IPrinter>();

      printer.Print(personStringToPrint);
   }
}


 

As long as our factory exists when we are building our method code, and can be changed in testing to return fake implementations of our dependencies, the factory pattern will allow us to deliver the implementations of our dependencies in a decoupled fashion.  This allows for testing just our method code discretely.
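
Since the factory itself never appears above, here is a minimal sketch of what a DependencyFactory matching that usage might look like, assuming a simple dictionary of registrations that test code can overwrite with fakes:

using System;
using System.Collections.Generic;

public static class DependencyFactory
{
    // Maps an abstraction to a function that produces an implementation.
    // Get will throw if an abstraction was never registered.
    static readonly Dictionary<Type, Func<object>> Registrations =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> producer) where T : class
    {
        Registrations[typeof(T)] = () => producer();
    }

    public static T Get<T>() where T : class
    {
        return (T)Registrations[typeof(T)]();
    }
}

At application startup we might call DependencyFactory.Register<IPrinter>(() => new Printer()); a test would register a fake the same way before exercising the method.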

Delivering Dependencies:

Above we abstracted our dependencies and used a factory pattern to deliver instances of them in a decoupled manner.  Our class will never know what the actual implementation of IPrinter is; it just knows the factory will provide one when we need it.  In order to use a factory pattern we have to build and maintain our factory: we need mappings from abstractions to implementations at run time that allow the correct implementation to be delivered.

We can build our factory by hand, but it turns out a number of third party tools, called Inversion of Control containers (‘IoC containers’ for short), already do essentially this.  IoC containers allow you to register implementations of abstractions and then have the registered implementations delivered when an abstraction is requested.  All that has to happen is for implementations to be registered as the implementer of an abstraction.

Microsoft’s IoC container is called ‘Unity’, and to register implementations for our example we would run something similar to the following code somewhere in the startup of our application:

var container = new UnityContainer();
container.RegisterType<IPrinter, Printer>();
container.RegisterType<IPersonRepository, PersonRepository>();

 

The above code tells our Inversion of Control container, Unity in this case, that if an IPrinter is requested, a Printer instance should be returned to satisfy the request.  In our code, instead of our factory, we can reference the container to resolve our dependency.  Using the container directly in your code to resolve dependencies is called the service locator pattern, because you are locating the service you need at the time you need it.  One downside to this is that the dependency is not easily seen unless you look at the method’s implementation; looking at the class definition itself, you would never know the class requires an IPrinter and an IPersonRepository to operate.

 

// Note: resolving from a container only works if it is the same container the
// application registered into; a container newed up inside the method would be
// empty.  ContainerProvider below is a hypothetical application-wide holder
// for that single shared container.
public static class ContainerProvider
{
    public static IUnityContainer Container = new UnityContainer();
}

public class PersonDetailManager
{
   public void PrintPersonDetails(int personId)
   {
      IUnityContainer container = ContainerProvider.Container;

      IPersonRepository repo = container.Resolve<IPersonRepository>();
      Person person = repo.GetPerson(personId);

      StringBuilder personStringToPrint = new StringBuilder();

      personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
      personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
      personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
      personStringToPrint.Append("Address: " + person.Address + Environment.NewLine);

      IPrinter printer = container.Resolve<IPrinter>();
      printer.Print(personStringToPrint);
   }
}

 

We can write test code that registers fake implementations so our method code can be exercised without any real implementations existing yet.  We can create our own test classes that implement IPrinter and IPersonRepository and register them in our container for testing.  Our fakes might hard code return values and simply save values sent to them for us to inspect in the asserts of our tests.  A sample test could look like the below:

[TestFixture]
    public class PersonalDetailManagerTests
    {
        class FakePersonRepository : IPersonRepository
        {
            internal static Person FakePersonReturned;
 
            public Person GetPerson(int personId)
            {
                return FakePersonReturned;
 
            }
        }
 
        class FakePrinter : IPrinter
        {
            internal static StringBuilder BuilderSentToPrint;
 
            public void Print(StringBuilder personStringToPrint)
            {
                BuilderSentToPrint = personStringToPrint;
            }
        }
 
        [TestFixtureSetUp]
        public void SetUp()
        {
            // Register fakes into the same shared container the class
            // under test resolves from.
            ContainerProvider.Container = new UnityContainer();
            ContainerProvider.Container.RegisterType<IPrinter, FakePrinter>();
            ContainerProvider.Container.RegisterType<IPersonRepository, FakePersonRepository>();
        }
 
        [Test]
        public void First_Name_First_Line_Of_Print_Test()
        {
            //--Arrange
            int personId = 33;
            FakePersonRepository.FakePersonReturned = new Person
                {
                    FirstName = "firsty",
                };
 
            var itemToTest = new PersonDetailManager();
 
            //--Act
            itemToTest.PrintPersonDetails(personId);
 
            //--Assert
            Assert.IsTrue(FakePrinter.BuilderSentToPrint.ToString().StartsWith("First Name: " + FakePersonRepository.FakePersonReturned.FirstName));
        }
    }



 

 

The service locator approach relies on our code using the container as a dependency to acquire the dependencies it requires.  This means our method needs an actual container to work, so we are coupled to a container, but at least only to that.  This is why, in the above test code, we register our fakes with the container as setup: our code actually uses the container to get its dependencies.  Also, as stated before, there is no way of knowing what this class depends on without looking through the methods themselves.

In order to resolve the issues of being coupled to a container and not having visibility into what dependencies a class has, most IoC containers implement a feature called constructor injection.  Constructor injection puts the dependencies for a class discretely in the constructor, removing the dependency on the container itself and making it clear to any user of the class what dependencies the class has.

 

Constructor Injection:

Instead of asking the container for our dependencies, we can change our class so that any dependencies are taken in the class’s constructor.  The dependencies are then stored as private fields.  Our class would be changed to the below:

 

public class PersonDetailManager
{
  IPersonRepository _repository;
  IPrinter _printer;

  public PersonDetailManager(IPersonRepository repository,
                             IPrinter printer)
    {
         _repository = repository;
         _printer = printer;
    }

    public void PrintPersonDetails(int personId)
    {

       Person person = _repository.GetPerson(personId);           
 
       StringBuilder personStringToPrint = new StringBuilder();
 
       personStringToPrint.Append("First Name: " + person.FirstName + Environment.NewLine);
       personStringToPrint.Append("Last Name: " + person.LastName + Environment.NewLine);
       personStringToPrint.Append("Phone: " + person.Phone + Environment.NewLine);
       personStringToPrint.Append("Address: " + person.Address + Environment.NewLine); 

       _printer.Print(personStringToPrint);
 
    }
}



 

Now we can give our dependencies to our class directly so our test code can be changed to eliminate the need to interact with the container at all in our tests:

[TestFixture]
    public class PersonalDetailManagerTests
    {
        class FakePersonRepository : IPersonRepository
        {
            internal static Person FakePersonReturned;
 
            public Person GetPerson(int personId)
            {
                return FakePersonReturned;
 
            }
        }
 
        class FakePrinter : IPrinter
        {
            internal static StringBuilder BuilderSentToPrint;
 
            public void Print(StringBuilder personStringToPrint)
            {
                BuilderSentToPrint = personStringToPrint;
            }
        }
 
        [Test]
        public void First_Name_First_Line_Of_Print_Test()
        {
            //--Arrange
            int personId = 33;
            FakePersonRepository.FakePersonReturned = new Person
                {
                    FirstName = "firsty",
                };
 
            var itemToTest = new PersonDetailManager(new FakePersonRepository(),
					             new FakePrinter());
 
            //--Act
            itemToTest.PrintPersonDetails(personId);
 
            //--Assert
            Assert.IsTrue(FakePrinter.BuilderSentToPrint.ToString().StartsWith("First Name: " + FakePersonRepository.FakePersonReturned.FirstName));
        }
    }


The above tests and code work, but it would appear that anyone who wants to use our code would have to pass in the actual implementations of our dependencies, as we have in our test. Most IoC containers, however, have a feature where they will provide dependencies in the constructors of instances they produce, if the dependencies are registered with the container.  What this means is that if I request a PersonDetailManager from the container, and I have registered implementations for the dependencies PersonDetailManager needs, the container will automatically provide them and pass them as parameters when it builds the PersonDetailManager.

Practically, this means that in an application I only need to request an instance of an object at the top of a dependency tree, and the IoC container will handle fulfilling all dependencies listed in the constructors of the instances it provides.  If at the start of my application I have the following registration code:

 

var container = new UnityContainer();
container.RegisterType<IPrinter, Printer>();
container.RegisterType<IPersonRepository, PersonRepository>();
container.RegisterType<PersonDetailManager, PersonDetailManager>();

 

Then, when I later request a PersonDetailManager from the container, the container will automatically deliver the registered instances of IPersonRepository and IPrinter that it needs to construct a PersonDetailManager.

 

var personDetailManager = container.Resolve<PersonDetailManager>();

 

The above means that we only need to use the service locator pattern at the top of dependency trees.  Examples of the top of a dependency tree are a ServiceHostFactory in WCF, a ControllerFactory in ASP.NET MVC, or the ObservableFactory in our MVVM framework.  Anything that is consumed by the top level, or further down in the tree, just needs to be created with its dependencies listed as abstractions in its constructor.
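
As an illustration of one such top-of-tree hookup, below is a minimal sketch assuming ASP.NET MVC 5 and Unity; the UnityControllerFactory name is my own.  The factory resolves controllers from the container, and the container then fulfills the constructor-injected dependencies of everything beneath them.

using System;
using System.Web.Mvc;
using System.Web.Routing;
using Microsoft.Practices.Unity;

public class UnityControllerFactory : DefaultControllerFactory
{
    readonly IUnityContainer _container;

    public UnityControllerFactory(IUnityContainer container)
    {
        _container = container;
    }

    protected override IController GetControllerInstance(RequestContext requestContext,
                                                         Type controllerType)
    {
        // Let the base factory handle unmatched routes (404s).
        if (controllerType == null)
            return base.GetControllerInstance(requestContext, controllerType);

        // The container builds the controller and injects its constructor dependencies.
        return (IController)_container.Resolve(controllerType);
    }
}

// Registered once at application startup, e.g. in Global.asax:
// ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container));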

What constructor injection allows you to do is establish a clear pattern for how to access dependencies in development: put the dependency, as an abstraction, in your constructor, and allow the container to manage delivering it at runtime.  This lets developers focus on building code, as they have a known pattern for how to build it, and on testing, as they have a known pattern for how to replace dependencies in test code.  Constructor injection also allows developers to build and test code without all dependency implementations being fully operational or accessible at the time they are building.  Class dependencies are also explicitly stated, as they are all listed in the class constructor.

In our test examples above we create our own fakes to mock dependencies, but just as with IoC containers, many mock libraries already exist that provide rich mocking functionality so we don’t have to roll our own.

 

Mock Objects:

There are many mock libraries for .NET; NMock, Moq, and RhinoMocks, just to name a few.  What these libraries do is allow you to quickly create a mock instance of an interface, program that instance to act as you would like in your test, and record calls against it so you can ensure the dependency was called as you expect.  Each mock library has slightly different syntax, but each performs the same basic set of behaviors: programming a mock instance, and interrogating how the mock instance was called.

Using NMock, the below shows how we program a mock instance of our IPersonRepository to return a predefined person, and how we check that our IPrinter mock instance was called as we expect:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
   //--Arrange
   int personId = 33;
   var fakePerson = new Person
    {
        FirstName = "firsty",
    };

   var mockFactory = new MockFactory();

   Mock<IPersonRepository> mockPersonRepository = mockFactory.CreateMock<IPersonRepository>();
   Mock<IPrinter> mockPrinter = mockFactory.CreateMock<IPrinter>();
 
   //--program mock to return fake person if personId passed
   mockPersonRepository.Stub.Out.MethodWith(x => x.GetPerson(personId)).WillReturn(fakePerson);
            
   //--program mock to expect first name first line of stringbuilder passed in
   mockPrinter.Expects.AtMostOnce.Method(x => x.Print(new StringBuilder()))
	.With(NMock.Is.Match<StringBuilder>(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName)));
 
   var itemToTest = 
	new PersonDetailManager(mockPersonRepository.MockObject, mockPrinter.MockObject);
 
   //--Act
   itemToTest.PrintPersonDetails(personId);
 
   //--Assert
   //--make sure expectations met, will enforce expectations, but not stub calls.
   mockFactory.VerifyAllExpectationsHaveBeenMet();
}


 

The syntax above would be similar for Moq or RhinoMocks and would achieve the same purpose.  The below is the syntax for RhinoMocks.  What I like better about RhinoMocks and Moq is that I can make my verifications explicit and place them at the end, after I run my test, instead of setting them up front as expectations.  This is called the Arrange, Act, Assert, or AAA, testing pattern:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
//--Arrange
int personId = 33;
var fakePerson = new Person
    {
        FirstName = "firsty",
    };
 
IPersonRepository mockPersonRepository = MockRepository.GenerateMock<IPersonRepository>();
IPrinter mockPrinter = MockRepository.GenerateMock<IPrinter>();
 
//--program mock to return fake person if personId passed
mockPersonRepository.Expect(x => x.GetPerson(personId)).Return(fakePerson);
            
 
var itemToTest = new PersonDetailManager(mockPersonRepository, mockPrinter);
 
//--Act
itemToTest.PrintPersonDetails(personId);
 
//--Assert
//--verify printer was called with first name as the first line of the stringbuilder.
mockPrinter.AssertWasCalled(x => x.Print(Arg<StringBuilder>.Matches(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}

You can also use AAA syntax in Moq as below:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
//--Arrange
int personId = 33;
var fakePerson = new Person
    {
        FirstName = "firsty",
    };
 
    Mock<IPersonRepository> mockPersonRepository = new Mock<IPersonRepository>();
    Mock<IPrinter> mockPrinter = new Mock<IPrinter>();
 
//--program mock to return fake person if personId passed
mockPersonRepository.Setup(x => x.GetPerson(personId)).Returns(fakePerson);
            
 
var itemToTest = new PersonDetailManager(mockPersonRepository.Object, mockPrinter.Object);
 
//--Act
itemToTest.PrintPersonDetails(personId);
 
//--Assert
//--verify printer was called with first name as the first line of the stringbuilder.
mockPrinter.Verify(x => x.Print(It.Is<StringBuilder>(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}


 

AutoMocking:

 

What you may have noticed in both my NMock and RhinoMocks tests is that I had to explicitly declare my mock objects; this can get tedious if you have multiple dependencies and many classes under test.  Automocking is a feature some IoC containers provide that automatically creates mock instances of all dependencies of a class when you ask for the class under test.  This removes the need for the developer to write code creating each mock dependency the class under test requires.

Below is how our test would look using an IoC container named StructureMap with RhinoMocks and automocking.  Notice the mocked dependencies now exist on the class under test by default:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
//--Arrange
int personId = 33;
var fakePerson = new Person
    {
        FirstName = "firsty",
    };
 
var autoMockedItem = new RhinoAutoMocker<PersonDetailManager>();
 
//--program mock to return fake person if personId passed
autoMockedItem.Get<IPersonRepository>().Expect(x => x.GetPerson(personId)).Return(fakePerson);
 
    
//--Act
autoMockedItem.ClassUnderTest.PrintPersonDetails(personId);
 
//--Assert
//--verify printer was called with first name as the first line of the stringbuilder.
autoMockedItem.Get<IPrinter>().AssertWasCalled(x => x.Print(Arg<StringBuilder>.Matches(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}


 

This is the same automocking feature using StructureMap and Moq:

[Test]
public void First_Name_First_Line_Of_Print_Test()
{
    //--Arrange
    int personId = 33;
    var fakePerson = new Person
    {
        FirstName = "firsty",
    };
 
    var autoMockedItem = new MoqAutoMocker<PersonDetailManager>();
 
    //--program mock to return fake person if personId passed
    //--(Mock.Get retrieves the Moq wrapper around the auto-created instance)
    Mock.Get(autoMockedItem.Get<IPersonRepository>())
        .Setup(x => x.GetPerson(personId)).Returns(fakePerson);
 
 
    //--Act
    autoMockedItem.ClassUnderTest.PrintPersonDetails(personId);
 
    //--Assert
    //--verify printer was called with first name as the first line of the stringbuilder.
    Mock.Get(autoMockedItem.Get<IPrinter>())
        .Verify(x => x.Print(It.Is<StringBuilder>(sb => sb.ToString().StartsWith("First Name: " + fakePerson.FirstName))));
}


 

Using automocking as above lowers the setup overhead of using multiple dependencies in a class.  Automocking combines a container and a mock object library (above, StructureMap with RhinoMocks or Moq) to allow dependencies to be automatically fulfilled by mock objects without having to define each mock and pass it to the constructor.  This is helpful when new dependencies are added: older tests do not have to be updated to pass the new dependencies in their constructors, as automocking automatically supplies a non-programmed mock to any test that creates its test class using the automocker.

 

Creating Tests When Creating Code

 

Test Driven Development is the process of writing tests that fail first, then writing code that makes the tests pass.  This ensures that tests exist, that they can fail if the condition they are testing is not met, and that the code written makes them pass.  It has been documented that this process significantly decreases defect rates without adding significant development time (http://www.infoq.com/news/2009/03/TDD-Improves-Quality).  Writing tests when code is written ensures that tests exist and that code is architected to be testable.  Whether the tests are truly written test first or written as you go along is not that important.  What is important is that the tests are not added later, as this increases time: the test writer has to become familiar with the code they are writing tests for, and if any bugs are found, any code that has come to rely on the class with the bug may also need to change to accommodate the fixes.  Finding bugs long after the code has been written is more costly to fix, since there may now be dependencies on the flawed code.

Writing unit tests that exercise method code at the time the method code is written ensures that the code does what the developer intends at the time the developer writes it.  As a process, developers should write method level unit tests when they write their code, and should use constructor injection, mock libraries, and automocking to establish efficient patterns that make this testing relatively quick and easy to implement.  This architecture also allows developers to write tests regardless of whether working or configured instances of the concrete implementations of their dependencies exist.  Development of code and its dependencies can then be concurrent: code can be written as long as the signature of a dependency is known, not its full implementation.  Testing at the time of development allows bugs to be caught at the earliest possible time, and perhaps even stops some from ever existing.

 

Automating Method Tests In Continuous Integration Builds

 

Once method level tests exist, they show that method code does what the developer intended.  Because they mock dependencies, they run relatively quickly and require no other resources; method level unit tests can be run without any dependencies actually existing or being reachable.  For this reason, it is these tests that should run on Continuous Integration builds: builds run quickly, and these tests ensure code has not been broken at the method level.

Functional and integration tests should also be automated, but should run separately and not block Continuous Integration builds.  They can be long running and brittle, and can remain broken for long periods; if they gate the Continuous Integration build, a broken build is more likely due to brittle functional tests than actual code issues, and the build ends up being ignored.