Unit Testing LINQ To LLBL

In many of the projects I’ve been working on we use an ORM tool called LLBLGen Pro. This tool provides a LINQ implementation that allows us to select objects from our data source using LINQ query syntax. An interesting obstacle we encountered was how to unit test code that uses LINQ queries passed to LLBL.

For unit testing we did not want to actually hit our database; we wanted to isolate the code under test and ensure that it did what we intended. Our first thought was to mock the IQueryable interface. However, in the context of a LINQ query this didn’t seem feasible. When mocking an object we usually provide a mock implementation as a stand-in, have it return values when needed, and otherwise track how the mock is used. It wasn’t immediately obvious how we would program a mocked IQueryable interface, or what calls would be made against it during a particular LINQ query.

Instead of providing a mocked IQueryable interface we realized we could simply create an “in memory” test database of sorts for our tests. The ‘AsQueryable’ extension method returns a generic list’s contents as an IQueryable. To make our select logic testable we have our methods take the IQueryable to select against as a parameter. When running for real, the IQueryable passed to the method is the LLBL data source that really hits the database; when under test, a generic list is passed in. A very simplified example method might look like the below:

public List<SchoolEntity> GetList(SchoolFilter filter, IQueryable<SchoolEntity> schoolQ)
{
    var result = from school in schoolQ
                 where school.Name == filter.Name
                       && school.Colors == filter.Colors
                 select school;

    return result.ToList();
}

In order to test we can create a list to pass in to our method:

[Test]
public void TestSchoolColorFilter()
{
    //--Arrange
    SchoolFilter filter = new SchoolFilter();
    filter.Colors = "Blue";

    List<SchoolEntity> schools = new List<SchoolEntity>();

    SchoolEntity school1 = new SchoolEntity();
    school1.Colors = "Blue";
    schools.Add(school1);

    SchoolEntity school2 = new SchoolEntity();
    school2.Colors = "Purple";
    schools.Add(school2);

    //--Act
    List<SchoolEntity> retVal = GetList(filter, schools.AsQueryable());

    //--Assert
    Assert.AreEqual(1, retVal.Count);
}

When actually running, the method is called by an overload that creates an IQueryable backed by the database and passes it in, like the below:

public List<SchoolEntity> GetList(SchoolFilter filter)
{
    using (DataAdapter data = new DataAdapter())
    {
        LinqMetaData meta = new LinqMetaData(data);
        return this.GetList(filter, meta.School);
    }
}

This allows us to take advantage of LLBL’s LINQ to LLBL functionality but still create unit tests that don’t have to cross the boundary to the database in order to run. So far it’s helped us keep our code less coupled and more cohesive. The next hurdle, and another post, is how we handle unit testing when LLBL-specific LINQ extension methods are involved, aka Prefetch Paths!

Tracing WCF Activity

When we are setting up WCF clients calling a WCF service and are having problems getting the service call to work, it is helpful to be able to see a trace of the communication. To do this, add the below to the .config file of the service and a trace file named ‘WcfDetailTrace.svclog’ will be written out to the root folder of the service.

<system.diagnostics>
    <trace autoflush="true">
        <listeners>
        </listeners>
    </trace>
    <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
            <listeners>
                <add name="sdt"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="WcfDetailTrace.svclog" />
            </listeners>
        </source>
    </sources>
</system.diagnostics>

You can change the location and name of the trace file by changing the ‘initializeData’ attribute of the listener’s ‘add’ element. The trace file is verbose XML that can be viewed with the Service Trace Viewer tool (SvcTraceViewer.exe) that comes with the Windows SDK installed alongside Visual Studio. If the tool is on your machine the .svclog extension will be mapped to it, so double clicking the trace file will open it in SvcTraceViewer.exe.

Here is more information regarding WCF tracing and the trace viewer: http://msdn.microsoft.com/en-us/library/ms732023.aspx

If you want to log the content of each message, add a ‘messageLogging’ element to the ‘diagnostics’ element underneath ‘system.serviceModel’ in your config file, as shown below. You will also have to add a source element for message logging to your ‘sources’ element referencing “System.ServiceModel.MessageLogging”.

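The rest of that post isn’t reproduced here, but a minimal sketch of that configuration, assuming the same XmlWriterTraceListener approach, looks something like the below (the listener name and log file name are illustrative):

<!-- Under system.serviceModel: turn on message content logging -->
<system.serviceModel>
    <diagnostics>
        <messageLogging logEntireMessage="true"
                        logMalformedMessages="true"
                        logMessagesAtServiceLevel="true"
                        logMessagesAtTransportLevel="true" />
    </diagnostics>
</system.serviceModel>

<!-- Added alongside the existing source inside the system.diagnostics sources element -->
<source name="System.ServiceModel.MessageLogging">
    <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="WcfMessages.svclog" />
    </listeners>
</source>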

Passing Code Blocks, That Doesn’t Sound Right?

I have been working on an application that makes heavy use of Windows Communication Foundation (WCF) proxies. Due to an issue with how the Dispose method is implemented on the WCF client base class, you aren’t supposed to use them in a ‘using’ block. To accommodate this, we ended up with the same try/catch/catch/catch block around every instance of a WCF client proxy in our code.

The below example was common in our code:

(Screenshot: Repeated WCF Client Wrapper)
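
Since that screenshot isn’t reproduced here, a minimal sketch of the repeated pattern (the client and method names are illustrative, not from the original):

SchoolServiceClient client = new SchoolServiceClient();
try
{
    client.UpdateSchool(school);
    // Close() can itself throw if the channel has faulted.
    client.Close();
}
catch (CommunicationException)
{
    client.Abort();
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}
catch (Exception)
{
    client.Abort();
    throw;
}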

What we realized was that we could use generics, and the fact that all WCF clients inherit from the same base class and implement the same interfaces, to pull the common init and cleanup code out into one place. Luckily for us the ‘Abort’ and ‘Close’ methods are both defined by the ‘ICommunicationObject’ interface, and all WCF clients implement it. Just like the ‘using’ statement wraps a try/finally block around a code block, we want to wrap our try/catch/catch around a code block.
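
The original post’s implementation isn’t reproduced here, but a minimal sketch of the idea, assuming a static helper method and the same hypothetical SchoolServiceClient proxy as above, looks something like this:

public static class ServiceProxy
{
    // Wraps the try/catch/catch/catch cleanup around any code block that
    // uses a WCF client, much as 'using' wraps a try/finally around one.
    public static void Use<TClient>(TClient client, Action<TClient> codeBlock)
        where TClient : ICommunicationObject
    {
        try
        {
            codeBlock(client);
            client.Close();
        }
        catch (CommunicationException)
        {
            client.Abort();
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }
        catch (Exception)
        {
            client.Abort();
            throw;
        }
    }
}

// Usage (illustrative): the repeated wrapper collapses to a single call.
// ServiceProxy.Use(new SchoolServiceClient(), client => client.UpdateSchool(school));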

Subverting TeamCity

Every development environment I’ve worked in during the last few years used CruiseControl and Microsoft Visual SourceSafe as the combination for continuous integration builds and source control. I’ve yet to come across a team actually using Microsoft’s Team Foundation Server. I’m assuming the teams I’ve worked on weren’t large enough to warrant using this all-encompassing tool.

What I have encountered is plenty of criticism of SourceSafe. For years I’ve heard there are better source control alternatives, Subversion for instance. This weekend I decided to give an alternate environment a chance and change my home build environment around. I set up Subversion as my source code repository, and just for a change decided to use JetBrains’ TeamCity product for continuous integration. CruiseControl has been fine, but why not see what else is out there?

Refactoring To Smaller SRP Service Objects

Currently I’m going through a very procedurally driven code base and attempting to break code out into smaller service objects called by the main class.  The code I’m working with deals with creating, validating, fetching and saving orders for our products.   I’m doing this refactoring for a couple of reasons.

First of all I want to have code collected into service objects that have similar functionality. Ideally I’d like each service class to comply with the Single Responsibility Principle (SRP) and have only one reason to change. There are many methods in the original code base that deal with creating orders, so I’ve factored those out into an “OrderBuilder” class. This class is concerned with putting an order together. I’ve broken out code that has to do with persisting an order into an “OrderDataManagement” class. Not only does this better adhere to SRP, it also makes it easier for me to find the code I’m looking for. Business rules for building an order are in the “OrderBuilder” class instead of buried in a much larger “OrderManagement” class.
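
As a rough sketch only, assuming the extracted services are exposed through interfaces and handed to the coordinating class (all of the names below are hypothetical, not from the actual code base):

// Hypothetical shapes for the extracted services.
public class OrderRequest { /* data needed to build an order */ }
public class OrderEntity { /* the order being built, validated, and saved */ }

public interface IOrderBuilder
{
    OrderEntity BuildOrder(OrderRequest request);
}

public interface IOrderDataManagement
{
    void SaveOrder(OrderEntity order);
}

// The main class coordinates the smaller services; because it depends on
// interfaces, each service can be mocked independently under test.
public class OrderManagement
{
    private readonly IOrderBuilder _builder;
    private readonly IOrderDataManagement _orderData;

    public OrderManagement(IOrderBuilder builder, IOrderDataManagement orderData)
    {
        _builder = builder;
        _orderData = orderData;
    }

    public OrderEntity PlaceOrder(OrderRequest request)
    {
        OrderEntity order = _builder.BuildOrder(request);
        _orderData.SaveOrder(order);
        return order;
    }
}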

The second reason I’m doing this is to make things easier to test. In the original code I could mock a data connection, but if I wanted to test anything else I had to account for everything going on around the behavior I was testing. By breaking the code out into smaller, more cohesive and less coupled service objects I can mock the behavior I’m not interested in and focus solely on what I’m trying to test. This is allowing me to add tests for new or changed functionality much faster, and the tests aren’t nearly as brittle as before.

So far so good; hopefully I’ve made things better. We’ll see how I feel in a few days.