Any Testing Will Do

I have worked on several different projects in the last few years. Often I would encounter code bases that had little or no automated testing in place. Usually this goes hand in hand with regularly late delivery of somewhat buggy code, with the customer ending up being the real tester. My reaction is that the right thing to do is to architect for testability, namely to decouple dependencies from implementations through dependency injection or a similar pattern so functionality can be tested discretely, and to add unit testing. I suggest that either test-driven development or unit testing during development should be practiced. I cite studies and articles that back up my position (here are a few: http://devops.sys-con.com/node/2235139, http://collaboration.csc.ncsu.edu/laurie/Papers/Unit_testing_cameraReady.pdf, http://www.infoq.com/news/2009/03/TDD-Improves-Quality). I ask if anyone can produce empirical evidence that less testing is better.

I generally get agreement that unit testing should exist. However, a continuation of the exact same architectures and practices generally follows. What I have since come to realize is that it is not the how but the why that is the issue. I am making concrete recommendations for how to solve a problem in large code bases that already exist. I am also making recommendations that mean developers have to change their practices, which is hard, and often not desired by the developer. If the developers do not want to add unit testing, they won't. Some refuse out of disagreement. The common arguments are that writing tests slows me down, or that I do not write buggy code and so do not need to write tests. There is some merit to these arguments, and even when there isn't, the people making them often have no interest in hearing your counterpoints. Many developers also do not want to be told how to do something by other developers; they have their own approach and it is good enough for them and their shop. Fair enough.

Where this led me is to take the emphasis off the how. First off, I now make clear up front that existing code should be left alone. If it exists and doesn't have any test coverage, leave it alone. The architecture is most likely not friendly to testing, and your customer, for better or for worse, is already testing it for you. The effort to alter the code to make it testable would most likely be a large, long, and, from a management perspective, non-value-add task. Instead I suggest adding minimally to current code bases: add new functionality as if it were green field and have your current code base consume it, with as little coupling as possible.
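
To make this concrete, here is a minimal sketch of what I mean, using entirely hypothetical names (Invoice, IInvoiceCalculator, LegacyOrderScreen): the new behavior is written green field behind an abstraction, and the existing code consumes only that abstraction.

using System.Collections.Generic;
using System.Linq;

// Hypothetical types, for illustration only.
public class InvoiceLine { public int Quantity; public decimal UnitPrice; }
public class Invoice { public List<InvoiceLine> Lines = new List<InvoiceLine>(); }

// The new functionality lives behind an abstraction and is fully unit testable.
public interface IInvoiceCalculator
{
    decimal CalculateTotal(Invoice invoice);
}

public class InvoiceCalculator : IInvoiceCalculator
{
    public decimal CalculateTotal(Invoice invoice)
    {
        return invoice.Lines.Sum(line => line.Quantity * line.UnitPrice);
    }
}

// The existing, untested code consumes only the interface, keeping coupling minimal.
public class LegacyOrderScreen
{
    private readonly IInvoiceCalculator _calculator;

    public LegacyOrderScreen(IInvoiceCalculator calculator)
    {
        _calculator = calculator;
    }
}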

When it comes to actual unit tests, I don't care how testing occurs from an architecture or implementation perspective. I now try to get agreement that we should have some form of automated testing in place to ensure code quality. I then try to get agreement that to do that we should enforce a certain level of test coverage on new code in our builds (50%, maybe). If I can get agreement to that, then I quickly codify it in the build server so the build breaks if any new code has less than 50% test coverage. I leave it up to the developer to decide how to achieve this, but if they happen to achieve it by using a more decoupled architecture and dependency injection, I don't complain. In this way I am not trying to dictate how they should test, only enforcing what we all agreed on: some form of automated testing should exist.
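
As an illustration of the kind of test that clears such a gate, here is a sketch using the hypothetical calculator from the earlier example and assuming NUnit (any test framework would do):

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void CalculateTotal_SumsTheLineAmounts()
    {
        var invoice = new Invoice();
        invoice.Lines.Add(new InvoiceLine { Quantity = 2, UnitPrice = 10m });
        invoice.Lines.Add(new InvoiceLine { Quantity = 1, UnitPrice = 5m });

        var calculator = new InvoiceCalculator();

        // 2 * 10 + 1 * 5 = 25
        Assert.AreEqual(25m, calculator.CalculateTotal(invoice));
    }
}

Tests like this are cheap to write against new, decoupled code, which is exactly why I only apply the coverage bar to new code.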

When Static Methods Apply

In my current project we are using dependency injection and are trying to depend on abstractions instead of implementations as much as possible. Someone recently added a static method call to resolve mappings of properties between a DTO and an entity. My first reaction was that it should not be static. When asked why, I stated that if the mapping is static, then wherever it is called I have to know what it is doing in order to unit test the code that contains calls to it. When I heard myself say this, it made me think: if it does not impact my ability to unit test the code that calls it, maybe it's OK? I still think it is better not to directly couple the mapping implementation to callers, but if it does not impede the ability to unit test the callers, then really it is more of a standards question. So, in light of that, if your standard isn't to inject all dependencies and a dependency does not cause testing difficulty, then I suppose it is OK.
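
To illustrate, here is a minimal sketch with hypothetical names: the mapping call is static, but because it is deterministic and has no external dependencies, the caller remains straightforward to unit test.

// Hypothetical DTO, entity, and repository abstraction.
public class CustomerDto { public int Id; public string Name; }
public class CustomerEntity { public int Id; public string Name; }
public interface ICustomerRepository { void Add(CustomerEntity entity); }

// Static mapper: pure and dependency free.
public static class CustomerMapper
{
    public static CustomerEntity ToEntity(CustomerDto dto)
    {
        return new CustomerEntity { Id = dto.Id, Name = dto.Name };
    }
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void Save(CustomerDto dto)
    {
        // The static call does not impede testing Save: a test can fake the
        // repository and assert on the entity it receives.
        _repository.Add(CustomerMapper.ToEntity(dto));
    }
}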

That said, I think there is value in sticking to standards so that readers of your code and other developers have a reasonable expectation of what they'll find in a given circumstance. That is debatable, however.

EF Code First Needs Master Database Access

My current client is beginning to move towards EF Code First for data access. This is a large improvement over their home-grown system. I did have one hiccup initially. The connection I am using for my EF DbContext is an already existing connection that does not have credential information on it. Apparently, when calling through the DbContext for the first time, EF queries the master database to make sure the database you want actually exists. Not sure why they chose to do this, but OK. The problem is that my connection does not have access to the master database, only to the one I actually want to connect to, which, ironically enough, it is already connected to.

After some googling I found that you can tell EF to skip the check against master if you make the below call before your context is ever used:

 Database.SetInitializer<MyContext>(null);

Apparently the above line tells EF to assume you are smart enough to point it at a database that does exist, and to just error if it does not, without checking the master db first.

I just put it in the constructor of the context so it is not forgotten:

  public class MyContext : DbContext, IMyContext
  {
      public MyContext()
          : base(ApplicationManager.GetConnection, true)
      {
          // Setting the initializer to null stops EF from checking master for the
          // database's existence (or trying to create it).
          Database.SetInitializer<MyContext>(null);
      }
  }

If you’ve noticed the static call to get the connection, just ignore it; that’s a whole other story.
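
For what it’s worth, a first use of the context then looks unremarkable. The sketch below is illustrative only; Order and FirstQueryExample are placeholder names, not part of the actual code base.

using System.Linq;

// Order is a placeholder entity type assumed to be configured on MyContext.
public class Order { public int Id { get; set; } }

public static class FirstQueryExample
{
    public static void Run()
    {
        using (var context = new MyContext())
        {
            // With the initializer set to null, this first query goes straight to
            // the target database; no existence check against master is performed.
            var orders = context.Set<Order>().ToList();
        }
    }
}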

Automated Deployment Fun: First Item From a List in MSBuild

Last week was an education in making MSBuild jump through hoops to automate our deployments. Having a development background, it is always strange switching context into the land of MSBuild. I find myself wanting to create variables and control structures; alas, that is not how it works.

One interesting challenge I was able to overcome was figuring out how to pick the first entry from an MSBuild item when the item is a list. We have multiple destination servers we deploy to, and we keep them in one item as a list, ‘DeployDestination’. I only want to use one of them as the source for the backup, however. To do the deployment I iterate through the server items, calling a task to delete the code at each destination and copy in the new code, per the below:



<PropertyGroup>
  <BackUpRoot>C:\BackupRoot</BackUpRoot>
  <SourceFileLocation>C:\WorkingBuildLocation\</SourceFileLocation>
</PropertyGroup>

<ItemGroup>
    <DeployDestination Include="\\serv1\root\" >
      <ServerName>Web1</ServerName>
      <Environment>Production</Environment>
    </DeployDestination>
    <DeployDestination Include="\\serv2\root\" >
      <ServerName>Web2</ServerName>
      <Environment>Production</Environment>
    </DeployDestination>
    <DeployDestination Include="\\serv3\root\" >
      <ServerName>Web3</ServerName>
      <Environment>Production</Environment>
    </DeployDestination>
</ItemGroup>

<!--Backup From One Server Trying to figure out how-->


<!-- Deploy To All Servers -->
<MSBuild Projects="$(MSBuildThisFile)" Targets="DeleteAndCopy" 
Properties="Env=%(DeployDestination.Environment);
MachineName=%(DeployDestination.ServerName);
SourceFiles=$(SourceFileLocation);
FolderDestination=%(DeployDestination.FullPath)"  />



<!--More deployment fun below-->

The above works fine. However, for the purpose of taking a backup, I want to grab the code from only one of the machines we are deploying to. I was hoping to be able to grab the first item in the ‘DeployDestination’ list, perhaps using an index that would look like the below:




<MSBuild Projects="$(MSBuildThisFile)" Targets="Copy" 
Properties="SourceFiles=@(DeployDestination)[0];
FolderDestination=$(BackUpRoot)"  />



<!--More deployment fun below-->

Of course that is the developer in me, and the above does not work. I knew I could just create another property that had the source for the backup hard-coded and reference that, but that seemed wrong; why update the location in two places? Luckily I was using MSBuild 4, and the relatively new item functions were able to save me. By adding a metadata value to the destination that should be the backup source, I was able to get just the one value I am looking for.

Now I can get what I’d like using the following:



<PropertyGroup>
  <BackUpRoot>C:\BackupRoot</BackUpRoot>
  <SourceFileLocation>C:\WorkingBuildLocation\</SourceFileLocation>
</PropertyGroup>

<ItemGroup>
    <DeployDestination Include="\\serv1\root\" >
      <ServerName>Web1</ServerName>
      <Environment>Production</Environment>
      <BackupSource>True</BackupSource>
    </DeployDestination>
    <DeployDestination Include="\\serv2\root\" >
      <ServerName>Web2</ServerName>
      <Environment>Production</Environment>
    </DeployDestination>
    <DeployDestination Include="\\serv3\root\" >
      <ServerName>Web3</ServerName>
      <Environment>Production</Environment>
    </DeployDestination>
</ItemGroup>

<!--Backup From One Server -->
<MSBuild Projects="$(MSBuildThisFile)" Targets="Copy" Properties=
"SourceFiles=@(DeployDestination->WithMetadataValue('BackupSource','True'));
FolderDestination=$(BackUpRoot)"  />

<!-- Deploy To All Servers -->
<MSBuild Projects="$(MSBuildThisFile)" Targets="DeleteAndCopy" 
Properties="Env=%(DeployDestination.Environment);
MachineName=%(DeployDestination.ServerName);
SourceFiles=$(SourceFileLocation);
FolderDestination=%(DeployDestination.FullPath)"  />


<!--More deployment fun below-->

 

 

 

CodeMash 2013 – Testable JavaScript

Presenter: James Kovac, JetBrains

4:50 PM, 1/10/2013, Salon D

He first reviewed the history of JavaScript:

  • The first browser was Lynx: all text, no JavaScript.
  • The first browser with JavaScript was Netscape version 0.9, which had its own JavaScript named LiveScript.
  • Microsoft came out with its own version, with DHTML to support Outlook web clients.
There are now many complex JavaScript libraries, and JavaScript code has become very complex. Because of this we need to test it just like we do our server-based code. The tools he reviewed help bring the red-green-refactor motif to JavaScript.
He reviewed the following testing platforms:
  • QUnit – built by the jQuery team to automate their own testing
  • Jasmine
  • Mocha, with Chai
Jasmine with its jQuery plugin is very effective for testing jQuery. QUnit is very familiar for a traditional TDD approach; Jasmine and Mocha are more BDD based. Mocha is built on Node.js, so you do not need a browser to run it; you can run it through Node.js.
The downside to all except Mocha is async testing. Since QUnit and Jasmine rely on the browser, boilerplate code is needed to write async tests. Mocha handles async much better since it is Node.js based.
Sinon.js is a mocking library for JavaScript. PhantomJS is a headless browser for Windows that could perhaps be used to automate tests for QUnit or Jasmine.