Handling AJAX Page Method Server Errors

On my project we use jQuery to make AJAX calls to our web server. On the server we use ASP.NET page methods to serve data to our jQuery AJAX calls and to process updates sent from jQuery. One interesting aspect of this is error handling.

Since we want to avoid any leakage of application information in the case of an error, we’ve taken the approach of putting a try/catch block around all functionality in a page method. In the case of an error we log the original error and then throw a friendly exception, so the jQuery AJAX caller knows something has gone wrong but gets no exception detail. Here’s an example:

[WebMethod]
public static string SaveCustomerInformation(string customerID,
                                             string customerCode,
                                             string reasonForCall,
                                             string description)
{
    string returnValue = string.Empty;
    try
    {
        bool isValidcustomerCode = true;
        customerCode = customerCode.Trim();

        //--validate input data
        if (!string.IsNullOrEmpty(customerCode) && (customerCode.Length < 3 || customerCode.Length > 15))
        {
            throw new WebMethodException(Content.ReusableContentHelper.customer_CODE_LENGTH_ERROR);
        }
        else if (!string.IsNullOrEmpty(customerCode) && customerCode.ToLower().Equals(customerCodeLoginHelper.SERVICE_REQUEST_customer_CODE))
        {
            throw new WebMethodException(Content.ReusableContentHelper.customer_CODE_CANNOT_USE_REQUESTED_ERROR);
        }
        //--End Validation

        //--Do Work
        CustomerImpl customerImpl = new CustomerImpl()
        {
            customerID = int.Parse(JavaScriptHelper.DecryptValueAndHtmlDecode(customerID)),
            customerCode = customerCode,
            customerName = reasonForCall,
            customerDescription = description
        };

        Factory.GetCustomerServices().SavecustomerInformation(customerImpl, currentUser);
        //--End Work
    }
    catch (Exception e)
    {
        //--Log Exception
        Logger().LogException(e, ErrorLogImpl.ApplicationTypeEnum.Website, RidgeTool.BusinessInterfaces.Enums.SeverityLevelEnum.Error, HttpContext.Current.Request.Url.PathAndQuery, "customer.SavecustomerInformation()", currentUser);
        //--Throw Friendly To Caller
        throw new WebMethodException("An error occurred in SaveCustomerInformation");
    }
    return returnValue;
}

In our JavaScript we wrap all AJAX calls in a common method that allows callers to provide a function to handle processing if an error occurs. If no error handler is passed, a common error handling routine runs that shows an alert box telling the user a problem has occurred. Here’s what our jQuery AJAX call wrapper looks like.

[javascript]
function AJAXCall(url, data, successCallBackFunction, errorCallBackFunction, completeCallBackFunction)
{
    data = JSON.stringify(data, UndefinedValueCheck);

    $.ajax({
        type: "POST",
        url: url,
        data: data,
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        error: function(xhr, status, exception)
        {
            var err = eval("(" + xhr.responseText + ")");
            if (errorCallBackFunction != null)
            {
                errorCallBackFunction(err.Message);
            }
            else
            {
                alert(err.Message);
            }
        },
        success: function(result)
        {
            if (successCallBackFunction != null)
            {
                //--eliminates .d from return values
                if (result.d != null)
                {
                    successCallBackFunction(result.d);
                }
                else
                {
                    successCallBackFunction(result);
                }
            }
        },
        complete: function()
        {
            if (completeCallBackFunction != null)
            {
                completeCallBackFunction();
            }
        }
    });
}
[/javascript]

As you can see, the caller can choose whether or not to provide an error handling function, and if they don’t, the user is still alerted that a problem has occurred.

Here’s an example of our jQuery AJAX wrapper in use:

[javascript]
var text = $("#testingJavascriptTextBox").val();
var postData = new Object();
postData.testString = text;

AJAXCall("/WebMethods/Tags.aspx/TestingReplace", postData, function(textInput)
{
    //the wrapper has already stripped the .d wrapper, so textInput is the raw value
    alert(textInput);
    $("#testingJavascriptSpan").append(replaceForHTML(textInput));
    $("#resultsJavascriptTextBox").attr("value", textInput);
});
[/javascript]

What all this means is that we protect ourselves from leaking application details in errors by never allowing a naked error to pass through an AJAX call to our web clients. We still alert our AJAX caller to the existence of an error, so we still know on the (hopefully rare) occasion when one happens!
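As an aside, the error handler above uses eval to turn the response text into an object. A safer alternative is JSON.parse, which never executes the response as code. Here is a sketch of that idea; the ParseServerError name is hypothetical, and the Message/StackTrace/ExceptionType shape shown is the usual ASP.NET error payload, assumed here rather than taken from our codebase:

```javascript
//safer alternative to eval for reading the server error payload
function ParseServerError(responseText)
{
    var err;
    try
    {
        //JSON.parse avoids executing arbitrary response text as code
        err = JSON.parse(responseText);
    }
    catch (e)
    {
        //fall back to a generic message if the body isn't valid JSON
        err = {};
    }
    return err.Message || "An unknown error has occurred.";
}
```

The wrapper’s error callback could then use `ParseServerError(xhr.responseText)` in place of the eval call.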

Dependency Injection And Javascript

As we move towards a more unit-tested, deliberately designed approach to our JavaScript code, we’ve been playing with the notion of using dependency injection to let us abstract, and mock, JavaScript AJAX server calls. What we’ve done is create a JavaScript object that handles making AJAX server calls. The AJAX server call object has one method that takes a URL, data to post, a success function handler, and an error function handler.

[javascript]
//definition of our real AJAX Server Caller
function AJAXServerCaller()
{
    //this creates a public method that can be called by our code
    this.MakeServerCall = function(url, data, successHandler, errorHandler)
    {
        var dataCleaned = JSON.stringify(data);
        $.ajax({
            type: "POST",
            url: url,
            data: dataCleaned,
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(result)
            {
                successHandler(result.d);
            },
            error: function(xhr, ajaxOptions, thrownError)
            {
                if (errorHandler != null)
                    errorHandler(thrownError);
                else
                    alert('An Error Has Occurred');
            }
        });
    }
}
[/javascript]

We house our server caller object in an AJAXServerCaller.js file, and this file is included on any page whose JavaScript uses the object to make AJAX server calls. To make an AJAX server call, an instance of the AJAX server calling object is passed into a method as a parameter.
[javascript]
//example usage of ajax server caller

//function to handle successful return of AJAX call
function UpdateCustomerSaved(serverData)
{
    //...do something to UI after ajax call completes successfully
}

//function to handle error result of AJAX call
function DisplayErrorCustomerSave(errorData)
{
    //...do something to UI to notify of error
}

function SaveCustomerData(customerData, ajaxServerCaller)
{
    //actual AJAX processing handed off to our server caller object
    ajaxServerCaller.MakeServerCall('http://portal/customerSave.aspx?SaveCustomer', customerData, UpdateCustomerSaved, DisplayErrorCustomerSave);
}

//Click Handler calls save method, creates and passes in server caller instance
function CustomerSaveClickHandler(event)
{
    var data = GetCustomerData();
    SaveCustomerData(data, new AJAXServerCaller());
}
[/javascript]

When we’re testing any JavaScript that uses our AJAX server call object, we simply include a file called ‘AJAXServerCallerMock.js’ containing an object with the exact same signature as our real AJAX server calling object, plus extra properties for fake data to be returned by a service call. Under test, the instance that gets created is our mock AJAX server caller, and JavaScript logic that relies on server calls can be tested without actually hitting a server.

[javascript]
//definition of our mock AJAX Server Caller, must have same object name and
//also the MakeServerCall method defined to act in place of real object
function AJAXServerCaller()
{
    //properties to hold fake data to return
    //and indexes so we can do multiple calls in one test
    this.SuccessResultDataArray = new Array();
    var usedMockSuccessResultIndex = 0;

    //we'll allow an error to be programmed on a specific call
    this.ErrorOnCall = 0;
    this.ErrorResultData = null;
    var totalCallCount = 0;

    //this public method fakes the real server call and passes back preprogrammed data
    this.MakeServerCall = function(url, data, successHandler, errorHandler)
    {
        totalCallCount++;
        //act as if an error was returned if programmed to do so
        if (this.ErrorOnCall == totalCallCount)
        {
            errorHandler(this.ErrorResultData);
        }
        //otherwise call the success method with data matched to the
        //index of the call and increment the call index
        else
        {
            if (this.SuccessResultDataArray.length > usedMockSuccessResultIndex)
            {
                successHandler(this.SuccessResultDataArray[usedMockSuccessResultIndex]);
                usedMockSuccessResultIndex++;
            }
            else
            {
                throw 'no success data available at index:' + usedMockSuccessResultIndex;
            }
        }
    }
}
[/javascript]

One issue with our approach is that our mock AJAX server call in testing is not exactly like the real AJAX call would be since the mock object is not asynchronous. Our mock caller simply fires the functions passed to handle the ajax server result with the programmed mocked return data. This hasn’t been a large issue, but might be problematic down the line. Another issue could also arise if we need to support complex return scenarios with our mock, like multiple return values, or checking the number of calls made. We will have to build a more complicated mock AJAX caller if we run into situations that require more complex mock object programming.
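If the synchronous behavior of the mock ever becomes a problem, one way to close the gap would be to have the mock defer its callbacks with setTimeout so they fire on a later tick, just like a real AJAX response. A minimal sketch, with the AsyncMockServerCaller name being hypothetical rather than something from our codebase:

```javascript
//hypothetical async variant of the mock: the success callback fires on a
//later tick, like a real AJAX call, instead of synchronously inside MakeServerCall
function AsyncMockServerCaller()
{
    this.SuccessResultData = null;

    this.MakeServerCall = function(url, data, successHandler, errorHandler)
    {
        var self = this;
        //defer the callback so the caller's code after MakeServerCall
        //runs first, just as it would with a genuine async request
        setTimeout(function()
        {
            successHandler(self.SuccessResultData);
        }, 0);
    }
}
```

Tests using this variant would need to assert inside a deferred callback rather than immediately after the call, which is exactly the asynchrony the real caller imposes.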

[javascript]
//example testing using QUnit taking advantage of ajax server caller

var glbSuccess = false;

//function to handle successful return of AJAX call
function UpdateCustomerSaved(serverData)
{
    glbSuccess = true;
}

test("Test Save Customer Data calls server", function() {

    //--if AJAXServerCallerMock.js is included for the test this should be our mock
    var ajaxCaller = new AJAXServerCaller();

    //--program mock return value for one expected call
    ajaxCaller.SuccessResultDataArray[0] = "{result:0}";

    //--Act
    var data = GetCustomerData();
    SaveCustomerData(data, ajaxCaller);

    //--Assert - check everything is ok with the ui
    ok(methodToCheckCustomerInGoodState());
});

[/javascript]

Between unit testing and adding dependency injection into our JavaScript, things are already beginning to look and feel a lot more organized. My hope is this will allow us to produce higher-quality JavaScript code in less overall time. We shall see!

Unit Testing Javascript

Recently the project I’ve been working on has been moving more and more towards client side JavaScript, especially using jQuery. We have been pretty consistent on the server side, especially in our business logic, in creating unit tests. We have over 1,000 unit tests in our business tier leveraging NUnit and Rhino Mocks, run every time a check-in is done, using CruiseControl for continuous integration. Pretty standard stuff for server side logic.

Since we are using more and more JavaScript, why not reap the quality benefits of good unit test coverage here as well? Considering JavaScript is non-typed and allows running strings as code through the ‘eval’ function, it would seem even more important to have good unit test coverage of our JavaScript. Up to now we’ve ignored it since it’s been considered UI and not really logic. With more and more AJAX integration this no longer seems like a good practice.

For jQuery-driven functionality we have been using the jQuery $.ajax method to call down to ASP.NET page methods located in web forms we created to support the jQuery functionality. ASP.NET page methods are essentially web service calls located in a web form. The calls have to be static and must be decorated with the [WebMethod] attribute. Here is a great blog post going into detail about exactly how to call an ASP.NET page method from jQuery. We are essentially doing the same thing. We’ve actually wrapped our $.ajax call so we can centralize error handling and abstract away the ‘.d’ return value you get when the page method does its JSON serialization.
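The ‘.d’ unwrapping our wrapper performs boils down to a check like this sketch (UnwrapPageMethodResult is an illustrative name, not our actual helper):

```javascript
//ASP.NET page methods serialize the return value as {"d": <value>};
//this helper hands back the contents of .d when present, or the result untouched
function UnwrapPageMethodResult(result)
{
    if (result != null && result.d !== undefined)
    {
        return result.d;
    }
    return result;
}
```

Callers can then work with the actual return value and never see the serialization detail.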

This post is about testing, however, so enough with how we’re using jQuery AJAX. From a testing perspective, what we wanted was to achieve the same test coverage in our jQuery and JavaScript as we have in our server side logic. A little googling and what did I find but qUnit! The motif is almost the same as writing tests for NUnit, and when you run your tests qUnit provides default CSS so test results are shown in an HTML page, much like how the NUnit client shows its test results.

The great thing is how easy this is to accomplish. All you need to do is write tests in a .js file that exercise the code you wish to test, using the qUnit syntax. Then, in an HTML page, include jQuery, qUnit, the code you are testing, and your test code. In the HTML provide the nodes that qUnit will want to write to and you’re done. Here’s a very simple HTML example:

<html xmlns="http://www.w3.org/1999/xhtml">
   <head>
      <title>QUnit Test Example</title>
      <!-- css from qUnit to make output look good -->
      <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" media="screen">
      <!-- jQuery and qUnit include -->
      <script type="text/javascript" src="/scripts/jquery-1.4.1.js"></script>
      <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
      <!-- javascript you are testing include -->
      <script type="text/javascript" src="/scripts/ControlScript.js"></script>
      <!-- tests you wrote include -->
      <script type="text/javascript" src="/scripts/ControlScriptTests.js"></script>
   </head>
   <!-- nodes qUnit will write to -->
   <body>
      <h1 id="qunit-header">QUnit Test Suite</h1>
      <h2 id="qunit-banner"></h2>
      <div id="qunit-testrunner-toolbar"></div>
      <h2 id="qunit-userAgent"></h2>
      <ol id="qunit-tests"></ol>
   </body>
</html>

Here’s the code I’m testing:

[javascript]
function SetupButton(buttonID, txtBoxID, displayText)
{
    $("#" + buttonID).click(function() {
        $("#" + txtBoxID).attr("value", $("#" + txtBoxID).attr("value") + ' ' + displayText);
    });
}
[/javascript]

And here are the tests I’m running:

[javascript]
test("SetupButtonTest()", function() {

    var btn = document.createElement("input");
    btn.id = "btn";
    btn.type = "button";
    document.body.appendChild(btn);

    var txt = document.createElement("input");
    txt.id = "txt";
    txt.type = "text";
    document.body.appendChild(txt);

    SetupButton("btn", "txt", "disp");
    $("#btn").click();
    equals($("#txt").attr("value"), " disp", "text box has display value: " + $("#txt").attr("value"));
});

test("SetupButtonTest2()", function() {

    var btn = document.createElement("input");
    btn.id = "btn";
    btn.type = "button";
    document.body.appendChild(btn);

    var txt = document.createElement("input");
    txt.id = "txt";
    txt.type = "text";
    document.body.appendChild(txt);

    SetupButton("btn", "txt", "disp");
    $("#btn").click();
    equals($("#txt").attr("value"), "disp", "text box has display value: " + $("#txt").attr("value"));
});
[/javascript]

As you can see, qUnit provides a pretty straightforward way to unit test the now more complex JavaScript we’re writing in our project (the above is just an example, not actual code from our project). The next step is to integrate qUnit into our continuous integration. We would like our JavaScript tests to run with every check-in, just like our server side tests do. Here is a post on how to do it; it seems a little complicated. I will give it a try and put up a post on the results.

Hopefully we will be able to bring the benefits of good unit testing coverage out from just our server code and apply it to the more and more complex client side code we’re creating. After all, why would it not be good practice to test code just because it’s JavaScript? It seems even more important to test in a language that is non-typed and allows run-time execution of strings as code.

Modeling a Persistent Command Structure

The team I work with came up with a great way of encapsulating business transactions into commands that require no data persistence, data persistence, or transactional data persistence. We were able to create a fairly simple structure by using generics to allow our return types to vary and using class constructors to pass in parameters. In this fashion we were able to create a business command framework to use in a predictable and testable manner in our business layer. The basic structure is defined below:

(Diagram: Command Class Structure)

To use the CommandBase class, an implementation simply overrides Execute and does its business. Execute is in fact marked abstract, so an implementation must do this. A constructor is also created taking whatever data the command will need to do its work. To use the PersistenceCommandBase class, an implementation instead overrides the method ‘Execute(IDataContext dataContext)’. PersistenceCommandBase itself implements the ‘Execute()’ method defined in CommandBase: it handles the creation and disposal of the data context in ‘Execute()’ and then calls the abstract ‘Execute(IDataContext dataContext)’ method.

The logic needing persistence uses the passed-in dataContext. If the ‘Execute()’ method is called on the implementation, the base class handles creating and cleaning up the dataContext, so the implementer can ignore all the messy details of creating and disposing a dataContext. The ‘Execute(IDataContext dataContext)’ method can also be invoked directly, passing in a dataContext already in use. Either way, the concrete implementation of PersistenceCommandBase does not have to worry about where the IDataContext came from. Below is what the PersistenceCommandBase looks like:

    public abstract class PersistenceCommandBase<T> : CommandBase<T>, IPersistenceCommand<T>
    {
        public override T Execute()
        {
            using (IDataContext data = DataContextFactory.MakeDataContext())
            {
                return this.Execute(data);
            }
        }

        public abstract T Execute(IDataContext dataContext);
    }

Essentially we are using a template-method approach to centralize the creation and release of persistence contexts in our application. We took the same approach and created a TransactionalCommandBase, which looks a lot like the PersistenceCommandBase except it handles the details of starting and committing or rolling back a transaction. It looks like this:

    public abstract class TransactionalCommandBase<T> : PersistenceCommandBase<T>
    {
        public override T Execute()
        {
            using (IDataContext data = DataContextFactory.MakeDataContext())
            {
                try
                {
                    data.StartTransaction();
                    T result = this.Execute(data);
                    data.Commit();
                    return result;
                }
                catch
                {
                    data.Rollback();
                    throw;
                }
            }
        }

        public override T Execute(IDataContext dataContext)
        {
            throw new NotImplementedException("The method or operation is not implemented.");
        }
    }

The thing I love about this setup is the creator of a business transaction only has to decide what level of data persistence is required and then create a class extending the appropriate base class. Developers can focus on creating and testing their business logic and let the base class handle the data context details.

Another bonus to our structure is we can create aggregate commands, that is, commands that use other commands. Once we have a data context, we can simply pass it to the ‘Execute(dataContext)’ method of another command. In this manner we can create transactional commands that wrap non-transactional commands and have them still enlisted in our transaction. Below is an example:

    /// <summary>
    /// Non Transactional Save Command
    /// </summary>
    public class SaveCustomer : PersistenceCommandBase<Customer>
    {
        private Customer _customer = null;

        public SaveCustomer(Customer customer)
        {
            this._customer = customer;
        }

        public override Customer Execute(IDataContext dataContext)
        {
            dataContext.Save(this._customer);
            return this._customer;
        }
    }

    /// <summary>
    /// Class using non transaction command in transaction
    /// </summary>
    public class SaveManyCustomers : TransactionalCommandBase<List<Customer>>
    {
        private List<Customer> _customers = null;

        public SaveManyCustomers(List<Customer> customers)
        {
            this._customers = customers;
        }

        public override List<Customer> Execute(IDataContext dataContext)
        {
            for(int i = 0; i< this._customers.Count; i++)
            {
                SaveCustomer saveCommand = new SaveCustomer(this._customers[i]);
                this._customers[i] = saveCommand.Execute(dataContext);
            }

            return this._customers;
        }
    }

We’ve run into some issues and have made some adjustments that I’ll address in subsequent posts. The first issue is testing. Since we’re passing what are essentially parameters to our commands in their constructors, it makes for an interesting testing situation. We’ve adopted a command factory approach so we can abstract the creation of the commands. Only the factory creates commands, so we can test that the factory calls are made correctly.

Another issue we’ve run into is how to handle rollbacks in transactions that are not triggered by an exception. What happens if we want our command to just roll back? We’ve also had to address what to do if a transactional command is called directly with a dataContext that is not in a transaction. It seems that a transactional command should always run in a transaction, even if the dataContext it is passed is not in one. Interesting issues I look forward to addressing here soon.

More or Less Types

One debate that seems to arise in many of the projects I work on is at what level to create types. For instance, if you have a Customer object, does the Customer object look like this?

    public class Customer
    {
        public string CompanyName
        { get; set; }

        public string FirstName
        { get; set; }

        public string LastName
        { get; set; }

        public string Address1
        { get; set; }

        public string Address2
        { get; set; }
        
        public string City
        { get; set; }

        public string State
        { get; set; }

        public string ZipCode
        { get; set; }

        public string BusinessPhone
        { get; set; }

        public string HomePhone
        { get; set; }

        public string CellPhone
        { get; set; }
   }

or like this?

   public class Customer
    {
        public string CompanyName
        { get; set; }

        public Name CustomerName
        { get; set; }
        
        public Address BusinessAddress
        { get; set; }
        
        public Phone BusinessPhone
        { get; set; }

        public Phone HomePhone
        { get; set; }

        public Phone CellPhone
        { get; set; }
   }

I am a fan of having more types than fewer. The second Customer implementation lets me think in terms of Name, Address, and Phone objects and will guide developers to use the same structure for these objects throughout the system. Without these smaller types you can have addresses with no Address2 field in some places but not others, or Names with middle initials on some classes but not others. This is all fine until these objects need to share data and their data doesn’t conform.

I suppose the downside is you have more class files to maintain, which really isn’t a downside at all. To use the smaller objects you have to conform to their rules, which is only a downside if the system doesn’t have any consistent rules; if that’s the case, I’d wonder about the system design. Some would argue you should never have any primitive types on your business objects, that everything should be its own class. This would nicely abstract and encapsulate, but is it overkill?

At some point a class has to have primitives. In our Customer class the Address object will be made up of string properties; should we create types for Address1, Address2, etc.? Some would argue you should, but at some point the data will be stored in a primitive. So the real question in my mind is where to stop typifying everything. What I tend to advise is to have top-level business objects that rarely have primitives, but allow their support objects to be made up of them. This forces system-wide structure on support objects without getting into objectify-everything overkill, at least in my mind. I’d be open to using more types, but would be uncomfortable with a less type-driven approach.

Another plus to a more type-driven approach is you can make use of operator overloads. This allows casting of one type to another, with the adaptation logic implemented in the casting operator. For instance, you can cast a string to a Phone type, with the validation logic implemented in the casting operator. Consumers can then simply assign a string to a Phone variable.