Considering OAuth For Source Verification Requirement

I’m working on a RESTful API that has some funky security requirements.

  • The first is that it needs to be locked down so only registered users can access it
    • No problem: have users request a token with their credentials via an SSL-secured call to a GetToken endpoint, then have the token passed in a header on all subsequent calls and verify it on each of those calls.
  • The next is that the registered users need to have a license on their machine for our product, and if it is not there they should be denied usage
    • No problem: no license, you are refused.
  • The next is that if the user does not have a license but their call is coming from a particular source then they should be allowed to run our product with no license
    • The only way I can figure to do it is to hand out private keys to each source I have to identify and have them sign something (like their token) so I can verify the source when they call the API, and if it is a source that can run with no license then all is good (see the sketch after this list).
      • I have to do some strange things to get the application to run with no license, but OK, no big deal.
    • A consumer of the API indicated that I did not need to do this, and that if I just used OAuth all would be fine. Based on this I did a little checking into OAuth. It seems like OAuth lets me take tokens that are authenticated by a remote service, like Twitter or Facebook, tack authorization information onto a request token, and go to town. This is all good, although more complicated than the use case I’m dealing with. What I don’t see is a way for me to confirm that a call originated from a specific application. I could be missing something, though, as the OAuth documentation is not all that clear to me.
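To make the private key idea concrete, here is a rough sketch of the signing and verification I have in mind (the names and key handling are my own illustration, not from any spec): the source signs its token with its private key, and the API verifies the signature using the public key registered for that source.

using System;
using System.Security.Cryptography;
using System.Text;

public static class SourceSignature
{
    //--source side: sign the auth token with the source's private key
    public static byte[] SignToken(string token, RSAParameters privateKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(privateKey);
            return rsa.SignData(Encoding.UTF8.GetBytes(token), "SHA256");
        }
    }

    //--API side: verify the signature with the public key registered for the claimed source
    public static bool VerifyToken(string token, byte[] signature, RSAParameters publicKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.ImportParameters(publicKey);
            return rsa.VerifyData(Encoding.UTF8.GetBytes(token), "SHA256", signature);
        }
    }
}

The per-source public keys would live alongside our own credential store, so verifying the source becomes just one more check during token validation.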

Since the API is always authenticated against our own credential store, I don’t see where OAuth would give me much upside in the above scenario. It handles my first two requirements, though in a more complex manner than how I’m already handling them, and it doesn’t seem to satisfy the last requirement for source verification, at least not as far as I can tell.

Asserting REST Functional Tests

I’ve been doing a lot of work with REST services lately. Since I’ve been able to use .NET 4.5 this has meant using WebAPI and, luckily enough, I have been able to put together a nicely decoupled, unit-tested stack (~500 unit tests, 65% code coverage), leveraging dependency injection and WebAPI’s more decoupled nature compared to WCF.

As I’ve been adding functional tests around completed functionality I’ve had to find a quick way to parse the returned JSON into something that can be asserted against in C#. At first I parsed the returned JSON as a string and looked for particular structures in it. It looked something like this:

var requestUri = new Uri("http://localhost/api/students/34");

//--base test class method to prepare web request
PrepareGetService(requestUri);

//--base test class method to execute web request
var result = GetResponse(requestUri);

Assert.IsTrue(result.Contains("\"firstName\":\"John\","), "Bad First Name.");

The above has some obvious problems.

  • FirstName could exist elsewhere in my model
  • The spacing could be different than what I’m expecting in the returned JSON
  • FirstName could be the last field and have no trailing comma
  • If I take the comma off, FirstName could match “Johnny” as well as “John”, etc.
  • There is no guarantee this is a model at all; it could be just that string that is returned

I did a little googling and found that I can parse a JSON object into a C# dictionary fairly quickly:

var resultObj = new JavaScriptSerializer().DeserializeObject(result) as Dictionary<string, object>;

Assert.AreEqual("John", resultObj["firstName"], "Bad First Name.");

This is a little better, as now I know the exact field and value and can compare them discretely. It won’t matter where in the JSON firstName is, and if the response is not deserializable into a dictionary I probably have some badly formatted JSON, which is also good to know.

When I realized I could deserialize into a C# dictionary, it dawned on me that I could just deserialize into the C# model, if I have one. Then this would work:

var resultObj = new JavaScriptSerializer().Deserialize<Student>(result);

Assert.AreEqual("John", resultObj.FirstName, "Bad First Name.");

The above is great as long as the JSON was created by serializing a type in my C# code, in this case Student. This is not always the case, as sometimes the return objects are dynamically created as anonymous types:

public HttpResponseMessage Get(int id)
{
    IRestContext context = _restContextProvider.GetContext(Request);

    Student student = _studentRepository.GetStudent(id);

    return context.CreateResponse(HttpStatusCode.OK, new { firstName = student.FirstName, lastName = student.LastName, pullDate = DateTime.UtcNow });
}

In the above I’ll have no type to deserialize into, but the dictionary should work just fine for this.

In short, parsing strings to check JSON results in my functional tests had some issues, so I went to using explicit types where possible, and a dictionary where I’m getting back anonymous types.

Controller Folder Location Route Constraint

Recently I came across a scenario on my current project where we needed a WebAPI route to only be applicable to controllers in a specific folder. At first I was thinking I would have to build a constraint to only allow a match on a route if the controller was in a specific virtual directory. Then it dawned on me that the controllers aren’t in any virtual directory at all; they are not deployed, they are compiled into the application!

Lucky for me, our folders match our namespace structure. Based on this I was able to create a custom constraint making sure the requested controller had a namespace that ended with the desired folder path. It ends up looking like this:

public class NameSpaceConstraint : IHttpRouteConstraint
{
    private readonly string _nameSpace;

    public NameSpaceConstraint(string nameSpace)
    {
        _nameSpace = nameSpace;
    }

    public bool Match(System.Net.Http.HttpRequestMessage request, IHttpRoute route, string parameterName, IDictionary<string, object> values, HttpRouteDirection routeDirection)
    {
        if (!values.ContainsKey("Controller"))
        {
            return false;
        }

        string controllerName = values["Controller"].ToString();

        Type typeOfController = Assembly.GetExecutingAssembly().GetTypes().FirstOrDefault(x => x.Name == controllerName + "Controller");

        return typeOfController != null && typeOfController.Namespace.EndsWith(_nameSpace, StringComparison.CurrentCultureIgnoreCase);
    }
}

Works well; I just need to define the route with the constraint at startup and all is well:

public static void Register(HttpConfiguration config)
{
    config.Routes.MapHttpRoute(
        name: "FolderName",
        routeTemplate: "api/FolderName/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional },
        constraints: new { NameSpace = new NameSpaceConstraint(".FolderName") }
        );
}

Of course this is assuming that my controller is in the same assembly that is executing my route constraint. If this wasn’t the case I’d have to get a little more crafty looking for types. Not an issue at this point, however.
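If I did have to get crafty, a sketch like the following would scan every loaded assembly instead of just the executing one (GetTypes can throw for assemblies with unloadable types, hence the catch):

Type typeOfController = AppDomain.CurrentDomain.GetAssemblies()
    .SelectMany(assembly =>
    {
        try
        {
            return assembly.GetTypes();
        }
        catch (ReflectionTypeLoadException ex)
        {
            //--keep whatever types did load successfully
            return ex.Types.Where(t => t != null);
        }
    })
    .FirstOrDefault(x => x.Name == controllerName + "Controller");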

Consuming WCF In .NET

Recently I’ve been working on a system that consumes several WCF endpoints that share types and are part of the same larger system. To this end I use an abstracted service factory that provides a wrapper, letting me call against my service interface while ensuring the client is cleaned up correctly, by wrapping calls to the client in what I call a proxy wrapper. I never call nakedly against a WCF client.

I use dependency injection, so where I need to make my WCF calls I inject a service factory, and that provides the proxy wrappers. Code to run against the client is passed in as lambda expressions, and the cleanup code is kept in the proxy wrapper. It looks like this:

public HttpResponseMessage DoSomething(int id)
{
    IProxyWrapper<IService> proxyWrapper = ServiceFactory.GetService<IService>();
    Data data = null;

    //--code to run against the client goes in as a lambda; the wrapper handles cleanup
    proxyWrapper.ExecuteAndClose(x => data = x.DoSomething(id));

    //--context here is assumed to come from the surrounding class
    return ResponseProcessor.CreateResponse(data, context);
}

I thought the above was a good idea because the try/catch around the WCF client, and the WCF client’s lifetime, were no longer a problem for the consumer of the service; the proxy wrapper and service factory would worry about it.
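For context, the guts of ExecuteAndClose would look roughly like this sketch (my own abstraction, assuming a ChannelFactory supplies the client; the point is just the Close/Abort rules):

using System;
using System.ServiceModel;

public class ProxyWrapper<TService> : IProxyWrapper<TService>
{
    private readonly ChannelFactory<TService> _factory;

    public ProxyWrapper(ChannelFactory<TService> factory)
    {
        _factory = factory;
    }

    public void ExecuteAndClose(Action<TService> action)
    {
        TService channel = _factory.CreateChannel();
        var client = (ICommunicationObject)channel;
        try
        {
            action(channel);
            client.Close();
        }
        catch (CommunicationException)
        {
            //--a faulted channel must be aborted, never closed
            client.Abort();
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }
    }
}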

From a testing perspective it was a little weird, because I had to program a service factory mock to return a mock proxy wrapper that would run code against a mock client. It is not hard, and it allows for unit testing, but it is always difficult to explain to someone else; a rough sketch of the mock chain is below.
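Something like this with Moq (IServiceFactory standing in for however the factory is injected):

//--mock client: the thing the lambda ultimately runs against
var clientMock = new Mock<IService>();
clientMock.Setup(c => c.DoSomething(34)).Returns(new Data());

//--mock proxy wrapper: invokes the passed lambda against the mock client
var wrapperMock = new Mock<IProxyWrapper<IService>>();
wrapperMock
    .Setup(w => w.ExecuteAndClose(It.IsAny<Action<IService>>()))
    .Callback<Action<IService>>(action => action(clientMock.Object));

//--mock factory: hands out the mock wrapper
var factoryMock = new Mock<IServiceFactory>();
factoryMock.Setup(f => f.GetService<IService>()).Returns(wrapperMock.Object);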

Recently I ran into a pattern where, instead of using a WCF client directly, a client class is created that wraps the WCF client so the consumer does not know or care whether the functionality is backed by WCF at all. In this pattern the client class handles all the details of WCF, or whatever mechanism is used to communicate, and the caller simply references and calls against the client class directly. The same code above becomes:

public HttpResponseMessage DoSomething(int id)
{
    IServiceClient serviceClient = ServiceFactory.GetService<IServiceClient>();
    Data data = null;

    data = serviceClient.DoSomething(id);

    return ResponseProcessor.CreateResponse(data, context);
}

The above looks a lot more normal to someone not indoctrinated into the intricacies of WCF. It is also easier to test, since I just have to have my service factory return a mock of the client. And if I have to pass data over the service that is hard to serialize, I can hide that problem from my consumer: they pass the data as-is, and the client wrapping class handles serialization behind the scenes.
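The client class itself would be a sketch along these lines (here it reuses the proxy wrapper internally, though it could just as easily own the WCF plumbing directly):

//--hypothetical client class; the consumer only ever sees IServiceClient
public class ServiceClient : IServiceClient
{
    private readonly IProxyWrapper<IService> _proxyWrapper;

    public ServiceClient(IProxyWrapper<IService> proxyWrapper)
    {
        _proxyWrapper = proxyWrapper;
    }

    public Data DoSomething(int id)
    {
        Data data = null;

        //--all the WCF details stay in here, invisible to the caller
        _proxyWrapper.ExecuteAndClose(x => data = x.DoSomething(id));

        return data;
    }
}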

The only downside is that when I create a service I also have to create a client interface and class to wrap calls to that service. Also, this client is only going to be consumable by code the client is shared with, though the same goes for the proxy wrapper, I suppose.

All in all, I think I like the client class that hides all WCF details from the consumer. It makes things easier if I need to change the communication mechanism, since nothing the consumer does is there specifically to accommodate WCF, as in the proxy wrapper where functionality is passed as lambdas to allow it to be encased in error-handling code.

I figured there was a better way than what I was doing, and I think I was right.