OnPropertyChanged Shenanigans

The WPF application I’m working on at the moment has functionality to display multiple chart windows in a main application window page. The chart windows can be resized and moved around inside their window. I wanted to take the resizing and positioning behavior and abstract it so I could apply it to other types of child windows besides just charts.

The application structure is based on an MVVM approach, so this meant taking the positional pieces of a ‘ChartViewModel’ and abstracting them into a ‘PositionalViewModelBase’ that the ‘ChartViewModel’, and any other view models backing views that are placed and manipulated on a page, would inherit from.

To keep the view models agnostic of their views, and to allow positioning changes driven by business logic instead of just user interaction, the ‘INotifyPropertyChanged’ interface is implemented by the ‘PositionalViewModelBase’ type. What goofed me up was that I had to override the positional properties in the ‘ChartViewModel’, because the ‘ChartViewModel’ data is backed by a persistence class specific to the ‘ChartViewModel’. I wanted to keep the ‘PropertyChanged’ event code in the base class though, so I came up with the below, using just Width as an example:

//--base Positional class behavior.
public virtual double Width
{
    get { return this._Width; }
    set
    {
        this._Width = value;
        this.OnPropertyChanged("Width");
        SetParentDirty();
    }
}
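For context, the ‘OnPropertyChanged’ call above lives in ‘PositionalViewModelBase’ as well. The post doesn’t show that plumbing, but a minimal sketch of the standard pattern it presumably follows would be something like this (the real class may well differ):

//--assumed shape of the base class plumbing; requires System.ComponentModel
public abstract class PositionalViewModelBase : INotifyPropertyChanged
{
    private double _Width;    //--backing field for the virtual Width property shown above

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    protected virtual void SetParentDirty()
    {
        //--flag the owning page/parent as needing a save; not shown in the post
    }
}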


//--chart view model override
public override double Width
{
    get
    {
        return this.Model.Width;
    }
    set
    {
        base.Width = value;        //--What turned out to be the issue
        this.Model.Width = value;
    }
}

I set the property in both the actual data class and the base; I do this so the base can handle setting the parent dirty and firing the changed event. However, I goofed. My chart windows were freaking out when I resized them. After investigating many conspiracy theories and trying to find ways to blame someone else, I realized my error. First, I didn’t have any unit tests confirming the behavior, I had just assumed it worked.

After adding a unit test I realized that when the PropertyChanged event was fired, the property still showed the old value. The test led me quickly to spot my error. I was setting the base value, and causing the PropertyChanged event to fire, before updating the actual data store. Any code fired by the PropertyChanged event that wanted the new value actually got the old one, since I had not updated the data store yet. D’oh!

    
public override double Width
{
    get
    {
        return this.Model.Width;
    }
    set
    {
        this.Model.Width = value;    //--setting the data store first fixed the problem
        base.Width = value;
    }
}
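For what it’s worth, here is a rough sketch of the kind of unit test that caught this, assuming NUnit and a ‘ChartViewModel’ that can be constructed directly; the real test in the project no doubt looks different:

using NUnit.Framework;

[TestFixture]
public class ChartViewModelWidthTests
{
    [Test]
    public void Width_Setter_Raises_PropertyChanged_After_Store_Is_Updated()
    {
        var vm = new ChartViewModel();       //--hypothetical: assumes a parameterless constructor
        double observedWidth = 0;

        vm.PropertyChanged += (s, e) =>
        {
            if (e.PropertyName == "Width")
            {
                observedWidth = vm.Width;    //--read back through the property inside the handler
            }
        };

        vm.Width = 640.0;

        //--the handler should have seen the new value, not the stale one
        Assert.AreEqual(640.0, observedWidth);
    }
}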

PropertyChanged-driven behavior can be tricky to debug since the flow of control is not immediately evident. This is why testing it is even more important, and hopefully the lesson I have learned here is to test first, and to test better. As usual, I didn’t save time by not testing adequately, I lost time.

Making the UI Notice a Change to an ObservableCollection Item

I’ve been doing a lot of WPF work lately. I’m getting my head more and more around binding, especially binding to ObservableCollections. It’s interesting how the binding in XAML automagically detects additions and deletions to a collection and updates the UI appropriately. However, I ran into an issue where the update was not to the collection itself, but to an item in the collection.

As an example, suppose you had the ItemsSource of a ListBox bound to an ObservableCollection of ‘Coach’ objects.

<ListBox x:Name="lbCoaches"
         VerticalAlignment="Stretch"
         HorizontalAlignment="Stretch"
         DockPanel.Dock="Top"
         ItemsSource="{Binding Path=Coaches}"
         SelectedItem="{Binding Path=ActiveCoach}" >
    <ListBox.ItemTemplate>
        <DataTemplate>
            <StackPanel Orientation="Horizontal">
                <TextBlock Text="{Binding Path=LastName}"></TextBlock>
                <TextBlock Text="{Binding Path=FirstName}"></TextBlock>
                <TextBlock Text="{Binding Path=Age}"></TextBlock>
            </StackPanel>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

If you add, delete, or move a coach in the collection, no problem, the ListBox items are updated. If you change a coach’s last name, nothing. I realized that in order to get the binding to update I had to force the UI to know the ObservableCollection had changed. I initially tried firing OnPropertyChanged on the collection property, but that did not seem to do anything. What I realized was that I needed to allow changes to the items themselves to be noticed. I could have put a data trigger in place to do this, and I may yet go that route; I think for large lists this is the way to go. For my purposes, since my lists were small, I just cleared and rewrote the lists, hence forcing the UI to realize the collection had changed. Thanks to LINQ I can do this in very few lines.

//--Coaches is the ObservableCollection backing the ListBox.
Coaches[0].LastName = "Thompson";
var newList = Coaches.ToList();
Coaches.Clear();
newList.ForEach(item => Coaches.Add(item));

I think the data trigger is probably the right way to go. The above code will probably confuse people reviewing it and feels like a hack to me. Glad I thought about it; I’ll probably change it now that I was forced to think it through for this post.
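For completeness (the post doesn’t go this route): the other commonly used way to let the UI notice item-level changes is to have the item class itself implement INotifyPropertyChanged, so the per-item bindings in the DataTemplate update without rebuilding the collection. A rough sketch of what a ‘Coach’ written that way might look like, assuming the shape implied by the bindings above:

//--sketch only; requires System.ComponentModel
public class Coach : INotifyPropertyChanged
{
    private string _lastName;

    public string LastName
    {
        get { return _lastName; }
        set
        {
            _lastName = value;
            OnPropertyChanged("LastName");   //--the TextBlock bound to LastName refreshes itself
        }
    }

    //--FirstName and Age would follow the same pattern

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}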

Why ORM Should Be Used, At Least Most Of The Time

Every so often a new project starts up near me, and this project requires interacting with a database. Someone who has used an ORM (Object Relational Mapping) tool in the past suggests using it again. Whether it is nHibernate, LLBLGen Pro or Entity Framework, the person must have had a somewhat positive experience, or they wouldn’t suggest using it again.

Almost as certainly, someone who has traditionally put CRUD and some business logic in stored procedures, and has not used an ORM tool in the past, fights the suggestion to use the ORM tool. They most likely have gotten good at writing and tweaking stored procedures and had a positive experience with that approach, or they would not fight against the ORM tool. As usual, there is no right or wrong way, just different approaches that may be more appropriate for the task at hand.

Why Use An ORM Tool:

The reason I suggest that ORM tools are the right choice most of the time is that most projects I see have aggressive timelines and limited development resources. To that end, the ORM tool lets me write a lot less code to do the same amount of work. Take the two methods below: one gets all customers using Entity Framework 4, and the other gets the list manually using a SqlConnection, SqlCommand and SqlDataReader:

/// <summary>
/// Method using ORM To Get Customers
/// </summary>
/// <returns></returns>
public List<Customer> GetAllCustomersEF()
{
    using (NorthwindEntities nwDB = new NorthwindEntities())
    {
        return (from cus in nwDB.Customers
                select cus).ToList();
    }
}

/// <summary>
/// Method using straight SQL to get all customers
/// </summary>
/// <returns></returns>
public List<Customer> GetAllCustomersSQL()
{
    var retList = new List<Customer>();

    string cmdText = "SELECT * FROM CUSTOMERS";

    using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings[0].ConnectionString))
    using (SqlCommand cmd = new SqlCommand(cmdText, conn))
    {
        conn.Open();
        using (SqlDataReader dataRD = cmd.ExecuteReader())
        {
            while (dataRD.Read())
            {
                retList.Add(CreateCustomerFromDataReader(dataRD));
            }
        }
    }

    return retList;
}

/// <summary>
/// Method To Create Customer From DataReader record
/// </summary>
/// <param name="dataRD"></param>
/// <returns></returns>
private static Customer CreateCustomerFromDataReader(SqlDataReader dataRD)
{
    Customer customer = new Customer();
    customer.Address = dataRD["Address"].ToString();
    customer.City = dataRD["City"].ToString();
    customer.CustomerID = dataRD["CustomerID"].ToString();
    customer.CompanyName = dataRD["CompanyName"].ToString();
    customer.ContactName = dataRD["ContactName"].ToString();
    //.. repeat until all fields full

    return customer;
}

As you can see above, there is a lot less code required using Entity Framework 4. The data class, Customer, is actually code-genned for me, and the projection into the object from the database is also done for me. All I have to do is write a query using LINQ syntax.
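To give a feel for how little changes when requirements grow, here is a sketch (not from the original post) of adding a simple filter and sort against the same hypothetical NorthwindEntities context; only the LINQ expression changes, assuming the Northwind Customers table’s Country and CompanyName columns:

/// <summary>
/// Sketch: get customers filtered by country, sorted by company name.
/// </summary>
public List<Customer> GetCustomersByCountryEF(string country)
{
    using (NorthwindEntities nwDB = new NorthwindEntities())
    {
        return (from cus in nwDB.Customers
                where cus.Country == country
                orderby cus.CompanyName
                select cus).ToList();
    }
}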

To get the data using a SqlCommand I have to write the query in SQL, either inline (as is done above) or in a stored procedure. I then have to iterate through the results and fill a Customer class of my own making, so I have to create and maintain the code for projecting the data from the database into my objects.

The above is a very simple case; add filtering or prefetching of data in the object graph and the amount of extra code I have to write in the straight SQL solution grows quickly compared to the Entity Framework 4 solution. Below is possible code for grabbing all customers along with their orders.

/// <summary>
/// Method using ORM To Get Customers With Orders
/// </summary>
/// <returns></returns>
public List<Customer> GetAllCustomersWithOrdersEF()
{
    using (NorthwindEntities nwDB = new NorthwindEntities())
    {
        return (from cus in nwDB.Customers.Include("Orders")
                select cus).ToList();
    }
}

/// <summary>
/// Method using straight SQL to get all customers with their orders
/// </summary>
/// <returns></returns>
public List<Customer> GetAllCustomersWithOrdersAllDataSQL()
{
    var retList = new List<Customer>();

    string cmdText = "SELECT * FROM CUSTOMERS"
                     + " LEFT JOIN ORDERS "
                     + " ON CUSTOMERS.CustomerID = ORDERS.CustomerID ORDER BY CUSTOMERS.CustomerID";

    using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings[0].ConnectionString))
    using (SqlCommand cmd = new SqlCommand(cmdText, conn))
    {
        conn.Open();
        using (SqlDataReader dataRD = cmd.ExecuteReader())
        {
            Customer customer = null;
            string customerID = "";
            while (dataRD.Read())
            {
                if (customerID != dataRD["CustomerID"].ToString())
                {
                    if (customer != null)
                    {
                        retList.Add(customer);
                    }
                    customer = CreateCustomerFromDataReader(dataRD);
                    customerID = customer.CustomerID;   //--remember which customer the following rows belong to
                }

                customer.Orders.Add(CreateOrderFromDataReader(dataRD));
            }
            if (customer != null)
            {
                retList.Add(customer);
            }
        }
    }

    return retList;
}

private Order CreateOrderFromDataReader(SqlDataReader dataRD)
{
    Order retOrder = new Order();
    //--...fill order fields from the reader

    return retOrder;
}

/// <summary>
/// Method To Create Customer From DataReader record
/// </summary>
/// <param name="dataRD"></param>
/// <returns></returns>
private static Customer CreateCustomerFromDataReader(SqlDataReader dataRD)
{
    Customer customer = new Customer();
    customer.Address = dataRD["Address"].ToString();
    customer.City = dataRD["City"].ToString();
    customer.CustomerID = dataRD["CustomerID"].ToString();
    customer.CompanyName = dataRD["CompanyName"].ToString();
    customer.ContactName = dataRD["ContactName"].ToString();
    //.. repeat until all fields full

    return customer;
}

To add the prefetch of Orders in the straight SQL solution, I have to write code to project into my Customer, then check each row to see when the customer changes, and then add orders to the current customer. This supposes I do not mind that the customer data is duplicated in the result set for each order returned for the same customer. If I want to optimize data transfer, things get even more complex.

Even in a fairly simple example I have a lot less code to write and maintain using the ORM solution. When the database schema changes, this means I have less code to update to accommodate the change. If the ORM is statically typed, as Entity Framework 4 and LLBLGen Pro are, then I also get the benefit of seeing at compile time where my code breaks when the schema is changed. The less code I have to write by hand and alter when I make changes to my data model the better, and the easier those changes are to find the better.

ORM tools have not always been so friendly to data model changes. nHibernate uses an XML mapping file, or used to. Up until Entity Framework 4, synchronizing changes to your schema into code was not that straightforward. LLBLGen Pro code-generates its data layer, so it handles changes to the schema pretty well, but it was hard to learn to use. Even given these earlier drawbacks, I would still argue they allow more rapid development when limited resources are available.

Also, ORM tools allow you to rely on the expertise of those who wrote them when it comes to creating the SQL that is actually run on the database.  If you have a DBA and are confident that you can tweak your own SQL effectively this isn’t an issue.  More and more, however, I run into projects where no one has a very deep SQL background.

Why Use Straight SQL:

None of this is to say that straight SQL does not have its place. I always use stored procedures in reporting-oriented applications, as their needs are heavily data-processing intensive. In circumstances where performance is the highest priority, and timeline and available resources are not an issue, the straight SQL solution is better suited. If you know what you are doing you can achieve higher performance with your own code. However, you have to know what you are doing; writing your own data layer does not guarantee better performance.

Conclusion:

As the title suggests, I believe an ORM tool is the best choice most of the time. Most projects have short time frames and limited resources. Where performance is the absolute top priority, or the application has heavy data processing tasks, a straight SQL approach may make sense. Even in these cases I might suggest a hybrid approach, where straight SQL is used only where required, so areas of the application that don’t require heavy data processing or top performance are easier to build and maintain.

Restful Services, Who Needs A WSDL Anyway?

I remember back when I first started working with web services how happy I was that .NET created its own client proxies. Hitting any web service that exposed a WSDL was a snap in .NET; I didn’t need to know anything about SOAP, or even XML for that matter. All I needed was to create my trusted web reference, and off I went consuming web services both near and far.

Later I came to realize that, in .NET 2.0 anyway, all the services and their arguments were created in proxies with their own namespaces, and the proxy arguments had only public fields, not properties, so I couldn’t bind to them. This prompted me to create wrappers around all .NET 2.0 services to transform arguments and return types into common objects and such. Windows Communication Foundation (WCF) came along and took care of all these issues for me. Still, even with the introduction of WCF I was pointing my trusted IDE at a WSDL for the most part, and it was doing its client proxy magic. I still needed no Jedi-like WSDL knowledge, although to configure WCF I now actually did need to know something about XML (especially setting up certificates and error handling).

Now we come to the world of Restful services. No more WSDL, but also no more proxy magic! I can’t point my trusted IDE at a URL and have things done for me. Now I have to actually do things for myself, sort of. It seems most publishers of Restful services (Amazon, Netflix) give you documentation about their API so you can consume their services directly yourself, but they also provide client implementations for popular platforms, so you don’t have to write code against the Restful API yourself if you don’t want to. That’s probably better, since I have more faith in client proxy code created by the publisher of the Restful API than I do in the code generated by my trusty IDE. Plus I can look at and make sense of the communication in an interaction with a Restful API; SOAP was always a mystery, especially when it came to authentication.
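To show what consuming a Restful service “directly yourself” amounts to without any generated proxy, here is a minimal sketch using HttpWebRequest against a made-up URL; real APIs like Netflix’s or Amazon’s also require API keys or signed requests:

using System;
using System.IO;
using System.Net;

class RestClientSketch
{
    static void Main()
    {
        //--hypothetical endpoint; substitute the documented resource URL of the API you are calling
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/catalog/titles/123");
        request.Method = "GET";
        request.Accept = "application/xml";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            //--you get the raw XML/JSON payload back and parse it yourself
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}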

Now, as we start publishing Restful services ourselves on my current project, I’m eager to publish client proxies for those that will consume them. I feel like I’m returning the favor of the Restful services we’re consuming by using client proxies provided by the service itself. We are using WCF to host our Restful services, and that in itself has some interesting nuances. Microsoft publishes a Restful WCF starter kit, which is useful for getting something up that requires very little configuration. We quickly ran into problems when needing to configure our service to use HTTPS and authentication, and ended up abandoning the starter kit’s base service object, WebServiceHost, for the traditional WCF ServiceHost configured with a WebHttpBinding and WebHttpBehavior. Using these allowed us to configure our Restful services with the same XML constructs we used in our SOAP-based WCF services.
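We drove this through XML configuration, but for illustration here is a code-only sketch of the same ServiceHost + WebHttpBinding + WebHttpBehavior combination, using a hypothetical ICoachService contract and a made-up URI:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Web;

[ServiceContract]
public interface ICoachService
{
    [OperationContract]
    [WebGet(UriTemplate = "coaches/{id}")]
    string GetCoach(string id);
}

public class CoachService : ICoachService
{
    public string GetCoach(string id)
    {
        return "coach " + id;    //--placeholder payload
    }
}

class RestHostSketch
{
    static void Main()
    {
        //--plain ServiceHost rather than the starter kit's host type
        var host = new ServiceHost(typeof(CoachService), new Uri("http://localhost:8080/rest"));

        //--WebHttpBinding + WebHttpBehavior give the endpoint its Restful dispatching
        var endpoint = host.AddServiceEndpoint(typeof(ICoachService), new WebHttpBinding(), "");
        endpoint.Behaviors.Add(new WebHttpBehavior());

        host.Open();
        Console.WriteLine("Listening; try GET http://localhost:8080/rest/coaches/1");
        Console.ReadLine();
        host.Close();
    }
}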

All IQueryables Are Not The Same

On my current project we use an Object Relational Mapping (ORM) tool named LLBL Gen Pro. LLBL has a really nice LINQ implementation over it, which allows for writing queries much more naturally and in a strongly typed environment. nHibernate has long had a query language that looks similar to SQL, named HQL (Hibernate Query Language). Unlike LINQ, HQL is weakly typed, so changes to types won’t show up as compile time errors; since LINQ is strongly typed, this is not an issue. When we decided on LLBL Gen Pro it was before nHibernate Linq existed. We liked being able to use LINQ for querying since we could leverage, at least syntactically, what we had learned using LINQ to Objects.

The above is why we chose LLBL, and we have been pleased with the LINQ implementation. When using LINQ, however, it is important to keep in mind who is the daddy of any IQueryable or IEnumerable you are working with.

I recently came across a scenario like the one below, where I retrieved a result set and then further refined it.


//--query against an IQueryable provided by the LLBL entity
var codes = from c in companyQ
            where c.CompanyID == companyID
            from code in c.PromoCodes
            select code;

if (codes.Count() > 0)
{
    //--get the free trial from the result set
    var freeTrial = codes.Where(c => c.CodeType == (int)CodeTypeEnum.FreeTrial).SingleOrDefault();

    //--get the list of prepaids from the result set
    var prePaids = codes.Where(c => c.CodeType == (int)CodeTypeEnum.PrePaid);
}

What I didn’t expect was that the second refinement, which I meant to happen against the result set in memory, actually spawned another query in the database. Once I realized this it made sense, since the LINQ provider for the IQueryable I was working with was the LLBL Gen Pro implementation, and it is supposed to query against the database. To do what I intended, I converted the result set to a List before doing my refining, which meant the refinements ran against the LINQ to Objects provider. This then did what I wanted and queried the data in memory.


//--query against an IQueryable provided by the LLBL entity
var codesFromDB = from c in companyQ
                  where c.CompanyID == companyID
                  from code in c.PromoCodes
                  select code;

//--convert to a list-based IQueryable so
//  further refining is done in memory
var codes = codesFromDB.ToList().AsQueryable();

if (codes.Count() > 0)
{
    //--get the free trial from the result set
    var freeTrial = codes.Where(c => c.CodeType == (int)CodeTypeEnum.FreeTrial).SingleOrDefault();

    //--get the list of prepaids from the result set
    var prePaids = codes.Where(c => c.CodeType == (int)CodeTypeEnum.PrePaid);
}

The lesson I learned here is to keep in mind who owns an IQueryable or IEnumerable when using it, or you might end up with some unexpected side effects.
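If you are ever unsure which provider is going to execute a query, one quick sanity check (a sketch, reusing the hypothetical companyQ above) is to look at IQueryable.Provider before and after the conversion:

//--before ToList(): the LLBL Gen Pro LINQ provider, so operators hit the database
var dbBacked = from c in companyQ
               where c.CompanyID == companyID
               select c;
Console.WriteLine(dbBacked.Provider.GetType().FullName);

//--after ToList().AsQueryable(): System.Linq.EnumerableQuery, so operators run in memory
var inMemory = dbBacked.ToList().AsQueryable();
Console.WriteLine(inMemory.Provider.GetType().FullName);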