Agile Is More Than A Stand Up

A while ago I worked with a rather large software company that had long used a homegrown, waterfall-esque development process. They had built a custom system around that process and were trying to move to a more ‘Agile’ approach. I believe what drove this was a long history of buggy, late deliverables, along with high turnover and low developer morale.

I was happy to see they were trying to change what was not working and move towards something they thought would help solve their process issues. What’s interesting is that they did this in a very closed environment: they were creating a homegrown version of agile with little input from anyone who had worked with agile before. They started having ‘stand up’ meetings, but since there was little definition of a team, the one stand up meeting included all developers, which made it very long. They also introduced the notion of iterations, but since they had not altered how they managed requirements, tasks were poorly defined, estimates usually ran to multiple days, and no acceptance criteria existed.

I got the sense that because there was a stand up and tasks were in iterations, management felt they were implementing an ‘Agile’ process. I have always felt agile is more about team ownership, inclusion and transparency. I found it ironic to see an ‘Agile’ approach enforced on a group without any of the group’s input, and with no transparency into how the process was being driven.

My experience with agile, which ranges from helping to design and implement a process to participating in processes crafted by others, is that the ones that work create a sense of team ownership and responsibility. A stand up does help with this, and I like stand ups. What is more important, however, is including the team in crafting tasks, so that before a task is assigned the team agrees it is actionable and has clear acceptance criteria. Regular retrospectives, where the team can provide feedback on the process itself, are another great way of creating that sense of ownership.

If you simply impose stand ups and put ill-defined tasks into iterations, you are moving in the right direction. My fear, however, is that those two alone will create little real change or sense of team accountability, and will simply confirm the belief that there is no better way than the homegrown waterfall system currently in place. I hate to see organizations miss opportunities to make real, beneficial change, as these opportunities can be few and far between.

Any Testing Will Do

I have worked on several different projects in the last few years. Often I would encounter code bases that had little or no automated testing in place. Usually this goes hand in hand with regularly late delivery of somewhat buggy code, with the customer ending up being the real tester. My reaction is that the right thing to do is to architect for testability: decouple dependencies from implementations through dependency injection or a similar pattern so functionality can be tested discretely, and add unit tests. I suggest that either test driven development or unit testing during development should be done. I cite studies and articles that back up my position (here are a few: http://devops.sys-con.com/node/2235139, http://collaboration.csc.ncsu.edu/laurie/Papers/Unit_testing_cameraReady.pdf, http://www.infoq.com/news/2009/03/TDD-Improves-Quality). I ask if anyone can produce empirical evidence that less testing is better.

I generally get agreement that unit testing should exist. However, a continuation of the exact same architectures and practices generally follows. What I have since come to realize is that it is not the how but the why that is the issue. I am making concrete recommendations for how to solve a problem in large code bases that already exist. I am also making recommendations that mean developers have to change their practices, which is hard, and often not desired by the developer. If the developers do not want to add unit testing, they won’t. Some refuse out of disagreement. The common arguments are “writing tests slows me down,” or “I do not write buggy code, so I do not need to write tests.” There is some merit to these arguments, and even when there isn’t, the people making them often have no interest in hearing counterpoints. Many developers also do not want to be told how to do something by other developers; they have their own approach and it is good enough for them and their shop. Fair enough.

Where this led me is to take the emphasis off the how. First, I now make clear up front that existing code should be left alone. If it exists and doesn’t have any test coverage, leave it alone. The architecture is most likely not friendly to testing, and your customer, for better or for worse, is already testing it for you. The effort to alter the code to make it testable would most likely be a large, long and, from a management perspective, non-value-add task. Instead I suggest adding minimally to current code bases: add new functionality as if it were greenfield and have your current code bases consume it, with as little coupling as possible.
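The shape of that suggestion can be sketched in a few lines. This is a minimal, hypothetical example (the class and method names are mine, not from any real project): the new functionality lives behind an interface, so it is fully testable on its own, and the legacy code touches it only through that interface.

```python
from abc import ABC, abstractmethod

class PriceCalculator(ABC):
    """The interface the legacy code will depend on."""
    @abstractmethod
    def total(self, amounts): ...

class DiscountCalculator(PriceCalculator):
    """New, greenfield code: no legacy dependencies, easy to unit test."""
    def __init__(self, discount_rate):
        self.discount_rate = discount_rate

    def total(self, amounts):
        return sum(amounts) * (1 - self.discount_rate)

class LegacyOrderScreen:
    """Existing code: its only coupling to the new code is one
    constructor argument typed against the interface."""
    def __init__(self, calculator: PriceCalculator):
        self._calculator = calculator

    def display_total(self, amounts):
        return f"Total: {self._calculator.total(amounts):.2f}"
```

The legacy screen never names the concrete calculator, so the new code can be tested, replaced or mocked without touching the old code base at all.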

When it comes to actual unit tests, I don’t care how testing occurs from an architecture or implementation perspective. I now try to get agreement that we should have some form of automated testing in place to ensure code quality. I then try to get agreement that, to do that, we should enforce a certain level of test coverage on new code in our builds (50%, maybe). If I can get agreement on that, then I quickly codify it in the build server so any new code breaks the build if it has less than 50% test coverage. I leave it up to the developers to decide how to achieve this, but if they happen to achieve it by using a more decoupled architecture and dependency injection, I don’t complain. In this way I am not trying to dictate how they should test, only enforcing what we all agreed on: automated testing of some form should exist.
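As a concrete illustration of codifying that gate, here is a build-step fragment, assuming (purely as an example, since the original doesn’t name a stack) a Python code base tested with pytest and the pytest-cov plugin; the module name is hypothetical.

```shell
# Build-server step: fail the build if coverage on the new module
# drops below the agreed 50% bar. How the tests are written is left
# entirely to the developers.
pytest --cov=new_module --cov-fail-under=50
```

Most build tools and coverage runners in other ecosystems offer an equivalent fail-under threshold, so the same agreement can be enforced regardless of language.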

Someone Recently Asked What Design Patterns I Used, Here’s What I Said.

There are a few design patterns I use frequently, mainly because in business software the problems they are geared at solving come up often. The first is the Command Pattern. This pattern is demonstrably useful in UI operations, as it supports a generic means for undo and redo. When UI operations are implemented as atomic Command classes, their undo and redo operations are explicitly defined in a concrete object and generically invoked, as base Commands, from the stack managing the executed commands. I have also come to find that the Command Pattern is very useful in pure business logic. In environments where distributed transactions are involved, the Command Pattern allows an atomic transaction to be defined as a command in the business logic layer. The benefit of this structure is that atomic Commands can be chained, and composite transaction behavior can be defined once in the base Command class instead of by each operation itself.
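The undo/redo mechanics described above can be sketched as follows. This is an illustrative toy (the “add an item to a list” operation is a stand-in for a real UI action): the stacks only ever see the base Command type, so undo and redo stay generic.

```python
from abc import ABC, abstractmethod

class Command(ABC):
    """Base command: the undo/redo stacks know only this type."""
    @abstractmethod
    def execute(self): ...
    @abstractmethod
    def undo(self): ...

class AddItemCommand(Command):
    """A concrete, atomic UI operation with its undo defined alongside it."""
    def __init__(self, items, item):
        self.items, self.item = items, item

    def execute(self):
        self.items.append(self.item)

    def undo(self):
        self.items.remove(self.item)

class CommandStack:
    """Generic manager of executed commands; never names a concrete command."""
    def __init__(self):
        self._undo, self._redo = [], []

    def run(self, command):
        command.execute()
        self._undo.append(command)
        self._redo.clear()  # a new action invalidates the redo history

    def undo(self):
        if self._undo:
            cmd = self._undo.pop()
            cmd.undo()
            self._redo.append(cmd)

    def redo(self):
        if self._redo:
            cmd = self._redo.pop()
            cmd.execute()
            self._undo.append(cmd)
```

A transactional variant is the same shape: `execute` becomes the commit step and `undo` the compensating action, with chaining and composite behavior handled once in the base class.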

There are two other patterns I use frequently to make code less coupled and more testable. The first, the Factory Pattern, was much more helpful before the wide acceptance of dependency injection frameworks. The Factory Pattern lets me abstract the creation of classes from their consumers. The consumed class becomes an abstraction, an interface, and the management of its creation is removed as a concern from the consumer. From a testing perspective, this allows the injection of mock objects when a consumer is being unit tested and the consumed object has an outside dependency, such as connecting to a database or making a network call. The Factory Pattern also keeps the consumer from being directly coupled to the object it consumes. This looser coupling makes it that much easier to manage changes to the consumed object: as long as changes to the consumed object do not alter the interface the consumer is using, they have no effect on the consumer at all.
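A minimal sketch of that testing scenario, with hypothetical names throughout: the consumer asks the factory for the abstraction, and a unit test swaps the factory’s product for a fake before exercising the consumer.

```python
from abc import ABC, abstractmethod

class CustomerStore(ABC):
    """The abstraction the consumer depends on."""
    @abstractmethod
    def find_name(self, customer_id): ...

class DatabaseCustomerStore(CustomerStore):
    """The real implementation, with an outside dependency."""
    def find_name(self, customer_id):
        raise NotImplementedError("would open a real database connection")

class CustomerStoreFactory:
    """Owns creation, so the consumer never names a concrete class."""
    _product = DatabaseCustomerStore

    @classmethod
    def create(cls) -> CustomerStore:
        return cls._product()

class GreetingService:
    """The consumer: coupled only to CustomerStore, not to any database."""
    def greet(self, customer_id):
        store = CustomerStoreFactory.create()
        return f"Hello, {store.find_name(customer_id)}"

class FakeCustomerStore(CustomerStore):
    """A mock for unit tests; no database required."""
    def find_name(self, customer_id):
        return "Test User"
```

In a unit test, pointing `CustomerStoreFactory._product` at `FakeCustomerStore` lets `GreetingService` be tested without any database at all.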

I use the Singleton Pattern frequently to apply abstraction to objects that do not need to keep state. Static classes cannot implement interfaces, which makes it difficult to mock them when testing their consumers. Singletons allow a way of applying an interface to what would otherwise be a static class. Using a Singleton, instead of creating a separate instance for each call to a class, allows for more efficient memory management, since there is only one instance to manage, and the interface allows the Singleton to be mocked when and if necessary.

Lately I have used the Mediator Pattern. It became useful in the UI work I have done in the context of event handling. I implemented the Mediator Pattern as an event aggregator, so that event publishers and handlers register with, and raise events through, the aggregator instead of referencing each other directly. This helped minimize memory leaks due to improper releasing of event references, and also decreased the coupling that direct event relationships require between classes.
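The aggregator itself is small; a minimal sketch (event names and the string-keyed design are illustrative, not from the original project):

```python
from collections import defaultdict

class EventAggregator:
    """Mediator: publishers and subscribers reference only the aggregator,
    never each other, so no direct event wiring between them is needed."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._handlers[event_name].append(handler)

    def unsubscribe(self, event_name, handler):
        self._handlers[event_name].remove(handler)

    def publish(self, event_name, payload=None):
        # Iterate over a copy so handlers may unsubscribe during notification.
        for handler in list(self._handlers[event_name]):
            handler(payload)
```

Production aggregators often go one step further and hold weak references to handlers, so a forgotten unsubscribe cannot keep a dead view alive, which is the leak scenario mentioned above.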

Importance Of Releasing Often

One practice I have come to believe is important is to have regular releases of software that are reviewed by business stakeholders. By regular I mean every 2–6 weeks. These do not have to be full production releases, but they should go to at least a staging or beta platform that business stakeholders can use as if it were the final product.

Regular releases are important for two reasons.

  1. Business stakeholders get to see and provide feedback on the product long before it’s complete. Anything that is going off track can be identified and fixed earlier, which means inherently less cost than if it were found later.
  2. The development team becomes practiced at releasing user-ready software. If business stakeholders use the released version, it must meet a certain level of usability and quality. If a team only releases once a year, or even less frequently, they are performing a task they do not do often, and most likely do not do well.

We have been releasing every 2–4 weeks on my current project and it is working very well. Through this process we have been able to react very quickly to the requests of our business stakeholders, and we have also become very good at releasing updated versions of our software.

Regular releases also help teach a team to stick to a schedule, since the releases themselves become the backbone of the schedule. In short, I believe it’s helpful to release early and often to the business stakeholders who will eventually decide whether what you’re working on was worth it to them.

There are some caveats to the above. A blog post from another blog about this same topic has a comment pointing out that it might be a bad idea to release early and often for software that has life and death implications. I would agree with this when it comes to shipping products for market use in those fields. I wonder, though, whether it doesn’t still make sense to do frequent releases to QA or internal business stakeholders as a matter of process. For medical or navigation software I completely agree final releases should be thoroughly tested and ready.

Here’s another post from a different blog about releasing early that talks about the pros and cons if you are trying to market your product in a stealth-like manner. It makes the point that getting feedback from people who are not your real potential users may not be helpful. I would agree: many times QA or internal business stakeholders are not the real end users, and their feedback may lead to features and changes the eventual end users won’t like. I still think releasing to internal folks as a proxy for real end users is better than not having any interim releases at all, in that at least you know you are making measurable progress towards a final release. The post brings up some really interesting marketing aspects I’d never really thought about, since I am mostly in the code these days.

Functional Requirements?

[Image: Old Time Andersen Waterfall (the deep V)]

I’ve struggled quite a bit with trying to define the appropriate level of requirements necessary to begin coding an application. Back in my days at Andersen Consulting, their methodology dictated a very formal process, the deep V. Going down into the V, requirements started off with a problem statement, then a scope statement, then functional requirements that were translated into a technical design; at the bottom of the V, implementation occurred based on the technical design. Going back up the other side of the V were testing steps, each meant to test against the document at its level on the opposite side of the V.

The Andersen waterfall looked great on paper, and off I went on my projects, comforted that the V would lead to straightforward design, implementation and testing. What I found was that this did not occur. Maybe if requirements had not changed the waterfall would have worked, and maybe it did on many projects where this was the case. On the projects I worked on, however, it wasn’t.

The situation I encountered often back then, and still encounter today, is that requirements fluctuate. Not only do requirements fluctuate, but it’s only after implementation begins that they really begin to fluctuate. I would find myself frustrated, pointing at the V diagram and protesting that these issues should have already been decided. Slowly I started to realize that many decisions are not made until implementation begins, because it is only then that stakeholders start to see their system taking shape. They rethink decisions they made before they had an actual UI to play with or data on reports. Based on actual things to look at, stakeholders refine their thinking and even outright change their minds. What I have come to accept is that this is how the process really works for any application that is heavily UI driven. Agile development processes have taken this to heart, and I’m sure that if Andersen Consulting still existed today (actually it does, it’s called Accenture now) their process would be far more iterative and would more readily allow for requirement changes.

The one tool I have embraced that helps to rein in fluctuating requirements for heavily UI driven applications is storyboarding.