Wednesday, December 31, 2008

My 100th blog post for 2008

I'm just in time for my 100th blog post for this year. As I mentioned last year, I think this is my absolute ceiling when it comes to writing blog posts, but you never know. My kids are growing up, so maybe I'll have more time on my hands next year. Who knows?

Because it is a yearly tradition of mine to make a complete idiot of myself on the Internet, I've practiced a small country dance this week.

All that's left for me is to wish you, my dear reader, a happy and successful new year. May 2009 be the year that all your dreams come true.

Hello Real DDD World

I'm really looking forward to the next Dutch ALT.NET meeting where Yves Goeleven will discuss DDD practices while having a look at a real production application. I think we'll have some great discussions. The meeting will be held on January 27th in Mechelen from 18:00 till 21:00.

Drop by if you are in the neighborhood :-).

Hello LINQ in .NET 2.0

When using Visual Studio 2008, it is possible to use most of the new C# 3.0 language additions in a .NET 2.0 project. This is because Visual Studio uses the C# 3.0 compiler for .NET 2.0, .NET 3.0, and .NET 3.5 projects alike. This means that local variable type inference, object initializers, extension methods, lambda expressions, etc. can all be used in a .NET 2.0 project.

The one thing that is missing, however, is LINQ itself, because those extensions are packaged in the new System.Core assembly, which only ships with .NET 3.5.

However, last week I stumbled upon LINQBridge, which enables you to write LINQ to Objects queries targeting .NET 2.0. The only requirements are Visual Studio 2008 and a reference to the LINQBridge assembly. Note that the current implementation does not support LINQ providers other than LINQ to Objects (no LINQ to XML and certainly no LINQ to SQL).

The following code is written for targeting the .NET 2.0 runtime.

public class Actor
{
    public String FirstName { get; set; }
    public String LastName { get; set; }
    public Int32 ShoeSize { get; set; }
}
 
public class Program
{
    static void Main()
    {
        var actors = new List<Actor>()
        {
            new Actor() { FirstName = "Chuck", 
                          LastName = "Norris", 
                          ShoeSize = 46 },
            new Actor() { FirstName = "Adam", 
                          LastName = "Sandler", 
                          ShoeSize = 41 },
            new Actor() { FirstName = "Steven", 
                          LastName = "Seagal", 
                          ShoeSize = 48 }              
        };
 
        var actorsWithBigFeet = from actor in actors
                                where actor.ShoeSize > 45
                                select actor;
 
        foreach(Actor bigfoot in actorsWithBigFeet)
        {
            Console.WriteLine("{0} {1} has shoe size {2}.", 
                              bigfoot.FirstName, 
                              bigfoot.LastName, 
                              bigfoot.ShoeSize);
        }
 
        Console.Read();
    }
}

So if you are stuck with .NET 2.0 like me, life isn't all that bad either ;-). Kudos to the project owners for making this possible for us poor developers!

Don't Sell Out on the Context, Dude

I've been reading a lot of code lately. When I'm doing this, I find it very important to have some unit tests that make it easier for me to comprehend the actual production code. For that to work, the unit tests have to be very readable.

Something I see quite a lot in the projects I'm involved with is what I call 'Betrayal of Context'. In short, it means that some BDD-style unit tests (specifications) don't actually belong to the context that has been set up for them. This results in unit tests that are more verbose than they should be, which makes them harder to read.

As a piece of code can say more than a thousand words, let me show you the simplest example I could think of.

public class ApplicationRequest
{
    private ApplicationRequestStatus Status { get; set; }
 
    public Boolean IsApproved()
    {
        return ApplicationRequestStatus.Approved == Status;
    }
 
    public void ApproveUsing(IStrictRegulation 
                                     strictRegulation)
    {
        if(ApplicationRequestStatus.Pending != Status)
            throw new InvalidOperationException("Oeps");
 
        if(strictRegulation.Complies(this))
        {
            Status = ApplicationRequestStatus.Approved;
        }
        else
        {
            Status = ApplicationRequestStatus.Rejected;
        }
    }
}
 
public enum ApplicationRequestStatus
{
    Pending = 0,
    Approved = 1,
    Rejected = 2
}
 
public interface IStrictRegulation
{
    Boolean Complies(ApplicationRequest request);
}

What we have here is an utterly useless domain that handles application requests, but it will do for our example. In order to get approved, an application request needs to comply with some strict regulations.

Now that we've gotten acquainted with the subject-under-test, let me show you an example of what I consider 'Betrayal of Context'.

[TestFixture]
[Category("ApplicationRequestTestFixture")]
public class When_approving_an_application_request
    : InstanceSpecification<ApplicationRequest>
{
    protected override void Establish_context()
    {
        StrictRegulationStub = MockRepository
            .GenerateStub<IStrictRegulation>();
 
        StrictRegulationStub.Stub(strictRegulation => 
             strictRegulation.Complies(null))
            .IgnoreArguments()
            .Return(true);
    }
 
    [Test]
    public void Then_it_should_be_approved_if_it_meets_strict_regulations()
    {
        SUT.ApproveUsing(StrictRegulationStub);
        Assert.That(SUT.IsApproved());    
    }
 
    [Test]
    public void Then_it_should_be_rejected_if_it_does_not_meet_strict_regulations()
    {
        StrictRegulationStub.BackToRecord();
        StrictRegulationStub.Stub(strictRegulation 
            => strictRegulation.Complies(null))
            .IgnoreArguments()
            .Return(false);
 
        SUT.ApproveUsing(StrictRegulationStub);
        Assert.That(SUT.IsApproved(), Is.False);    
    }
 
    [Test]
    [ExpectedException(typeof(InvalidOperationException))]
    public void Then_an_exception_should_be_thrown_if_its_status_is_not_pending()
    {
        SUT.ApproveUsing(StrictRegulationStub);
        SUT.ApproveUsing(StrictRegulationStub);    
    }
 
    protected override ApplicationRequest Create_subject_under_test()
    {
        return new ApplicationRequest();
    }
 
    private IStrictRegulation StrictRegulationStub 
    { get; set; }
}

I don't know about you, but I have issues with this code. Two out of three specifications have nothing to do with the context set up in the Establish_context method. In fact, the second unit test needs to redo the entire setup for the stub object. All too often I see this happening, which is bad for my heart (at least that's what my doctor keeps telling me :-) ). By organizing unit tests this way, we are also missing out on the 'Because' goodness I'll show you later on.

What disturbs me the most is that these 'shortcuts' add clutter, which makes the specifications less readable than they should be. That's what our craft is all about: communicating! Not only with the compiler, but most importantly with the poor fellow who comes after you (in this case, me!) and needs to understand what you have been doing.

So let us refactor these specifications and put them in the right context.

public abstract class behaves_like_an_application_request_that_meets_strict_regulations
    : InstanceSpecification<ApplicationRequest>
{
    protected override void Establish_context()
    {
        StrictRegulationStub = MockRepository
            .GenerateStub<IStrictRegulation>();
 
        StrictRegulationStub.Stub(strictRegulation 
            => strictRegulation.Complies(null))
            .IgnoreArguments()
            .Return(true);
    }
 
    protected override void Because()
    {
        SUT.ApproveUsing(StrictRegulationStub);
    }
 
    protected override ApplicationRequest Create_subject_under_test()
    {
        return new ApplicationRequest();
    }
 
    protected IStrictRegulation StrictRegulationStub 
    { get; set; }    
}
 
[TestFixture]
[Category("ApplicationRequestTestFixture")]
public class When_approving_a_pending_application_request_that_meets_strict_regulations
    : behaves_like_an_application_request_that_meets_strict_regulations
{
    [Test]
    public void Then_it_should_get_approved()
    {
        Assert.That(SUT.IsApproved());
    }
}
 
[TestFixture]
[Category("ApplicationRequestTestFixture")]
public class When_approving_an_application_request_that_is_not_pending
    : behaves_like_an_application_request_that_meets_strict_regulations
{    
    [Test]
    [ExpectedException(typeof(InvalidOperationException))]
    public void Then_an_exception_should_be_thrown()
    {
        SUT.ApproveUsing(StrictRegulationStub);    
    }
}
 
[TestFixture]
[Category("ApplicationRequestTestFixture")]
public class When_approving_a_pending_application_request_that_does_not_meet_strict_regulations
    : InstanceSpecification<ApplicationRequest>
{
    protected override void Establish_context()
    {
        _strictRegulationStub = MockRepository
            .GenerateStub<IStrictRegulation>();
 
        _strictRegulationStub.Stub(strictRegulation 
            => strictRegulation.Complies(null))
            .IgnoreArguments()
            .Return(false);
    }
 
    protected override void Because()
    {
        SUT.ApproveUsing(_strictRegulationStub);
    }
 
    [Test]
    public void Then_it_should_not_get_approved()
    {
        Assert.That(SUT.IsApproved(), Is.False);
    }
 
    protected override ApplicationRequest Create_subject_under_test()
    {
        return new ApplicationRequest();
    }
 
    private IStrictRegulation _strictRegulationStub;
}

Oh no, you've started out with only one test fixture and now you've got three of them and one base class? How can this be better? Well, I think it is better.

Notice how the context setup code that leaked into the specifications is now moved to where it belongs, namely the Establish_context method where everything is arranged.

By putting each specification in the right context (which is represented by a test fixture), I've been able to act on the subject-under-test in a single reusable method named 'Because'. This saves a lot of copy/paste kung fu when we have a real-world scenario with more than one specification per context.

Also notice that the specifications themselves are now reduced to a single line of code that only asserts the outcome. The fact that there is no more than a single line of code assures me that I can't get the specifications any simpler than that, which makes everything very comprehensible.

I've moved some common context code into a base class. This is probably overkill for this simple example, but I wanted to show this because it can be a life saver whenever the complexity starts to increase.

Anyway, some last advice for 2008: stay faithful to your context.

Sunday, December 21, 2008

Migrating a Versionable ASMX Web Service to WCF

Creating a versionable ASMX Web Service is something that was really hard to do in .NET 1.1, mostly because it involved a lot of work and discipline. Creating versionable services has become quite easy with WCF because this is an out-of-the-box feature. But what about those web services you already created and that are being used by possibly dozens of applications? Are you stuck with those pesky ASMX Web services or is it possible to easily move them to WCF without much effort? As it turns out, you can replace your old ASMX web services with WCF even without the need to change or recompile the client software applications.

First, let's talk about how I've been developing versionable ASMX web services in the past. After that, I'll show you how to easily migrate an ASMX web service to WCF.

Versionable ASMX Web Services in .NET 1.1

For building versionable ASMX web services, I've been using XML messages with a version number that indicates the particular edition of a message. These XML messages are composed by a service agent component, which is responsible for providing a strongly typed interface to the consuming applications and for mapping those types to their respective XML representation. The service agent then makes a call to the ASMX web service, after which it translates/validates the received XML response back into a strongly typed representation.

When the ASMX web service receives a message from a service agent, it first extracts the version number and then sends it to an appropriate message handler for that particular version. The message is then translated back to an object representation after which the requested action is executed.

I agree that this is a lot of work, but it turned out very well in a .NET 1.1 environment. It adds the tremendous benefit of being able to change the contract (= new version) of the ASMX web service without having to change any of the client applications. Regression tests are certainly desirable before releasing a new version, in order to verify that the most recent changes didn't break anything for the older contracts.
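
The server-side dispatching described above could be sketched roughly like this. Note that the message format, the version attribute, and the handler classes are my own assumptions for illustration, not the original implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

public interface IMessageHandler
{
    String Handle(String xmlRequest);
}

public class OrderMessageHandlerV1 : IMessageHandler
{
    public String Handle(String xmlRequest)
    {
        // Translate the XML message back to objects and
        // execute the requested action here.
        return "<Response version=\"1\" />";
    }
}

public class MessageDispatcher
{
    // One message handler per supported message version.
    private readonly Dictionary<Int32, IMessageHandler> _handlers =
        new Dictionary<Int32, IMessageHandler>
        {
            { 1, new OrderMessageHandlerV1() }
        };

    public String Dispatch(String xmlRequest)
    {
        // Extract the version number from the incoming message.
        XmlDocument document = new XmlDocument();
        document.LoadXml(xmlRequest);
        Int32 version = Int32.Parse(
            document.DocumentElement.GetAttribute("version"));

        IMessageHandler handler;
        if(!_handlers.TryGetValue(version, out handler))
            throw new NotSupportedException(
                "No handler for message version " + version);

        return handler.Handle(xmlRequest);
    }
}
```

The important point is that adding a new contract version only means registering an extra handler; the older handlers keep serving the older clients untouched.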

This simple code sample illustrates how a service agent might work:

public class ServiceAgent
{
    private readonly ServiceCredentials _serviceCredentials;
 
    public ServiceAgent(ServiceCredentials serviceCredentials)
    {
        _serviceCredentials = serviceCredentials;
    }
 
    public ProcessingResult ProcessOrder(Order order)
    {
        OrderXmlRequestMapper xmlRequestMapper = 
            new OrderXmlRequestMapper();
        String xmlRequest = xmlRequestMapper.MapFrom(order);
 
        String xmlResponse = String.Empty;
        using(AsmxService service = ServiceProxyFactory.
            CreateServiceProxy(typeof(AsmxService), 
                               _serviceCredentials))
        {
            xmlResponse = service.ProcessOrder(xmlRequest);
        }
 
        ProcessingResultXmlResponseMapper xmlResponseMapper = 
            new ProcessingResultXmlResponseMapper();
        ProcessingResult result = 
            xmlResponseMapper.MapFrom(xmlResponse);
 
        return result;
    }
}

And this is some sample code for an ASMX web service:

[WebService(Namespace = "http://www.jvr.be/AsmxService")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class AsmxService : WebService
{
    [WebMethod(MessageName = "ProcessOrder")]
    public String ProcessOrder(String xmlRequest)
    {
        // Convert the XML request back to objects
 
        // Process the order
        Debug.Write("ProcessOrder on ASMX service called.");
 
        // Return an XML response
        String xmlResponse = "This is a versioned response.";
        return xmlResponse;
    }
}

Notice how dutifully a meaningful namespace for the web service and a message name for the web method are provided. This is something I've always considered a best practice, and it turns out to have some real benefits here.

Anyway, enough is enough. Let's see how we can replace such an ASMX web service with a WCF service.

Migrating from ASMX web services to WCF

When migrating to a WCF service, we obviously want the existing client applications to keep working with the service agents. New applications can use a WCF client proxy for communicating with a new contract of the WCF service. This way we can use the versioning technology of WCF for future releases. Existing applications can gradually move to a WCF client proxy as well, but at their own pace.

The whole setup tricks the service agents into believing that the ASMX web service is still there, while it has actually been replaced with a WCF service that provides an extra service contract mimicking the old ASMX web service for backwards compatibility. Let's see how we can do this in code.

First we define a service contract for the old ASMX web service.

[ServiceContract(Namespace = "http://www.jvr.be/AsmxService")]
public interface IOldAsmxService
{
    [OperationContract(
        Action = "http://www.jvr.be/AsmxService/ProcessOrder")]
    String ProcessOrder(String xmlRequest);
}

Notice the namespace provided by the ServiceContract attribute and the action for the OperationContract. This is what the new service contract looks like:

[ServiceContract]
public interface IWcfService
{
    [OperationContract]
    ProcessOrderResponse ProcessOrder(ProcessOrderRequest request);
}

Implementing the concrete service class is fairly straightforward:

[ServiceBehavior(Namespace = "http://www.jvr.be/AsmxService")]
public class WcfService : IWcfService, IOldAsmxService
{
    public ProcessOrderResponse ProcessOrder(
        ProcessOrderRequest request)
    {
        // Process the order
        Debug.Write("ProcessOrder of IWcfService called.");
 
        return new ProcessOrderResponse();
    }
 
    public String ProcessOrder(String xmlRequest)
    {
        // Map XML request to a ProcessOrderRequest 
        // (in a separate mapper class!!)
 
        // Process the order
        Debug.Write("ProcessOrder of IOldAsmxService called.");
        var response = ProcessOrder(new ProcessOrderRequest());
 
        // Map ProcessOrderResponse to a XML response 
        // (in a separate mapper class!!)
        return "Some mapped XML response";
    }
}

Notice that the service method that supports the contract of the old ASMX web service delegates its call to the new method. You can also do this the other way around if you want.

The first step to make this all work is to add a new file with an .asmx extension that contains the following line:

<%@ ServiceHost Language="C#" Debug="true" Service="Jvr.WcfService" %>

The next step is to add the following configuration settings to the web.config file of the WCF service.

<system.web>
    <compilation debug="false">
        <buildProviders>
            <remove extension=".asmx"/>
            <add extension=".asmx" 
        type="System.ServiceModel.Activation.ServiceBuildProvider, 
        System.ServiceModel, Version=3.0.0.0, Culture=neutral, 
        PublicKeyToken=b77a5c561934e089" />
        </buildProviders>
    </compilation>
</system.web>

Make sure to create an endpoint for both the old and the new service contracts. You have to use basicHttpBinding for the old service contract. Now you can replace your old ASMX service with a shiny new WCF service, all without breaking any client applications.
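
A minimal sketch of such an endpoint configuration might look like this. The service/contract namespaces and relative addresses are assumptions for illustration; adapt them to your own hosting setup:

```xml
<service name="Jvr.WcfService">
    <!-- The new contract: free to evolve using WCF versioning -->
    <endpoint address=""
              binding="wsHttpBinding"
              contract="Jvr.IWcfService" />
    <!-- The old ASMX-compatible contract: basicHttpBinding is required
         so the wire format matches what the service agents expect -->
    <endpoint address="asmx"
              binding="basicHttpBinding"
              contract="Jvr.IOldAsmxService" />
</service>
```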

Thoughts? Flames? Anything? Please let me know.

Saturday, December 13, 2008

Dutch ALT.NET Meeting - 11 December 2008

Again lots and lots of interesting discussions during the recent Dutch ALT.NET meeting. My employer kindly lent us a nice meeting room to facilitate us geeks. Again, muchos gracias for that. Unfortunately, a number of people couldn't make it but we did manage to talk about some AAA and BDD technicalities. The code we produced can be downloaded here. I added a copy of the tests that use the automocking container instead of plain MockRepository plumbing.

For our next gathering, Yves Goeleven will enlighten us with some real-world DDD and perhaps even DDDD stuff. I'm really looking forward to that one.

Hope to meet you there next year.

Learning about StructureMap

I've been playing around with StructureMap for the last couple of days and I must say that I'm really impressed. I had no trouble getting up to speed quickly with this amazing IoC container. Jeremy claims that StructureMap is actually one of the first IoC containers in the .NET space. Although it has been around for a long time, somehow it has always been under (or above) my radar until my colleague Peter pointed out that I should take a look. I've always been a Castle Windsor fan boy, but using StructureMap for the past couple of days challenged my assumptions about IoC containers.

Anyway, I'm not going to repeat all of the good stuff that has already been written about StructureMap. Reading this post (also take a look at the linked articles) and watching these Dime Casts should get you started fairly quickly.

With this post I'm going to show you some of my personal favorite features.

1/ No explicit configuration required.

Suppose I have a simple message handler class with two dependencies. The usual Castle Windsor configuration looks like this:

_container.Register(
    Component.For<IUserRepository>()
        .Named("UserRepository")
        .ImplementedBy<UserRepository>(),
 
    Component.For<ILdapStore>()
        .Named("ActiveDirectory")
        .ImplementedBy<ActiveDirectory>(),
 
    Component.For<CreateUserMessageHandler>()
        .Named("CreateUserMessageHandler")
);
 
var messageHandler =  
    _container.Resolve<CreateUserMessageHandler>();

The StructureMap configuration for the same scenario looks like this:

ObjectFactory.Initialize(registry =>
{
    registry.ForRequestedType<IUserRepository>()
        .TheDefault.Is.OfConcreteType<UserRepository>();
 
    registry.ForRequestedType<ILdapStore>()
        .TheDefault.Is.OfConcreteType<ActiveDirectory>();
});
 
var messageHandler = 
    ObjectFactory.GetInstance<CreateUserMessageHandler>();

Notice that StructureMap doesn't require you to register the CreateUserMessageHandler class! It's a small detail, but a nice one.

2/ Diagnostics

If you want to provide a unit test that checks whether the configuration of the container is valid, you can use the following method:

ObjectFactory.AssertConfigurationIsValid();

If something is wrong, a StructureMapConfigurationException is thrown. It doesn't get any easier than this.

3/ Profiles and Contextual Binding

Using profiles in StructureMap, you can basically switch container configuration based on a context. Here is a code sample that illustrates this concept:

ObjectFactory.Initialize(registry =>
{
    registry.CreateProfile("ActiveDirectory")
        .For<ILdapStore>().UseConcreteType<ActiveDirectory>();
    registry.CreateProfile("Fedora")
        .For<ILdapStore>().UseConcreteType<Fedora>();
});
 
ObjectFactory.Profile = "ActiveDirectory";
var ldapStore = ObjectFactory.GetInstance<ILdapStore>();
Assert.That(ldapStore, Is.TypeOf(typeof(ActiveDirectory)));
 
ObjectFactory.Profile = "Fedora";
ldapStore = ObjectFactory.GetInstance<ILdapStore>();
Assert.That(ldapStore, Is.TypeOf(typeof(Fedora)));
 

With a single line of code you can switch configuration. Again, quite easy.

4/ Custom Instance Creation

Castle Windsor provides a FactorySupport facility which enables you to create your own instances for a certain type. StructureMap provides the same option through its fluent interface:

ObjectFactory.Initialize(registry =>
{
    registry.InstanceOf<ILdapStore>()
        .Is.ConstructedBy(() => new OpenLdap("DC=Jan,DC=BE"));
});
 
var ldapStore = ObjectFactory.GetInstance<ILdapStore>();

The fluent interface of StructureMap makes this very readable.

5/ Auto Mocking

Ayende provided an implementation of an auto mocking container in his Rhino Tools repository that uses Castle Windsor. StructureMap provides a similar API straight out of the box. No separate downloads required. You can read this article for more information. I know that there are numerous people out there who resist the idea of an auto mocking container, but it simply makes setting up mock objects and the subject-under-test less tedious.
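
For what it's worth, here is a rough sketch of what that looks like, assuming the RhinoAutoMocker from the StructureMap.AutoMocking assembly and reusing the CreateUserMessageHandler example from above (the exact API may differ between versions):

```csharp
// A RhinoAutoMocker creates the subject-under-test and injects
// Rhino Mocks stubs for all of its constructor dependencies.
var autoMocker = new RhinoAutoMocker<CreateUserMessageHandler>();

// Individual dependencies remain available for stubbing return
// values or for asserting interactions.
IUserRepository userRepository = autoMocker.Get<IUserRepository>();
ILdapStore ldapStore = autoMocker.Get<ILdapStore>();

// The subject-under-test comes out fully wired up.
CreateUserMessageHandler messageHandler = autoMocker.ClassUnderTest;
```

No hand-rolled constructor plumbing in every Establish_context method, which is exactly the tedium it removes.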

Conclusion

As I mentioned before, I've been a Castle Windsor fanatic for quite some time now. The sweetness of StructureMap puts me into a mind struggle about which IoC container I like the most. To me, one of the most compelling features of Castle Windsor is its extensibility with custom facilities. I didn't notice any equivalent feature in StructureMap. I could be wrong, however, so please let me know if it does have some form of add-on capabilities. On the other hand, the ease of use and well-thought-out API of StructureMap make it a serious candidate as well. Anyone on my current team could start using it in no time. That's a very important aspect too.

Anyway, if you are looking for an IoC container, make sure to take StructureMap into account as well. It's the small subtleties in the API that make it so nice to use.

Tuesday, December 02, 2008

Type Analyzing Blogs

Ayende blogged about this web site, which you can use to analyze the text on your blog (and those of others). It makes some guesses about the personality of the person(s) who write on the particular blog.

Turns out that both my personal blog and ElegantCode yield the same results:

INTJ - The Scientists

The long-range thinking and individualistic type. They are especially good at looking at almost anything and figuring out a way of improving it - often with a highly creative and imaginative touch. They are intellectually curious and daring, but might be physically hesitant to try new things.
The Scientists enjoy theoretical work that allows them to use their strong minds and bold creativity. Since they tend to be so abstract and theoretical in their communication they often have a problem communicating their visions to other people and need to learn patience and use concrete examples. Since they are extremely good at concentrating they often have no trouble working alone.

Till next time

Monday, December 01, 2008

WCF and Multiple IIS Site Bindings

We ran into an issue last week when we were deploying a WCF service on an IIS web site which had multiple IIS bindings. It manifested itself by throwing the following exception:

This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

Turns out that multiple IIS site bindings result in multiple base addresses, while WCF (.NET 3.0) only supports a single base address per scheme in this scenario. Fortunately, there are a couple of solutions to bypass this annoying shortcoming.

First, you can provide a filter for the base addresses by providing a custom ServiceHostFactory:

public class InvoiceServiceHostFactory 
    : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost
        (Type serviceType, Uri[] baseAddresses)
    {
        Uri preferredHostBaseAddress = 
            Settings.PreferredHostBaseAddress;
        return base.CreateServiceHost(serviceType, 
                                      preferredHostBaseAddress);
    }    
}

This custom ServiceHostFactory can now be used by specifying it in the .svc file like so:

<%@ ServiceHost Language="C#"
                Debug="true"
                Service="InvoiceService.InvoiceService"
                Factory="InvoiceService.Wcf.InvoiceServiceHostFactory"
                CodeBehind="InvoiceService.svc.cs" %>

This has the major disadvantage that requests can only be received by a single base address, which we retrieve from the configuration file in this case. To solve this issue, we can tweak our custom ServiceHostFactory like so:

public class InvoiceServiceHostFactory 
    : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost
        (Type serviceType, Uri[] baseAddresses)
    {
        return base.CreateServiceHost(serviceType, 
                                      new Uri[0]);
    }    
}

Now we have to provide a base address for each WCF binding in the configuration file:

<service name="InvoiceService.InvoiceService">
    <endpoint 
        address="http://huppeldepup.be/InvoiceService/InvoiceService.svc" 
        binding="wsHttpBinding" 
        contract="InvoiceService.IInvoiceService" />
    <endpoint 
        address="http://huppeldepup.com/InvoiceService/InvoiceService.svc" 
        binding="wsHttpBinding" 
        contract="InvoiceService.IInvoiceService" />
</service>

This way, we can start receiving requests through multiple bindings, which is what we wanted in the first place. This approach has one disadvantage though: whenever a corresponding IIS binding changes, we also have to remember to check the configuration of every WCF service that lives inside the respective IIS web site. This is error prone and involves friction.

If you are using .NET 3.5, however, then this is your lucky day. The WCF that ships with .NET 3.5 supports baseAddressPrefixFilters.

<serviceHostingEnvironment>
    <baseAddressPrefixFilters>
        <add prefix="http://huppeldepup.be/"/>
        <add prefix="http://huppeldepup.com/"/>
    </baseAddressPrefixFilters>
</serviceHostingEnvironment>

There you go. Too bad this feature is not available in .NET 3.0. Until you switch to .NET 3.5, you'll have to roll your own solution, I'm afraid.

Till next time.