Tuesday, February 26, 2008

Single Responsibility Principle

This is a clear and funny violation of the Single Responsibility Principle. That's why I don't understand women :-).

Saturday, February 23, 2008

On Software Factories

Bart Waeterschoot is blogging on IntoFactories.NET. I'm glad he joined the blogosphere. Bart is a very smart guy and I'm looking forward to more of his writing.

Nonetheless, I have to respectfully disagree with some of the content of his latest post, Software Factories - The developer gap.

He starts his case with the fact that

The gap between ‘regular’ developers and ‘extreme’ developers is widening very fast.

This one I do agree with. It's why we need coaching skills so badly. This is one of the things that ALT.NET is all about: showing other developers in the .NET space that there is more out there than just Microsoft. Drinking the kool-aid that the ALT.NET masters are, or should be, laying out for us.

Bart goes on talking about automating best practices. Then he states:

And best of all, you can choose to just use the factory without knowing into detail which technologies and patterns lie underneath.

This seems very dangerous to me. Let me share some wisdom from Keith Pleas:

Protecting developers from the consequences of bad coding encourages bad coding.

And he's right you know. I'm going to make a statement here:

If you don't know the patterns underneath a certain framework or tool, then please, don't use it!

Be it NHibernate, CAB, or whatever: if you don't know what's going on under the hood, you're setting yourself up for failure. If you don't understand the problems it's solving or how it solves them, you are heading for disaster.

If you have someone on your team who is famous for writing the crappiest code on the planet, they will keep screwing things up. No matter how much effort you put into preventing them from messing things up, they will always find a way. Can you blame them? No. Why? Because they don't know any better. If you explain why they are doing things wrong, they can learn. If you still meet resistance after that, then you have other problems to worry about.

You don’t need a full-blown software factory to start bridging the gap. Just start gathering all internal best practices in your company and bundle them in a document, write code snippets, save project templates, ...


You're 200% right on this one. Our team holds a meeting every month with one goal in mind: share the intellectual wealth. We exchange information about the things we find useful and the things we don't. It's like our own private ALT.NET open space conference. We keep track of these practices in a document. These are our guidelines. This is what we as a team want to do in order to constantly improve ourselves, to make progress in what we do as a team. The first thing a new team member does is go through this document so that he can contribute as fast as possible.

The essential point I'm trying to make here is that I don't believe software factories as a methodology are a good thing. I'm a big believer in self-organizing teams (must read!!). Everyone in the team is responsible. As a developer, you're responsible for the design, maintainability and stability of the system you are trying to build. I agree that there are different types of developers out there, which is a good thing. Embrace the diversity of your team, and start utilizing the different talents that you have on board.

Trying to protect developers against their own actions shows that you are failing as a team and that you have trust issues. I recently sat down in a meeting with someone from the central architecture team (yes, this ancient phenomenon still exists). Somewhere during this meeting he mentioned that we couldn't do option X because the developers of the applications using it could not be trusted! How about communicating things and explaining why doing this particular thing for option X is actually bad? Maybe these developers would come up with another solution that is even better.

I definitely like the pragmatic approach Bart is suggesting with code snippets, project templates, etc. I do believe that they are of use on a micro level.

Let me know what you think. No silver bullets, no universal truths.

Thursday, February 21, 2008

How Microsoft is solving yesterday's problems

There's this heated discussion going on at the ALT.NET forum about Unity, a lightweight IOC container coming from P&P (make sure that you pick up this post at ALT.NET Pursefight). I blogged about this in the past here and here, so I'm not going to repeat my standpoint on this matter.

What I do want to mention is the fact that, to me, Microsoft seems to be solving yesterday's problems, not always in a successful way.

While the open-source community is working on solutions for today's problems (e.g. AOP, BDD, ...), Microsoft is following the more established OSS projects (NUnit/MbUnit, NHibernate, Castle Windsor/Monorail, etc.).

As far as I'm concerned, I like some more of this:

[Image: TestingFrameworkOptions]

This shows that there are some people at Microsoft with some common sense.

From Space Shuttle to Software

This article, Richard Feynman, the Challenger Disaster, and Software Engineering, really struck a nerve. It's very well written and well thought out. I'll jump right to the conclusions of this article:

  • Engineering can only be as good as its relationship with management. (Right!)
  • Big design up front is foolish. (Amen, brother)
  • Software has much in common with other engineering disciplines. (Much in common, but not everything.)
  • Reliable systems are built by rigorously tested, incremental bottom-up engineering with an 'attitude of highest quality'. (This sentence is so my motto!)

If you're serious about what you do as a professional software craftsman, then go read this article.

Tuesday, February 19, 2008

DirectoryEntry Close vs Dispose

While I was reading The .NET Developer's Guide to Directory Services Programming yesterday, I came across this passage called Close or Dispose?

There's a class called DirectoryEntry in the System.DirectoryServices namespace which has both a Close and a Dispose method. As it appears, you get different behavior depending on which of these two methods you call to free up its resources.

Now, quoting Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries:

CONSIDER providing method Close(), in addition to the Dispose() method, if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ...

The authors of The .NET Developer's Guide to Directory Services Programming recommend that you always use the Dispose method for releasing a DirectoryEntry object, for the following reasons:

  • Dispose() suppresses .NET finalization, Close() does not.
  • Calling Close() on a DirectoryEntry object enables rebinding of this object to a different object (NOT recommended).
  • In .NET 1.x, calling Dispose() is the only way to prevent significant memory leaks due to bugs in the finalization of a DirectoryEntry object, which cause the underlying COM object never to be released. This was fixed with the release of .NET 2.0.

Needless to say, this is not mentioned in the MSDN documentation. It's also a violation of their own guidelines. What a shame!

Bottom line, always call Dispose on a DirectoryEntry object. Also make sure that you release DirectoryEntry objects that you get from properties or methods, like the Parent or SchemaEntry property. These properties return a new instance of DirectoryEntry every time you call them (being picky: they should have been methods instead). So make sure you keep a reference so you only have to call them once.
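To make this concrete, here's a minimal sketch of what that looks like in code. The LDAP path is just a placeholder; the point is that both the entry and the instance returned by Parent get disposed, and Parent is only called once.

using System;
using System.DirectoryServices;

public static class DirectoryEntrySample
{
    public static void Main()
    {
        // Always Dispose (not Close) a DirectoryEntry when you're done with it.
        using(DirectoryEntry user =
            new DirectoryEntry("LDAP://CN=SomeUser,OU=Users,DC=example,DC=com"))
        {
            // Parent returns a NEW DirectoryEntry instance on every call,
            // so keep a single reference and dispose that one as well.
            using(DirectoryEntry parent = user.Parent)
            {
                Console.WriteLine(parent.Name);
            }
        }
    }
}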

Till next time.

Saturday, February 16, 2008

Resharper 4.0 EAP

Visual Studio 2008 is finally worth the processor cycles of my PC: a first drop of Resharper 4.0 is here! Go get it.

Jan, the Resharper addict

Monday, February 11, 2008

WCF Security

For my current project, I'm involved in writing some intranet services using WCF. For some dogmatic reason, we are not allowed to use the traditional WCF bindings for intranet scenarios, like the NetTcpBinding or the NetNamedPipeBinding. The only downside of using these bindings is the fact that we cannot use IIS (at least until IIS 7.0) for hosting these services. I personally can live with this restriction, as it is very easy to host a WCF service in a Windows Service until Windows Server 2008, which includes IIS 7.0/WAS, becomes available (which should be very soon).
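For what it's worth, self-hosting takes very little code. Here's a minimal sketch, assuming a hypothetical IntranetService implementation and endpoints defined in the configuration file:

using System.ServiceModel;
using System.ServiceProcess;

public class IntranetServiceHost : ServiceBase
{
    private ServiceHost _serviceHost;

    protected override void OnStart(string[] args)
    {
        // Endpoints and bindings are picked up from the application configuration file.
        _serviceHost = new ServiceHost(typeof(IntranetService));
        _serviceHost.Open();
    }

    protected override void OnStop()
    {
        if(null != _serviceHost)
        {
            _serviceHost.Close();
            _serviceHost = null;
        }
    }
}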

Anyway, as this rules out the right tool for the right job, we are stuck with using the HTTP bindings for our intranet services (the WsHttpBinding to be more accurate). Today, I encountered one of the consequences of doing this.

Our tester-on-duty performs his tests with a WinForms sample application that uses one of the intranet services. When the application starts, he makes two calls to the service. Then he leaves the application alone for a while. After an hour or so, he comes back and sends another request to the service. The following error message appears:

Security processor was unable to find a security header in the message. This might be because the message is an unsecured fault or because there is a binding mismatch between the communicating parties. This can occur if the service is configured for security and the client is not using security.

After some reading on the web and some books on WCF, I solved the issue by tweaking the WsHttpBinding configuration like so:

<wsHttpBinding>
  <binding name="customWsHttpBinding">
    <security mode="Message">
      <message negotiateServiceCredential="false"
               establishSecurityContext="false"/>
    </security>
  </binding>
</wsHttpBinding>

The negotiateServiceCredential setting determines whether negotiation takes place between the client and the service to determine a session key for signing and encryption. The establishSecurityContext setting reduces the overhead for repeatable communications between a client and a server by removing the re-authentication of credentials that are supplied by the client.
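The same two settings can also be applied when building the binding in code. A minimal sketch, assuming you construct the WSHttpBinding yourself instead of taking it from configuration:

using System.ServiceModel;

public static class BindingFactory
{
    public static WSHttpBinding CreateCustomWsHttpBinding()
    {
        // The code equivalent of the configuration shown above.
        WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
        binding.Security.Message.NegotiateServiceCredential = false;
        binding.Security.Message.EstablishSecurityContext = false;
        return binding;
    }
}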


Would I have had this problem if I was using one of the bindings that are meant for intranet scenarios? Probably not. The default settings for these bindings are optimized for exactly that scenario, while the default settings for the HTTP bindings are optimized for Internet scenarios.


Using an HTTP binding for intranet scenarios is certainly possible, but as it appears, it involves some pain and configuration hell to get it right. Just use the correct binding (if possible :-) ) for the scenario at hand. It will save you a lot of grief.

Update: It appears that I jumped to conclusions a bit too soon. The thing that was causing this problem was the fact that I was caching the service proxy as a side effect of putting the encapsulating tasks class in a service locator, which caused the security session to time out. I feel stupid that I didn't notice this a bit sooner. My opinion on using the right binding for the scenario at hand still stands. I still believe that the NetTcpBinding or the NetNamedPipeBinding is more appropriate for the services I'm building.

TeamCity 3.1

A new version of TeamCity will soon become available for download. NUnit 2.4 is going to be supported out-of-the-box along with other impressive additions and improvements. Read about it here. Looking good guys, looking good!

Sunday, February 10, 2008

Directory Programming with System.DirectoryServices.AccountManagement

Last month's MSDN Magazine contains a very interesting article titled Managing Directory Security Principals in the .NET Framework 3.5. It's a nice introduction to the classes in the new System.DirectoryServices.AccountManagement namespace.

After reading this article, I was both very enthusiastic about it and a bit frustrated at the same time. First of all, I was very pleased with the new API because using the classes in the System.DirectoryServices namespace is painful to say the least. I feel frustrated because it's included with .NET 3.5, which is not an option at my current employer for many years to come (yeah, I don't understand this either).

A while ago, I needed to programmatically set up some users in Active Directory with an X509 certificate. There was a lot of friction to accomplish this with the familiar DirectoryServices API. It took me a couple of days to figure it out. Needless to say, this is not documented anywhere. One post on the MSDN forums put me on the right track, but still, it was very hard to figure things out.

There's this book called The .NET Developer's Guide to Directory Services Programming (that I'm currently reading) that provides a lot of tips and tricks. But for the certificate issue, it didn't help either.

Solving this with the new API is just a breeze, and it involves very little code that is easy to read, easy to understand, maintainable and scalable (we all want that, right?).
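To give you an idea, here's a minimal sketch of looking up a user with the new API; the domain name and account name are made up:

using System;
using System.DirectoryServices.AccountManagement;

public static class AccountManagementSample
{
    public static void Main()
    {
        // Bind to a domain (the name is a placeholder).
        using(PrincipalContext context =
            new PrincipalContext(ContextType.Domain, "MYDOMAIN"))
        {
            // Look up a user by its account name (also a placeholder).
            UserPrincipal user = UserPrincipal.FindByIdentity(context, "jdoe");
            if(null != user)
            {
                Console.WriteLine(user.DisplayName);
            }
        }
    }
}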

For the next project I'm going to be working on, I again need to do a fair amount of directory programming. I would like to use the new API as it would probably make me more productive and cut a significant amount of the development effort. This way I get to focus on solving the real problems at hand instead of wasting my time on low-level APIs (the same goes for the crappy DAL we are using instead of a decent ORM; that's a whole different story that I'll explain in another post). This means that I want to use the System.DirectoryServices.AccountManagement.dll assembly in a .NET 2.0 application, which should be possible because .NET 3.5 is still built on top of CLR 2.0.

[Image: System.DirectoryServices.AccountManagement]

As you can see, the AccountManagement API is built on top of the existing DirectoryServices and DirectoryServices.Protocols APIs that are available in .NET 2.0. The Protocols API is a low-level API on top of the Windows LDAP subsystem (wldap32.dll), which gives the AccountManagement API some performance benefits because the ADSI COM interop layer is not used for some scenarios.

Anyway, there are some things that you have to watch out for (there's always a catch: no silver bullets, remember). If the AccountManagement API uses some .NET 3.5 specific feature or extension, you're screwed. Above all, you don't really know until you use a certain feature of the AccountManagement API and it gives you a runtime error. You also need to make sure that Service Pack 1 of .NET 2.0 is installed (you can download it separately). This is because there are some additions to the DirectoryServices and DirectoryServices.Protocols APIs that are used by the AccountManagement API.

Again, if you have to do some directory programming, consider using the new AccountManagement API in .NET 3.5. With all the fuss about LINQ, one loses sight of the other great additions to the .NET Framework like this one. The team at Microsoft that delivered these new APIs did a really nice job. Kudos!

Thursday, February 07, 2008

Treat Warnings as Errors

This is one of my pet peeves. It's simply non-negotiable!

[Image: WarningsAsErrors]

I still don't understand why it can be turned off. Heck, I don't understand why it's not turned on by default. I don't even understand why the C# compiler emits warnings instead of errors in the first place. It seems that I don't understand very much, do I? ;-)

If you want to know why you should always enable the Treat Warnings as Errors option, then just read this post from Derik Whittaker who just joined the club.

So, let's get to the order of the day, shall we? Stand up, put up your right hand and repeat after me: I will always treat compiler warnings as errors.
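Under the hood, the option boils down to a single property in the project file. Here's a minimal sketch of the relevant piece of a .csproj (MSBuild) file, which you would typically add to every configuration's PropertyGroup:

<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>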

Now, to make your life a bit easier, you can use John Robbins' most excellent SettingsMaster add-in, which comes with the source code of his book Debugging Microsoft .NET 2.0 Applications (if you don't own a copy of this book, then go get one; you are missing out). This particular add-in adds a button to the toolbar of Visual Studio. Just select a project in the Solution Explorer and press the button. That's it. You're set to go.

[Image: Push the button]

Make it a habit to push that button every time you add a new project to your solution. It will save you a lot of grief in the long run. Trust me, I know. As usual, I found out the hard way.  

Sunday, February 03, 2008

Exploring BDD style specifications as a better TDD

You might think that I suffer from a severe case of acronymitis judging from the title of this post, but the only thing I suffer from right now is a terrible cold. Anyway, I'm currently looking into how a Behavior-Driven Development approach can solve some issues I have with writing unit tests. I first heard about BDD about a year ago, but I never really paid any attention to it until I read a couple of posts about the BDD style of naming contexts/specifications.

There's more to BDD than just its style of naming tests, but I don't have a full picture of it yet. Heck, I'm not even sure if I agree with the whole approach that BDD has to offer.  Anyhow, I did a little spike in order to get me started and I was very pleased with the results. I first made up a very simple user story that I would implement for this sample.

        As a music fanatic I need to be able to
        view which records I own for a specific genre,
        so that I can make a choice of what I want to listen.

I first implemented this user story like I would normally do in a test-first way using the AutoMockingContainer I described in a previous post.

Unit tests

[TestFixture]
public class RecordServiceTestFixture : AutoMockingTestFixture<RecordService>
{
    private const Int64 GenreId = 12;

    [Test]
    public void GetAllRecordsForGenre_WithGenreIdentifier_VerifyInteractionWithGenreRepository()
    {
        using(Record)
        {
            Expect.Call(MockGenreRepository.FindBy(GenreId))
                .Return(CreateGenre());
        }
        using(Playback)
        {
            CreateSubject().GetAllRecordsForGenre(GenreId);
        }
    }

    [Test]
    public void GetAllRecordsForGenre_WithGenreIdentifier_VerifyInteractionWithRecordRepository()
    {
        using(Record)
        {
            Genre genre = CreateGenre();
            SetupResult.For(MockGenreRepository.FindBy(0))
                .IgnoreArguments()
                .Return(genre);
            Expect.Call(MockRecordRepository.GetAllRecordsFor(genre))
                .Return(CreateListOfRecords());
        }
        using(Playback)
        {
            CreateSubject().GetAllRecordsForGenre(GenreId);
        }
    }

    [Test]
    public void GetAllRecordsForGenre_WithGenreIdentifier_VerifyListOfRecordObjectsIsReturned()
    {
        IEnumerable<Record> expectedRecords = CreateListOfRecords();
        Genre genre = new Genre();
        SetupResult.For(MockGenreRepository.FindBy(0))
            .IgnoreArguments()
            .Return(genre);
        SetupResult.For(MockRecordRepository.GetAllRecordsFor(null))
            .IgnoreArguments()
            .Return(expectedRecords);

        using(PlaybackOnly)
        {
            IEnumerable<Record> records =
                CreateSubject().GetAllRecordsForGenre(GenreId);
            Assert.That(records, Is.SameAs(expectedRecords));
        }
    }

    private static Genre CreateGenre()
    {
        return new Genre();
    }

    private static IEnumerable<Record> CreateListOfRecords()
    {
        return new List<Record>();
    }

    private IGenreRepository MockGenreRepository
    {
        get { return Mock<IGenreRepository>(); }
    }

    private IRecordRepository MockRecordRepository
    {
        get { return Mock<IRecordRepository>(); }
    }
}


Subject-under-test 

public class RecordService
{
    private readonly IGenreRepository _genreRepository;
    private readonly IRecordRepository _recordRepository;

    public RecordService(IGenreRepository genreRepository,
                         IRecordRepository recordRepository)
    {
        _genreRepository = genreRepository;
        _recordRepository = recordRepository;
    }

    public IEnumerable<Record> GetAllRecordsForGenre(Int64 genreId)
    {
        Genre genre = _genreRepository.FindBy(genreId);
        IEnumerable<Record> records = _recordRepository.GetAllRecordsFor(genre);
        return records;
    }
}

Take a look at the second and third unit test. Did you notice that these tests contain calls to SetupResult.For? These calls are needed in order to get our tests up and running. These stubs ensure that I get to the particular point in my code that I want to test. But still, they feel like noise to me. They blur the test, even though they are a necessary evil.

With this approach, I also have one test fixture for every subject under test. When my subject has more than one method, all the tests for these methods are assembled in this one test fixture.

With the BDD context/specification naming style, every context is mapped to one test fixture. So, if I have multiple methods on my subject under test, then I'll have multiple test fixtures. The test cases are the specifications themselves.

A fluent language approach is used for naming these contexts/specifications. See for yourself.

[TestFixture]
[Category("RecordServiceTestFixture2")]
public class When_retrieving_all_records_for_a_specific_genre
    : Specification<RecordService>
{
    private const Int64 GenreId = 12;
    private Genre _genre;
    private IEnumerable<Record> _records;

    protected override void Before_each_specification()
    {
        _genre = new Genre();
        _records = new List<Record>();

        SetupResult.For(MockGenreRepository.FindBy(0))
            .IgnoreArguments()
            .Return(_genre);
        SetupResult.For(MockRecordRepository.GetAllRecordsFor(null))
            .IgnoreArguments()
            .Return(_records);
    }

    [Test]
    public void Then_find_the_genre_for_the_specified_id()
    {
        BackToRecord(MockGenreRepository);
        using(Record)
        {
            Expect.Call(MockGenreRepository.FindBy(GenreId))
                .Return(_genre);
        }
        using(Playback)
        {
            CreateSubject().GetAllRecordsForGenre(GenreId);
        }
    }

    [Test]
    public void Then_find_all_records_for_a_specific_genre()
    {
        BackToRecord(MockRecordRepository);
        using(Record)
        {
            Expect.Call(MockRecordRepository.GetAllRecordsFor(_genre))
                .Return(_records);
        }
        using(Playback)
        {
            CreateSubject().GetAllRecordsForGenre(GenreId);
        }
    }

    [Test]
    public void Should_return_a_list_of_records()
    {
        using(PlaybackOnly)
        {
            IEnumerable<Record> records =
                CreateSubject().GetAllRecordsForGenre(GenreId);
            Assert.That(records, Is.SameAs(_records));
        }
    }

    private IGenreRepository MockGenreRepository
    {
        get { return Mock<IGenreRepository>(); }
    }

    private IRecordRepository MockRecordRepository
    {
        get { return Mock<IRecordRepository>(); }
    }
}

This looks a lot cleaner to me. The name of the test fixture now describes the context for the tests. If I have multiple contexts, then I have multiple test fixtures. I still put all these contexts in one code file, which is named <subject-under-test>TestFixture.cs. This way I can use the shortcut CTRL-SHIFT-N of Resharper to locate these. I use the Category attribute of NUnit to group the contexts in the test runner.

The test cases themselves describe the actual behavior of my subject-under-test. With this approach I was able to move the setup code to the Before_each_specification method of the context. I made some minor changes to the code of my base test fixture (which is now called Specification) that leverages the AutoMockingContainer.

public abstract class Specification<TSubject>
    where TSubject : class
{
    private MockRepository _mockRepository;
    private AutoMockingContainer _autoMockingContainer;

    protected AutoMockingContainer AutoMockingContainer
    {
        get { return _autoMockingContainer; }
    }

    protected MockRepository MockRepository
    {
        get { return _mockRepository; }
    }

    protected IDisposable Playback
    {
        get { return MockRepository.Playback(); }
    }

    protected IDisposable PlaybackOnly
    {
        get
        {
            using(Record) {}
            return Playback;
        }
    }

    protected IDisposable Record
    {
        get { return MockRepository.Record(); }
    }

    protected TSubject CreateSubject()
    {
        return _autoMockingContainer.Create<TSubject>();
    }

    protected TDependency Mock<TDependency>()
        where TDependency : class
    {
        return _autoMockingContainer.Get<TDependency>();
    }

    protected TDependency Stub<TDependency>()
        where TDependency : class
    {
        _autoMockingContainer.Mark<TDependency>().Stubbed();
        return _autoMockingContainer.Get<TDependency>();
    }

    protected virtual void Before_each_specification() {}

    protected virtual void After_each_specification() {}

    public void BackToRecord(Object mock)
    {
        MockRepository.BackToRecord(mock);
    }

    [SetUp]
    public void BaseSetUp()
    {
        _mockRepository = new MockRepository();
        _autoMockingContainer = new AutoMockingContainer(_mockRepository);
        _autoMockingContainer.Initialize();
        Before_each_specification();
        CreateSubject();
    }

    [TearDown]
    public void BaseTearDown()
    {
        After_each_specification();
        _autoMockingContainer = null;
        _mockRepository = null;
    }
}

Running the specifications with a test runner gives these results:

[Image: BddTestRun]

This approach results in very readable and concise unit tests. Every unit test describes a specification of the software you're trying to build. It also enables you to focus on a single specification at a time. The obligatory setup code is now banished to the setup method, which reduces the amount of noise and prevents you from having duplicate code in your tests.

For naming the contexts/specifications, I'm using Agile Joe's most excellent BDD macro. There's also a screencast that is very helpful and definitely worth your time if you're serious about using this approach. He explains how to set up the BDD macro and how to use it in Visual Studio.

I really like this approach and I can't wait to start using it in my day-to-day coding efforts.

Friday, February 01, 2008

Using NCover/NCoverExplorer from MSBuild

In my previous post I mentioned how easy it is to incorporate code coverage into TeamCity using NCover/NCoverExplorer. I'm using the NCoverExplorer Extras package that can be downloaded here.

By popular demand, here is a sample from my MSBuild file:

<UsingTask TaskName="NCoverExplorer.MSBuildTasks.NCover"
           AssemblyFile="NCoverExplorer.MSBuildTasks.dll"/>
<UsingTask TaskName="NCoverExplorer.MSBuildTasks.NCoverExplorer"
           AssemblyFile="NCoverExplorer.MSBuildTasks.dll"/>

<Exec Command="regsvr32 /s $(tools_dir)\NCover\CoverLib.dll"/>

<NCover Assemblies="@(CodeCoverage_Assemblies)"
        CommandLineArgs="@(UnitTest_Assemblies, ' ')"
        CommandLineExe="$(tools_dir)\NUnit\nunit-console.exe"
        CoverageFile="$(CodeCoverageResultsXmlFile)"
        LogLevel="Quiet"
        ToolPath="$(tools_dir)\NCover\"/>

<NCoverExplorer CoverageFiles="$(CodeCoverageResultsXmlFile)"
                FailMinimum="false"
                HtmlReportName="CodeCoverage.html"
                OutputDir="$(ProjectPath)"
                ProjectName="$(ApplicationName)"
                ReportType="ModuleClassSummary"
                SatisfactoryCoverage="60"
                ToolPath="$(ToolsDir)\NCoverExplorer\"/>

<Exec Command="regsvr32 /u /s $(toolsdir)\NCover\CoverLib.dll"/>

Still a lot of XML, but it works. The regsvr32 commands before and after the NCover/NCoverExplorer tasks are there to make sure that NCover works correctly. This is because I've put the NCover/NCoverExplorer binaries in my Subversion repository, and the CoverLib.dll file must be registered.