Thursday, October 30, 2008

Attending the Kaizenconf

Peter and yours truly arrived in Austin yesterday to attend the Kaizenconf. I've been really looking forward to this. I hope to meet you there. If you see two goofy European guys wandering around, just say hi ;-).

Saturday, October 25, 2008

Refining Context/Specification BDD using Rhino Mocks 3.5

Earlier this year, I wrote this blog post about exploring Behavior-Driven Development as a better way of doing Test-Driven Development. In that post, I talked about organizing unit tests by their context and about applying a fluent-language approach for naming these contexts/specifications. Here is what the example code of the context/specification from that post looks like:

[TestFixture]
[Category("RecordServiceTestFixture")]
public class When_retrieving_all_records_for_a_specific_genre
    : Specification<RecordService>
{
    private const Int64 GenreId = 12;
    private Genre _genre;
    private IEnumerable<Record> _records;

    protected override void Before_each_specification()
    {
        _genre = new Genre();
        _records = new List<Record>();

        SetupResult.For(MockGenreRepository.FindBy(0))
            .IgnoreArguments()
            .Return(_genre);

        SetupResult.For(MockRecordRepository.GetAllRecordsFor(null))
            .IgnoreArguments()
            .Return(_records);
    }

    [Test]
    public void Then_find_the_genre_for_the_specified_id()
    {
        BackToRecord(MockGenreRepository);

        using(Record)
        {
            Expect.Call(MockGenreRepository.FindBy(GenreId))
                .Return(_genre);
        }

        using(Playback)
        {
            CreateSubject().GetAllRecordsForGenre(GenreId);
        }
    }

    [Test]
    public void Then_find_all_records_for_a_specific_genre()
    {
        BackToRecord(MockRecordRepository);

        using(Record)
        {
            Expect.Call(MockRecordRepository.GetAllRecordsFor(_genre))
                .Return(_records);
        }

        using(Playback)
        {
            CreateSubject().GetAllRecordsForGenre(GenreId);
        }
    }

    [Test]
    public void Should_return_a_list_of_records()
    {
        using(PlaybackOnly)
        {
            IEnumerable<Record> records =
                CreateSubject().GetAllRecordsForGenre(GenreId);
            Assert.That(records, Is.SameAs(_records));
        }
    }

    private IGenreRepository MockGenreRepository
    {
        get { return Mock<IGenreRepository>(); }
    }

    private IRecordRepository MockRecordRepository
    {
        get { return Mock<IRecordRepository>(); }
    }
}

I've been practicing this Context/Specification style of BDD since I wrote that post, and I've learned a couple of things along the way. The code in this simple example still uses the Record/Playback plumbing that was required by Rhino Mocks at the time. The latest version of Rhino Mocks now supports the new AAA (Arrange, Act, Assert) syntax. I tried to make the code of this example a bit easier to read by removing the noise of the Record/Playback syntax and by doing some refactoring. Here is what I came up with:

[TestFixture]
[Category("RecordServiceTestFixture")]
public class When_retrieving_all_records_for_a_specific_genre
    : AutoInstanceSpecification<RecordService>
{
    protected override void Establish_context()
    {
        genre = new Genre();
        records = new List<Record>();

        MockGenreRepository.Expect(genreRepository =>
                genreRepository.FindBy(GenreId))
            .Return(genre);

        MockRecordRepository.Expect(recordRepository =>
                recordRepository.GetAllRecordsFor(genre))
            .Return(records);
    }

    protected override void Because()
    {
        result = SUT.GetAllRecordsForGenre(GenreId);
    }

    [Test]
    public void Then_find_the_genre_for_the_specified_id()
    {
        MockGenreRepository.AssertWasCalled(repository =>
            repository.FindBy(GenreId));
    }

    [Test]
    public void Then_find_all_records_for_a_specific_genre()
    {
        MockRecordRepository.AssertWasCalled(repository =>
            repository.GetAllRecordsFor(genre));
    }

    [Test]
    public void Should_return_a_list_of_records()
    {
        Assert.That(result, Is.SameAs(records));
    }

    private IGenreRepository MockGenreRepository
    {
        get { return Mock<IGenreRepository>(); }
    }

    private IRecordRepository MockRecordRepository
    {
        get { return Mock<IRecordRepository>(); }
    }

    private const Int64 GenreId = 12;
    private Genre genre;
    private IEnumerable<Record> records;
    private IEnumerable<Record> result;
}

As you can see, the test cases are now reduced to a single line of code. The only things that remain are the asserts themselves. The actual call to the subject-under-test is nicely tucked away in the Because method, which is executed before each test case. This is something I picked up by reading this article from Scott Bellware (which is highly recommended!) and by looking at SpecUnit. Setting up the context is still done in the Establish_context method, and the AutoMockingContainer is still used by the base class.

I've also split up the base test fixture from my previous post into three different classes:

public abstract class Specification
{
    [SetUp]
    public virtual void BaseSetUp()
    {
        Establish_context();
        Initialize_subject_under_test();
        Because();
    }

    [TearDown]
    public virtual void BaseTearDown()
    {
        Dispose_context();
    }

    protected virtual void Establish_context() {}
    protected virtual void Initialize_subject_under_test() {}
    protected virtual void Because() {}
    protected virtual void Dispose_context() {}
}

public abstract class InstanceSpecification<TSubjectUnderTest>
    : Specification
{
    protected override void Initialize_subject_under_test()
    {
        SUT = Create_subject_under_test();
    }

    protected abstract TSubjectUnderTest Create_subject_under_test();

    protected TSubjectUnderTest SUT { get; private set; }
}

public abstract class AutoInstanceSpecification<TSubject>
    : InstanceSpecification<TSubject>
{
    private MockRepository _mockRepository;
    private AutoMockingContainer _autoMockingContainer;

    protected AutoMockingContainer AutoMockingContainer
    {
        get { return _autoMockingContainer; }
    }

    protected MockRepository MockRepository
    {
        get { return _mockRepository; }
    }

    protected override TSubject Create_subject_under_test()
    {
        return _autoMockingContainer.Create<TSubject>();
    }

    protected TMock Mock<TMock>() where TMock : class
    {
        return GetDependency<TMock>();
    }

    protected TStub Stub<TStub>() where TStub : class
    {
        _autoMockingContainer.Mark<TStub>().Stubbed();
        return GetDependency<TStub>();
    }

    private TDependency GetDependency<TDependency>()
        where TDependency : class
    {
        var dependency = _autoMockingContainer.Get<TDependency>();

        if(false == MockRepository.IsInReplayMode(dependency))
        {
            MockRepository.Replay(dependency);
        }

        return dependency;
    }

    public override void BaseSetUp()
    {
        _mockRepository = new MockRepository();
        _autoMockingContainer = new AutoMockingContainer(_mockRepository);
        _autoMockingContainer.Initialize();
        base.BaseSetUp();
    }

    public override void BaseTearDown()
    {
        base.BaseTearDown();
        _autoMockingContainer = null;
        _mockRepository = null;
    }
}

This has the advantage that all test fixtures that don't need any mock objects can now be derived from the InstanceSpecification base class:

[TestFixture]
[Category("RecordTestFixture")]
public class When_adding_a_track_to_a_record
    : InstanceSpecification<Record>
{
    protected override void Because()
    {
        SUT.AddTrack(TrackName);
    }

    [Test]
    public void Then_the_record_should_have_the_specified_track()
    {
        Assert.That(SUT.HasTrack(TrackName));
    }

    protected override Record Create_subject_under_test()
    {
        return new Record(RecordName);
    }

    private const string RecordName = "Homework";
    private const string TrackName = "Rollin' & Scratchin'";
}

So far, I like this way of using BDD-style specifications, but I would love to hear any thoughts, remarks, flames, etc. I guess this topic is still somewhat of a moving target, so I'm eager to learn and further refine my approach.

Till next time.

Monday, October 20, 2008

The Quest for a Personal Information Manager: MyInfo vs Evernote

I've been a long-time user of MyInfo, a personal information manager in which I keep a long history of information that I've assembled over the last couple of years as a developer.

[Screenshot: the MyInfo desktop application]

As you can see, it has some very nice rich-text editing features and the organization of topics is really easy to use. The search capabilities are truly great. I've installed MyInfo on a USB flash drive so I can access it anywhere I want. The full list of features can be found here.

The only feature that gets on my nerves is the capturing of web pages. MyInfo has a built-in browser to support this. There are two modes for capturing a web resource:

  • Storing the URL of the web page. When you select the topic in the tree, the web page gets loaded into the web browser.
  • Storing a local copy of the web page. The web resource gets stored into the MyInfo file.

I mostly use the second option. Although it works fine most of the time, this feature is terribly broken as soon as the web resource includes some JavaScript, which results in a massive amount of script warnings/errors when reopening the captured web resource. It also has the disadvantage that MyInfo files grow very large when you store a lot of web resources.

Besides this one little quirk, I've always been a happy user until I ran into Evernote. After creating an account, you can pretty much store anything you want wherever you are. You can use the web site, the desktop version (which I am using) or the mobile version.

[Screenshot: the Evernote desktop application]

You can capture almost anything, although the rich-text editing is not as nice as I would like. Capturing full web pages is really easy, as Evernote integrates with the most popular browsers, like IE, Firefox and Chrome. You just select the text you want to store and press the Capture button:

[Screenshot: the Evernote browser Capture button]

The information gets captured into the desktop application, creating a new note. You can do some more editing or capturing before synchronizing with the server. This gives me the ability to access this information wherever I have access to the Internet. The saved notes can be organized using tags.

Capturing notes and searching them is blazingly fast compared to MyInfo. Therefore, I'm slowly migrating my web resources from MyInfo to Evernote. I'm keeping MyInfo around for some rich-text documents until I find a way to store those in Evernote as well. The one feature I haven't had a chance to investigate yet is linking to/storing PDF documents, which seems pretty cool as well.

Are you, my dear reader, using a personal information manager? What are your experiences? Please, do let me know.

Book Review: C# in Depth

I just finished reading C# in Depth: What you need to master C# 2 and 3. Although the book is only 358 pages, its title is certainly not exaggerated. It feels like I've read a 1000-page book. Jon Skeet has done a good job covering C# 2.0 and C# 3.0, tracing the different language enhancements starting from C# 1.1.

I must admit that I've been quite passive when it comes to the new language features in C# 3.0, especially LINQ. I tried to avoid the hype as long as possible. Now that C# 3.0 has been RTM for quite some time and the dust has settled (before getting blown up again at the PDC next week), I decided to learn more about the language features in C# 3.0. That's why I bought the book.

The first part covers C# 2.0, which held very few surprises. The last part covers the C# 3.0 language features (lambda expressions, expression trees, extension methods, etc.) and provides a concise introduction to LINQ.

The thing I appreciated most while reading the book was the amount of detail provided about what the compiler does behind the scenes. I'm glad I've learned what all the fuss is about, so I can finally start using some of the useful language extensions provided by the C# 3.0 compiler.
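To give a small taste of that behind-the-scenes work: a C# 3.0 query expression is purely syntactic sugar that the compiler translates into extension-method calls with lambda arguments before anything else happens. A minimal sketch of my own (not taken from the book), using the standard System.Linq operators:

```csharp
using System;
using System.Linq;

class CompilerTranslationDemo
{
    static void Main()
    {
        var titles = new[] { "Homework", "Discovery" };

        // What you write: a query expression.
        var query = from title in titles
                    where title.StartsWith("H")
                    select title.Length;

        // What the compiler actually emits: chained extension-method
        // calls on IEnumerable<string>, with lambdas for each clause.
        var translated = titles
            .Where(title => title.StartsWith("H"))
            .Select(title => title.Length);

        // Both produce the same sequence: { 8 }.
        Console.WriteLine(query.SequenceEqual(translated)); // True
    }
}
```

The same mechanical translation is what makes LINQ providers possible: swap the IEnumerable source for an IQueryable one and the lambdas become expression trees instead of delegates.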

I guess this book is the best C# reference money can buy at the moment. If you want in-depth knowledge of C#, it certainly won't disappoint.

Sunday, October 12, 2008

Sins of Commissions

Reading Joel Spolsky's latest article, Sins of Commissions, reminded me of a topic I feel very strongly about, namely incentives for software developers based on some kind of software quality metric. I don't know about you, my dear reader, but I think this is just nuts!

Although Joel's article talks about sales, incentives are also applied in the IT industry. I personally know several employers where these kinds of 'commissions' are given to software developers. These incentives are mostly based on metrics like code coverage, cyclomatic complexity or some other result coming from a static analysis tool. Applying such a reward system to software development is doomed to fail.

From the article:

Inevitably, people will figure out how to get the number you want at the expense of what you are not measuring, including things you can't measure, such as morale and customer goodwill.

...

His point is that incentive plans based on measuring performance always backfire. Not sometimes. Always. What you measure is inevitably a proxy for the outcome you want, and even though you may think that all you have to do is tweak the incentives to boost sales, you can't. It's not going to work. Because people have brains and are endlessly creative when it comes to improving their personal well-being at everyone else's expense.

This really hits home. If someone gets paid based on, e.g., code coverage, then he or she will find a way to write a single unit test that 'proves' 350% code coverage in order to pay the mortgage. This totally defeats the purpose of quality metrics and betrays the rest of the team as well. Don't say this won't happen, because it most definitely will. You will never achieve code quality this way.
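To make that gaming scenario concrete, here's a hypothetical 'test' of the kind a coverage-based bonus would encourage (the Calculator class is made up for illustration): it executes every line, so the coverage tool is perfectly happy, yet it verifies absolutely nothing.

```csharp
using NUnit.Framework;

// A deliberately broken class, invented for this example.
public class Calculator
{
    public int Add(int a, int b) { return a - b; } // blatant bug
}

[TestFixture]
public class MortgagePayingTests
{
    [Test]
    public void Covers_everything_verifies_nothing()
    {
        // 100% line coverage of Calculator, zero verification:
        // the result is thrown away, so this test stays green
        // even though Add is completely wrong.
        new Calculator().Add(1, 2);
    }
}
```

The coverage report says the code is 'tested'; the mortgage gets paid; the bug ships.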

Another sick form of incentive is rewarding people who have been on a death march for months, delivering yet another piece of unmaintainable crap on a randomly picked date. If a team gets 3 months to deliver a software system that should actually take 6 months to build and gets rewarded for it, the result will most definitely not be pretty. It always comes at the expense of something else, in this case code quality itself.

Delivering quality code/software has nothing to do with incentives, but everything to do with being a professional software craftsman who wants to do the best he can regardless of his salary.

Until next time.

Monday, October 06, 2008

Refactoring Exercise: The Single Responsibility Principle vs Needless Complexity

Ray Houston has written a post on his blog named Single-Responsibility Versus Needless Complexity. His post contains the following code sample, which he suspects possibly violates the Single Responsibility Principle:

public bool Login(string username, string password)
{
    var user = userRepo.GetUserByUsername(username);
    if(user == null)
        return false;

    if (loginValidator.IsValid(user, password))
        return true;

    user.FailedLoginAttempts++;
    if (user.FailedLoginAttempts >= 3)
        user.LockedOut = true;

    return false;
}

Because I'm still sitting at home slowly recovering from surgery (for almost a month now), and because I'm terribly bored by now, I thought I'd try to refactor this code. The first step I took was to isolate the validation of the user in a separate private method, like so:

private bool IsValid(User user, String password)
{
    if(user == null)
        return false;

    return loginValidator.IsValid(user, password);
}

This slightly simplifies the code of the Login method:

public bool Login(string username, string password)
{
    var user = userRepo.GetUserByUsername(username);

    // Keep the early return for an unknown user; otherwise the
    // code below would dereference a null user.
    if (user == null)
        return false;

    if (IsValid(user, password))
        return true;

    user.FailedLoginAttempts++;
    if (user.FailedLoginAttempts >= 3)
        user.LockedOut = true;

    return false;
}

The next thing I noticed was the use of the FailedLoginAttempts and LockedOut properties, which appear to be part of the User class. This is something I talked about in Properties - A False Sense of Encapsulation. Let's apply Tell, Don't Ask and move this code into the User class, like so:

private const Int32 MaximumFailedLoginAttempts = 3;

public void FailedToLogin()
{
    this.failedLoginAttempts += 1;

    if(MaximumFailedLoginAttemptsExceeded())
        LockOut();
}

private Boolean MaximumFailedLoginAttemptsExceeded()
{
    return this.failedLoginAttempts >= MaximumFailedLoginAttempts;
}

private void LockOut()
{
    this.lockedOut = true;
}

This eliminates the need for the properties I just mentioned, which expose private data of the User class (at least for the Login story). The code of the Login method now becomes fairly small:

public bool Login(string username, string password)
{
    var user = userRepo.GetUserByUsername(username);

    // An unknown user can't be told anything, so bail out first.
    if(user == null)
        return false;

    if(false == IsValid(user, password))
    {
        user.FailedToLogin();
        return false;
    }

    return true;
}

Besides good judgement, SRP is also about organizing complexity so that other developers/readers know where to look for it.

Now get those torches lit and flame away.

Thursday, October 02, 2008

Book Review: Clean Code

I just finished reading the magnificent book Clean Code - A Handbook of Agile Software Craftsmanship. I must say, if I had a software company of my own, I would make all my employees read this book. Make no mistake about it: if you have anything to do with reading or writing code, then stop reading this post and go buy it now! And I mean now!

Still reading, huh?

Something that has been bothering me for a while is the following quote from another excellent book that is written by the same author, namely Agile Principles, Patterns and Practices in C#:

There's no gentle way to put this: in my experience .NET programmers are often weaker than Java and C++ programmers. Obviously, this is not always the case. However, after observing it over and over in my classes, I can come to no other conclusion: .NET programmers tend to be weaker in agile software practices, design patterns, design principles and so on.

Now here comes the part that bothers me the most: he's probably right! Being part of the .NET community, I strongly believe that we have a long way to go. I also believe that applying the principles, patterns and practices as laid out by this book will move us further on up the road.

The fact that all the code is written in Java is no excuse for not reading this book. These principles, patterns and practices apply to virtually every programming language and development platform (which includes .NET!). There are a lot of design principles in it that I already know and apply rigorously when writing software, but most importantly, I also learned a ton of new things.

The author(s) of this book went to great lengths to offer the reader a good understanding of what clean code is all about. If you want to know the definition of good, maintainable code and what it looks like, then this book is for you.

The book is actually a prequel to Agile Principles, Patterns and Practices in C#. The first part describes the principles, patterns and practices of clean code. The second part consists of a number of case studies, in which the author transforms some problematic code into more maintainable code by applying a number of refactorings. The third part consists of a single chapter that contains a list of code smells and heuristics gathered from the case studies in the second part.

This quote from the book, coined by Michael Feathers (Working Effectively with Legacy Code), clearly describes its content:

Clean code always looks like it was written by someone who cares.

or

How to Care for Code.

But something that is discussed near the end of the first chapter really struck a nerve: The Boy Scout Rule. It originally states, "Leave the campground cleaner than you found it". This rule is rephrased and used several times throughout the book as "We should leave the code cleaner than we found it". This is something that I want to live by as a software engineer.

This is probably the best book I've read so far about how to write good, readable and maintainable code. No catch :-).