Wednesday, December 29, 2010

My Personal and Professional Goals for 2011

Around the same time last year, I wrote this blog post where I set out a couple of things to learn throughout the year. Being really honest with myself, I must admit that I could have done better. Nevertheless, I did manage to spend some free time learning new stuff and broadening my existing knowledge.

Retrospective of 2010:

  • I did get to learn more about NServiceBus. I’ve been working on a small sample application that is built using NServiceBus, where I also tried to apply CQRS and event sourcing. I’m still working on this simple application and I’m hoping to get this up on GitHub at some point.
  • I spent most of my free time this year learning more about web development. I’ve been looking into ASP.NET MVC (which I’m also using in the sample application that I mentioned earlier), JavaScript and jQuery as well as HTML and CSS.
  • The programming language I learned in 2010 is JavaScript. This great programming language is finally making its way from client-side scripting in the browser to other platforms like server-side development with Node.js and on the database with a number of popular NoSQL document databases like CouchDB and MongoDB. Learning JavaScript most certainly challenged the way I’m writing code in C# as well.
  • I’m currently taking a deep plunge into Node.js, learning more about building server-side web applications using JavaScript. This experience most certainly led me to a better understanding of the problems that the various web application frameworks out there are trying to solve.  
  • I did get around to organizing 10 E-VAN sessions in 2010. I want to thank all the speakers and those who contributed to the discussions for all their efforts. I hope most of the attendees were able to learn something. I know I did.
  • I was hoping to put out more blog posts in 2010, at which I failed miserably. Almost all blog posts I wrote this year are about topics I’ve been working on in my spare time. The only explanation I can come up with is that I haven’t learned anything during my day job this year. Luckily I was already able to fix that for next year.
  • Sadly, I haven’t been able to learn more about MongoDB, Fubu MVC, Ruby and Ruby on Rails.
  • I’ve been able to read twelve books this past year, which is not bad but not great either. I’ve been doing more coding in my free time (which is obviously a good thing), but I think this has been at the expense of my reading time.

Without further ado, here are my goals for 2011:

Professional:

  • Attending a Code Retreat made me realize that I need to further invest in my TDD/BDD skills, certainly the part of letting my tests drive the design of the system.
  • I also decided that I need to become proficient with another editor besides Visual Studio. E-Text Editor and/or Vim are the ones that interest me most, so I’m going to give these a try.
  • I desperately need to learn more about Git and see whether its use can convince me to move away from Subversion.          
  • Ruby, Ruby, Ruby, Ruby. Seriously, Ruby is the next programming language I’m going to learn. I hope that I’m also able to spend some time learning Ruby on Rails as well.
  • I’m also going to continue learning more about web development. I definitely need to further improve on my JavaScript coding skills, trying to develop more robust and maintainable client-side code in the process. HTML 5 and CSS 3 are topics that are high on my list as well.
  • MongoDB is back on the list again as I still find this whole NoSQL stuff very fascinating.
  • Further broadening my existing skills for the .NET platform. I’m pretty sure that I’m going to be overwhelmed with all the stuff I’m going to learn on my new job.      

Community:

  • I’m going to continue to organize new European VAN sessions throughout 2011. If you want to see a session on a particular topic or you want to do a talk, then please let me know. Also feel free to get in touch with me if you want to help organize these sessions.
  • I definitely need to blog more. I’ll be very happy if I can keep up my current pace of writing at least one blog post every week. Fingers crossed ;-).
  • There’s one area where I feel that I need to step up, and that is public speaking. Up until this point I’ve only been doing short presentations for small groups of people. It’s my plan to work on my presentation and speaking skills, and the best way of doing that is to overcome my performance anxiety and get out there and talk.
  • I also need to contribute more to open-source projects, especially in the .NET space where contributions are very much needed. My plan is to do at least five contributions (big or small, doesn’t matter) throughout 2011.       

Personal:  

  • First and foremost, I have to continue to invest in my family and personal relationships. This might sound obvious, but having a good work-life balance is very important and continually learning, improving and working on this is essential for any knowledge worker out there.
  • Obviously, I’m going to keep investing in my health by working out. I’ve been running for a couple of years now. I already lost a massive amount of weight and this has been paying off during the street runs I’ve participated in throughout the year (less weight == running faster ;-) ). I’m going to keep investing in doing longer distances and also running faster.

I’ve set out some ambitious goals for myself in 2011 and I hope I get to realize most of them. Only time will tell. All that’s left for me now is to wish you an awesome New Year.

Until next year.  

Friday, December 24, 2010

Basic JavaScript Part 5: Hoisting

Here are the links to the previous installments:

  1. Functions
  2. Objects
  3. Prototypes
  4. Enforcing New on Constructor Functions

I just wanted to quickly share a little tidbit that I ran into the other day while I was messing with JavaScript again. Try to guess what the output is going to be when running the following code snippet:

var num = 56;
function calculateSomething() {
    console.log(num);    
    var num = 12;
    console.log(num);    
}

calculateSomething();

Without further ado, this is the output shown in the console window:

undefined
12

I must admit that this had me beat the first time I saw this, but it really makes perfect sense.

JavaScript doesn’t support block scope but instead it makes use of function scope. Block scope means that variables declared in a block are not visible to code outside that block. We all know that the following C# code doesn’t compile for exactly that reason:

public void Stuff()
{
    {
        var i = 2;
    }

    // Compiler error: The name 'i' does not exist in the current context.
    Console.WriteLine(i);    
}

But function scope means that all variables and parameters declared inside a function are visible everywhere within that function, even before the variable has been declared. This is the behavior that we see by running the code snippet shown earlier.

Just as with C#, we can put a variable declaration anywhere in our JavaScript function as we did earlier. In that case, JavaScript will just act as if the variable has been declared at the top of the function. This behavior is called hoisting. This means that it is valid to use this variable as long as it has been declared somewhere within the function. Going back to our previous sample, the net result is that JavaScript will interpret the function as something like this:

var num = 56;
function calculateSomething() {
    var num;            // undefined
    console.log(num);    // outputs 'undefined'    
    num = 12;            // 12
    console.log(num);    // outputs '12'    
}

calculateSomething();

Now I know why Douglas Crockford advised in his book JavaScript – The Good Parts to declare all variables at the top of the function body. In order to prevent nasty side effects like this from happening, I think it’s best to take that advice.
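Following that advice, the earlier function would look something like this minimal sketch, with all declarations up front:

```javascript
function calculateSomething() {
    // Declared at the top, so the hoisted form matches exactly what we wrote.
    var num = 12;

    console.log(num);    // outputs '12', no surprising 'undefined'
    return num;
}

var result = calculateSomething();
```

Now the code we read is the code the engine actually runs.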

Till next time.

Tuesday, December 21, 2010

Basic JavaScript Part 4: Enforcing New on Constructor Functions

As this is already the fourth blog post using the “Basic JavaScript” theme, I guess we’re slowly getting a small blog series on our hands. Here are the links to the previous installments:

  1. Functions
  2. Objects
  3. Prototypes

In the blog post on objects, I mentioned that there’s a general naming convention for constructor functions, using Pascal casing as opposed to the usual Camel case naming style for JavaScript functions. When following this naming convention we can make a visual distinction between a constructor function and a normal function. We want to make this distinction because we always need to call a constructor function with the new operator.

function Podcast() {
    this.title = 'Astronomy Cast';
    this.description = 'A fact-based journey through the galaxy.';
    this.link = 'http://www.astronomycast.com';
}

Podcast.prototype.toString = function() {
   return 'Title: ' + this.title;
};

var podcast = new Podcast();
podcast.toString();

Suppose that for some reason this naming convention slips our mind and we forget to use the new operator when calling this constructor function. This usually leads to some nasty and unexpected behavior that can be very hard to track down. What actually happens when we omit the new keyword is that this now points to the global object (the window object when the JavaScript code is running in the browser) instead of the object that we intended to create. As a result, the properties in the constructor function are now added to the global object. This is definitely not what we want.

Rather than relying purely on a naming convention, we might want to enforce that every time a constructor function is called, this function is invoked properly using the new operator. In order to achieve this, we can add the following check to the beginning of the constructor function shown earlier:

function Podcast() {
    if(false === (this instanceof Podcast)) {
        return new Podcast();
    }

    this.title = 'Astronomy Cast';
    this.description = 'A fact-based journey through the galaxy.';
    this.link = 'http://www.astronomycast.com';
}

Podcast.prototype.toString = function() {
   return 'Title: ' + this.title;
};

var podcast1 = Podcast();
var podcast2 = new Podcast();

Adding this check verifies whether this references an object created by our constructor function, and if not, the constructor function is called again but this time using the new operator.

So adding this check to all your constructor functions guarantees that these are invoked correctly using new.
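To see the guard in action, here’s a stripped-down version of the constructor together with a quick sanity check:

```javascript
function Podcast() {
    if (false === (this instanceof Podcast)) {
        // Called without 'new': call ourselves again, properly this time.
        return new Podcast();
    }
    this.title = 'Astronomy Cast';
}

var podcast1 = Podcast();        // forgot 'new', but the guard saves us
var podcast2 = new Podcast();

console.log(podcast1 instanceof Podcast);    // true
console.log(podcast2 instanceof Podcast);    // true
```

Both calls now yield proper Podcast objects, and nothing leaks onto the global object.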

Till next time.

Friday, December 17, 2010

Christmas Light Architectures Are Not That Shiny

I just want to get something off my chest that has been bothering me for quite some time now. It’s not going to be a rant of some sort, but merely a couple of observations for which I couldn't find the right words to describe things up until this point. In short: this post is way overdue :-).


What's the major malfunction with those old, classic Christmas lights? We've all experienced it at some point: when one goes out, all the others go out as well. This is due to the fact that these lights are wired in series. The difference compared with today's Christmas lights is that every bulb has a shunt, which basically prevents the failure of a single lamp from taking out the rest. Enough about the Christmas lights for now. Where am I going with this? Back in enterprise IT, I'm seeing the same kind of failures as with those classic, old Christmas lights.

Picture a classic RPC-style architecture: a number of smart clients, websites and batch applications all wired directly to a centralized back-end web service, much like those classic, old Christmas lights.

This is all fine and dandy as long as every part of the chain runs without too much hassle. But what happens if for some reason the centralized back-end web service goes down (a light bulb goes out)? Every smart client, website and batch application that uses this web service gets affected, like some sort of chain reaction. Parts of these client applications will no longer function correctly or they might even go down entirely. The same thing happens when the database of the centralized back-end web service goes down, or any other external system that it depends on. When confronted with this kind of architecture, how would one go about preventing this doomsday scenario from happening?

Suppose you’re a developer who has to work on the centralized back-end web service. This is usually a complex system as it obviously has to provide features for all kinds of applications. When this centralized back-end web service also has to deal with and depend on other external systems that might expose some unexpected behavior, how could one prevent the sky from falling down when things go awry in production?

Well, for starters, you could build in some stabilization points. Suppose the centralized web service needs to incorporate some functionality offered by a highly expensive, super enterprise system that behaves unstably and unpredictably at every full moon (expensive enterprise software not behaving correctly sounds ridiculous, but bear with me ;-) ). For example, we could use a message queue as a stabilization point.


This means that we put a message on a queue that is processed by some sort of worker process or service that does the actual communication with the misbehaving system. When the external system goes down, the message is either left on the message queue or put on an error queue for later processing when the external system comes back up again. There are some other things you need to think about, like idempotent messaging, consistency, message persistence when the server goes down, and so on. But if one of these dependencies goes down, the centralized back-end web service is still up-and-running, which means that the systems that depend on its functionality can continue to serve their users.
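To make the idea concrete, here’s a toy in-memory sketch of the pattern (a real system would of course use a durable queue product like MSMQ or RabbitMQ; all names here are made up for illustration):

```javascript
var queue = [];
var errorQueue = [];

// The web service only puts a message on the queue and returns immediately.
function send(message) {
    queue.push(message);
}

// A worker process picks messages off the queue and talks to the external system.
function processNext(externalSystemCall) {
    if (queue.length === 0) {
        return;
    }
    var message = queue.shift();
    try {
        externalSystemCall(message);
    } catch (error) {
        // The external system is down: the message is parked, not lost.
        errorQueue.push(message);
    }
}

send({ id: 1, action: 'update customer' });
processNext(function () { throw new Error('External system is down.'); });

console.log(queue.length);         // 0
console.log(errorQueue.length);    // 1, waiting to be retried later
```

The caller never notices the failure; the message sits on the error queue until the external system comes back up.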

Earlier this week I overheard this conversation that somewhat amazed me. I changed the names of the persons involved as well as the exact words used in order to protect the guilty.

George: We want to incorporate a message queue in order to guarantee stability and integrity between several non-transactional systems that our system depends on. It will also improve performance as these systems behave very slow at times and become unstable under pressure. This also gives us the opportunity to root out some major points of friction that our end-users are experiencing right now.

Stan: But this means that the end-users are not completely sure if their actions are indeed fully carried out by the system.

George: End-users can always check the current state of affairs in their applications. If something goes wrong, their request is not lost and things will get fixed automatically later on as soon as the cause of the error has been fixed.

Stan:  I don’t think that’s a very good idea. End-users have to wait until everything is processed synchronously, even if that means that they’ll need to wait for a long time. And if one of the external systems goes down, they should stop sending in new requests. Everything should come to a halt. They just have to stop doing what they are doing.

George: This means that because you lose the original request, some external systems might be set up correctly while others are not. Then someone has to manually fix these issues.

Stan: Then so be it! 

For starters, I was shocked by this conversation. This is just insane. Everything should come to a halt? Think about this scenario for a while: suppose you find yourself in a grocery store with a cart full of food, drinks and other stuff. You arrive at the cash register, where the lady kindly says: “Can you put everything back on the shelves please? There are some issues with the cash register software and we are instructed to stop scanning items and serving customers until these issues are fixed. Can you come back tomorrow please?”. Uhm, no! How much money do you think this is going to cost compared to a system that makes use of stabilization points? An end-user who is able to keep working, whether a back-end system is down or not, has tremendous business value.

I’m not saying that message queues are a silver bullet. I’m just using them as an example. As always, there is a time and place for using them. There are other things a developer can incorporate in order to increase the stability of the system he’s working on, like the circuit breaker pattern. I’m also not saying that every system should be built using every stabilization point one can think of. This becomes a business decision, depending on the kind of solution. As usual, it depends.
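For completeness, here’s a bare-bones sketch of that circuit breaker idea (the names and the threshold are made up for illustration; a production implementation would also reset the breaker after a timeout):

```javascript
function CircuitBreaker(call, failureThreshold) {
    this.call = call;
    this.failureThreshold = failureThreshold;
    this.failureCount = 0;
    this.open = false;
}

CircuitBreaker.prototype.invoke = function () {
    if (this.open) {
        // Fail fast instead of hammering a system that is already down.
        throw new Error('Circuit is open.');
    }
    try {
        var result = this.call();
        this.failureCount = 0;    // a success resets the count
        return result;
    } catch (error) {
        this.failureCount += 1;
        if (this.failureCount >= this.failureThreshold) {
            this.open = true;
        }
        throw error;
    }
};

var breaker = new CircuitBreaker(function () {
    throw new Error('External system misbehaving.');
}, 3);

for (var i = 0; i < 3; i++) {
    try { breaker.invoke(); } catch (error) { /* swallowed for the example */ }
}

console.log(breaker.open);    // true: subsequent calls now fail fast
```

After three consecutive failures the breaker trips, so the unstable dependency gets some breathing room instead of a constant stream of doomed requests.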

But the point that I’m trying to make here is that we should stop putting software systems into production and just hoping for the best. That’s just wishful thinking. Software systems are going to behave badly and at some point they will go down. It’s just a matter of when this is going to happen and how much damage it is going to do.

The first step to take is awareness. I encourage you to pick up the book titled ‘Release It!’, written by Michael Nygard. This book is all about designing software that can survive this tough environment called production. I can only hope that Stan picks up a copy as well, along with some common sense.

Till next time.   

Tuesday, December 14, 2010

Taking Baby Steps with Node.js – Using Non-Standard Modules

In previous blog posts, I provided a short introduction to Node.js while also discussing the event-based model that lies at its core. For this blog post, I want to show how to effectively use non-standard JavaScript modules from a Node.js application.

As already discussed in the introductory blog post, Node.js comes with a number of built-in low-level modules. But the increasing amount of open-source modules out there makes developing applications for Node.js way more productive and effective. So far I’ve been using a package manager called npm for quickly installing these open-source modules on my machine. By default, npm installs all packages in the local folder of Node.js (/usr/local/lib/node/).

$ npm install express

This installs the necessary packages for the express web development library. 

In our Node.js application we can now simply put the following require statement and start using this installed module.

var express = require('express');
var application = express.createServer();

application.get('/', function(request, response){ 
    // Some code
});

application.listen(8124);

When another developer wants to run the application by getting the source files from source control, he first has to make his way through the code to determine all non-standard modules that have been used and install the correct version of these modules. When developing .NET applications, it’s generally a best practice to put all third-party libraries in a dedicated library folder alongside your code in source control. This enables the code to be compiled and run with the same libraries that were used during development. Of course this same principle applies to server-side JavaScript as well.

The simplest way to accomplish this is to create a library folder alongside the folder that contains the source code and use git to get the latest version of a particular module.

$ git clone http://github.com/visionmedia/express.git /MyProj/lib/express

Next we add our library folder to the paths that are used by Node.js for looking up the required modules.

require.paths.unshift(__dirname + '/lib');

var express = require('express');
var application = express.createServer();

application.get('/', function(request, response){ 
    // Some code
});

application.listen(8124);

We just add the special variable __dirname together with the name of our library folder in the main source file (usually named something like server.js). Now whenever a require statement is used, Node.js will first look for the requested module in our custom library folder before looking into other configured directories.

You can still use npm instead of git for retrieving and installing packages into a custom folder, but then you have to create an ini file named .npmrc in your home folder:

cat >>~/.npmrc <<NPMRC
root = /<full_path_my_project>/lib
binroot = ~/bin
manroot = ~/share/man
NPMRC

There you go. I hope this might be useful for someone someday. I’m definitely having loads of fun learning about Node.js, taking one step at a time.

Until next time.

Friday, December 10, 2010

ASP.NET (MVC) and the Tale of the Continuous Application Restarts

I made a classic rookie mistake with ASP.NET (MVC) the other day. In my spare time, I’m working on this small sample application for myself in order to learn more about ASP.NET MVC. After working on this small web application for a while, I came to the point where I needed to set up a database for persisting some data. I just wanted to get this out of the way as quickly as possible, so I used Fluent NHibernate’s schema generation to quickly set up a SQLite database. Everything worked splendid so far.

But when I did some manual tests, using the web application to enter some data and store it in the database, I started to notice some strange behavior. In particular, after the application successfully stored a record in the database, the record was gone the next time it tried to read it back out again. After checking the code involved, I debugged this a couple of times. Although the transaction completed successfully and the record became visible in the database (I used SQLite Administrator for that), the next thing I knew, the record had disappeared again.

At first I thought it had something to do with the transaction. I’ve had some issues with transactions and SQLite in the past, but I was quickly able to root this out. After doing some more digging and debugging I found the actual reason for this strange behavior.

Whenever I’m working on a small console or Windows application, I tend to save the SQLite database file in the bin folder of the application. However, this isn’t a very good idea for ASP.NET applications. Making modifications to the bin folder of an ASP.NET web application causes the application to restart. Because storing a record in the SQLite database modifies that file, the web application got restarted as a result. And because I’m bootstrapping the ASP.NET application (IoC, NHibernate, etc.) in a bootstrapper class that is initiated by the Application_Start method of the Global.asax file, the database got recreated at every application startup, which caused the record to ‘magically disappear’.

I changed the configuration for the SQLite database file so that it gets created in the App_Data folder instead of the bin folder, and everything worked just fine again.

What can I say? I’m a noob.

Thursday, December 09, 2010

A Burden Called Meetings

I’ve been working for an enterprise corporation for 5+ years, which I’m going to be leaving soon. This organization is suffering from a widespread malady called “meetingitis”. This phenomenon bothers me from time to time, especially when I’m pulled into those pointless meetings that wander over the same ground over and over again without coming to a conclusion or a solution. Then there are also those kinds of meetings where you don’t have anything to say or contribute; these are just a complete waste of time.

Yesterday, Yves pointed out to me on Twitter that it is perfectly fine to leave a meeting if you feel that you’re not able to gain or contribute anything. Today, I walked out of a meeting where one of the participants started hurling insults at me. I just stood up, walked to the door and left. And I must say that it felt liberating doing so. I went back to my desk, calmed down and got some actual work done. Without a basic form of respect, one simply can’t achieve anything, let alone come to win-win agreements. From now on, I’ll be evaluating all meetings that require my presence before I accept them and also keep evaluating my presence while being there.

Let me close off this mini-rant by sharing a must-see recording of a talk called “Why work doesn’t happen at work” by Jason Fried. I recommend you watch this short video, and if you like it, I also recommend picking up a copy of Rework.

I hereby rest my case.

Saturday, December 04, 2010

Basic JavaScript Part 3 : Prototypes

In previous blog posts, I talked about the rich capabilities of functions and objects in JavaScript. For this post I want to briefly touch on the concept of prototypes. Having a decent understanding of prototypes in JavaScript is highly recommended as they are a very important part of the language. I have to admit that I’m still trying to fully get my head around the concept of prototypes, but writing this blog post is part of my learning process :-). 

As you probably know, JavaScript is not a ‘classical’ language but a prototypal object language. This means that pretty much everything is an object, including functions. Every function has a property named prototype. This property is set to an empty object as soon as the function itself gets created. As with every object, we can augment it with our own methods.
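We can inspect this prototype object directly with a small sketch:

```javascript
function Podcast() {
}

// Every function automatically gets a prototype object.
console.log(typeof Podcast.prototype);    // 'object'

// The fresh prototype object starts out (practically) empty; we can augment it:
Podcast.prototype.greet = function() {
    return 'Hello from the prototype';
};

console.log(new Podcast().greet());    // 'Hello from the prototype'
```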

In a previous post, I showed how to use constructor functions for creating new objects. Have a look at the following simple constructor function.

function Podcast(title, url) {
    this.title = title;
    this.url = url;
    
    this.toString = function() {
       return 'Title: ' + this.title;
    }
}

var podcast1 = new Podcast('Astronomy cast', 'http:// ...');
var podcast2 = new Podcast('jQuery podcast', 'http:// ...');

This constructor function adds two properties and one method to the objects that we created. Suppose that we developed a magnificent method for downloading the podcast itself. The most obvious place to put this code is in the constructor function as we did with the toString method. But we can also add this new method to the prototype of our constructor function.

Podcast.prototype.download = function() {
    console.log("Downloading podcast ...");
}

When a new Podcast object is created, this new object ‘inherits’ the download method from the prototype of the constructor function, making it available for use. In fact, Podcast objects that were created before the new function was added to the prototype also get this new method! Take a look at the following sample code:

var podcast1 = new Podcast('Astronomy cast', 'http:// ...');
console.log(typeof podcast1.download);        // outputs 'undefined'            

Podcast.prototype.download = function() {
    console.log("Downloading podcast ...");
}

var podcast2 = new Podcast('jQuery podcast', 'http:// ...');

console.log(typeof podcast1.download);        // outputs 'function'
console.log(typeof podcast2.download);        // outputs 'function'

When the Podcast constructor function gets augmented with the download function, the previously created object now also exposes the newly added function. I find this quite a fascinating feature.

As already mentioned, we can now simply call the download method that we added to the prototype.

var podcast = new Podcast('Railscasts', 'http:// ...');
podcast.download();

Even though the download method is now available for every object created through the Podcast constructor function, that doesn’t mean that this new method is ‘owned’ by the created podcast object itself.

var podcast = new Podcast('Railscasts', 'http:// ...');
console.log(podcast.hasOwnProperty('download'));    // outputs 'false'

When the download method is called, the JavaScript engine first looks at the methods of the podcast object, which doesn’t seem to have this method. Next, the engine identifies the prototype of the constructor function used for creating the podcast object. If the engine can find the method in the prototype object, then this method will be called.

Besides the prototype property on constructor functions, every object has a property named constructor that contains a reference to the constructor function used for creating the object. The code snippet shown earlier therefore resolves into something like this:

var podcast = new Podcast('Railscasts', 'http:// ...');
// outputs 'true'
console.log(podcast.constructor.prototype.hasOwnProperty('download'));    
podcast.constructor.prototype.download();

As I just mentioned, every object has a constructor property. Because the prototype property of the constructor function holds a reference to an object, that object also has a constructor, which in turn has a prototype of its own, and so on. The engine goes up this prototype chain searching for a requested method or property until it finds what needs to be called or until it reaches the root prototype, which is Object.prototype.
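We can see the chain at work by putting a method on Object.prototype itself (for illustration purposes only, as augmenting Object.prototype is generally frowned upon):

```javascript
function Podcast(title) {
    this.title = title;
}

// For illustration only: a method on the root prototype.
Object.prototype.describe = function() {
    return 'Some object';
};

var podcast = new Podcast('Railscasts');

console.log(podcast.hasOwnProperty('describe'));             // false
console.log(Podcast.prototype.hasOwnProperty('describe'));   // false
console.log(Object.prototype.hasOwnProperty('describe'));    // true
console.log(podcast.describe());    // 'Some object', found at the top of the chain
```

The podcast object doesn’t own describe, and neither does Podcast.prototype, yet the call succeeds because the engine keeps walking up the chain until it reaches Object.prototype.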

When a download method is added to the Podcast constructor function, then this method will take precedence over the download method of the prototype. This is illustrated by the following code sample:

function Podcast(title, url) {
    this.title = title;
    this.url = url;
    
    this.download = function() {
        console.log('Own download function.');
    }
    
    this.toString = function() {
       return 'Title: ' + this.title;
    }
}

Podcast.prototype.download = function() {
    console.log("Prototype download function.");
}

var podcast = new Podcast('Railscasts', 'http:// ...');
podcast.download();    // Outputs 'Own download function.'

These are the very basics of prototypes in JavaScript. I really enjoy learning JavaScript as it broadens my perspective on programming languages.

Until next time.