Friday, February 25, 2011

Book Review – Driving Technical Change: Why People on Your Team Don't Act on Good Ideas, and How to Convince Them They Should

Driving Technical Change

I finished “Driving Technical Change” a few weeks ago, but I got so caught up with other things that I almost forgot to sit down and write a review for it. I was very much looking forward to reading this book, and I must say that it certainly didn’t disappoint. The author, Terrence Ryan, did a very good job distilling and capturing a good number of patterns and techniques for convincing people to adopt the concepts, tools and technologies that you yourself are trying to put forward. It’s not gospel, but there is a lot of wisdom and definitely a number of great nuggets in there that might come in handy throughout one’s career as a passionate software developer.

There are three main parts in the book. The first part elaborates on seven skeptic patterns. These patterns describe stereotypes of people you might encounter in your organization or work environment who oppose your ideas in one way or another.

These skeptic stereotypes are:

  • The Uninformed
  • The Herd
  • The Cynic
  • The Burned
  • The Time Crunched
  • The Boss
  • The Irrational

The second part of the book provides a number of countering patterns and techniques that can be used to ‘massage’ these skeptic stereotypes so that they come more in line with your way of thinking. Some of these techniques are claimed to be universal while others only work on certain types of skeptics.

This part fills your toolbox with the following patterns:

  • Gain Expertise
  • Deliver Your Message
  • Demonstrate Your Technique
  • Propose Compromise
  • Create Trust
  • Get Publicity
  • Focus on Synergy
  • Build a Bridge
  • Create Something Compelling

The third and final part of the book discusses strategies for effectively applying these countering techniques and patterns. With the information laid out here, you’ll be able to sort out a good approach for taking action.

These strategies are:

  • Ignore the Irrational
  • Target the Willing
  • Harness the Converted
  • Convince Management

At just over 125 pages, this book is a concise collection of patterns and practices aimed at influencing the people in your organization or work environment. It’s a highly recommended read when you want to push things forward at your workplace.

Monday, February 21, 2011

Taking Baby Steps with Node.js – Implementing Events

Here are the links to the previous installments:

  1. Introduction
  2. Threads vs. Events
  3. Using Non-Standard Modules
  4. Debugging with node-inspector
  5. CommonJS and Creating Custom Modules
  6. Node Version Management with n

As I already mentioned in one of the previous posts, events lie at the heart of Node.js. In fact, events ARE the heart of Node.js. When building our own custom modules, we are able to make use of this functionality provided by Node.js for emitting our very own events. We can do this by using the EventEmitter exposed by the built-in ‘events’ module. The following code snippet demonstrates how to use the simple API of the EventEmitter.

var events = require('events');

var eventEmitter = new events.EventEmitter();

eventEmitter.on('someOccurence', function(message){
    console.log(message);
});

eventEmitter.emit('someOccurence', 'Something happened!');

After creating an eventEmitter object, we subscribe to an event named ‘someOccurence’ and register a function with a message argument to be called when the event is raised. Subscribing is done by executing the on() method of the eventEmitter object. Next we raise our event by simply calling the emit() method of the eventEmitter object, also passing in the value for the message argument of the event handler function.

When providing our own constructor functions exposed by a custom module, we typically want to inherit from EventEmitter. This is also the most common usage throughout the implementations of the built-in modules provided by Node.js. The following example shows how easy it is to turn a constructor function into an event emitter. 

var sys = require('sys'),
    events = require('events');

function Downloader() {
    if(false === (this instanceof Downloader)) {
        return new Downloader();
    }
    
    events.EventEmitter.call(this);
}
sys.inherits(Downloader, events.EventEmitter);

Downloader.prototype.download = function(episode) {
    var self = this;
    
    var statusMessage = 'Downloading: ' + episode;
    self.emit('status', statusMessage);    
    
    setTimeout(function() {
        var finishedMessage = 'Downloaded ' + episode;
        self.emit('finished', finishedMessage);
    }, 5000);    
}

exports.Downloader = Downloader;

Here we created a constructor function named Downloader. Inside this constructor function we execute EventEmitter.call(this) which simply invokes the EventEmitter constructor function (re)using the current context. We then inherit from the EventEmitter by calling the inherits() helper method of the ‘sys’ module. The implementation of this method looks like this:

exports.inherits = function(ctor, superCtor) {
  ctor.super_ = superCtor;
  ctor.prototype = Object.create(superCtor.prototype, {
    constructor: { value: ctor, enumerable: false }
  });
};

This helper method ensures that the prototype methods of the specified superCtor are inherited into ctor. By calling this method we add the prototype methods of the EventEmitter function to our own Downloader function so they can be used by external code.

We also added a download() method to the prototype of our constructor function that emits two different events named ‘status’ and ‘finished’. The first event is raised immediately when the download() method is invoked. The second event is raised after five seconds.

The following code demonstrates how to use our Downloader which we exposed through a module named ‘podcast’.

var podcast = require("./podcast");

var downloader = new podcast.Downloader();

downloader.on('status', function(message) {
    console.log(message);
});

downloader.on('finished', function(message) {
    console.log(message);
});

downloader.download('Astronomy podcast #89');

Here we create a downloader object, register for the two events that we exposed and call the download() method. Instead of adding our constructor function to the exports object, we can also choose to replace the entire reference that is held by the exports property of the current module, as shown in this previous post.

var sys = require('sys'),
    events = require('events');

function Podcast() {
    if(false === (this instanceof Podcast)) {
        return new Podcast();
    }
    
    events.EventEmitter.call(this);
}
sys.inherits(Podcast, events.EventEmitter);

Podcast.prototype.download = function(episode) {
    var self = this;
    
    var statusMessage = 'Downloading: ' + episode;
    self.emit('status', statusMessage);    
    
    setTimeout(function() {
        var finishedMessage = 'Downloaded ' + episode;
        self.emit('finished', finishedMessage);
    }, 5000);    
}

module.exports = Podcast;

Here we renamed Downloader to Podcast and assigned it to the exports property of our custom module. Now we can use it like so:

var Podcast = require("./podcast");

var podcast = new Podcast();

podcast.on('status', function(message) {
    console.log(message);
});

podcast.on('finished', function(message) {
    console.log(message);
});

podcast.download('Astronomy podcast #89');

This approach looks a bit nicer to me, but that’s just my humble opinion.

As I mentioned earlier, deriving from EventEmitter is a common approach used by many of the built-in modules as well as many of the third-party modules provided by the community. Take a look at the following code that uses the APIs exposed by the ‘http’ module for requesting a page on the web.

var http = require('http');
var url = require('url');

var parsedUrl = url.parse('http://www.google.com');
var client = http.createClient(80, parsedUrl.hostname);

var request = client.request(parsedUrl.pathname, {'host': parsedUrl.hostname});
request.on('response', function(response) {
    console.log(response.headers);
});

request.end();

This example simply retrieves the Google main page and prints out the headers to the console when a response is received. A dumbed-down sketch of this API’s implementation looks like this:

function Client() {
    // Further implementation ...
}
exports.Client = Client;

exports.createClient = function(port, host) {
  var c = new Client();
  return c;
};

Client.prototype.request = function(method, url, headers) {
  var options = {method: method, url: url, headers: headers};
  var req = new ClientRequest(options);
  return req;
};

function ClientRequest(options) {
  OutgoingMessage.call(this);
  
  // Further implementation ...
}
util.inherits(ClientRequest, OutgoingMessage);

exports.ClientRequest = ClientRequest;

Here we basically see the same approach, except that a number of factory methods are provided in order to easily create and set up the required objects.

Events are essential when building applications with Node.js. Being able to easily use and expose them in your own custom modules is one of the strengths of this great server-side framework.

Until next time.

Tuesday, February 15, 2011

Basic JavaScript Part 10: The Module Pattern

Here are the links to the previous installments:

  1. Functions
  2. Objects
  3. Prototypes
  4. Enforcing New on Constructor Functions
  5. Hoisting
  6. Automatic Semicolon Insertion
  7. Static Properties and Methods
  8. Namespaces
  9. Reusing Methods of Other Objects

The module pattern is quite popular in the JavaScript community and is heavily applied by many JavaScript developers. There’s also the CommonJS initiative, which defines a specification for a common set of JavaScript APIs that are organized using self-contained modules. These specifications are supported by a growing community, as they provide the foundation for the modules built into Node.js and numerous other open-source JavaScript libraries. The pattern has become so widespread because it’s an excellent way to package and organize an independent, self-contained piece of JavaScript code. The module pattern is composed of self-executing functions combined with namespaces. Let’s look at a simple example.

namespace('media');

media.podcast = (function(name) {
    var fileExtension = 'mp3';        

    function determineFileExtension() {
        console.log('File extension is of type ' + fileExtension);
    }
    
    return {
        download: function(episode) {
            console.log('Downloading ' + episode + ' of ' + name);
            determineFileExtension();
        }
    }    
}('Astronomy podcast'));

First we define a namespace called media. Then we use a self-executing function that returns an anonymous object with a method named download that can be invoked by external code. Inside the self-executing function we have a variable fileExtension and a function determineFileExtension that are private and can only be used inside the module. Notice that we provide a fixed parameter value for the self-executing function. This technique is usually applied to pass in some kind of global object. jQuery uses this same approach to inject a reference to the global window object into the scope of its module.
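To make the injection part concrete, here is a minimal sketch where a made-up settings object plays the role that the global window object plays for jQuery:

```javascript
var settings = { theme: 'dark' };

var widget = (function(config) {
    // config is a private, stable alias for the injected object
    return {
        describe: function() {
            return 'Widget using ' + config.theme + ' theme';
        }
    };
}(settings));

console.log(widget.describe()); // Widget using dark theme
```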

We can use the download method of our module like so …

media.podcast.download('the first episode');

… which outputs what we expect: 

Downloading the first episode of Astronomy podcast

File extension is of type mp3

The way we implemented the module pattern here has at least one major downside. We’re able to completely replace the implementation of the download method that is exported by the anonymous object returned from the self-executing function. This can become quite troublesome if we have other functions inside our module that also make use of the download method and thereby rely on its functionality. The way to fix this issue is to make all functions private and export them using the anonymous object:

namespace('media');

media.podcast = (function(name) {
    var fileExtension = 'mp3';        

    function determineFileExtension() {
        console.log('File extension is of type ' + fileExtension);
    }
    
    function download(episode) {
        console.log('Downloading ' + episode + ' of ' + name);
        determineFileExtension();
    }
    
    return {
        download: download
    }    
}('Astronomy podcast'));

The download method exposed by the anonymous object can still be replaced, but at least the correct implementation is preserved by the private download function for other functions that rely on its behavior. This approach is commonly called the “revealing module pattern”.
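The difference between the two variants is easy to demonstrate with a hypothetical counter module written in both styles; replacing the exported method breaks the first variant but not the revealing one:

```javascript
// first style: report relies on the *public* increment
var naiveCounter = (function() {
    var count = 0;
    return {
        increment: function() { count += 1; },
        report: function() { this.increment(); return count; }
    };
}());

// revealing style: report calls the *private* increment directly
var revealingCounter = (function() {
    var count = 0;
    function increment() { count += 1; }
    return {
        increment: increment,
        report: function() { increment(); return count; }
    };
}());

// hijack the exported increment on both modules
naiveCounter.increment = function() { /* does nothing */ };
revealingCounter.increment = function() { /* does nothing */ };

console.log(naiveCounter.report());     // 0 -- the internal call went through the hijack
console.log(revealingCounter.report()); // 1 -- the private function is still intact
```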

Another neat approach is to export a constructor function instead of an anonymous object.

namespace('media');

media.Podcast = (function() {
    var fileExtension = 'mp3';        

    function determineFileExtension() {
        console.log('File extension is of type ' + fileExtension);
    }
    
    var podcastConstructor = function Podcast(name) {
        if(false === (this instanceof Podcast)) {
            return new Podcast(name);
        }
        
        this.getName = function() {
            return name;
        }
    }
    
    podcastConstructor.prototype.download = function (episode) {
        console.log('Downloading ' + episode + ' of ' + this.getName());
        determineFileExtension();
    }
    
    return podcastConstructor;
}());

Instead of returning an anonymous object from our self-executing function, we create another function and add the download method to the prototype of this constructor function. Notice that we also moved the name parameter to the constructor function instead of passing it into the self-executing function. At the end of the self-executing function we just return this constructor function like we did with the anonymous object.

We can now use this module like so …

var astronomyCast = new media.Podcast('Astronomy podcast');
astronomyCast.download('the first episode');

… which yields the same output as before.

The module pattern is a very powerful concept in JavaScript. Being able to expose and use JavaScript code, treating it as a black box, is a very common technique that is used in lots of JavaScript libraries and frameworks.

Happy coding!

Wednesday, February 09, 2011

Taking Baby Steps with Node.js – Node Version Management with n

Here are the links to the previous installments:

  1. Introduction
  2. Threads vs. Events
  3. Using Non-Standard Modules
  4. Debugging with node-inspector
  5. CommonJS and Creating Custom Modules

The community around Node.js is definitely thriving at the moment. As a consequence, new versions of Node.js are being released very rapidly. While the 0.2.x versions of Node.js are considered stable, the 0.3.x versions contain all the new features and latest enhancements. I usually develop against the more stable 0.2.x versions of Node.js while spiking the new stuff in the latest 0.3.x versions in order to get a feel for what’s coming. This means that we need to manage multiple versions of Node.js on our development box while being able to easily switch between the different binaries.

There are a couple of alternative solutions out there that deal with the issue of version management for Node.js. First, there’s nvm which is a simple bash script for managing multiple versions of Node.js. Then there’s also nave which is another shell script that basically does the same thing. But the solution that I’m currently using and going to discuss in this blog post is a tool called n.

We can very easily install n with npm by issuing the following command:

npm install n

Installing a particular version of Node.js is as easy as executing n and specifying the version number:

n 0.2.6

n fetches the source code for the requested version and automatically configures and compiles it in order to produce the binaries. The version installed this way automatically becomes the active binary. If we just want to install the latest version of Node.js, we can also use the following command:

n latest

We can easily check which versions of Node.js we have installed on our development machine by executing the following command:

n

which outputs something like this:

  0.2.3
o 0.2.6
  0.3.7

The ‘o’ prefix marks the currently active version.

You can also check the current version of node.js by executing the following command:

node --version

Running a Node.js application is still done using the same command as before: 

node server.js

Changing the current active version of Node.js is as simple as executing the same command that we used for installing that particular version:

n 0.2.3

This switches the used version of Node.js back to v0.2.3. We can also run an application using a particular version, overriding the current active version: 

n use 0.2.6 server.js

Removing a particular version is pretty easy as well. Just use the following command:

n rm 0.2.3

That’s it! Being able to quickly switch between the many different versions of Node.js has been a huge time-saver for me so far.  

Until next time.  

Saturday, February 05, 2011

Taking Baby Steps with Node.js – CommonJS and Creating Custom Modules

Here are the links to the previous installments:

  1. Introduction
  2. Threads vs. Events
  3. Using Non-Standard Modules
  4. Debugging with node-inspector

In a previous blog post, I already discussed how to make use of the built-in and third-party modules inside a Node.js application. For this post I’m going to briefly touch on CommonJS and show how to create custom modules.

Most programming languages out there (Java, Ruby, Python, C#, C++, etc.) come with some sort of standard library that provides developers with an API for building all kinds of applications on a variety of platforms. These libraries and/or frameworks provide basic functionality for accessing the file system, doing network I/O, parsing command-line arguments, and so on. Unfortunately, JavaScript doesn’t come with such a standard library. This is something the CommonJS initiative is trying to fix. CommonJS goes beyond the standard JavaScript specification by defining a common set of APIs for building a broad range of systems like command-line, server-side and GUI applications. If you’re interested, you can have a look at the current specifications and proposals in development.

What does this have to do with Node.js? Well, Node.js implements the CommonJS module specification for its built-in modules. Knowing how to create your own custom modules is not only very important for structuring your Node.js applications, but also recommended for portability with other CommonJS-compliant frameworks like narwhal. Creating a custom module is very easy. We just have to provide a JavaScript file, name it after the module that we want to create and add the necessary JavaScript code. That’s it!

Let’s look at a very simple example of how to build such a custom module using the CommonJS system provided by Node.js. Suppose that we want to create a module named podcast that exposes functionality for downloading .mp3 files. As mentioned earlier, we have to create a JavaScript file named podcast.js and add the necessary JavaScript code that provides the download functionality.

exports.download = function(episode) {
    console.log('Downloading: ' + episode);
}

A slight variation on this, which I see used quite often in both the built-in and third-party modules, looks like the following:

var podcast = exports;

podcast.download = function(episode) {
    console.log('Downloading: ' + episode);
}

In order to use this exciting new piece of code, we just have to add a require statement to our client code and we’re good to go:

var podcast = require("./podcast");
podcast.download('Astronomy podcast #89');

This might look very simple and easy but there’s plenty going on behind the scenes. First of all, Node.js ensures that the content of the JavaScript file that makes up our custom module gets loaded into its own scope. By doing this, Node.js automatically prevents naming collisions with other modules. When loading our custom module, Node.js provides a number of objects like module, exports and require. These are also called pseudo globals.

We use the exports object for exposing public members to external code. You can consider exports to be the this reference for our module. This also means that we can add regular JavaScript functions to our custom module without putting them in the global namespace; as long as we don’t add them to the exports object, external code cannot call these ‘private’ functions.

var podcast = exports;

podcast.download = function(episode) {
    downloadDataFor(episode);
}

function downloadDataFor(episode) {
    console.log('Downloading: ' + episode);    
}

The external code is not able to call the downloadDataFor function which is only available inside our custom module:

var podcast = require("./podcast");
podcast.download('Astronomy podcast #89');

console.log(typeof podcast.download);            // function
console.log(typeof podcast.downloadDataFor);    // undefined
console.log(typeof downloadDataFor);            // undefined

I just mentioned that a module is provided with a couple of pseudo-global objects. One of these is named module, which provides a reference to the current instance of the module. This means that we are able to replace the reference held by the exports property of the current module in order to provide a single export per JavaScript file. Let’s talk code:

function Podcast() {
    if(false === (this instanceof Podcast)) {
        return new Podcast();
    }
}

Podcast.prototype.download = function(episode) {
    console.log('Downloading: ' + episode);    
}

module.exports = Podcast;

We now have to restructure the client code as well:

var Podcast = require("./podcast");

var astronomyCast = new Podcast();
astronomyCast.download('Astronomy podcast #89');

The advantage of this approach is that both the code of our custom module and the client code are organized in the same way we would write regular JavaScript objects.

I can only hope that the CommonJS initiative succeeds in its goals by providing a common set of API’s that can be used for building a wide range of applications using JavaScript. Using modules interchangeably on all kinds of platforms still sounds very appealing :-).

Until next time.