Thursday, March 24, 2011

Basic JavaScript Part 12: Function Hoisting

Here are the links to the previous installments:

  1. Functions
  2. Objects
  3. Prototypes
  4. Enforcing New on Constructor Functions
  5. Hoisting
  6. Automatic Semicolon Insertion
  7. Static Properties and Methods
  8. Namespaces
  9. Reusing Methods of Other Objects
  10. The Module Pattern
  11. Functional Initialization

In a previous post I already discussed the phenomenon of hoisting in JavaScript. In that post I showed the effects of variable hoisting and why it’s important to declare all variables at the top of a function body. For this post I want to briefly focus on function hoisting. Let’s start off with an example to illustrate this concept.

functionExpression();        // throws a TypeError, functionExpression is still undefined
functionDeclaration();        // "Function declaration called."        

var functionExpression = function() {
    console.log('Function expression called.');
};

functionExpression();        // "Function expression called."
functionDeclaration();        // "Function declaration called."

function functionDeclaration() {
    console.log('Function declaration called.');
}

functionExpression();        // "Function expression called."
functionDeclaration();        // "Function declaration called."

In order to understand what’s going on here, we first need to understand the distinction between a function expression and a function declaration. As its name implies, a function expression defines a function as part of an expression (in this case assigning it to a variable). These kinds of functions can either be anonymous or they can have a name.

// 
// Anonymous function expression
//
var functionExpression = function() {
    console.log('Function expression called.');
};

// 
// Named function expression
//
var functionExpression = function myFunctionExpression() {
    console.log('Function expression called.');
};
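
One subtlety worth pointing out: in spec-compliant engines, the name of a named function expression is only in scope inside the function body itself, which makes it mostly useful for recursion and for readable stack traces. A quick sketch:

var factorial = function myFactorial(n) {
    return n <= 1 ? 1 : n * myFactorial(n - 1);
};

factorial(4);          // 24
myFactorial(4);        // ReferenceError: myFactorial is not defined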

On the other hand, a function declaration is always defined as a named function without being part of any expression.

So, for the example shown earlier, the function expression can only be called after it has been defined, while the function declaration can be executed both before and after its definition. Let’s look at how JavaScript actually interprets this code in order to explain why it behaves that way.

var functionExpression,        // undefined
    functionDeclaration = function() {
        console.log('Function declaration called.');
    };

functionExpression();        // still throws a TypeError
functionDeclaration();        // "Function declaration called."

// The assignment expression is still left at the original location
// although the variable declaration has been moved to the top. 
functionExpression = function() {
    console.log('Function expression called.');
};

functionExpression();        // "Function expression called."
functionDeclaration();        // "Function declaration called."

// Here we originally defined our function declaration
// which has been completely moved to the top.

functionExpression();        // "Function expression called."
functionDeclaration();        // "Function declaration called."

JavaScript turns our function declaration into a function expression and hoists it to the top. Here we see the same thing happening to our function expression as I explained in the previous post on variable hoisting. This also explains why the first call of our function expression results in an error being thrown because the variable is undefined.  

So basically, JavaScript applies different rules when it comes to function hoisting depending on whether you have a function expression or a function declaration. A function declaration is fully hoisted while a function expression follows the same rules as variable hoisting. It definitely took me a while to wrap my head around this.
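
Note that hoisting isn’t limited to the global scope; the same rules apply within each function body. A minimal sketch:

function init() {
    helper();        // "Helping out." - the declaration below is hoisted
                     // to the top of init's body

    function helper() {
        console.log('Helping out.');
    }
}

init();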

Until next time.

Tuesday, March 22, 2011

Book Review: C# in Depth–2nd Edition

I really learned a lot from reading the first edition of C# in Depth, so I was very glad that I finally found some time to make my way through the second edition. The content on C# 2.0 and 3.0 was only slightly revised compared to the first edition. Never change a winning combination :-).

But the final part of the book was the one that interested me most. This part discusses the features provided by the C# 4.0 compiler like optional parameters, named parameters, covariance/contravariance and, last but not least, the dynamic keyword that got its very own chapter. The last chapter of this final part is completely devoted to Code Contracts, which is not a language feature. I personally don’t like the way that Code Contracts are currently implemented by the .NET framework. I do hope that these concepts become part of the C# language itself one day, which would be a major improvement when it comes to enforcing such contracts. Until that day, I think I’m going to stick with my own implementation.

Anyway, I still think that the title of this book is spot on. If you want to bring your C# skills to the next level, then this book will be your guide. It is filled with knowledge that only a true C# language expert can deliver. Definitely worth the time and effort.

Happy reading.

Thursday, March 17, 2011

Taking Baby Steps with Node.js – “node_modules” Folders

Here are the links to the previous installments:

  1. Introduction
  2. Threads vs. Events
  3. Using Non-Standard Modules
  4. Debugging with node-inspector
  5. CommonJS and Creating Custom Modules
  6. Node Version Management with n
  7. Taking Baby Steps with Node.js – Implementing Events
  8. BDD Style Unit Tests with Jasmine-Node Sprinkled With Some Should

For this post I want to quickly share a nice addition to Node.js that has been available since version 0.4.x. In the previous post, I provided some example code of BDD style unit tests that make use of the should.js library, which enables us to use BDD style assertions. We loaded the ‘should’ module just as if it were a native module:

var should = require('should');

In order to accomplish the same using version 0.2.x of Node.js, we needed to either use a relative path prefix:

var should = require('./../dependencies/should');

or add our dependencies folder to the require.paths:

require.paths.push(__dirname + '/../dependencies/');
var should = require('should');

When you omit the “/”, “../” or “./” prefix for loading a non-native module using version 0.4.x, Node.js automatically searches the directory of the current module for a folder named “node_modules” and tries to load the requested module from that location on disk. If it cannot be found there, it goes up one level and repeats the same process until the module is found or the root folder is reached.
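
To make this more concrete, here’s a sketch of the lookup order for a module located at /home/user/project/specs/spec.js (a made-up path) that calls require('should'):

/home/user/project/specs/node_modules/should
/home/user/project/node_modules/should
/home/user/node_modules/should
/home/node_modules/should
/node_modules/should

The first location that contains the module wins.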

For that reason I renamed all my “dependencies” folders to “node_modules” so that I’m able to incorporate third-party modules more easily.

This might not sound like a big deal, and it certainly isn’t, but this small new feature already reduced a good number of WTF’s on my end ;-). 

Until next time.

Tuesday, March 15, 2011

Git-Tfs – Where Have You Been All My Life

My very first encounter with a version control system was CVS. I used this tool for many years (late 90’s, early 2000’s), learning a lot of best practices about source control usage in the process. But there was a lot of friction as well. That is why I switched to Subversion many years ago and I’ve been pretty happy with it ever since. Sure, it has its quirks, but at the very least it is a lot better than Visual SourceSafe, which was an established alternative back in the day, especially in the Microsoft space.

Fast forward a couple of years. With a few exceptions, Visual SourceSafe has been replaced with TFS Version Control in those run-of-the-mill enterprise corporations. This was quite an improvement, but a lot of the friction remained, at least in my humble opinion. Just as with Visual SourceSafe, TFS is being forced upon entire flocks of developers, mostly by management and/or non-coding architects. I still find this quite odd, as managers never tend to use TFS themselves; it is generally considered a developer tool.

Anyway, this is usually the part where one starts writing down a five page rant against TFS. But I’m not going to. Why? Because I decided to look for a good solution instead and, holy sweet batman, I found one. A while back I decided to learn more about Git. It’s definitely not a silver bullet either, but I was so impressed with all its capabilities that I moved all the code for my home projects from Subversion to Git. Still, the largest friction remained: I was forced to use TFS day in and day out. But a couple of weeks ago I ran into this post from Richard Banks where he discussed a plugin for Git named git-tfs. This extension is basically a two-way bridge between TFS and Git that lets you treat TFS source control as a remote repository. The way Git works is that the entire repository is contained in a local .git folder. This way it’s able to play nicely with a TFS repository as they don’t collide.

Setting up git-tfs is quite easy. Just download the latest version, put it in a directory and add the location to the PATH environment variable. Now you’re good to go.
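
For example, on Windows that could look like this (C:\Tools\GitTfs is just a made-up install location):

set PATH=%PATH%;C:\Tools\GitTfs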

To get the source from a TFS repository you have to execute the ‘git tfs clone’ command.

git tfs clone http://some_tfs_server:8080 $/some_team_project/trunk

Note that this command fetches the entire history by retrieving all change sets. If you’re anxious to get started (as I first was ;-) ), then there’s also the ‘git tfs quick-clone’ command that skips the history and just gets the latest version.

git tfs quick-clone http://some_tfs_server:8080 $/some_team_project/trunk

All source files that are fetched from a TFS repository are also no longer read-only, which is quite nice compared to how TFS source control does things. Now that you have all the source files, you can start by adding a .gitignore file and follow the development workflow that you would normally use with Git.
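
A minimal .gitignore for a typical Visual Studio solution might look something like this (just a starting point, adjust to taste):

bin/
obj/
*.suo
*.user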

Suppose that you completed a new feature and you want to push those changes back into TFS. This can be done using the ‘git tfs shelve’ command.

git tfs shelve user_story_x

This will create a shelveset that contains the changes, which you can then unshelve and check in as you would normally do with TFS source control. The latest release of git-tfs even lets you check in your source files directly without needing to shelve them first. This can be achieved by using the ‘git tfs ct’ command. Note that this only works with TFS 2010.
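
Just like the other commands, it’s a one-liner:

git tfs ct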

Suppose that another developer on your team checked in some code in TFS that you want to pull into your local working copy. For this you can use the ‘git tfs pull’ command that first fetches the latest change sets and merges them with your version of the code.    
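
In its simplest form:

git tfs pull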

One thing that you also need to take into account is the TFS source control bindings. These are stored directly in the solution file (horrible, just horrible). When opening the solution in Visual Studio, I just choose to work offline and reestablish the bindings when I’m back in TFS.

If you’re forced to use TFS source control and you’re fed up with it, then I strongly advise you to learn more about Git, install git-tfs and be merry. Kudos to all the contributors of this wonderful open-source project. You guys are truly amazing!

I hope this helps.

Monday, March 07, 2011

Taking Baby Steps with Node.js – BDD Style Unit Tests with Jasmine-Node Sprinkled With Some Should

Here are the links to the previous installments:

  1. Introduction
  2. Threads vs. Events
  3. Using Non-Standard Modules
  4. Debugging with node-inspector
  5. CommonJS and Creating Custom Modules
  6. Node Version Management with n
  7. Taking Baby Steps with Node.js – Implementing Events

I probably don’t have to tell you for the umpteenth time about the importance of TDD and writing unit tests for your code. This is a non-negotiable discipline regardless of the platform or programming language you’re using. With JavaScript being a dynamic language, this becomes even more important because you don’t have a compiler to fall back on for the most basic sanity checks.

Some time ago, during my first explorations of JavaScript, I stumbled upon this simple BDD framework called jasmine. Using jasmine-node, this small specification framework can be made available for Node.js as well. For this blog post I’ll be showing some basic usage.

In order to install jasmine-node, you can either use npm

npm install jasmine-node

or use Git to get the latest version of the lib folder, which contains three JavaScript files.

Now we can start using jasmine-node. Let’s look at an example of a suite.

var Customer = require('domain').Customer,
   Order = require('domain').Order,
   OrderItem = require('domain').OrderItem,
   should = require('should');
  
describe('When making a regular customer preferred', function() {           
  
   var _order = new Order([ new OrderItem(12), new OrderItem(16) ]),
       _totalAmountWithoutDiscount = _order.getTotalAmount(),
       _customer = new Customer([ _order ]);
  
   _customer.makePreferred();
  
   it('should mark the customer as preferred', function() {
       _customer.isPreferred().should.be.true;
   });
  
   it('should apply a ten percent discount to all outstanding orders', function() {       
       _order.getTotalAmount().should.equal(_totalAmountWithoutDiscount * 0.9);
   });
});

Specifications are organized in suites. A suite is defined by providing a describe() function with a description. It’s also possible to nest suites, although I wouldn’t recommend that as it doesn’t work the way one might expect. In this example we set up a regular customer which we then turn into a preferred customer. A specification is defined by providing an it() function with a description. For the actual verifications I opted for should.js, which provides test-framework-agnostic BDD style assertions, instead of the matchers built into Jasmine.

// Built-in matchers
expect(_customer.isPreferred()).toBeTruthy();
expect(_order.getTotalAmount()).toEqual(_totalAmountWithoutDiscount * 0.9);

// Should.js
_customer.isPreferred().should.be.true;
_order.getTotalAmount().should.equal(_totalAmountWithoutDiscount * 0.9);

I really like the syntax provided by should.js, but that’s just my personal opinion of course.

Note that suites are just plain old JavaScript functions that are executed only once, which means that our setup code is also executed only once. I particularly like this as it prevents context betrayal and forces the specifications to just observe the outcome.

However, it’s also possible to provide a function that runs before each specification.

var Customer = require('domain').Customer,
   Order = require('domain').Order,
   OrderItem = require('domain').OrderItem,
   should = require('should');
  
describe('When making a regular customer preferred', function() {           
   var _order, _totalAmountWithoutDiscount, _customer;
  
   beforeEach(function() {
        _order = new Order([ new OrderItem(12), new OrderItem(16) ]);
        _totalAmountWithoutDiscount = _order.getTotalAmount();
        _customer = new Customer([ _order ]);
  
       _customer.makePreferred();   
   });
  
   it('should mark the customer as preferred', function() {
       _customer.isPreferred().should.be.true;
   });
  
   it('should apply a ten percent discount to all outstanding orders', function() {       
       _order.getTotalAmount().should.equal(_totalAmountWithoutDiscount * 0.9);
   });
});

Now we’ll need some plumbing in order to execute these specifications using Node.js. We need a small script that picks up all the specifications from a particular folder and feeds them to jasmine for execution.

var jasmine = require('jasmine-node');

// Make the jasmine functions (describe, it, beforeEach, ...) globally available.
for(var key in jasmine) {
    global[key] = jasmine[key];
}

var isVerbose = true;
var showColors = true;

process.argv.forEach(function(arg) {
    switch(arg) {
        case '--color': showColors = true; break;
        case '--noColor': showColors = false; break;
        case '--verbose': isVerbose = true; break;
    }
});

jasmine.executeSpecsInFolder(__dirname + '/specifications', function(runner, log) {
    if (runner.results().failedCount === 0) {
        process.exit(0);
    }
    else {
        process.exit(1);
    }
}, isVerbose, showColors);

All our specifications reside in the specifications folder and are executed by jasmine when we run this script (which we named specs.js).
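
Running the suite from the command line then looks like this (both flags are already the defaults in the script above):

node specs.js --verbose --color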

There you go. I’m also evaluating some other stuff regarding TDD and BDD for JavaScript and Node.js. I’ll be blogging about that as well in the near future.

Until next time.

Saturday, March 05, 2011

Basic JavaScript Part 11: Functional Initialization

Here are the links to the previous installments:

  1. Functions
  2. Objects
  3. Prototypes
  4. Enforcing New on Constructor Functions
  5. Hoisting
  6. Automatic Semicolon Insertion
  7. Static Properties and Methods
  8. Namespaces
  9. Reusing Methods of Other Objects
  10. The Module Pattern

I just want to quickly share some beautiful JavaScript code I picked up while watching the most excellent screencast 11 More Things I Learned from the jQuery Source by Paul Irish.

var base = document.getElementsByTagName('base')[0] || (function() {
    // Do some stuff
    return someElement.insertBefore(document.createElement('base'), someElement.firstChild);
})();

This single statement basically checks whether there’s a base tag somewhere in the DOM. If there is one, then the reference to the first such element is assigned to the base variable. If it’s not there, then a self-executing function inserts a new base tag into the DOM and returns the reference to the new element.
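
This idiom obviously isn’t limited to DOM lookups. A minimal sketch of the same default-initialization pattern, using made-up names:

function initSettings(providedSettings) {
    return providedSettings || (function() {
        // The fallback initialization only runs when no settings were passed in.
        return { retries: 3, timeout: 5000 };
    })();
}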

I don’t know about you, but I think this is pretty neat.