Saturday, May 28, 2011

A Quartet of Book Reviews

A couple of months ago, my daily commute to and from work nearly tripled in length. I also decided to travel by train instead of using my car (which largely explains the increase in travelling time). So, trying to make the best of things, I decided to catch up on my reading backlog. For this blog post, I’m going to briefly discuss the books I’ve read so far.

1. The 48 Laws of Power

The 48 Laws of Power, written by Robert Greene, totally blew me away. I can’t remember where I picked it up or who brought this book to my attention, but kudos anyway. The basic premise of this book is that some strategies keep you in control, while others diminish your influence. This book isn’t so much about gaining power over others as it is about preventing others from manipulating or gaining control over you or your close environment. Every law is illustrated with real-life stories about historical figures and how their actions put them in a powerful position or how their mistakes drove them off a cliff.

I wish I had read this book many years ago, but better late than never, right? I’m definitely going to reread several portions of this book in the near future to get a better grasp of some of these laws. I have to admit that the book can be a bit much the first time you read it.

2. Dinosaur Brains – Dealing with All Those Impossible People at Work


This book has literally been collecting dust on my bookshelf for several years, so reading it was well overdue. Dinosaur Brains, written by Albert J. Bernstein and Sydney Craft Rozen, is all about how we sometimes react and behave purely on our primal instincts, when the cortex of our brain loses control and falls back on lizard logic. Funny enough, this book has taught me more about myself than about my current co-workers or former colleagues. The metaphor of a prehistoric creature illustrates how our brain triggers a fight, flight or fright response that is sometimes well beyond our control. But this isn’t a lost cause either. The book not only lays out the rules of the dinosaur brain, but also contains a lot of advice on how to use this knowledge to your own advantage. So, overall a very interesting read.


3. Drive - The Surprising Truth About What Motivates Us

I picked up Drive – The Surprising Truth About What Motivates Us after I watched this inspiring talk by Daniel Pink, which I briefly mentioned in this blog post. The author makes a very strong case for what he calls Motivation 3.0, which is based on three concepts:

  1. Autonomy – based on the principles of self-direction, this lets knowledge workers decide how, when and where to do their job.
  2. Mastery – getting people into a state of “flow” by letting them work on stuff they’re passionate about. “Flow” is a state of mind in which time seems to pass by without you noticing. The author clearly explains why a restrictive working environment prevents people from getting into their “flow”.
  3. Purpose – the belief that there’s more to work than just making money. Knowing that there’s meaning in what you do day in and day out fuels our intrinsic motivation.

I also couldn’t help but notice that this book tends to lay out the basics of systems thinking, which is a topic that I definitely want to learn more about.

Do yourself a favor: pick up and read this book or, at the very least, watch this excellent talk. Buy this book as a gift for your boss and tell him that all the cool managers are reading it ;-).

4. Pragmatic Thinking and Learning – Refactor Your Wetware

Pragmatic Thinking and Learning, written by Andy Hunt, is another book that had been on my reading list for quite some time. As he is one of the co-authors of The Pragmatic Programmer, I had some very high expectations, and I must say that those expectations were only partially fulfilled. The first few chapters are a reiteration of Daniel Pink’s book A Whole New Mind, applied to the world of software development. There was even some content in there from the book Dinosaur Brains, which I had just finished reading at the time. But if you haven’t read these two books before, then these first chapters will definitely be an eye-opener.

Nonetheless, I did manage to pick up a few neat ideas about learning in general and how to apply them to my own learning activities. But I couldn’t shake the feeling that most of the content was a reiteration of what I had already picked up from other books and articles.

Don’t get me wrong here. This is truly a great book with a lot of gems in it. If you’re not already familiar with this material, then it will definitely float your boat and I would highly recommend it.

So, happy reading and until next time.

Tuesday, May 10, 2011

Taking Baby Steps with Node.js – npm 1.0

< The list of previous installments can be found here. >

Isaac recently released version 1.0 of npm, the package manager for Node.js. I’ve been using npm since very early on in my fiddling with Node.js, and I’ve come to rely on it quite heavily. After I upgraded to one of the release candidates, I noticed a big improvement in how and where packages are installed.

With the old version that I was using (0.3.15), all packages were installed globally. You could install multiple versions of the same package, but this meant we also had to take the version into account when loading a particular module in our application:

var connect = require('connect@0.2.7');

Although this worked great, it’s still my humble opinion that all third-party modules should live right alongside the source code of the application that I’m building. With the npm 1.0 release, this has now become the default behavior.

Installing npm 1.0 is as simple as executing the following in the command-line:

curl http://npmjs.org/install.sh | sh

Executing this command asks whether any old versions of npm should be removed. If you don’t want to be prompted, you can execute the following instead:

curl http://npmjs.org/install.sh | clean=yes sh

In a previous post, I discussed a new convention introduced in version 0.4.x of Node.js for loading third-party modules from a folder named node_modules, typically created in the root folder of a project. When installing a package, npm now adheres to this convention by installing the package locally into this node_modules folder.

Say, for example, that we want to use Socket.IO in our application. We just need to run the following command from the root folder of our application:

npm install socket.io

This will locally install the requested third-party module into ./node_modules/socket.io. If a particular package includes binaries, these will go into ./node_modules/.bin/. Now we can simply load the installed module without specifying a version number:

var socketIO = require('socket.io');

So what do we do if we want to install packages that are typically used from the shell rather than by the source code of the application, like n, node-inspector or nodemon? Well, we can still install them globally using the -g command-line switch:

npm install -g n
npm install -g node-inspector
npm install -g nodemon

This ability to install packages either locally or globally is probably the most obvious new feature shipped with the 1.0 release. There are several other new features and enhancements as well, but I haven’t run into those (yet).

Apparently, there will be no new major features or architectural changes for quite some time, which gives us plenty of time to get to the bottom of all the new capabilities of npm. The local/global installation of packages alone makes upgrading to the 1.0 release worthwhile.
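Although I haven’t used this for the examples in this series, npm can also read a project’s dependencies from a package.json file in the root folder, so that running a plain npm install pulls everything into ./node_modules in one go. A minimal sketch (the module names and version ranges below are just illustrative):

```json
{
  "name": "my-chat-app",
  "version": "0.1.0",
  "dependencies": {
    "socket.io": "0.6.x",
    "node-static": "0.5.x"
  }
}
```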

Until next time.

Wednesday, May 04, 2011

Taking Baby Steps with Node.js – WebSockets

Here are the links to the previous installments:

  1. Introduction
  2. Threads vs. Events
  3. Using Non-Standard Modules
  4. Debugging with node-inspector
  5. CommonJS and Creating Custom Modules
  6. Node Version Management with n
  7. Implementing Events
  8. BDD Style Unit Tests with Jasmine-Node Sprinkled With Some Should
  9. “node_modules” Folders
  10. Pumping Data Between Streams
  11. Some Node.js Goodies
  12. The Towering Inferno
  13. Creating TCP Servers

HTML5 is definitely one of the hot topics du jour. Several technologies are part of the upcoming HTML5 standard, and one of the most exciting additions is WebSockets. This makes it quite a popular topic in Node.js circles as well, as the technology tends to resonate with the needs of real-time web applications. I admit that I’ve been putting off this topic in this blog series for way too long. But hey, better late than never, right?

In short, WebSockets allow for bidirectional, cross-domain communication between the client (browser) and the server. When building web applications that need to frequently update the data they’re displaying, the traditional approach is to make use of long-polling and Ajax. With WebSockets we can invert this model: instead of letting the client drive the communication, the server pushes data to the client as soon as updates become available. WebSockets also reduce overhead compared to HTTP/Ajax because fewer headers have to travel back and forth, which makes this kind of real-time communication more efficient. But, as always, there’s a major downside as well. Because the HTML5 standard is relatively new, only the most recent versions of the various browsers support WebSockets out-of-the-box. With WebSockets unavailable in many browsers, it seems we’re back to square one.

Enter Socket.IO.

Socket.IO provides a unified WebSocket API that works in every browser (yes, even IE 6) by supporting multiple communication transports under the hood. You can consider this library to be the jQuery of WebSockets ;-). The API provided by Socket.IO closely resembles the native WebSocket API of HTML5. The library consists of two parts, one for the client and one for the server. But enough of this blabbering. Let’s look at some code.

The simplest and most common example out there is that of a chat server. We need an HTTP server that returns a single HTML page where users can enter their name and start chatting. Sounds simple enough.

In order to serve a static HTML page and the client-side Socket.IO JavaScript file, we use a library called node-static. This module serves static files over HTTP and provides built-in caching.

This is how to set it up:

var http = require('http');
var static = require('node-static');

var clientFiles = new static.Server('./client');

var httpServer = http.createServer(function(request, response) {
    request.addListener('end', function () {
        clientFiles.serve(request, response);
    });
});
httpServer.listen(2000);

The static files that are meant to be sent to the browser live in a separate directory named ‘client’ in the root folder of our project. We first create a file server instance, specifying the location of our static files, and then simply serve the files as they are requested.

Next, let’s have a look at the content of the one and only HTML page that we’re providing for our simple chat application.

<html>
    <head>
        <title>Simple chat o' rama</title>
    </head>
    <body>
        <div>
            <p>
                <label for="messageText">Message</label>
                <input type="text" id="messageText"/>
            </p>
            <p>
                <button id="sendButton">Send</button>
            </p>
        </div>    
        <div>
            <ul id="messages">
            </ul>
        </div>
        <script type="text/javascript" 
                src="http://localhost:2000/socket.io.js"></script>
        <script type="text/javascript" 
                src="http://code.jquery.com/jquery-1.5.2.js"></script>
        
        <script type="text/javascript">
            $(document).ready(function() {
                var webSocket = new io.Socket('localhost', { port: 2000 });
                webSocket.connect();
                
                webSocket.on('connect', function() {
                    $('#messages').append('<li>Connected to the server.</li>');            
                });
                
                webSocket.on('message', function(message) {    
                    $('#messages').append('<li>' + message + '</li>');        
                });
                
                webSocket.on('disconnect', function() {
                    $('#messages').append('<li>Disconnected from the server.</li>');
                });
                
                $('#sendButton').bind('click', function() {
                    var message = $('#messageText').val();
                    webSocket.send(message);
                    $('#messageText').val('');
                });    
            });
        </script>
    </body>
</html>

In order to establish communication with the server, we create a new socket and call the connect() method. Next, we register some event handlers and log some text to the screen in order to easily follow what is going on. We also use a little bit of jQuery to hook up to the click event of the send button. Here we use the socket that we created to send the message to the server. Notice how closely the API provided by Socket.IO resembles the native WebSocket API put forward by the current HTML5 specification.
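For comparison, here is a rough sketch of the same client-side event handling written against the native HTML5 WebSocket API (assuming a server that speaks the raw WebSocket protocol; the wireUp helper is just something I made up so the handlers can be exercised outside a browser):

```javascript
// Attaches the same three handlers to any WebSocket-like object.
function wireUp(webSocket, log) {
    webSocket.onopen = function() {
        log('Connected to the server.');
    };
    webSocket.onmessage = function(event) {
        // The native API hands us an event object; the payload is in event.data.
        log('Received: ' + event.data);
    };
    webSocket.onclose = function() {
        log('Disconnected from the server.');
    };
}

// In a browser, the usage would then be:
//   var webSocket = new WebSocket('ws://localhost:2000/');
//   wireUp(webSocket, console.log);
//   webSocket.send('Hello');
```

The shape is nearly identical to the Socket.IO version: open/connect, message and close/disconnect events, plus a send() method.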

And finally, here’s the code for integrating Socket.IO on the server.

var socketIO = require('socket.io');

var webSocket = socketIO.listen(httpServer);
webSocket.on('connection', function(client) {
    client.send('Please enter a user name ...');
    
    var userName;
    client.on('message', function(message) {
        if(!userName) {
            userName = message;
            webSocket.broadcast(message + ' has entered the zone.');
            return;
        }
        
        var broadcastMessage = userName + ': ' + message;
        webSocket.broadcast(broadcastMessage);    
    });
    
    client.on('disconnect', function() {            
        var broadcastMessage = userName + ' has left the zone.';
        webSocket.broadcast(broadcastMessage);    
    });
});

We first ask for the name of the user and, for simplicity’s sake, we assume that the first message received from a particular user is in fact the user name. From then on, every message that we receive from a user is broadcast to all other connections.
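The first-message-is-the-user-name convention can be isolated in a small helper function, shown here as a sketch (this helper is my own refactoring, not part of the code above):

```javascript
// Given the current user name (undefined until known) and an incoming
// message, returns the updated user name and the text to broadcast.
function handleMessage(userName, message) {
    if (!userName) {
        // First message from this client: treat it as the user name.
        return { userName: message, broadcast: message + ' has entered the zone.' };
    }
    return { userName: userName, broadcast: userName + ': ' + message };
}
```

The 'message' handler in the server code above could then simply call this helper and pass the result to webSocket.broadcast().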

Seeing this in action is pretty much what we expect.


I know that a real-time chat application is the canonical example for demonstrating Socket.IO. But think of the real-world possibilities here. Suppose we have a web application where a user A changes some data and sends these changes to the server. These changes can then automatically be updated/merged while another user B is looking at the same data, which reduces potential concurrency issues when user B wants to change something as well. This greatly enhances the end-user experience. And there are a ton of other scenarios where this technology can make a difference.

The only aspect of WebSockets that somewhat troubles me is security. This is also the very reason why Mozilla disabled WebSockets in Firefox 4. But don’t dismiss the technology entirely, as improvements will surely come soon.

I want to round off this post by pointing you to some other libraries that can be used to accomplish real-time communication between client and server. The first is node.ws.js, a minimal WebSocket library for Node.js. There’s also Faye, an easy-to-use publish/subscribe messaging system based on the Bayeux protocol.

Until next time and happy messaging!