JavaScript Build Process – Rake Pipeline with MiniSpade

Back in the day, when we were writing simple jQuery plugins, we would drop 2 or 3 script tags in the markup and be off to the races. But today's front-end development environment is vastly different. Today's web applications are shipping more and more JS code to the client.

With this increasing responsibility, architecting and delivering front-end code is becoming far more challenging. Code still needs to be delivered in as few HTTP requests as possible. Ideally, a single JS file serving all our code is the best solution.

The problem for ambitious front-end projects starts here. If you are writing a considerable amount of JS, development cannot happen in a single file. Any serious software developer wants to decouple pieces of his software into logical modules (sitting in separate files).

If you are involved in such an application, you know what I am talking about. Your JS project has hundreds of files. Each view, controller, model and template is in its own file. There is probably a separate css file per logical module or view as well.

The benefits of such code separation are obvious: it's a much easier mental model to understand and develop upon.

But you can't deliver all your files separately. That would mean too many HTTP requests and would slow down the user experience of your web application. Optimally, the code needs to be delivered as a single file.

This brings in the requirement of a build process. For the last few years, people have been using build processes that deliver a concatenated version for production and all files individually for the development environment. This allows developers to debug code comfortably in dev and just flip a switch to deliver the production version.

However, if you are one of those unlucky ones who happens to be bitten by a bug in production that cannot be reproduced in the dev environment (dependency injection hell, minifier trouble and the like), you understand how important it is to reduce the differences between your dev and production environments.

Hence, an ideal build process should help in the following ways:

  1. It should allow us to write modular code in separate files.
  2. It should allow us to define dependencies of each module within itself. Defining them in a separate configuration file/makefile is not desirable.
  3. It should deliver all code minified and concatenated in a single file, but should provide a debugging environment where I can see my original setup of multiple files rather than a single giant file.

In this post, I will be talking about Rake Pipeline with web filters, a Ruby build tool that helps tackle these issues. You should start off by reading this post by Yehuda Katz; it should get you going in setting up rake-pipeline on your system. A few pointers though:

  1. Rake Pipeline will not work with Ruby 1.8. You should upgrade to at least Ruby 1.9.2.
  2. If you are upgrading Ruby, RVM is your best bet. It allows you to install multiple Ruby versions side by side. After you install Ruby 1.9.2 or higher, make sure you set it as your default Ruby binary.
  3. You should check out Katz's rake-pipeline-web-filters project. It contains all the filters that can be used as part of the pipeline and will help you understand what is available.

After you are done setting up rake-pipeline with web filters and are ready to use it as the build tool, the internetz will point you to check out the Ember Todos example app. It uses rake-pipeline to build an emberjs Todos app.

If you run it as explained, you will notice that it satisfies requirements 1 & 2 in its current form. However, requirement 3 is not delivered. In your web inspector, you see one file, app.js, which is the concatenated version of all JS files in the project. This is a deal breaker for the dev process, as mapping errors from a single file back to our various source files is going to become a nightmare.

To fix this, there are two changes that you need to make in the checked out version:

  1. Update the copy of minispade.js. The checked out copy is very old (I have opened an issue about this on Github). Github Link
  2. Update the Assetfile. Usage of the minispade filter is incomplete in the project's Assetfile. The minispade filter supports a "string" option. With this option set to true, the minispade filter appends a "sourceURL" comment at the end of each module. This is later picked up by Firebug or the Web Inspector to show your files individually instead of one single file (sketched below). From: Minispade Filter on Github
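
To make the mechanism concrete, here is a rough, hedged sketch of what the built app.js contains with string mode on (the module name and contents are hypothetical):

    // Each source file gets wrapped in a minispade.register call. With the
    // "string" option on, the module body is registered as a string that is
    // eval'd on demand, and the trailing sourceURL comment makes the
    // debugger list it as its own file:
    minispade.register('app/models/todo',
      "App.Todo = Ember.Object.extend({});\n//@ sourceURL=app/models/todo");

    // A module declares its dependencies inside itself, right where it
    // needs them:
    minispade.require('app/models/todo');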

With these changes, you are set. If you rebuild the Todos project, you will notice that the web inspector now shows individual JS files in the “Sources” panel. You can check the “Elements” panel to reassure yourself that we are still injecting only one concatenated file. This delivers the third requirement too.

I am really enjoying building my JS apps with rake-pipeline now. Not only does it fulfill my 3 most important requirements above, it has a few other nifty things going on:

  1. It runs a local server on port 9292 for you. No more MAMP!
  2. It gives you a nice and consistent file structure.
  3. You simply make your changes and refresh your browser. Rake pipeline rebuilds your code. No need to stop the server and build again.
  4. The minispade filter is just 50 lines of code. Also, it follows the approach of “download all at once” but “eval on demand”. This, in my opinion, is much better than dynamic script injection, i.e. download+eval on demand, which can suffer from a bad user experience on those first clicks.

If you haven’t checked out Rake Pipeline, I would seriously recommend it. In my book, it is the best build process out there so far.

03. October 2012 by Rajat
Categories: Uncategorized | 1 comment

Promises in JavaScript

Writing code in an async programming framework like node.js becomes super complex and unbearably ugly super fast. You are dealing with synchronous and asynchronous functions and mixing them together can get really tricky. That is why we have Control Flow.

From Wikipedia,

In computer science, control flow (or alternatively, flow of control) refers to the order in which the individual statements, instructions, or function calls of an imperative or a declarative program are executed or evaluated.

Basically, control flow libraries allow you to write non-blocking code that executes in an asynchronous fashion without writing too much glue or plumbing code. There is a whole section on control-flow modules in the node.js module registry to help you unwind the spaghetti of callbacks that you will quickly land yourself in while dealing with async calls in node.js.

Control flow as a pattern can be implemented via several approaches, and one such approach is called Promises.

First things first: Promises is not a new language feature of JavaScript. Nothing has fundamentally changed in the language itself to support Promises. They could easily have been implemented 5 years back as well (and probably were). Promises is a pattern to solve the traditional callback hell problem in async land. There are other similar approaches to solve the same problem as well.

People are talking about Promises and other similar patterns now for two reasons, in my opinion:

  1. 5 years back, the web was much simpler. Complex, JavaScript-based, front-end-heavy web apps dealing with several asynchronous calls simultaneously were few and far between. In many ways, there was no general need for a pattern like Promises.
  2. The second reason is node.js. Thanks to node.js, people are now writing all their backend code in an async fashion: backend code where all the heavy lifting is done to generate the right response from a multitude of data sources. All these data sources are queried asynchronously in node.js, and there is an urgent need for control flow to manage the concurrency.

Because of the above two reasons, a lot of frameworks have started providing the concept of Promises out of the box. CommonJS has realized that the pattern needs to be standardized across these various frameworks and now has a spec: http://wiki.commonjs.org/wiki/Promises

JQuery 1.5 introduced the Deferred object, which is another name for the Promises pattern. It is based on the Promises/A CommonJS spec.

Ok, so now that it's clear that Promises is a pattern, what exactly is it and how is it different from other traditional approaches?

From Wikipedia,

In computer science, future, promise, and delay refer to constructs used for synchronization in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially not known, usually because the computation of its value has not yet completed.

Yep, that’s nice in theory but in practice, this is what changes:
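
For illustration, a hedged sketch of the same call in both styles (getUser, handleError and render are hypothetical):

    // Traditional callback style:
    getUser(userId, function (err, user) {
      if (err) { return handleError(err); }
      render(user);
    });

    // Promises style:
    getUser(userId).then(render, handleError);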

Looks like almost the same thing. In fact, it looks like Promises can hide the true nature of a function. With Promises, it can be super hard to tell if a call is synchronous or asynchronous. I say so because this is how the JSDoc for your async function would change.
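
From something like this (a hypothetical sketch):

    /**
     * Fetch a user record.
     * @param {String} userId
     * @param {Function} callback - invoked with (err, user) when the
     *   asynchronous call completes
     */
    function getUser(userId, callback) { /* ... */ }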

to
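
    // the hypothetical promise-returning variant of the same function
    /**
     * Fetch a user record.
     * @param {String} userId
     * @returns {Promise} a promise that resolves with the user record
     */
    function getUser(userId) { /* ... */ }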

Some people will argue that the second style is cleaner. I am more in favour of the first style on this one.

If you are a small scrappy startup with a fast-growing codebase, it's really hard to keep proper documentation. In such scenarios where proper documentation is missing, Promises are a deal breaker for me, as more than once you will have to look inside a function just to understand how to consume it and whether it's asynchronous or not. With the first style, the function definition is very explicit and usage doesn't rely heavily on the documentation. If there is a callback, it most likely is asynchronous.

Having said that, there are some pros in the Promises approach.

The first one that is usually talked about is how Promises are good for dealing with a multitude of async/sync calls. Taming multiple async calls becomes much easier with Promises.

Imagine a typical scenario where one has to perform something only after a list of tasks is done. These tasks can be either asynchronous or synchronous in nature. With Promises, one can write code that calls these synchronous and asynchronous functions identically, without getting into the nested callback hell that might otherwise happen.
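
A hedged sketch of what that can look like with JQuery 1.5+'s $.when (the task names are hypothetical):

    function prepareDashboard() {
      var userPromise = fetchUser();   // async: returns a promise
      var prefs = readCachedPrefs();   // sync: returns a plain value

      // $.when treats plain values as already-resolved promises, so the
      // code does not have to distinguish sync tasks from async ones.
      return $.when(userPromise, prefs);
    }

    prepareDashboard().done(function (user, prefs) {
      // runs once every task has completed
    });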

This function returns a promise based on the results of a bunch of tasks. These tasks are a mix of async and sync calls. Notice how the code does not have to distinguish between those calls. This is a huge benefit offered by most Promises implementations.

Typically, control-flow libraries following other patterns provide this benefit too. An example is the async module for node.js. After all, this is the entire reason behind control flow. The difference between Promises-based implementations and callback-based implementations is subtle and more a matter of choice than anything else.

Another advantage attributed to Promises-based implementations is how helpful they are in attaching multiple callbacks to an async operation. With Promises, you can attach listeners to an async operation via done even after the async call has completed. If attached after completion, the callback is immediately invoked with the response.

Traditionally, we have been solving this problem using Events. An event is dispatched when an async task completes, and all those who care can subscribe to the event beforehand and execute on it when the event happens. The typical problem with Events, though, is that you have to subscribe early or you can miss an event.
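
A hedged sketch of the event-based way (loadData and render are hypothetical):

    var EventEmitter = require('events').EventEmitter;
    var emitter = new EventEmitter();

    // subscribers must register BEFORE the event fires
    emitter.on('data:ready', render);

    loadData(function (response) {
      emitter.emit('data:ready', response);
    });

    // a listener attached after 'data:ready' has fired never runs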

With Promises, you can attach those callbacks at any point in time, without caring whether the call is already complete or not.
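
The same idea sketched with JQuery's jqXHR, which doubles as a promise (render and logResponse are hypothetical):

    var request = $.ajax('/api/data');

    request.done(render); // attached before completion: runs when it finishes

    setTimeout(function () {
      // attached long after completion: invoked immediately with the
      // cached response, no early subscription required
      request.done(logResponse);
    }, 60000);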

The second gist is definitely cleaner and helps in structuring your code better. You don't have to bother about subscribing early either.

This one is more of a clear win for the Promises-based approach in my book. As I said, you can solve the same problem with an Event-based approach as well, but designing an Event registry just to solve the issue of multiple callbacks would be overkill. Events by nature bring more to the table, like propagation, default behavior etc. If your app requires them for better reasons, by all means use them for multiple callbacks too; they provide a viable solution. However, if you are trying to solve just the issue of multiple callbacks, Promises do it pretty well.

To summarize my thoughts, I think Promises are a nice pattern but nothing too special. Similar functionality can be achieved with other approaches too. However, if you decide to use a Promises implementation, make sure you are strictly documenting your code, as otherwise it becomes an organizational problem to bring new developers on board with your codebase.

As a test exercise, I decided to mock JQuery's implementation of its Deferred object and Promises. If you are interested in how a Promises API can be implemented, my attempt is publicly available here: https://github.com/lifeinafolder/Promises

15. January 2012 by Rajat
Categories: Uncategorized | 3 comments

Custom Events In MVC JavaScript

Events are the bread 'n butter of UI development. They are frequently used to kick off JS in response to a user action on a web page. Any frontend programmer would easily recognize the common DOM-based events like click, change, submit etc. Events are great, as they allow loose coupling in your code and help separate out logic.

With the recent advent of MVC frameworks in JavaScript, we have started loosely coupling our JavaScript through events on non-DOM stuff as well. Not only do we need events to know when a user clicked on a button and stuff, we also want events when Models change within our MVC application so that we can update our Views.

In the browser, event objects are created automagically by the browser to inform us about changes to DOM elements. However, to watch for changes in different pieces of our own code (for instance, watching Models to update Views in a typical MVC setup), we have to implement our own eventing system and fire our own events. This is typically done with the help of the Observer pattern.

The observer pattern (a subset of the publish/subscribe pattern) is a software design pattern in which an object, called the subject, maintains a list of its dependents, called observers, and notifies them automatically of any state changes, usually by calling one of their methods.

Wikipedia

By the definition of the Observer pattern, in the MVC context the Models would be the subject (or observable) and the Views the observers. Now, let's go ahead and create our own custom event scheme using the Observer pattern.
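
A minimal sketch of the idea (all names are hypothetical):

    function Model() {
      this._listeners = []; // private list of observers
    }

    Model.prototype.subscribe = function (fn) {
      this._listeners.push(fn);
    };

    Model.prototype.set = function (data) {
      this.data = data;
      // notify every observer of the state change
      for (var i = 0; i < this._listeners.length; i++) {
        this._listeners[i](data);
      }
    };

    // the View subscribes to the Model
    var model = new Model();
    model.subscribe(function (data) {
      // update the View with the new data
    });
    model.set({ title: 'Buy milk' });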

With the above implementation, we created our own event mechanism. The Model keeps track of its observers in a private list (_listeners in our example), and the Views are pushed onto that list on subscription. When something changes, it's as simple as walking through that list and invoking each observer function. This is the classic way of doing custom events. It fits the above definition of the Observer pattern verbatim and works perfectly.

However, if you are feeling the itch, then here is another way of doing the same thing. 
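
A hedged sketch of this second approach (again, all names are hypothetical):

    function Model() {
      // a detached node used purely as an event target;
      // it is never injected into the document
      this._node = document.createElement('div');
    }

    Model.prototype.subscribe = function (fn) {
      this._node.addEventListener('modelchange', fn, false);
    };

    Model.prototype.set = function (data) {
      this.data = data;
      var evt = document.createEvent('Event');
      evt.initEvent('modelchange', true, true); // bubbles, cancelable
      evt.data = data;                          // attach our payload
      this._node.dispatchEvent(evt);
    };

    var model = new Model();
    model.subscribe(function (evt) {
      // the View receives a real DOM-style event object
    });
    model.set({ title: 'Buy milk' });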

With the second approach, you will notice a bunch of little nuances:

  1. Models don't keep track of observers in a private list (the _listeners array is not required).
  2. The events we fire are very similar to DOM-derived event objects. They can be bubbled, cancelled etc.
    Update: Well, in theory they can be bubbled, but that is a moot point because they live on a single DOM node that is not even in the DOM. Here is a screenshot from the example of the event object that the View gets when the Model changes:

    [Screenshot: the custom event object the View receives when the Model changes]
  3. We utilise a DOM node to fire off our custom events. This DOM node is never actually injected into the DOM. This is important, as injecting nodes into the DOM just for eventing would be an expensive operation.
  4. Listening to these custom events in Views is very similar to listening to DOM-derived events.

The approach in Gist-1 is a classic, time-tested way of doing custom events. There is absolutely nothing wrong with it.

Approach 2 is something new and I kind of find it neat. It works out nicely as well. The only thing to keep in mind while using Approach 2 is that it only works in a browser context, as we are leveraging functions like createEvent and dispatchEvent of the document object, which are available only within the browser.

06. December 2011 by Rajat
Categories: Uncategorized | Leave a comment

Why JQuery Matters?

If you have done any serious UI development, you know JQuery is not the answer. The quest for a proper UI framework for UI-centric web apps doesn't end at JQuery. In fact, it picks up where JQuery ends. Anyone who is serious about UI development and JavaScript as a language should honestly not flaunt JQuery as a skill set.

I have personally had a love-hate relationship with JQuery. I started off loving what JQuery did for the UI developer community, the browser-agnostic code and all that jazz. And then I started hating JQuery for what it did to the community.

Suddenly, there was a JQuery plugin for everything. Writing your own code became tough; somebody had already written a JQuery plugin for it. The majority of these plugins just outright failed, and most of them are not laid out well enough to survive even a minor tweak away from their targeted use case. Before you knew it, you had built your app around a plugin that doesn't meet your requirements anymore. Shit hits the fan and you are cursing yourself while fixing someone else's bad code.

Claims like the following don't help JQuery either:

Often with jQuery you can use a single line of code to achieve what would have taken 10-20 lines of regular JavaScript code.

In my opinion, that is largely a false promise for any serious UI development. Because of such loose claims, we as a community have arrived at puns like these:

There is no doubt that the JQuery initiative is nice at its core, but somewhere it started getting abused. Somehow, the lack of coherence and reliability in the plugin ecosystem of JQuery seems to be hurting the original motive of the library. I am sure that the awesome folks behind JQuery are aware of this. To some extent, the roots of the JQuery UI project lie in this problem.

So I have been mulling over my thoughts about JQuery’s role in propelling UI development for the web. What clearly started as an initiative to better the state of UI development for the web seems to be backfiring in a way. Everyone knows JQuery; nobody seems to know proper UI development and JavaScript.

But we cannot discard the positive effect of JQuery either. As a community of front-end developers, we have come a long way thanks to JQuery. What started as a fledgling community of a few people putting small hackish scripts together to stitch simple interfaces has now grown into one of the largest and most active communities of extremely talented and passionate folks making complex UIs possible. Just take a look at Github: JavaScript is by far the most popular language on it, and JQuery consistently stays among the top forked projects.

One of the more recent revelations about the importance of JQuery came when I stumbled across this article and this data.

What seems to have happened is something very, very special in my opinion. JQuery has eclipsed Flash. Yes, I repeat: more websites have JQuery running on them than Flash. From the data gathered by httpArchive, it's on 48% of the web, a tiny but impactful 1% more than Flash; and nothing makes me happier than that.

There is no question in my head that Flash is not the framework to go forward with for the openness of the web. Some people will argue that is not true. I will not go into that debate. Some people will argue that neither is JQuery. I'll probably give you that one.

JQuery might not be the UI framework of the future, but I think it's gonna pave the way to one. It will act as a stepping stone for us as a community to move ahead and flesh out the UI development stack. If you haven't noticed, that is sort of already happening. Nobody is making JQuery clones anymore, compared to when it all started. People have moved on to solve other issues of the UI dev-stack.

JQuery eclipsing Flash is a big conceptual win for the community and that is why, JQuery matters!

UPDATE(12/4/2011): JQuery shuts down the plugins website.


15. September 2011 by Rajat
Categories: Uncategorized | 1 comment

Cutting Loose: Dynamic Namespacing in JavaScript

Most JavaScript libraries come wrapped in an easy-to-refer-to single object. The object acts as a namespace, tightly wrapping everything related to the library, and provides a clean, systematic handle to access the library's functionality. For instance, here is your own awesome library that you just built:
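
Something like this, say (the contents are a hypothetical sketch):

    var myAwesomeLibrary = {
      version: '0.1',
      doSomething: function () {
        // every internal reference repeats the library name
        return myAwesomeLibrary.version;
      }
    };

    myAwesomeLibrary.doSomething(); // '0.1'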

This is all fine and dandy till you decide that you want to call your new shiny thing myFunkyLibrary instead of myAwesomeLibrary. Suddenly, you find yourself find-and-replace'ing every occurrence of myAwesomeLibrary.

Next comes a better idea. You understand that myFunkyLibrary is just not gonna sit well with the readers of HN and you will probably change it again and again.
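
So you reach for something like this sketch, where the library name appears only once:

    var myFunkyLibrary = (function () {
      var version = '0.1'; // truly private

      function doSomething() {
        return version;
      }

      // the public API
      return {
        doSomething: doSomething
      };
    })();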

This looks better. To change the library name, you just make the change in one place. You also get the added benefit of true private properties and functions. And for good measure, you impress the kids on the block with the term “module pattern”.

Cool! Off you go, releasing your new piece of code.

After a few days, you guessed it right: you are not very good at naming things. Some dudes are just not OK proliferating their code with myFunkyLibrary.this and myFunkyLibrary.that. Even more, they are just plain tired of writing myFunkyLibrary over and over again.

You are like, whatever man! Name it whatever you feel like, but I still think myFunkyLibrary is a pretty awesome name. So you go:
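
A sketch of the trick: write the library against this instead of a hard-coded name (libraryDefinition is a hypothetical name):

    function libraryDefinition() {
      var version = '0.1'; // still private, thanks to the closure

      // everything public hangs off whatever `this` turns out to be
      this.doSomething = function () {
        return version;
      };
    }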

With this, you basically allow any user of your library the freedom to dynamically namespace/use your library under any darn name he wants, like this:
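
    // the user of the library picks the name:
    var shorty = {};
    libraryDefinition.call(shorty); // `this` inside the library is now shorty

    shorty.doSomething(); // '0.1'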

This is what we term dynamic namespacing. We wrapped the entire library and used this as a stand-in for the execution context. The user of the library gets to choose what that context is by invoking the library on an object literal of his choice using call. Nice and simple!

References:

http://javascriptweblog.wordpress.com/2010/12/07/namespacing-in-javascript/

20. January 2011 by Rajat
Categories: Uncategorized | Leave a comment

The tilde ( ~ ) operator in JavaScript

From the JavaScript Reference on MDC,

~ (Bitwise NOT)

Performs the NOT operator on each bit. NOT a yields the inverted value (a.k.a. one’s complement) of a. The truth table for the NOT operation is:

a | NOT a
0 |   1
1 |   0

Example:

 9 = 00000000000000000000000000001001 (base 2)
     --------------------------------
~9 = 11111111111111111111111111110110 (base 2) = -10 (base 10)

Bitwise NOTing any number x yields -(x + 1). For example, ~5 yields -6.

Now let's look at the Logical NOT (!):

! (Logical NOT)

Returns false if its single operand can be converted to true; otherwise, returns true. 

Mixing the two NOT operators together can produce some interesting results:

!~(-2) = false

!~(-1) = true

!~(0) = false

!~(1) = false

!~(2) = false

For every integer operand except -1, the value produced by the ~ operator is non-zero and hence truthy, so applying the ! operator yields FALSE.

-1 is special because ~(-1) gives 0, which is falsy in JavaScript. Adding the ! operator gives us the only TRUE.

When to use this special case?

A lot of times in JavaScript string manipulation, you are required to search for a particular character in a string. For example:
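
    var str = 'JavaScript';

    // the usual way: compare indexOf's result against -1
    if (str.indexOf('S') !== -1) {
      // character found
    }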

We can use the NOT operators instead of the comparison against -1, like this:
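
    // ~ maps "not found" (-1) to 0 (falsy) and every valid index to a
    // non-zero (truthy) value:
    if (~str.indexOf('S')) {
      // character found
    }

    // and !~ gives a boolean "not found" check:
    if (!~str.indexOf('z')) {
      // character NOT found
    }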

06. December 2010 by Rajat
Categories: Uncategorized | 9 comments

Understanding Currying

I recently fully understood the concept of utilising Currying in JavaScript. The core concept can be grasped from these references:

So far, I had only got the concept but couldn't understand when and how to use it effectively, until now. Let's see:

Scenario

You are making a bunch of JSONP calls in a loop. You want the responses to fill an array store in order, i.e. you want the responses to end up like:

[JSONP Call 1 Response, JSONP Call 2 Response, JSONP Call 3 Response, …]

Since JSONP calls are asynchronous, each of those individual calls can come back with its response at a different point in time. Hence, a simple loop never works. You will end up with responses in the wrong locations in the array store. Something like:

[JSONP Call 3 Response, JSONP Call 2 Response, JSONP Call 5 Response, …]

If you have worked a little with JavaScript, you will most probably do something like this to counter the problem:
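
(In the sketches below, jsonpCall is a hypothetical stand-in for your JSONP helper and urls is the list of endpoints.)

    var urls = [/* ... JSONP endpoints ... */];
    var store = [];
    var i = 0;

    while (i < urls.length) {
      // an immediately-invoked function freezes the current value of i
      // inside a closure for the callback to use later
      (function (index) {
        jsonpCall(urls[index], function (response) {
          store[index] = response;
        });
      })(i);
      i++;
    }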

This is all good and fine. It works perfectly and doesn’t complain. However, if your while loop is itself inside more closured functions, things can quickly get out of control. Code maintainability decreases fast as there are way too many closures to track down.

Enter Currying!

From John Resig’s post:

Partially applying a function is a particularly interesting technique in which you can pre-fill in arguments to a function before it is ever executed.

In the 1st code gist, we were storing the position inside a closure to persist that state for the returned callback. However, the above definition helps us rethink the problem like this:

Let’s pre-fill the callback function index argument with the current position value so that when the JSONP call completes and the callback is called, it already has that index.
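
A hedged sketch with a hand-rolled left-curry helper (curry is not a built-in; this is one common implementation):

    Function.prototype.curry = function () {
      var fn = this;
      var preFilled = Array.prototype.slice.call(arguments);
      return function () {
        var rest = Array.prototype.slice.call(arguments);
        return fn.apply(this, preFilled.concat(rest));
      };
    };

    function handleResponse(index, response) {
      store[index] = response;
    }

    var store = [];
    var i = 0;

    while (i < urls.length) {
      // pre-fill `index` with the current value of i; no nested closure
      jsonpCall(urls[i], handleResponse.curry(i));
      i++;
    }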

Notice the simplified while loop now!

I found this to profoundly simplify understanding what's happening in the code. Personally, coming back days later and taking a peek at code snippet 1 wasn't easy, as the closures were not self-explanatory. The 2nd code gist with currying is so much easier to wrap your head around.

One last thing: the version of the curry function in the second snippet pre-fills arguments from the left. It is called left-curry. If you want to pre-fill arguments from the right, you will have to re-arrange the curry function a bit.

16. November 2010 by Rajat
Categories: Uncategorized | Leave a comment

Image Beacons

Using Image Beacons is one of those popular techniques that you probably know about without knowing its name. Thanks to Nicholas Zakas again; he has it pinned down and named in his latest book, High Performance JavaScript.

The technique helps in cases when you are mostly interested in sending data to the server. You can receive a response through this technique, but the nature of the method makes it ideal for cases when you are more concerned about posting data back to the server (maybe for analytics purposes, saving user state etc.).

You create a new Image object in your JavaScript and point its src property to a script on your server. The data to be posted back to the server is added as the query string in the URL for the src property.
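
A minimal sketch (the endpoint and query parameters are hypothetical):

    var beacon = new Image();

    // optional: listen for the server's response
    beacon.onload = function () {
      // the server received our data and responded with image content
    };

    // setting src fires the request; the data rides in the query string
    beacon.src = 'http://example.com/log?event=signup&plan=free';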

That's it! Notice how we don't inject the image into the DOM. So, no DOM manipulation required.

Also, in the particular example above, we are listening for a server response in the onload handler. If we don't want to send any data over the wire for the client to download, we can just set the appropriate response code, “204 No Content”. In that case, onload won't be fired.

Now a quick rundown of the advantages/disadvantages of the method:

Pros:

  1. Cross-domain.
  2. Light-weight (if no data is sent by the server).
  3. No DOM Manipulation.

Cons:

  1. Since the data is sent as query string, you are limited by the amount of data the string can carry.

Moral of the story: if you want a confirmation of the call, you might as well use XHR, but otherwise this is a cheap and easy way to send data flying.

20. October 2010 by Rajat
Categories: Uncategorized | Leave a comment

The new document.querySelectorAll() method and the gotchas with it.

The latest browsers have powerful new methods for DOM lookups: document.querySelector and document.querySelectorAll. While the first method returns only the first element matching your selector, the second, and the one we are going to talk about, returns all DOM elements matching your selector.

From MDC, document.querySelectorAll

Syntax

elementList = document.querySelectorAll(selectors);

where

  • elementList is a non-live NodeList of element objects.
  • selectors is a string containing one or more CSS selectors separated by commas.

The returned NodeList will contain all the elements in the document that are matched by any of the specified selectors.

The method brings home a style of lookup most JQuery developers would be used to, like:

document.querySelectorAll('#container .klass');

to select all elements with the class name 'klass' within the element with id 'container'. The method is powerful as you can nest selectors, but if used like this:

document.querySelectorAll('.klass');

it acts just like document.getElementsByClassName.

Clearly, it seems like the mecca of DOM lookup methods. You can easily get rid of your getElementsByClassName or getElementsByTagName or getElementById. The method seems promising to bring consistency to your style of coding. Not so fast!

As per a recent article by Nicholas Zakas, querySelectorAll is slower for lookups that could instead be done with getElementsByClassName/getElementsByTagName.

That is because querySelectorAll returns a static NodeList whereas getElementsByClassName/getElementsByTagName return a live NodeList. A live NodeList is essentially a cached query against the document, while a static NodeList has to be populated up front with a snapshot of all matching elements, causing the difference in lookup speeds. The article by Zakas has an excellent detailed explanation of this.
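
The behavioural difference is easy to see (a small sketch):

    var live = document.getElementsByClassName('klass');
    var snapshot = document.querySelectorAll('.klass');

    var div = document.createElement('div');
    div.className = 'klass';
    document.body.appendChild(div);

    // `live` now includes the new div; `snapshot` still does not.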

With this post, I just wanted to bring home the point that you shouldn't rush to update your DOM lookups. Also, as always, IE7 doesn't have the native method.


29. September 2010 by Rajat
Categories: Uncategorized | Leave a comment

Don't You getElementsByClassName me.

I recently hit this issue about how browsers and libraries differ in the implementation of getElementsByClassName.

getElementsByClassName is a browser-native function used to get elements from the DOM which share the same class. Here is the syntax taken from the Mozilla page for the function:

Syntax

elements = element.getElementsByClassName(names);

where

  • elements is a live NodeList of found elements in the order they appear in the tree.
  • names is a string representing the list of class names to match; class names are separated by whitespace.
  • getElementsByClassName can be called on any element, not only on the document. The element on which it is called will be used as the root of the search.

The first point should be carefully noted. I'll explain what exactly is meant by a live NodeList. Imagine you looked up elements by the class name “list” using the getElementsByClassName function and picked up the following in your variable elements:
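
Say the page currently has 3 elements with the class "list":

    var elements = document.getElementsByClassName('list');

    elements.length; // 3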

Later, you modified the DOM, removing one of those elements:
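
    elements[0].parentNode.removeChild(elements[0]);

    // the reference already reflects the change:
    elements.length; // 2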

By live, it means that your reference elements now points at this new list, i.e. if the DOM is manipulated, the changes are reflected in your JS reference automatically. That is how almost all the other functions which return a NodeList, like getElementsByTagName and getElementsByName, work.

However, here comes the interesting piece of information. Since getElementsByClassName is not implemented as a native browser function in IE7 and below, we do what we do best: create a version of the function that uses the browser-native function if available and falls back to our own implementation otherwise. For getElementsByClassName, I use the very popular version by Robert Nyman, since I don't have the luxury of JQuery so far.

Now this is where it caught me. It seemed natural to me that these implementations would behave the same as the SPEC and hence spit back a live NodeList. However, both Robert's version and even $('.classname') in JQuery don't return a live NodeList. This assumption cost me a good 30-45 minutes the other day when I was trying to figure out the problem in my code.

If you think about it after you know it, it makes sense, as it would be radically tough for any version of the function besides the browser's native one to return a live NodeList (think of all the event listeners you would have to attach and tear down to listen for changes in the DOM). So all these implementations return a static NodeList of elements which doesn't auto-update once the DOM changes.

Next, you might think that if they do feature detection and use the browser's native version when available, they should at least return a live NodeList in that case. The answer is: yes, they can! But that would mean different responses in different browsers, as it would be terribly hard to produce a live NodeList for IE7 (yuck).

Working day in and day out in JS might cause you to forget this subtle difference. Also, it doesn't bite you unless you are changing the DOM structure after your lookup, and most of the time (in my code), I wasn't changing the DOM structure in this fashion. So the wrapper getElementsByClassName was perfect for me until this day.

So, next time you use getElementsByClassName, don't assume it will behave as per the SPEC. Think of it as a different implementation with a different interface than the SPEC. And always remember that for your future good. :D

18. July 2010 by Rajat
Categories: Uncategorized | 1 comment
