Mark Dalgleish

UI Engineer - Melbourne, Australia

Web Components: Why You’re Already an Expert

Web Components are going to fundamentally change the nature of HTML.

At first glance, they may seem like a complicated set of new technologies, but Web Components are built around a simple premise. Developers should be free to act like browser vendors, extending the vocabulary of HTML itself.

If you’re intimidated by these new technologies or haven’t experimented with them yet, this post has a very simple message for you. If you’re already familiar with HTML elements and DOM APIs, you are already an expert at Web Components.

“Components” today

To understand why Web Components are so important, we need look no further than how we’ve hacked around the lack of Web Components.

As an example, let’s run through the process of consuming a typical third-party widget.

First, we include the widget’s CSS and JavaScript:

<link rel="stylesheet" type="text/css" href="my-widget.css" />
<script src="my-widget.js"></script>

Next, we might need to add placeholder elements to the page where our widgets will be inserted.

<div data-my-widget></div>

Finally, when the DOM is ready, we reach back into the document, find the placeholder elements and instantiate our widgets:

// jQuery used here for brevity...

$(function() {
  $('[data-my-widget]').myWidget();
});

This was a bit of work, but we’re still not finished.

We’ve introduced a custom widget to the page, but it is not aware of the browser’s element lifecycle. This becomes clear if we update the DOM:

el.innerHTML = '<div data-my-widget></div>';

Since this isn’t a typical element, we now must manually instantiate any new widgets as we update the document:

$(el).find('[data-my-widget]').myWidget();

The most common way to avoid this constant two-step process is to completely abstract away DOM interaction. Unfortunately, that’s a pretty heavy-handed solution that usually results in widgets being tied to particular libraries or frameworks.

Component soup

Once our widgets have been instantiated, our placeholder elements have been filled with third-party markup:

<div data-my-widget>
  <div class="my-widget-foobar">
      <input type="text" class="my-widget-text" />
      <button class="my-widget-button">Go</button>
  </div>
</div>

This markup is now sitting in the same context as our application markup.

Its internals are visible when we traverse the DOM, and the styles for this widget exist in the same global context as our styles, leading to a high risk of style clashes. All of its classes must be carefully namespaced with my-widget- (or something similar) to avoid naming collisions.

Our code is now all mixed up with the third-party code, with no clean separation between the two. Basically, there is no encapsulation.

Web Components to the rescue

With Web Components, it becomes clear what we’ve been missing.

<!-- Import it: -->
<link rel="import" href="my-widget.html" />

<!-- Use it: -->
<my-widget />

In this case, we’ve imported a new custom element with a single import and used it immediately.

More importantly, since <my-widget /> is an actual element, it hooks into the browser’s element lifecycle, allowing us to add a new instance to the page like we would with any native widget:

el.innerHTML = '<my-widget />';
// The widget is now instantiated

When we inspect this element, we can see that it is a single element. However, if we enable Shadow DOM in our developer tools, we see something very interesting.

Hidden inside this element are its private implementation details, in the form of a document fragment:

#document-fragment
  <div>
    <input type="text" />
    <button>Go</button>
  </div>

While these elements are visible to the naked eye, they are hidden from us when we traverse the DOM or write CSS selectors. To the outside world, even when instantiated, our custom widget is still just a single element.

We finally have a simple, encapsulated widget that behaves exactly like a standard HTML element.

In the interest of time

When we talk about Web Components, we’re not talking about a single technology.

We’re talking about a suite of new tools that are each useful in their own right: Custom Elements, Shadow DOM, HTML Templates, HTML Imports, and Decorators.

The primary goal of Web Components is to give us the encapsulation we’ve been missing. Luckily, this goal can be achieved purely with Custom Elements and Shadow DOM. So, in the interest of time, we’ll begin by focusing on these two technologies.

Rather than immediately jumping into a list of new browser features, I find it helpful for us to first reacquaint ourselves with what we already know about the native elements we’ve been consuming for years. After all, it doesn’t hurt for us to understand what we’re trying to build.

What we already know about elements

We know that elements can be instantiated through markup or JavaScript:

<input type="text" />
document.createElement('input');
el.innerHTML = '<input type="text" />';

We know that elements are instances:

document.createElement('input') instanceof HTMLInputElement; // true
document.createElement('div') instanceof HTMLDivElement; // true

We know that elements perform their own initialisation:

// If we create an input with a value attribute defined...
el.innerHTML = '<input type="text" value="foobar" />';

// ...the value *property* is already in sync
el.querySelector('input').value;

We know that elements can respond to attribute changes:

// If we change the value *attribute*...
input.setAttribute('value', 'Foobar');

// ...the value *property* updates accordingly
input.value === 'Foobar'; // true

We know that elements can have hidden internal DOM structures:

<!-- A single 'input' provides a complex calendar  -->
<input type="date" />
// Despite its complexity, to us it's still just a single element
dateInput.children.length === 0; // true

We know that elements have access to child elements:

<!-- We can provide as many 'option' tags as we like -->
<select>
  <option>1</option>
  <option>2</option>
  <option>3</option>
</select>

We know that elements can provide style hooks to their internals:

dialog::backdrop {
  background: rgba(0, 0, 0, 0.5);
}

Finally, we know that elements can have their own private styles. Unlike the custom widgets of today, we never need to manually include CSS for the browser’s native widgets.

By understanding all of this, we’re well on our way to understanding Web Components. With Custom Elements and Shadow DOM, we can now recreate all of this standard behaviour in our widgets.

Custom elements

Registering a new element can be as simple as this:

var MyElement = document.register('my-element');
// 'document.register' returns a constructor

You might have noticed that our element name contains a hyphen. This is an important requirement for Custom Elements to ensure our tag names don’t clash with current or future elements.
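The rule is simple enough to express in code. Here’s a hypothetical helper (not part of any spec or browser API) that captures the basic idea; the real naming grammar is more involved than this:

```javascript
// Hypothetical helper (not part of any spec or browser API) that
// captures the naming rule: a custom element name must contain a
// hyphen, guaranteeing it can never clash with a current or future
// built-in element. The real grammar is more involved than this.
function isValidCustomElementName(name) {
  return /^[a-z]/.test(name) && name.indexOf('-') !== -1;
}
```

So ‘my-element’ passes, while ‘myelement’ (no hyphen) and built-in names like ‘div’ do not.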

This element now works like any other native element:

<my-element />

Which, of course, means that our element works with all the standard DOM APIs:

document.createElement('my-element');

el.innerHTML = '<my-element />';

document.createElement('my-element') instanceof MyElement; // true

Breathing life into our custom element

Currently, this is a pretty useless element. Let’s give it some content:

// We'll now provide the second argument to 'document.register'
document.register('my-element', {
  prototype: Object.create(HTMLElement.prototype, {

    createdCallback: {
      value: function() {
        this.innerHTML = '<h1>ELEMENT CREATED!</h1>';
      }
    }

  })
});

In this example, we’ve set up the prototype for our custom element, using Object.create to make a new object that inherits from the HTMLElement prototype.

We’ve defined a createdCallback function which will run every time a new instance of the element is created.

We could also optionally define attributeChangedCallback, enteredViewCallback and leftViewCallback.

Inside our callback we can modify our new element however we like. In this case, we’ve set its innerHTML.
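The Object.create property-descriptor pattern used here is plain JavaScript and worth a closer look on its own. Since HTMLElement.prototype only exists in the browser, this sketch substitutes a generic base object:

```javascript
// The same Object.create(proto, descriptors) pattern, with a plain
// object standing in for HTMLElement.prototype (browser-only).
// Each property is defined via a descriptor with a 'value' key,
// exactly as 'createdCallback' was defined above.
var base = {
  greet: function() { return 'hello from base'; }
};

var proto = Object.create(base, {
  createdCallback: {
    value: function() { return 'element created'; }
  }
});

proto.greet();           // inherited from 'base'
proto.createdCallback(); // defined via the descriptor
```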

So far we’re able to dynamically modify the contents of our custom element, but this isn’t too different from the custom widgets of today. In order to complete the picture, we need a way to provide encapsulation to our new element by hiding its internals.

Encapsulation with Shadow DOM

We’re going to modify our createdCallback a bit.

This time, instead of setting the innerHTML directly on our custom element, we’re going to do something quite different:

createdCallback: {
  value: function() {

    var shadow = this.createShadowRoot();
    shadow.innerHTML = '<h1>SHADOW DOM!</h1>';

  }
}

In this example, you would see the words ‘SHADOW DOM!’ when looking at the page, but inspecting the DOM would reveal a single, empty <my-element /> tag. Instead of modifying the containing page, we’ve created a new shadow root inside our custom element using this.createShadowRoot().

Anything inside of this shadow root, while visible to the naked eye, is hidden from DOM APIs and CSS selectors in the containing page, maintaining the illusion that this widget is only a single element.

If we were writing a custom calendar widget, the shadow root is where our complex calendar markup would go, allowing us to expose a single tag as a simple interface to its hidden complexity.

Accessing the “light DOM”

So far, our custom element is just an empty tag, but what happens if elements are nested inside our new component? We may want a widget with flexibility similar to that of the <select> tag, which can contain many <option> tags.

As a working example, let’s assume the following markup.

<my-element>
  This is the light DOM.
  <i>hello</i>
  <i>world</i>
</my-element>

As soon as a new shadow root is created against this custom element, its child nodes are no longer visible. We refer to these hidden child nodes as the “light DOM”. If we inspect the page or traverse the DOM we can see these hidden nodes, but the end user would have no clue these elements exist at all.

Without shadow DOM, this example would simply appear as ‘This is the light DOM. hello world’.

When we set up the shadow DOM inside the createdCallback function, we can use the new content tag to distribute elements from the light DOM into the shadow DOM:

createdCallback: {
  value: function() {

    var shadow = this.createShadowRoot();
    // The child nodes, including 'i' tags, have now disappeared

    shadow.innerHTML =
      'The "i" tags from the light DOM are: ' +
      '<content select="i" />';
    // Now, only the 'i' tags are visible inside the shadow DOM
  }
}

With shadow DOM and the <content> tag, this now appears as ‘The “i” tags from the light DOM are: helloworld’. Note that the <i> tags have rendered next to each other with no whitespace.

Encapsulating styles

What’s important to understand about Shadow DOM is that we’ve created a clean separation between our widget’s markup and the outside world, known as the shadow boundary.

One powerful feature of Shadow DOM is that styles declared inside do not leak outside of the shadow boundary.

createdCallback: {
  value: function() {

    var shadow = this.createShadowRoot();
    shadow.innerHTML =
      "<style>span { color: green }</style>" +
      "<span>I'm green</span>";

  }
}

Even though a very generic span style was defined in the shadow DOM, it has no effect on <span> tags in the containing page:

<my-element />
<span>I'm not green</span>

If we’re distributing elements from the light DOM into the shadow DOM, like in our <i> example earlier, it’s important to understand that these nodes don’t technically belong to our widget.

Distributed nodes still belong to the containing page, meaning that we can’t style these elements by simply writing a standard selector.

Instead, we must style these distributed elements with the ::content pseudo-element:

::content i {
  color: blue;
}

Which, in the context of a component, looks something like this:

createdCallback: {
  value: function() {

    var shadow = this.createShadowRoot();
    shadow.innerHTML =
      '<style>::content i { color: blue; }</style>' +
      'The "i" tags from the light DOM are: ' +
      '<content select="i" />';
  }
}

Exposing style hooks

When we hide the internal markup of our custom element, it is still sometimes desirable to allow certain aspects of our element to be re-styled from outside.

For example, if we were writing a custom calendar widget, we might want to allow end users to style the buttons, without giving them access to the entirety of our widget’s markup.

This is where the part attribute and its corresponding pseudo-element come in:

createdCallback: {
  value: function() {

    var shadow = this.createShadowRoot();
    shadow.innerHTML = 'Hello <em part="world">World</em>';

  }
}

The ::part() pseudo-element lets us style any element with a part attribute:

my-element::part(world) {
  color: green;
}

This part contract is essential in maintaining encapsulation. In the previous example, we styled the word “World”, but users of our widget would have no idea that it’s actually an em tag under the hood.

One important benefit of this system is that we’re free to dramatically change the markup inside our widget between versions, so long as the “part” attributes are still in place.

Just the beginning

Web Components finally give us a way to achieve simple, consistent, reusable, encapsulated and composable widgets, but we’re only just getting started. It’s a great time to start experimenting.

Before you can begin, you need to make sure your browser has the relevant features enabled. If you use Chrome, head to chrome://flags and enable “experimental Web Platform features”.

To target browsers that don’t have these features enabled, you can use Google’s Polymer, or Mozilla’s X-Tag.

Time to experiment

All of the functionality presented in this article is simply an exercise in emulating standard browser behaviour. We’ve been working with the browser’s native widgets for a long time, so taking the step towards writing our own isn’t as difficult as it might seem.

If you haven’t created a component before, I urge you to open up the console and experiment. Try making a custom element, then try creating a shadow root (against any element, not just Custom Elements).

This experimentation will naturally raise questions about topics not fully discussed in this article. Do we have to use strings to define the markup in our Shadow DOM? No, this is where HTML Templates come in. Can we bundle an HTML template with our component’s JavaScript? Yes, with HTML Imports.

Even if it’s too early to use this stuff in production, it’s never too early to be prepared for the future of the web.

This article is based on a 15 minute presentation from Web Directions South 2013 in Sydney, Australia (Slides).

Please note: The Web Component APIs have been in constant flux, so if anything in this article is out of date, please let me and others know in the comments.

Using Promises in AngularJS Views

One of the lesser known yet more surprisingly powerful features of AngularJS is the way in which it allows promises to be used directly inside views.

To better understand the benefits of this feature, we’ll first migrate a typical callback-style service to a promise-based interface.

Working with callbacks

For now we’ll sidestep a discussion of the advantages of promises over callbacks, and focus solely on their mechanics.

As a working example, let’s take a look at a service with a single ‘getMessages’ function.

var myModule = angular.module('myModule', []);
// From this point on, we'll attach everything to 'myModule'

myModule.factory('HelloWorld', function($timeout) {

    var getMessages = function(callback) {
      $timeout(function() {
        callback(['Hello', 'world!']);
      }, 2000);
    };

    return {
      getMessages: getMessages
    };

  });

This admittedly contrived service’s ‘getMessages’ function takes a callback, then waits two seconds (using Angular’s $timeout service) before passing an array of messages to the callback function.

If we use this example service inside a controller, it looks like this:

myModule.controller('HelloCtrl', function($scope, HelloWorld) {

    HelloWorld.getMessages(function(messages) {
      $scope.messages = messages;
    });

});

Upgrading to promises

If we instead wanted our ‘HelloWorld’ service to expose a promise-based API, it would look something like this.

myModule.factory('HelloWorld', function($q, $timeout) {

  var getMessages = function() {
    var deferred = $q.defer();

    $timeout(function() {
      deferred.resolve(['Hello', 'world!']);
    }, 2000);

    return deferred.promise;
  };

  return {
    getMessages: getMessages
  };

});

You’ll notice that we’re now relying on Angular’s $q service (based on Kris Kowal’s Q) to create a ‘deferred’. We return the deferred’s ‘promise’ property as a public hook to its state, which is safely tucked away inside a closure.
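To see the mechanics of the deferred pattern in miniature, here is a toy deferred written from scratch. It is emphatically not Angular’s $q, which resolves asynchronously and is integrated with the digest cycle; this sketch only shows how the deferred stays private while the promise is handed out:

```javascript
// A toy deferred, written from scratch to illustrate the pattern.
// NOT Angular's $q: real promises resolve asynchronously, and $q is
// tied into Angular's digest cycle. The point is that the service
// keeps the deferred private and hands out only the promise.
function defer() {
  var pending = [];
  var resolved = false;
  var value;

  return {
    resolve: function(val) {
      resolved = true;
      value = val;
      pending.forEach(function(cb) { cb(value); });
      pending = [];
    },
    promise: {
      then: function(onSuccess) {
        if (resolved) {
          onSuccess(value);        // already resolved: call immediately
        } else {
          pending.push(onSuccess); // not yet: remember the callback
        }
      }
    }
  };
}

// Mirroring 'getMessages', but resolving immediately for the demo:
function getMessages() {
  var deferred = defer();
  deferred.resolve(['Hello', 'world!']);
  return deferred.promise;
}
```

Note that consumers only ever see the ‘promise’ object; the ‘resolve’ side stays inside the service’s closure.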

Now that we have a promise API, we need to update the service interaction inside our controller.

myModule.controller('HelloCtrl', function($scope, HelloWorld) {

    HelloWorld.getMessages().then(function(messages) {
      $scope.messages = messages;
    });

});

Our controller is essentially the same, except ‘getMessages’ no longer accepts a callback. Instead, it takes no arguments, and returns a promise object.

As is standard for promises, it has a ‘then’ function that takes two arguments: a success callback and an error callback. For our purposes, we’ll ignore the powerful error handling capabilities that promises bring to asynchronous code.

Wiring up the view

In both the callback and promise versions of our controller, we end up with a ‘messages’ property on the scope, which means, of course, that our view remains unchanged.

A very simple view that only displays the messages would look like this:

<body ng-app="myModule" ng-controller="HelloCtrl">
  <h1>Messages</h1>
  <ul>
    <li ng-repeat="message in messages">{{message}}</li>
  </ul>
</body>

In this case, we’re simply iterating over the messages that were returned from the ‘HelloWorld’ service.

Using promises directly in the view

AngularJS allows us to streamline our controller logic by placing a promise directly on the scope, rather than manually handling the resolved value in a success callback.

Our original controller logic for handling the promise was relatively verbose, considering how simple the operation is:

// BEFORE:

myModule.controller('HelloCtrl', function($scope, HelloWorld) {

  HelloWorld.getMessages().then(function(messages) {
    $scope.messages = messages;
  });

});

To simplify this, we can place the promise returned by ‘getMessages’ on the scope:

// AFTER:

myModule.controller('HelloCtrl', function($scope, HelloWorld) {

  $scope.messages = HelloWorld.getMessages();

});

When Angular encounters a promise inside the view, it automatically sets up a success callback and replaces the promise with the resulting value once it has been resolved.
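Conceptually, the unwrapping works something like the sketch below. This is not Angular’s actual implementation; it’s just the core idea, demonstrated with a synchronous thenable:

```javascript
// Simplified sketch of automatic promise unwrapping (not Angular's
// real code). If a scope property looks like a promise, i.e. it has
// a 'then' function, attach a success callback that overwrites the
// property with the resolved value.
function unwrapPromise(scope, key) {
  var value = scope[key];
  if (value && typeof value.then === 'function') {
    value.then(function(resolved) {
      scope[key] = resolved;
    });
  }
}

// A synchronous thenable stands in for a real (asynchronous) promise:
var scope = {};
scope.messages = { then: function(cb) { cb(['Hello', 'world!']); } };
unwrapPromise(scope, 'messages');
// scope.messages has now been replaced with the resolved array
```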

Seeing it for yourself

I’ve set up a plunk of this simple example so you can get a better feel for how this pattern works.

Using promises directly on the view is a feature of AngularJS that many people are unaware of, but it opens up a lot of opportunities for making our controllers as lean as possible.

In fact, once you’ve become familiar with the concept, you might be surprised how often you see that this pattern can be applied.

Testing jQuery Plugins Cross-Version With Grunt

The jQuery team have made the tough but inevitable decision to stop supporting IE8 and below as of jQuery v2.0, while maintaining v1.9 as the backwards-compatible version for the foreseeable future.

In the world of modern, evergreen and mobile browsers, this was a necessary move to ensure jQuery stays relevant. Of course, this split leaves plugin authors with a bit more responsibility.

Where previously we could simply require the most recent version of jQuery, we are now likely to want to support both 1.9.x and 2.x, allowing our plugins to work everywhere from IE6 to the most bleeding edge browsers.

To facilitate this, we’ll run through the creation of a plugin using the popular JavaScript build tool, Grunt. We’ll then configure our unit tests to run automatically across multiple versions of jQuery.

A simple jQuery plugin

Note: If you have an existing plugin that doesn’t use Grunt, I’d suggest running through these steps in a clean directory and porting the resultant code into your project (with some manual tweaks, of course).

Assuming you already have Git and Node.js, we first need Grunt-init and the Grunt command line interface installed globally. Run the following command to ensure you have the latest version:

$ npm install -g grunt-init grunt-cli

Note: If you already have an older version of Grunt installed, you’ll need to first remove it with npm uninstall -g grunt.

We also need to install the ‘grunt-init-jquery’ template into our ‘~/.grunt-init’ directory by cloning the repository:

git clone git@github.com:gruntjs/grunt-init-jquery.git ~/.grunt-init/jquery

We can now scaffold a new jQuery project:

$ mkdir jquery.plugin
$ cd jquery.plugin
$ grunt-init jquery

Once we’ve responded to all the prompts, we’re left with a basic jQuery plugin with QUnit tests.

Before we continue, we need to install our Node dependencies by running the following command from within our new plugin directory:

$ npm install

We can run our placeholder tests like so:

$ grunt qunit

  Running "qunit:files" (qunit) task
  Testing test/plugin.html....OK
  >> 5 assertions passed (51ms)

  Done, without errors.

For the purposes of this tutorial, we’re not terribly interested in the contents of the plugin. Instead, we’ll focus solely on the build and test infrastructure.

Before we make changes to our placeholder project, it’s worth having a closer look at what has been generated.

Inspecting the build

All of the configuration for our Grunt build process sits inside our Gruntfile (Gruntfile.js) in our project directory.

We have ‘qunit’ configuration, which looks for all QUnit files in the ‘test’ directory:

// snip...

qunit: {
  files: ['test/**/*.html']
},

// snip...

At the end of our Grunt configuration is the definition of our default task:

// Default task.
grunt.registerTask('default', ['jshint', 'qunit', 'clean', 'concat', 'uglify']);

The default task is run when the ‘grunt’ command is executed without any arguments:

$ grunt

Inspecting the test

The QUnit test for our plugin resides in ‘test/plugin.html’. Its default markup looks like this:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Plugin Test Suite</title>
  <!-- Load local jQuery. This can be overridden with a ?jquery=___ param. -->
  <script src="../libs/jquery-loader.js"></script>
  <!-- Load local QUnit. -->
  <link rel="stylesheet" href="../libs/qunit/qunit.css" media="screen">
  <script src="../libs/qunit/qunit.js"></script>
  <!-- Load local lib and tests. -->
  <script src="../src/plugin.js"></script>
  <script src="plugin_test.js"></script>
  <!-- Removing access to jQuery and $. But it'll still be available as _$, if
       you REALLY want to mess around with jQuery in the console. REMEMBER WE
       ARE TESTING A PLUGIN HERE, THIS HELPS ENSURE BEST PRACTICES. REALLY. -->
  <script>window._$ = jQuery.noConflict(true);</script>
</head>
<body>
  <div id="qunit"></div>
  <div id="qunit-fixture">
    <span>lame test markup</span>
    <span>normal test markup</span>
    <span>awesome test markup</span>
  </div>
</body>
</html>

This page is responsible for including jQuery, QUnit (both JavaScript and CSS), our plugin, and any helpers required. It also provides the markup needed for QUnit to generate an HTML report.

You’ll notice, the first script file included is ’../libs/jquery-loader.js’. If we look at the contents of that file, we find this:

(function() {
  // Default to the local version.
  var path = '../libs/jquery/jquery.js';
  // Get any jquery=___ param from the query string.
  var jqversion = location.search.match(/[?&]jquery=(.*?)(?=&|$)/);
  // If a version was specified, use that version from code.jquery.com.
  if (jqversion) {
    path = 'http://code.jquery.com/jquery-' + jqversion[1] + '.js';
  }
  // This is the only time I'll ever use document.write, I promise!
  document.write('<script src="' + path + '"></script>');
}());

By including this script, we now have the ability to add ‘?jquery=X.X.X’ to the query string when viewing this page in the browser.

Doing this will cause the specified version of jQuery to be loaded from code.jquery.com, rather than the default version provided inside our project.
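The interesting part of the loader is its query-string handling, which is easy to exercise in isolation. Here’s the same logic pulled out into a standalone function (a hypothetical refactor, purely for illustration):

```javascript
// The path-selection logic from jquery-loader.js as a standalone
// function (a hypothetical refactor for illustration). Behaviour
// matches the original: no 'jquery' param means the local copy,
// otherwise the hosted version from code.jquery.com.
function jqueryPathFor(search) {
  var path = '../libs/jquery/jquery.js';
  var jqversion = search.match(/[?&]jquery=(.*?)(?=&|$)/);
  if (jqversion) {
    path = 'http://code.jquery.com/jquery-' + jqversion[1] + '.js';
  }
  return path;
}

jqueryPathFor('');              // '../libs/jquery/jquery.js'
jqueryPathFor('?jquery=1.9.0'); // 'http://code.jquery.com/jquery-1.9.0.js'
```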

Preparing the build

You might think that we could simply modify the QUnit file matcher in our Gruntfile to add a query string, but this won’t work. Files must exist on the file system, and query strings aren’t part of that vocabulary.

To automatically run our tests with different query strings, we first need to host our test on a local server.

Luckily, Grunt has an officially-supported ‘connect’ task which does the work for us by running a server using Connect.

Next, we install the ‘grunt-contrib-connect’ Grunt plugin, saving it as a development dependency in our ‘package.json’ file:

$ npm install --save-dev grunt-contrib-connect

Before we can use this Grunt plugin, we need to register it with Grunt by adding the following ‘loadNpmTasks’ call to our Gruntfile:

grunt.loadNpmTasks('grunt-contrib-connect');

We can configure our server by adding the following task configuration to our Gruntfile:

connect: {
  server: {
    options: {
      port: 8085 // This is a random port, feel free to change it.
    }
  }
},

If we modify our default task to first include our newly configured ‘connect’ task, this server will start every time the default task is executed, and stop when the build has completed:

// Default task.
grunt.registerTask('default', ['connect', 'jshint', 'qunit', 'clean', 'concat', 'uglify']);

Since we want to be able to test our plugin without having to concatenate and minify it, I recommend adding the following ‘test’ task:

grunt.registerTask('test', ['connect', 'jshint', 'qunit']);

We can now lint and test our code from the command line like so:

$ grunt test

Configuring the test URLs

So far we have a local Connect server running every time we trigger a build, and we have a ‘test’ task which will run the server before linting our code and running our QUnit tests.

However, you’ll find that we’re still pointing QUnit at the file system. Instead, we want it to point to our new server.

To achieve this, we’ll pass QUnit an array of URLs rather than files:

qunit: {
  all: {
    options: {
      urls: [
        'http://localhost:<%= connect.server.options.port %>/test/plugin.html'
      ]
    }
  }
},

Now when we run our tests, we should see essentially the same result as before:

$ grunt test

  Running "connect:server" (connect) task
  Starting connect web server on localhost:8085.

  Running "jshint:gruntfile" (jshint) task
  >> 1 file lint free.

  Running "jshint:src" (jshint) task
  >> 1 file lint free.

  Running "jshint:test" (jshint) task
  >> 1 file lint free.

  Running "qunit:all" (qunit) task
  Testing http://localhost:8085/test/plugin.html....OK
  >> 5 assertions passed (41ms)

  Done, without errors.

You’ll notice that this time, QUnit is accessing a URL instead of a file. This means that we’re now free to add query strings to our URLs, allowing us to automate testing across multiple versions of jQuery with ease:

qunit: {
  all: {
    options: {
      urls: [
        'http://localhost:<%= connect.server.options.port %>/test/plugin.html?jquery=1.9.0',
        'http://localhost:<%= connect.server.options.port %>/test/plugin.html?jquery=2.0.0b1'
      ]
    }
  }
},

Since there will be a lot of repetition in the URLs, let’s clean this up using the Array prototype’s ‘map’ method:

qunit: {
  all: {
    options: {
      urls: ['1.9.0', '2.0.0b1'].map(function(version) {
        return 'http://localhost:<%= connect.server.options.port %>/test/plugin.html?jquery=' + version;
      })
    }
  }
},
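Outside of Grunt’s template expansion, this mapping is plain JavaScript. With the port hardcoded for illustration, it produces the same pair of URLs:

```javascript
// The <%= ... %> templates are expanded later by Grunt, but the
// 'map' call itself is ordinary JavaScript. With the port inlined
// (8085 here, matching the earlier 'connect' configuration):
var port = 8085;
var urls = ['1.9.0', '2.0.0b1'].map(function(version) {
  return 'http://localhost:' + port + '/test/plugin.html?jquery=' + version;
});
// urls[0]: 'http://localhost:8085/test/plugin.html?jquery=1.9.0'
// urls[1]: 'http://localhost:8085/test/plugin.html?jquery=2.0.0b1'
```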

If we run our tests, you’ll see multiple URLs have been loaded, and twice as many assertions have passed:

$ grunt test

  Running "connect:server" (connect) task
  Starting connect web server on localhost:8085.

  Running "jshint:gruntfile" (jshint) task
  >> 1 file lint free.

  Running "jshint:src" (jshint) task
  >> 1 file lint free.

  Running "jshint:test" (jshint) task
  >> 1 file lint free.

  Running "qunit:all" (qunit) task
  Testing http://localhost:8085/test/plugin.html?jquery=1.9.0....OK
  Testing http://localhost:8085/test/plugin.html?jquery=2.0.0b1....OK
  >> 10 assertions passed (98ms)

  Done, without errors.

Making it bulletproof

By default, this setup loads each version directly from the jQuery site. If you’re anything like me, you sometimes develop with little to no internet connectivity, and this limitation would prevent you from running the full suite.

It’s a good idea to add each major supported version of jQuery to your ‘libs/jquery’ directory (with a ‘jquery-x.x.x’ naming convention), and modify ‘libs/jquery-loader.js’ to load these local copies instead:

(function() {
  // Default to the local version.
  var path = '../libs/jquery/jquery.js';
  // Get any jquery=___ param from the query string.
  var jqversion = location.search.match(/[?&]jquery=(.*?)(?=&|$)/);
  // If a version was specified, use the local copy of that version.
  if (jqversion) {
    path = '../libs/jquery/jquery-' + jqversion[1] + '.js';
  }
  // This is the only time I'll ever use document.write, I promise!
  document.write('<script src="' + path + '"></script>');
}());

Testing in the cloud

As always, it’s a great idea to automatically run these tests after every push to GitHub, or on every pull request that is sent to you. To achieve this, we can leverage Travis CI with only a couple of changes to our project.

First add the ‘.travis.yml’ configuration file to your plugin’s base directory:

language: node_js

node_js:
  - 0.8

Then, set the ‘npm test’ script in your ‘package.json’ file to run our new Grunt ‘test’ task:

// Snip...

"scripts": {
  "test": "grunt test"
},

// Snip...

Finally, follow the official Travis CI guide to create an account, if needed, and activate the GitHub service hook. Once completed, you’ll have the confidence of knowing that the downloadable version of your plugin can’t be broken by mistake.

Keeping it in check

Now that we have a framework for testing multiple versions, it’s worth testing the minimum jQuery version your plugin supports, and each major version above it.

At a minimum, I’d recommend testing in 1.9.x and 2.x to ensure that any differences between the two versions don’t inadvertently break your plugin. Since both versions will be developed in parallel as long as old versions of IE maintain significant market share, it’s the least we can do for our users.

Update (19 Feb 2013): This article now reflects changes made in Grunt v0.4