Chaining Backbone Plugins with Require.js

You may have run into this situation: you're building a Backbone application and you want to take advantage of any number of plugins. For this example, let's say extended models and caching. You also want to override some of Backbone's prototypes because you have custom needs (my use case: adding pointer.js to the event object to handle click/touch events). Let's also say you're using Require.js, because if you aren't using either Require or something like Browserify, you should be (please don't argue which one is better; for my particular use case, Browserify is not an option). So you have a bunch of plugins that extend Backbone, but you don't want to have to attach them all to each of your modules. It would be so much easier to just write define(['backbone']... everywhere, since you want access to all of these, everywhere.

Require.js and the shim to the rescue:


require.config({
    paths: {
        jquery: '../components/jquery/jquery-2.0.mobile',
        backboneBase: '../components/backbone/backbone',
        underscore: '../components/underscore/underscore',
        backbone: 'plugins/backbone.overrides',
        DeepModel: 'plugins/backbone.deep-model',
        Cache: 'plugins/backbone-fetch-cache'
    },
    shim: {
        underscore: { exports: '_' },
        backboneBase: {
            deps: ['jquery', 'underscore'],
            exports: 'Backbone'
        },
        backbone: {
            deps: ['backboneBase'],
            exports: 'Backbone'
        },
        Cache: {
            deps: ['backbone'],
            exports: 'Backbone'
        },
        DeepModel: {
            deps: ['Cache', 'underscore', 'plugins/underscore.mixin.deepExtend'],
            exports: 'Backbone'
        }
    }
});

That's the code. So what exactly are we doing here?

First, we're setting aliases for all the Backbone-related script files we need. Second, we're using shim to load each one in the order required to properly chain the extensions. For example, I have a file that overrides the default View.delegateEvents method to convert click events to pointer.js's standardized pointer events (so it works for mouse or touch). I want to load this one as the main Backbone export, so I call it backbone, and the original, unadulterated Backbone is backboneBase.

Next, I want to add the fetch-cache and deepModel plugins. Since I'm already referencing define(['DeepModel']... in my project, I want to make sure Cache is always available alongside DeepModel, so I load Cache as a dependency of DeepModel. Cache exports Backbone with fetchCache attached. If I changed my code to define(['backbone']... I would lose both of these extensions.
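To see the payoff, here's a sketch of what a consuming module might look like under this config (the module body, model name, and urlRoot are illustrative, not from my project):

```
// Pulling in 'DeepModel' transitively loads backboneBase -> backbone
// (overrides) -> Cache -> DeepModel, so the Backbone handed to this
// module already has fetchCache and DeepModel attached.
define(['DeepModel'], function (Backbone) {
  // 'Profile' and '/api/profiles' are made up for illustration.
  var Profile = Backbone.DeepModel.extend({
    urlRoot: '/api/profiles'
  });
  return Profile;
});
```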

You can play around with how you want to name/load your extensions in order to provide the module setup that fits your project. I'm sure someone has a better way for me to do this, and I'd love to hear it!

Yeoman, Karma, jQuery, Require, Backbone, SASS, Mocha, Dust, i18n... oh my!

I've added a skeleton file directory / config to GitHub in order to perhaps save you time in setting up an application using the tools listed in the title: https://github.com/mrforbes/yeoman-backbone-stack

The main reason I added it was that getting Karma (formerly Testacular) working with Require was not as straightforward as I had originally assumed, because you need to tell Karma which files to include, but Require wants to load them all asynchronously. The workaround is in the karma.conf.js file:

files = [
  MOCHA,
  MOCHA_ADAPTER,
  REQUIRE,
  REQUIRE_ADAPTER,
  'app/scripts/config.js',
  {pattern: 'test/lib/chai.js', included: false},
  {pattern: 'test/test.js', included: false},
  {pattern: 'app/nls/*', included: false},
  {pattern: 'app/nls/**/*', included: false},
  {pattern: 'app/templates/*', included: false},
  {pattern: 'app/scripts/*.js', included: false},
  {pattern: 'app/scripts/libs/*.js', included: false},
  {pattern: 'app/scripts/plugins/*.js', included: false},
  {pattern: 'test/spec/*.js', included: false},
  'test/test-main.js'
];

 

The included: false option tells Karma the files will be there, but it doesn't inject them into the head, which leaves Require free to do its thing.

The second spot to notice is test/test-main.js. This is where all of the actual test files need to be added, instead of in the karma.conf.js file. You still need to alert Karma to them ({pattern: 'test/spec/*.js', included: false}), but you don't want them loaded that way.

Here's test-main.js:

require({
  // !! Karma serves files from '/base'
  deps: ['app','main'],
  baseUrl: '/base/app/scripts'
}, [
/* test runners go in here */
'../../test/spec/example',
'../../test/spec/i18n',
'../../test/spec/router',
'../../test/spec/index'
], function() {
  window.__karma__.start();
});

 

Pretty self-explanatory. Karma will wait to start until all of the specs have been loaded.

So, if you have the need, grab the repo. Keep in mind this is a skeleton, not a finished app, so some of the structure decisions are likely not optimal, and definitely not final. Move stuff around to suit your style / needs.

 

UPDATE 3/11/2013: I've updated the Git repository to the Yeoman 1.0 beta. Yeoman 1.0 has some major changes in it, so the new application framework reflects those changes.

I've also added the following:

  1. JsHint watching
  2. Karma through Grunt (watching through Grunt)
  3. Debug code removal (grunt-groundskeeper)
  4. Backbone form validation, model binding, and deep model libraries
  5. Dust templates (these were always there; I just never mentioned them in the original post)

UPDATE 5/3/2013: Testacular has been renamed Karma (for which I am thankful). I changed my references to match.

Getting Started with Testacular - A Tutorial

I've recently discovered the Testacular javascript unit-testing test runner. It's built on node.js, and if you're into unit testing (you should be) and streamlining your workflow (you should be), it will really light a fire under your butt and jack up your productivity. This tutorial covers basic installation only. The best place to learn more is to watch the screencast.

If you haven't already, you'll need to install node.js.  You can get it here for the OS of your choice.

Once you have node installed, you just need to follow the simple install instructions on the Testacular web site. I'll save you the click, though; just do this:

npm install -g testacular

 

Ok, it's not THAT simple, but only because you also have the option of installing the unstable build. If you'd like to use QUnit, you'll want to do that (at least at the time of this writing). Here's the command:

npm install -g testacular@canary

As of the writing of this post, stable is at 0.4 and unstable at 0.5.8 (stable minor versions are always even). If you want stable with QUnit support, wait for 0.6. You can check your version once you install with:

testacular --version

Ok, so the next thing you need to do is set up the config file. You can do this from the command line via:

testacular init

This will give you a nice setup flow where you can choose some options, or you can skip it all and fill in the resulting .js file yourself. Here are the 3 most important config settings (remember, this is a basics tutorial):

files = [
    QUNIT,
    QUNIT_ADAPTER,
    ..
]

browsers = ['PhantomJS'];

// enable / disable watching file and executing tests whenever any file changes
autoWatch = true;

The first two entries are variables set by Testacular that wire up your testing framework; 0.5.8 has out-of-the-box support for Jasmine, Mocha, and QUnit. The rest of the files are your scripts and their tests. At this point, you'll want to add your files. For example, if you had a helloWorld.js and a spec/helloWorld.js file, you would do:

files = [
    QUNIT,
    QUNIT_ADAPTER,
    'helloWorld.js',
    'spec/helloWorld.js'
]

 

If you know your tests will all be in the spec folder, you can also use wildcards:

files = [
    QUNIT,
    QUNIT_ADAPTER,
    'helloWorld.js',
    'spec/*.js'
]

 

Next, you need to choose the browsers you want to run the tests in. You do this either by adding environment paths to the browser application/exe or by setting symlinks. I've only done this on Windows so far, by adding (for example) CHROME_BIN = 'path to chrome.exe' to my local environment variables (go to Control Panel and search for 'environment variable' if you aren't sure where to add this).

browsers = ['PhantomJS'];

One of the best things about Testacular is that you can have it watch for file changes and rerun your tests when it detects one. This way you have a constant monitor of whether or not you're screwing anything up, and you can react immediately without having to launch a browser or type into the command line. Awesome.

// enable / disable watching file and executing tests whenever any file changes
autoWatch = true;

So, now you have your test runner set up and your testing framework ready to go. All you have to do is start the test server:

testacular start

Now go in and add some tests, save your files, and watch the magic happen. Testacular can also integrate with build servers like Jenkins (which we use at Linksys), but that's perhaps fodder for another post.

Also of note, Testacular is part of the awesome Yeoman, which I'll be diving into soon.

Web Performance Optimization Tip - jQuery + Document Fragments

Here's the setup... You have a REST API returning a JSON array filled with a bunch of data that you want to put into a list. It looks vaguely like this:

[{1234:{name:'bob'}},{5678:{name:'karen'}}]

 

Two names for brevity, but the list can be N length.

Let's say you want to put this into a simple list of names.

Here's the way I still see way too many devs doing it:

<html>
<body>
<ul class="names"></ul>

<script>
$(function(){
  var restResponse = [{1234:{name:'bob'}},{5678:{name:'karen'}}];

  $.each(restResponse, function (index, obj) {
    // each entry is keyed by id, e.g. {1234: {name: 'bob'}}
    $.each(obj, function (id, person) {
      $('.names').append('<li>' + person.name + '</li>'); // a separate DOM write per name
    });
  });
});
</script>
</body>
</html>

 

It works, and it's not really THAT bad for two names. But what if you had 100, or 1000? Especially if you're developing for mobile, you need this technique, or you're killing your performance over one fairly simple block of know-how.

That know-how is called document fragments. I'm not going to go in depth on what document fragments are; you can check out John Resig's post on the subject here.

The short version is that they are DOM elements you can create in memory, that don't cause screen redraws when you manipulate them. The benefit: You can add LOTS of nodes to them in memory, write them to the screen, still keep them in memory, remove them from the screen, still keep them in memory, and manipulate them as some kind of awesome phantom DOM object. Seriously, document fragments are bad-ass, and if you want to be bad-ass, you need to know how to use them. Not only that, but jQuery makes it stupendously easy to work with them.

Let's go back to the example... What if we make some minor modifications?

<html>
<body>
<div class="names"></div>

<script>
$(function(){
  var restResponse = [{1234:{name:'bob'}},{5678:{name:'karen'}}];
  var $ul = $('<ul>');
  $.each(restResponse, function (index, obj) {
    $.each(obj, function (id, person) {
      $ul.append('<li>' + person.name + '</li>');
    });
  });
  $('.names').append($ul); // or you could do $ul.appendTo('.names') whatever makes you smile.
});
</script>
</body>
</html>

 

Do you see what I did there? Here it is in super slow-mo:

var $ul = $('<ul>');

 

Yes, that made a document fragment of an unordered list. That's all it took. You can also add attributes to it like this:

var $ul = $('<ul>',{id:'listOfNames'});

 

Then we loop through our JSON array and stick each list item into the fragment:

$.each(restResponse, function (index, obj) {
  $.each(obj, function (id, person) {
    $ul.append('<li>' + person.name + '</li>');
  });
});

 

Then we stick it onto the screen.

$('.names').append($ul); // or you could do $ul.appendTo('.names') whatever makes you smile.

 

If you've been making this mistake, stop now and go refactor your code.
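As an aside, the same "batch everything off-DOM, write once" idea also works with plain strings, which can be handy when you don't need element references. A sketch (the flattened data shape here is just for illustration):

```javascript
// Build all the markup in memory first; nothing touches the DOM until
// the very end, so there's only one reflow no matter how long the list is.
var restResponse = [{ id: 1234, name: 'bob' }, { id: 5678, name: 'karen' }];

var listHtml = '<ul>' + restResponse.map(function (person) {
  return '<li>' + person.name + '</li>';
}).join('') + '</ul>';

// listHtml is '<ul><li>bob</li><li>karen</li></ul>'
// In the page, this is still a single DOM write:
// $('.names').append(listHtml);
```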

Thinking Big... Architecting a large application with jQuery / Backbone / Require, an Overview

A Little Background

Recently at work we've been in heavy development mode on a new thick-client application architecture. If you aren't familiar with the concept, thick-client essentially means putting most of the work onto the client's processor and removing it from the server. The benefits of this are many, not the least of which are a much faster site response time for your users and a much lighter load for your server.

To accomplish this basically means using javascript and a lot of AJAX, as well as implementing the application such that a) search engines can still index your site and b) the user can both bookmark and use the back button wherever they are in your app.

In order to facilitate this heavy use of ajax and management of state, a number of MVC(ish) patterned javascript libraries have been created, and more appear seemingly every week.  After a little bit of research, we settled on Backbone due to its barebones nature, low overhead, lesser learning curve, and the size of the community.  It always helps to be able to reach out when you get stuck.

In combination with jQuery (we are already using it heavily), and Require.js (a dynamic resource loader which has also been in use for quite a while), we had the perfect trinity of tools to get the job done.

Basic App Structure

There are a lot of things to consider when architecting a large application.  For HMS, this is compounded by the fact that a) we have a lot of users with a lot of different site configurations, b) we have different variations on the main backend application, c) there is a lot of legacy code and newer front-end logic that needs to maintain compatibility so that pieces can be added one at a time.   This is where Require and its support of AMD (asynchronous module definition) really shines.

All of our thick-client application code is managed through AMD, and there are two things about it that are very awesome:

  1. It allows seamless integration on a module-by-module basis with the current application code
  2. Even portions of modules can be broken out and used before their parent module is complete - for example, we already have the autocomplete module live, even though the search module is still in development. When the time comes, it will be trivial to move it back into its proper place.

The general application structure goes like this:

  1. Event Aggregator: Outlined here, we have a single global object whose job is to manage communication between modules.
  2. Outer Router: We have two routers. This helps keep the application router a lot simpler, and keeps module-related logic inside the module. The outer router manages routes for the entire application and determines which modules to load.
  3. User Module: Right now, this is the only module that sits everywhere in the application. As such, it is loaded by the outer router.
  4. Modules: Each module contains its own router as well as templates and views. The router loads the views, the views load the templates (we're using Hogan). When we compile with Require, each module (including templates) becomes a single file (the module router).
  5. Models: The models are the glue between the client and the server, and they sit in their own folder because multiple modules may use the same models.
  6. Form Mediator: This is actually one of the modules, but it's worth describing here. We use a special Backbone view to handle every form, enhancing functionality as we go. This view simplifies forms immensely, managing the model, validation, and state with ease. Its special power is to allow multiple modules to combine into a single form definition, most usefully for search / advanced search.
  7. UI Modules: Most DOM manipulation (outside of insertions/removals) and all UI effects are offloaded into separate UI modules. The reasons for this are a) it separates presentation concerns and b) it allows for a much easier time should we decide to switch from jQuery for future manipulation (one can dream of full standards support across browsers).
  8. Backbone.Store (localStorage): Of course we're using this for our app. Currently it persists forms, helps maintain the user session client-side, and caches model data.

Obviously, there are many ways to architect a thick-client application.  Our approach intends to prevent module dependency (outside of UI Modules), separate DOM manipulation from data logic, keep communication centralized in an event aggregator, load quickly (through smart module loading and use of localStorage, among other things), and preserve the global namespace through the use of AMD.  There are a lot of details I've left out (for instance reuse for mobile), but I do hope to dive a little deeper in future posts.