Tuesday, September 18, 2012

x-tags!

Browser technology has advanced a lot in recent years, and the hype about HTML5 spotlighted how much browsers improved. Still, HTML5 is a low-level technology compared to UI toolkits common on platforms like .NET, Java and Flex, which are much more component-based, providing big building blocks for developers to assemble applications quickly. Sure, there is Ext JS and several other libs providing equally rich component sets built on top of low-level HTML/JavaScript/CSS.

Anyway, with the Web Components spec currently in the works at standards bodies and browser makers, HTML5 is going to take the next step towards a full-fledged RIA platform. Web Components roughly include templates, decorators and custom elements. More a technical implication than a feature is the shadow DOM, which divides the DOM into initial content and shadow elements.
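To give a flavor of the template part, here is a sketch along the lines of the current draft (the element names and the #list target are made up, and the syntax may still change before standardization): a template is an inert fragment in the page which scripts can clone into the live DOM.

<template id="row-template">
    <li class="item"><span class="name"></span></li>
</template>

<script>
    // clone the inert template content and insert it into the live DOM;
    // #list is an assumed target element elsewhere in the page
    var template = document.querySelector("#row-template"),
        row = template.content.cloneNode(true);
    document.querySelector("#list").appendChild(row);
</script>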

For custom elements there is already a working implementation available. At x-tags.org Mozilla provides a polyfill in JavaScript which is on the standards track. The strategy is obvious: create a specification of a feature and provide a polyfill implementation for early use, which can finally be replaced by a native implementation. Wherever this is possible, it's a good plan! It also helps to get early feedback on something like a 'reference implementation'. So how does this x-tag thingy work in use? It's pretty simple. See how to create a map:

<head>
   <link rel="stylesheet" href="/x-tags/map/map.css"/>
   <script src="/x-tags/x-tag.js"> </script>
   <script src="/x-tags/map/map.js"> </script>   
   <style>
      #detail-map {
          box-sizing: border-box;
          width: 100%;
          height: 480px;
      }
      x-map {
           border: 1px solid black;
      }
   </style>
</head>
<body>   
    <x-map id="detail-map" data-key="xxx"></x-map>
</body>

The code sample shows how to include the x-tag library, which supports all browsers from IE9 upwards (so it's completely safe for the mobile web). The map.js script registers a custom element and defines its form and behavior. Once registered, instances of the element are recognized in the DOM and initialized accordingly. The Web Components spec defines a couple of lifecycle events for custom elements (like created, inserted or attributeChanged). This way the logic shipped with a custom element can be hooked in to do its augmenting work.
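To illustrate, registering a custom element with x-tag looks roughly like this (a sketch based on the x-tag documentation; option names like lifecycle, created and inserted may differ between versions of the polyfill):

xtag.register('x-map', {
    lifecycle: {
        // called when an instance of the element is created
        created: function () {
            this.innerHTML = '<div class="map-canvas"></div>';
        },
        // called when the element is inserted into the document;
        // a good place to initialize the actual map widget
        inserted: function () {
            var key = this.getAttribute('data-key');
        },
        // called whenever one of the element's attributes changes
        attributeChanged: function (name, oldValue, newValue) {
        }
    }
});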

These technical details are mainly of interest to component or framework developers. An application developer does not have to fiddle around with them often; more often he is interested in a set of components to quickly wire up an application. That's what x-tags is meant for. At x-tags.org you will find a JavaScript implementation of x-tag which ships with a bunch of nice components. I built a first FirefoxOS application with these and found them very handy. There is an x-slidebox and an x-flipbox element with which I was quickly able to construct basic navigation behavior for my application in a declarative way.

<x-slides>
   <x-slide>
      <x-map id="map" data-key="xxx"></x-map>
   </x-slide>
   <x-slide>
      <x-flipbox class="x-flip-direction-right">
         <x-card>
            <div>Front side</div>
            <div>Back side</div>
         </x-card>
      </x-flipbox>
   </x-slide>
</x-slides>

Here again we have to include the JavaScript and CSS files in the head or at the end of the document:


<link rel="stylesheet" type="text/css" href="/x-tags/map/map.css"/>
<link rel="stylesheet" type="text/css" href="/x-tags/slidebox/slidebox.css"/>
<link rel="stylesheet" type="text/css" href="/x-tags/flipbox/flipbox.css"/>
<script src="/x-tags/x-tag.js"> </script>
<script src="/x-tags/slidebox/slidebox.js"> </script>
<script src="/x-tags/map/map.js"> </script>
<script src="/x-tags/flipbox/flipbox.js"> </script>


All in all, x-tags, or better, custom elements, are a promising HTML5 feature to make the development of large applications and library integration more convenient. Thanks to JavaScript polyfills, they can be used well before all browsers support them natively. I'm quite sure JavaScript libraries will jump on that train once it's available, because it is easy to do and it offers a nice way to integrate third-party behavior in a declarative and semantic way.

See a short slide deck about Mozilla's x-tag polyfill or jump to the demo page.

Sunday, September 16, 2012

Developing and deploying a FirefoxOS application

At MozillaCamp 2012 in Warsaw I came upon Mozilla's x-tag implementation. Eager to try some x-tags and the Mozilla implementation, I started a first FirefoxOS application. Since I currently have no hardware device on which I can run b2g applications, I had to find other ways.

While having a hardware device is essential to develop and test an application, it is a nice asset of a platform to have other ways to run an application. While developing a FirefoxOS application, the turnaround can be extremely quick for a great part of the app. When it comes to fiddling around with the UI to make it beautiful and shiny, for example, it's a blessing to be able to simply reload the app in a browser and have all the beloved development tools available for debugging, adjusting styles and the like. Once it fits, after a dozen or so development cycles, it can be deployed and tested on the device again.

For FirefoxOS aka b2g aka boot2gecko applications there are several ways to run them during development:
  • reload the app in an ordinary web browser
  • run it in the FirefoxOS phone emulator
  • install it as a desktop application
While reloading the app in the browser is the fastest, browsers lack decent support for Web APIs so far. Anyway, having these three ways available boosts development in many regards.

Possible productive deployment platforms are:
  • a mobile phone device running FirefoxOS (not commercially available yet)
  • a mobile tablet device running FirefoxOS (same as phone)
  • a sandboxed browser (Web API support required)
  • a desktop application (e.g. deployed via FirefoxAurora)
Each way has its own appeal, and covering them all with a single app is great.

Mozilla provides a product line demonstrating the ability to do this right now. As it is fully based on standards or standards proposals, it is more than a proprietary FirefoxOS or Mozilla application. It's a convincing proposal for how web application technology can be leveraged on every platform, with a degree of integration into the hosting device or platform similar to native counterparts, or even run directly on a mobile and web-enabled Linux kernel like b2g. There might never be a full "write once, run everywhere" for high-class, high-fidelity apps, but solid support for standardized, installable HTML5 apps on desktop and mobile is a great asset, pushing competitors further in many regards.

See this YouTube video with a screencast of how I ran a web application in a web browser, in the FirefoxOS phone emulator, and deployed as a desktop application on OSX:


Saturday, February 25, 2012

Navigating the web programmatically with Phantom.js

Phantom.js is a headless WebKit browser which lets you navigate the web in a programmatic manner from the command line. A JavaScript API allows you to control the browser, for example to load a web page, check its contents or run JavaScript within the page. Obviously this is a powerful tool for many purposes. Running test cases comes to mind quickly.
I created a first example to load a qUnit test case, run the tests and, after a short delay, grab the test result from the DOM elements:


var page = new WebPage(),
    url = "http://localhost:3000/repo/test/model/model.qunit",
    delayInMillis = 200;

page.onLoadFinished = function (status) {
   
    window.setTimeout(function() {
       
        var result = page.evaluate(function () {
            return {
                passed: document.querySelector("#qunit-testresult .passed").innerText,
                failed: document.querySelector("#qunit-testresult .failed").innerText,
                total: document.querySelector("#qunit-testresult .total").innerText
            };
        });
       
        console.log(JSON.stringify(result));
        phantom.exit();
       
    }, delayInMillis);
};

page.open(url);

The sample creates a WebPage object and registers a listener for the loadFinished event. The page is loaded by simply calling page.open(url). As soon as the page has been loaded completely, the handler is called. To get the results of the qUnit test, a function is evaluated. This function is called within the page just loaded, so it can access the DOM and read the test result as rendered by qUnit. To make sure all tests have finished by the time the script evaluates the DOM, a timeout is used to delay the action.

Once phantomjs is on the PATH, the still naive script above can be called from the command line:

# phantomjs load-qunit.js

The ability to inject JavaScript into pages is very powerful. It makes integration testing of a web application quite easy, as we can inject scripts to emulate user inputs triggering events. Afterwards the resulting action can be asserted by checking the changes in the DOM. This adds another possible testing layer above the unit and component testing which can be done with qUnit alone. It's not cross-browser, but it's lightweight and fast and requires less setup than Selenium or JsTestDriver. A good tool well suited to be run on both a developer workstation and an integration test server.
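For instance, a click can be emulated inside the page and the DOM reaction read back afterwards. A sketch (the #add-button and #items elements are made-up parts of the page under test):

var itemCount = page.evaluate(function () {
    var button = document.querySelector("#add-button"),
        event = document.createEvent("MouseEvents");

    // synthesize and dispatch a click event on the button
    event.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0,
        false, false, false, false, 0, null);
    button.dispatchEvent(event);

    // read the resulting DOM state back out of the page
    return document.querySelectorAll("#items li").length;
});

console.log("items after click: " + itemCount);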

While being headless (there is still a dependency on X on Linux; see the workaround), Phantom.js is still able to render pages into a bitmap. This makes it easy to create a detailed test report, especially when something went wrong. A rendered picture of the failing page can support debugging and complement other data which can be drawn from the DOM in case of a failing test.

It's dead easy:

var page = new WebPage(),
    url = "http://localhost:3000/repo/test/model/model.qunit",
    delayInMillis = 200,
    screenshotFile = "failedTest.png",
    viewportWidth = 1024,
    viewportHeight = 1000;

page.viewportSize = {
    width: viewportWidth,
    height: viewportHeight
};


page.onLoadFinished = function (status) {
   
    window.setTimeout(function() {
       
        var result = page.evaluate(function () {
            return {
                passed: document.querySelector("#qunit-testresult .passed").innerText,
                failed: document.querySelector("#qunit-testresult .failed").innerText,
                total: document.querySelector("#qunit-testresult .total").innerText
            };
        });
       
        console.log(JSON.stringify(result));

        if (result.failed > 0) {
            page.render(screenshotFile);
        }

        phantom.exit();
       
    }, delayInMillis);
};

page.open(url);


As demonstrated, Phantom.js allows for automating many tasks done with a web browser. Not only testing but many other possible applications of Phantom.js can be very useful for web developers. Obviously, testing with Phantom.js alone is not enough for the mainstream web, where cross-browser testing is a must and solutions like Selenium are available.


Installation on OSX was as easy as downloading and extracting an archive file. Examples shipped with the distribution and the official documentation allowed for a quick jump start. So Phantom.js is another useful WebKit application which brings the web and its technology out of the usual browser environment. Try it out!

Screenshot taken with Phantom.js after qUnit test has run:



Tuesday, February 14, 2012

Using jslint

JSLint and JSHint are tools which help to avoid bugs related to some JavaScript oddities. Linting my JavaScript code has become a habit for me, and I found some issues in my code which I would not have found without it. It also helps when refactoring code. For instance, when moving code to a separate file, JSLint tells me about variables and functions I want to use in the script but which are still in the old file. I don't have to run a test case to find the bug. The following option helps me out in this particular case:

undef: false #  true if variables and functions need not be declared before used.

On OSX I use TextMate for JavaScript code. JSLintMate is a bundle for TextMate which integrates JSLint and JSHint into the editor and fires whenever I save a JavaScript file. So customizing my lint options quickly became an issue. At the end of this post you can find my current JSLint options, which are stored in the .jslintrc file in my home directory and hence are passed to JSLint by default.

Within a JavaScript file I can put additional instructions in comments. The jslint instruction can be used to pass options overriding or completing the default options:

/*jslint browser: true, devel: true */

The above tells JSLint to include global variables for the browser environment and allow calls to development tools like the console.
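With those options in place a file like the following should lint clean, because the browser and development globals are now predefined:

/*jslint browser: true, devel: true */
(function () {
    "use strict";
    // window and console pass the lint thanks to the options above
    console.log(window.location.href);
}());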

Other globals can be added by the global instruction:

/*global jQuery: true, craftjs: true */

This can be very useful to tell JSLint which globals are provided by third-party libraries, such as jQuery in the snippet above. Besides libraries, there might also be globals created by my own scripts in other files which are linked into the page or prepended by a build tool like Sprockets or craft.js.

Besides JSLint, JSHint is becoming more and more popular. The aforementioned craft.js I'm currently working on uses JSHint. Both roughly offer the same functionality, but JSHint offers additional options and hence more control over the lint process. The JSLint instructions within JavaScript comments are supported by JSHint as well. That's why JSLint and JSHint can be used side by side without cluttering the source code even more.

Conclusion

Being used to IDE features for languages like Java makes a JavaScript developer feel lonely. Not only can JavaScript IDEs not offer a feature set as rich as Java IDEs can; JavaScript does not even have a compiler which tells a developer about typos and other mistakes. JSLint/JSHint can help to some extent by providing instant feedback while writing JavaScript code and supports the endeavour of delivering high-quality JavaScript code.

my current jslint default options

adsafe: true # true if ADsafe rules should be enforced. See http://www.ADsafe.org/
bitwise: false # true if bitwise operators should be allowed.
cap: false # true if upper case HTML should be allowed
css: false # true if CSS workarounds should be tolerated
debug: false # true if debugger statements should be allowed (set to false before going into production)
eqeq: false # true if the == and != operators should be tolerated.
es5: false # true if ECMAScript 5 syntax should be allowed
evil: false # true if eval should be allowed
forin: false # true if unfiltered 'for in' statements should be allowed
fragment: false # true if HTML fragments should be allowed
indent: 4 # Number of spaces that should be used for indentation - used only if 'white' option is set
maxerr: 50 # The maximum number of warnings reported (per file)
maxlen: 120 # Maximum line length
newcap: false # true if Initial Caps with constructor functions is optional.
nomen: true # true if names should not be checked for initial or trailing underbars.
on: false # true if HTML event handlers (e.g. onclick="...") should be allowed
passfail: false # true if the scan should stop on first error (per file)
plusplus: true # true if ++ and -- should be allowed
regexp: false # true if . and [^...] should be allowed in RegExp literals.
safe: false # true if the safe subset rules are enforced (used by ADsafe)
sloppy: true # true if the ES5 'use strict'; pragma is not required
sub: false # true if subscript notation may be used for expressions better expressed in dot notation
undef: false #  true if variables and functions need not be declared before used.
vars: false # true if multiple var statement per function should be allowed.
white: false # true if strict whitespace rules should be ignored.

predef: '' # Names of predefined global variables - comma-separated string or a YAML array

browser: false # true if the standard browser globals should be predefined
rhino: false # true if the Rhino environment globals should be predefined
windows: false # true if Windows-specific globals should be predefined
widget: false # true if the Yahoo Widgets globals should be predefined
devel: true # true if functions like alert, confirm, console, prompt etc. are predefined
node: false # true if the node.js environment globals should be predefined


Thursday, February 2, 2012

Leveraging node.js for unit testing in JavaScript development

I played around with node.js to find a way to properly unit test my JavaScript. While doing that I stumbled upon a nice feature node.js offers via the vm module. The function

vm.runInNewContext(script, globalObject, filename);

allows you to run a script and pass a context to it. This context is nothing other than the evil global object of JavaScript we all know about and work around. So the above code allows us to run a script like the one below, which is stored in test.js and has two dependencies on the global object:

(function () {
    "use strict";
   
    var renderer = require("./renderer");
   
    assert.ok(renderer);
}());

By using runInNewContext we can now load the test.js script and run it with an empty context. This is done by test-runner.js:


(function () {
    "use strict";
    var vm = require("vm"), fs = require("fs"),
        filename = "app/test.js",
        testScript = fs.readFileSync(filename).toString(),
        globalObject = {};

    vm.runInNewContext(testScript, globalObject, filename);
}());

Now we can run the test runner with node:

#  node test-runner.js

This yields an error saying:

ReferenceError: require is not defined

This is exactly what we expected, because the globalObject passed to runInNewContext is empty and hence the context has no require to offer, as we are used to in a node.js environment. However, being able to pass our own global object, we can mock the dependencies the test requires. This is exactly what we need to mock away our collaborators and unit test our modules in an isolated manner. In Java, libraries like EasyMock do some magic to provide mocks for interfaces and classes - a great and very useful tool for test-driven development in Java. In JavaScript, mocking a function is a snap compared to that. The dynamic type system and the functional approach with late binding make it easy to mock a function or an entire object with literals. Hence the test-runner.js script needs to set up the globalObject properly to provide an appropriate global object on which test.js can rely:

(function () {
    "use strict";
    var vm = require("vm"), fs = require("fs"),
        filename = "app/test.js",
        globalObject = {
          require: function(module) {
            return {};
          }

        },
        testScript = fs.readFileSync(filename).toString();

    vm.runInNewContext(testScript, globalObject, filename);
}());

The above script now offers a require function as a property of the global object. It simply returns an empty object, which is enough to fulfill our test case. Now our test script fails a little bit later because it misses the assert function to check the imported module. In this case we don't want to reimplement the assert function node.js already offers. Instead we just pass the real implementation:

globalObject = {
     assert: require("assert"),
     require: function(module) {
            return {};
     }
}

Running the test runner now passes without complaint, because we successfully mocked all dependencies of our test case.

The example I showed here is very simplistic. But it clearly shows the opportunity the runInNewContext function of the vm module offers for unit testing JavaScript code. We can not only mock the global require function, but also other global properties, for instance the document or window properties in a browser or a jQuery Ajax call like $.ajax({}).
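As a sketch of what mocking a jQuery Ajax call could look like (the canned response and the lastRequest bookkeeping are made up for illustration):

var assert = require("assert"),
    lastRequest = null,
    globalObject = {
        assert: assert,
        $: {
            // stand-in for jQuery's $.ajax: record the request and
            // invoke the success callback synchronously with canned data
            ajax: function (options) {
                lastRequest = options;
                options.success({ items: [] });
            }
        }
    };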

While it really sounds like a big burden to mock away an entire browser environment, the impact of such a development style on JavaScript code would be huge and would not only raise quality and testability but also lead to flexible, reusable modules, objects and functions. While I have not applied such a development approach to JavaScript so far, my experience with TDD in Java makes me sure this would be a good idea for JavaScript as well. Not having Eclipse and its great refactorings for Java (a kingdom for Ctrl+1!!) makes it not quite as easy and powerful, but the benefits make it worth doing.

As shown above, node.js in general and vm.runInNewContext in particular offer a nice starting point for such an endeavour.

Thursday, December 29, 2011

Knockout.js - a nice framework comes with a new major release

The year ends, development never stops. Knockout.js has released a new major version 2.0.0 (http://www.knockoutjs.com). Even the first major version was a great framework to design and architect your JavaScript SPAs (Single Page Applications :-).

Release notes: http://blog.stevensanderson.com/2011/12/21/…

Now with release 2 Knockout.js gets even better. I tried and studied some JS frameworks in the last year and KO was the one which convinced me most. It delivers a nice design concept by applying the MVVM pattern. The integration of templating and the excellent data binding make creating HTML front-end applications fun and incredibly efficient.

I would see knockout.js as a macro-framework. It's not a replacement for a swiss-army-knife library like jQuery, on which KO is built, btw. Rather it is concerned with the general structure of a browser application. This is one of the key pain points of the browser as an application platform. While little things are easy and done quickly, an app often tangles up into a chaos of interwoven functions while growing larger. KO helps to overcome this by delivering a clean and easy design (MVVM) which can be applied at application and widget/component scope and helps keep up maintainability.

Key points I like:

  • introduces clean application design
  • helps to solve the biggest web application development problem: structure and know your application
  • no constraint on the view side - everything goes
  • powerful observable model with dependency tracking
  • great data binding to synch model and view automatically
  • great template integration re-rendering templates automatically on model change

Note: KO is not a component library with standard widgets; do it yourself or use the component library of your choice.


MVVM
The application is based on a view model which contains an observable data model and the activities to access and manipulate this data. Based on observables, a model can be put together quickly while enforcing a clean design. Once the model is defined, the UI can be attached to and synched with that model out of the box.

Observable model
A great approach as known from other stuff out there (e.g. backbone.js, spine.js, JavaScriptMVC): an observable model triggering change events. Observables can even be chained to other observables (dependentObservables in v1, computedObservables in v2).

Data binding
Excellent data binding is key for an efficient web development experience. Knockout.js has it and it's great! The observable model can be bound to DOM elements with 'data-bind' attributes. This makes it easy to render JavaScript data into DOM elements. On a model change the templates are re-rendered automatically. This way an application developer only has to be concerned with the business functionality manipulating the model, while the view updates automatically.
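A minimal sketch of the idea (the model properties are made up):

<p>Hello, <span data-bind="text: fullName"></span></p>
<input data-bind="value: firstName" />

<script>
    var viewModel = {
        firstName: ko.observable("Ada"),
        lastName: ko.observable("Lovelace")
    };
    // a computed observable re-evaluates whenever its dependencies
    // change, and the bound span above updates automatically
    viewModel.fullName = ko.computed(function () {
        return viewModel.firstName() + " " + viewModel.lastName();
    });
    ko.applyBindings(viewModel);
</script>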

When doing UI development, keeping data and UI in synch is a big share of the work - mostly trivial but time-consuming. Having this done by the framework saves time to make the UI incredible and awesome (or, sadly, simply cheaper ;-) Knockout.js makes

button.attr("disabled", "disabled")
never clutter your js code again, as you can quickly bind such stuff to an observable model value and let the framework synch the UI on model change:

<button data-bind="enable: myItems().length < 5">Add</button>

Templating
Knockout.js is based on jQuery and integrates nicely with the templating engine of Boris Moore (Microsoft). Templates in the page can be attached to DOM elements by data binding and are executed by KO automatically. It's a deep and efficient integration. Changes to a data item of a collection re-render only those view elements belonging to that particular item, not the entire collection.

With 2.0.0, templates integrate even more nicely. A template can now simply be part of the DOM (in version 1.x templates are included in a script element with type text/tmpl or similar). That way the markup code is very clean. I guess it's the approach which delivers the least ugly markup template code I've ever seen; no other template language I know is cleaner. This might be a detail, but knowing that there is no template language which is not ugly for advanced templates, this can be a real plus regarding maintainability. Besides that, having the template markup parsed by the browser during page load is an appreciated performance boost for large apps.
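A sketch of the 2.0 style, where the template is just markup in the page (items and name are made-up model properties):

<ul data-bind="foreach: items">
    <li data-bind="text: name"></li>
</ul>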

So now, if you ever have to do web application development for the desktop, you definitely should check out knockout.js. It's not only great but supports browsers back to the stone age (IE6). The documentation is extensive and the framework is easy to use, helps you keep things together and greatly supports the nitty-gritty, boring housekeeping work web developers have to tackle.

Thursday, June 2, 2011

Pusher - websockets made simple and easy

While visiting the Falsy Values conference in Warsaw I stumbled upon pusher.com. Pusher offers an interesting websocket infrastructure which makes jumping into websocket application development easy.

Currently websocket technology on the server side is not as widely available and common as traditional HTTP server technology. In the Java ecosystem there are plenty of well-known HTTP server implementations available which use non-blocking I/O. However, non-blocking HTTP is rather new in the Java world and has just been added to the most recent JEE6 specification. A websocket implementation in particular is not part of the current Java spec.

Grizzly (Glassfish) offers a websockets implementation, Jetty does as well, and Atmosphere even aims for a portable solution. Tomcat, the most popular servlet container, does not.

Traditional web servers bind each connection to a thread. This limits scalability for websockets and look-alikes tremendously. NIO implementations like node.js or Java Servlets 3 are event driven: one (node.js) or more worker threads dispatch events. That way many connections are handled by a single thread, which allows for better scalability, especially for connections held open for a long time.
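As an illustration of the event-driven style, a minimal node.js server handles many concurrent connections with a single thread dispatching callbacks:

var http = require("http");

// every request arrives as an event in this callback; no thread is
// blocked while a connection stays open
http.createServer(function (request, response) {
    response.writeHead(200, { "Content-Type": "text/plain" });
    response.end("hello");
}).listen(8080);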

It seems that the protocol and NIO technology required for doing websockets add complexity to web applications which is currently not familiar to the community and hence not that easy to master. However, applications based on websockets mostly need nothing more than some kind of publish-subscribe mechanism. Publish-subscribe is easy to use, but very powerful and flexible.

That's exactly what Pusher offers: a websocket (or Flash-socket) based service to publish and subscribe to JSON messages. On the browser side, a JavaScript provided by Pusher is included to do the websocket work. Browsers without websocket support are served by a Flash fallback. This is of great value and not a snap to achieve with a homebrew solution. Today it's essential to have a good and proven third-party library to achieve solid websocket applications targeted at a mainstream web audience.

An application subscribes for a given event and gets its callback called when a given event occurs.

var pusher = new Pusher('KEY');
pusher.subscribe('iam');
pusher.bind('news',
  function(data) {
    document.getElementById("news").innerHTML = "<p>" + data.msg + "</p>";
  }
);

Triggering events is possible not only from within the browser. Pusher offers a REST interface to trigger events from any language supporting HTTP. It's no surprise to find implementations in many languages listed on the Pusher website. There is a Java library for Pusher on Google App Engine available at github. As I'm not using Google App Engine and required an adapted solution, I forked the project and provided an implementation based on Jakarta's HttpClient. The refactored project adds some more OO style, e.g. to have it easily available in a Spring environment or other IoC containers. This has been mostly achieved by removing the rather static nature of the GAE solution. Have a look at my fork at github. The GAE implementation of Stephan Scheuermann was of great value for me. The code was well structured, and his implementation of creating appropriate hashed signatures worked for me like plain vanilla and helped me get to a nice solution quickly. Thx for that nice work!

Sending an event to a Pusher application from within Java is easy that way:

PusherTransport httpClientTransport = new HttpClientPusherTransport();
PusherChannel channel = new PusherChannel("iam", APPLICATION_ID,
          APPLICATION_KEY, APPLICATION_SECRET, httpClientTransport);

PusherResponse response = channel.pushEvent("news", JSON_STRING);
response = channel.pushEvent("news", ANOTHER_JSON);

I like that Pusher stuff. I can easily create a websocket application without resorting to solutions like node.js, Kaazing or Glassfish, for which it is hard to find hosting providers. I can stick to the server environment which I and my server administrator are used to and opt in to websockets with Pusher for those applications relying on them (which are not that many by now). Having a solid client-side solution with support for many browsers is another big plus.