6 September 2015

FalcorJS with AngularJS v1.x

Ever since it was presented at ng-conf, I've been excited to get my hands on Netflix's new "One Model Everywhere" data-fetching framework: FalcorJS.  Now that it's been released to the world as a Developer Preview, I wanted to show how it can be used with AngularJS v1.x.

What is FalcorJS?

"A JavaScript library for efficient data fetching"

That sums up FalcorJS pretty well.
It's designed to retrieve only as much data as you need at a time: the view specifies the values it needs, and the model batches those requests up when it calls the server.

Falcor represents your data as a big JSON 'graph', which means you can request the same data from different paths and it will use the cache from the first request.

Sample from FalcorJS' JSON Graph Guide

One odd thing you will have to get used to with FalcorJS is that you only request one value at a time. And by "value" I mean: string, number, boolean, or null. This feels totally alien coming from... everything else, but it works because FalcorJS takes care of the efficiencies for you. Instead of making one request for an array of objects, which you then have to split up into 20 fields on your view, you make 20 requests from your view, which are automatically batched into one request.

Another mechanic Falcor provides is functions on your JSON Graph that you can 'call()', which is good for things like creating whole new objects, and for transactions where you want to be sure everything is handled in one go. Responses from a call can include multiple values, for situations where you're sure they'll be required, and can specify which routes should have their cache invalidated.

The last thing to mention is that FalcorJS is built on Observables from RxJS for all its asynchronous operations. For those unfamiliar with them, Observables and the whole ReactiveX API are the ultimate asynchronous toolkit. Observables fill in the gaps left by Promises: things like how to handle multiple values over time, and whether new 'observers' see just new values or the preceding values too. I won't go into Observables too much, but there are lots and lots of excellent resources online.

Using with AngularJS v1.x

Disclaimer: Once again, FalcorJS is in Developer Preview. So what's written here today may not work tomorrow. Check the FalcorJS API Reference and Github if you have difficulties.

In this example we're going to build a UI with AngularJS v1.x that uses a FalcorJS Model with a HttpDataSource pointed at our falcor-express-demo and falcor-router-demo server.

With our Model set up, we can pull values from the server using this.model.getValue(path). It will then build the necessary XHR call and return an Observable for its result.

The $apply problem

Wait a sec... if it makes its own XHR call, how will Angular know when to start a $digest loop?

Well, the usual solution in these circumstances is to use $scope.$apply(), such as when binding to an event that's not triggered within Angular's digest loop. However, we can't actually be 100% sure that we're not in the digest loop either. For example, if we're hitting cached values, all the callback functions could be called immediately. A better solution is to use $scope.$evalAsync(), which will start a new digest loop if, and only if, one is not already in progress. So any time we subscribe to an Observable, we simply need to call $scope.$evalAsync() and we're good to go.
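
As a rough sketch (the helper, model, and path names here are my own inventions, not code from the demo), subscribing and scheduling a digest might look like this:

```javascript
// Hypothetical helper: copy a value from the Falcor model onto the
// scope, then schedule a digest. $evalAsync only starts a new digest
// if one isn't already in progress.
function getValueIntoScope($scope, model, path, key) {
  model.getValue(path).subscribe(function (value) {
    $scope[key] = value;
    $scope.$evalAsync();
  });
}
```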

Calling from the View

The next thing we want is to start making calls from the View rather than making it the controller's responsibility. This helps to decouple the view and the controller further (all the controller supplies is a reference to the model), and means we're only requesting as much data as we're actually using. eg. If we're showing a cut-down mobile friendly view, we don't have to pull back as much data.
But how are we going to get our results onto the View? All our requests are going to return fresh Observables.

This is where the Model.getCache(path) method comes in. It synchronously returns the value from the Model's cache. If that returns undefined, we can call Model.getValue(path).subscribe(_ => $scope.$evalAsync()) - It'll request the value and call $scope.$evalAsync() once it returns, which means a new digest loop, which will trigger a new call, which will get a successful hit from the cache.
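
The pattern can be sketched as a small helper (the function and its name are my own, assuming the getCache/getValue behaviour described above):

```javascript
// Hypothetical view-side helper: return the cached value if present;
// otherwise request it and schedule a digest so the view asks again.
function lazyGetValue($scope, model, path) {
  var cached = model.getCache(path);
  if (cached === undefined) {
    model.getValue(path).subscribe(function () {
      $scope.$evalAsync();
    });
  }
  return cached;
}
```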

Dealing with collections

Since FalcorJS doesn't support returning objects and arrays (well, sort of) we need a way to deal with collections in our data. The solution that I prefer is to generate an array of index numbers which represent a 'page range', and request the length of your collection ahead of time to set your max index.
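
A minimal sketch of that 'page range' idea (the clamping rule is my own choice):

```javascript
// Sketch: indices for one 'page' of a collection, clamped to the
// collection length fetched ahead of time (e.g. from 'titles.length').
function pageRange(start, pageSize, length) {
  var range = [];
  for (var i = start; i < start + pageSize && i < length; i++) {
    range.push(i);
  }
  return range;
}
```

You can then ng-repeat over the returned indices and request each path individually.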

Another thing you may be able to use is the Model.deref(path) method, passing the result to another directive that can handle resolving the Observable.

What about writing data?

Falcor comes with two primary methods for writing values: setValue(path, value) and call(path, args).

setValue() sets the value for a particular path, returning an Observable for the result. Integrating it with ng-model is simple using ng-model-options="{getterSetter: true}". We create a function which takes the model and the path, then returns a function which either calls getValue() or setValue(), depending on the number of arguments.
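
A sketch of that factory (names and the cache-read behaviour are my assumptions):

```javascript
// Hypothetical getterSetter factory: called with no arguments it reads
// from the model's cache; called with one argument it writes through.
function modelBinding(model, path) {
  return function (value) {
    if (arguments.length) {
      return model.setValue(path, value);
    }
    return model.getCache(path);
  };
}
```

On the scope you'd assign something like `vm.name = modelBinding(model, 'user.name')` and bind it with `ng-model="vm.name" ng-model-options="{getterSetter: true}"`.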

setValue() is great if you want to set one value at a time. But what if you want to save the entire form at once, after you've confirmed your data is valid?
This is where call() comes in.

The Falcor Router supports 3 different types of operations on each route: get, set, and call. call turns the route into an RPC endpoint where you pass in arguments, which can include objects and arrays (anything that can be JSON encoded), and it returns a set of PathValues and cache invalidations for the Falcor Model to interpret.
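
To illustrate, a call route might look roughly like this on the server side (the route name, fields, and in-memory storage are invented for this sketch, not taken from the demo router):

```javascript
// Hypothetical call route in the falcor-router style: creates a new
// title and returns PathValues plus a cache invalidation.
var titles = [];
var createTitleRoute = {
  route: 'titles.create',
  call: function (callPath, args) {
    var title = { id: titles.length, name: args[0] };
    titles.push(title);
    return [
      { path: ['titlesById', title.id, 'name'], value: title.name },
      { path: ['titles', 'length'], invalidated: true }
    ];
  }
};
```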


Final Thoughts

Once again I have to repeat: Falcor is in "Developer Preview" mode. Which means that not only is it not ready for production, but it's also only recommended for those of us who like to cut ourselves on the bleeding edge.

With that in mind: Falcor is pretty solid, and the learning curve is quite smooth. It does require rethinking your data API to fit "the falcor way", but that's a common cost with opinionated frameworks which pays back with simpler and more consistent code.

Where it really excels is with "mostly-read/rarely-write" type applications, but you can still do write operations with `setValue()` and `call()`. I probably wouldn't try to write any games with it (though that might be a fun exercise all the same).

I've started a github repository with a working example of everything I've shown here, along with a "falcor" module that encapsulates the falcor specific code.

Jason Stone

28 March 2015

Tips for designing a JSON Schema

I've been writing a lot of JSON schemas lately.
Some were just small API requests and responses.
Some were big, all-encompassing documents involving over 20 unique object types.

When I started the big ones, I went looking around the internet for some resources.  What I found was plenty on how to write a JSON Schema (This being the best one), but not on how to design one.
To contrast, there's plenty of material available on how to design relational database schemas - from blog articles to university subjects.
I still managed to come up with something resembling a process, so I thought I'd share with the class in the hopes it'll help someone else.

Just a quick disclaimer: These are just my own personal findings and opinions.
If you have your own findings that you think are better, by all means stick to them.  Share them with the class too, if you can.

Read the 'JSON API' standard

I cannot emphasise this point enough.
Go to http://jsonapi.org/ and read through the documentation and examples.
I'm not expecting you to implement it - especially if you're not creating JSON Schemas for a web API - but it's a great example of a JSON based standard, and well worth the time to look at.  I've based a lot of my own schemas off elements of it, without implementing it completely.
Things like using the "id", "type", and "links" keywords.

If you do decide to implement it, take into account that it's a work in progress.
In the last month they've published a release candidate which had some significant differences from the previously published versions. But the fact that they're calling it a "release candidate" says it should be pretty stable by now.

Keep a flat structure

Don't embed objects within objects within objects within objects.
You're better off keeping a flat structure of one or many arrays at the top level containing objects.
Rather than nesting one object inside another, create a separate object in one of those arrays and link the two together - even if you're sure it's a 1-to-1 relationship for both objects.
The JSON API uses a single "included" array that contains all additional objects.  I personally prefer having multiple arrays that group the objects by "type" - but that's me.

You can also use the same prefix for property names, rather than creating a new embedded object.
eg. {"name": {"first": "John", "last": "Smith"}} becomes {"nameFirst": "John", "nameLast": "Smith"}


  • It means less code like this:
    var value = data.article && data.article.author && data.article.author.phone && data.article.author.phone.number;
  • It simplifies things if you're marshalling and unmarshalling your JSON into and out of other data structures. eg. Classes and SQL tables.
    Even if you're using a NoSQL DB with NodeJS today - tomorrow you might create a new microservice in something new. Doesn't hurt to be flexible.
  • You have a 1-to-1 link today, but a new feature tomorrow might change that.
    Keeping things flat gives you some extra flexibility for the future.
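
To illustrate the prefixing idea, a helper that flattens nested objects into prefixed properties might look like this (the camelCase joining rule is my own invention):

```javascript
// Sketch: flatten {name: {first: 'John'}} into {nameFirst: 'John'}.
function flatten(obj) {
  var flat = {};
  Object.keys(obj).forEach(function (key) {
    var value = obj[key];
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      var nested = flatten(value);
      Object.keys(nested).forEach(function (subKey) {
        // Join the parent key and the capitalised child key
        flat[key + subKey.charAt(0).toUpperCase() + subKey.slice(1)] = nested[subKey];
      });
    } else {
      flat[key] = value;
    }
  });
  return flat;
}
```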

Work out how relationships are defined

Even if you don't go with a flat structure, you're going to need some way to say "X relates to Y because Z" - unless you honestly plan to duplicate and embed each and every object.
Take a look at how JSON API does it - the way the "links" property contains the relationships, and each of those has a "linkage" object (or array of objects) with the "id" and "type" of the other object(s).
You don't need to do the exact same thing. But you should be consistent throughout your entire schema.
Preferably throughout all the schemas in your organisation, if you can manage it.


Because if you have a consistent format for defining relationships, you can write functions that will traverse those relationships for you.
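
For example, a generic lookup over a JSON API style linkage might be sketched like this (assuming the "multiple arrays grouped by type" layout I described earlier):

```javascript
// Hypothetical: resolve a linkage ({"type", "id"}) against a document
// that groups its objects into arrays keyed by type.
function resolveLinkage(doc, linkage) {
  return (doc[linkage.type] || []).filter(function (obj) {
    return obj.id === linkage.id;
  })[0];
}
```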

New properties are cheap

Sometimes there's the temptation to reuse the same property for a slightly different purpose.
Maybe you want to define an "accountType" which then gives a completely different context to the 5 other properties, and you think "I just saved myself from writing 5 new properties into the schema, for the price of 1 - woooo!"

Don't do that.

Of all the possible schema changes you could make - new properties are the cheapest.
Don't make things more confusing by trying to shove square pegs into round holes.
Unless you are writing super memory efficient software, where you need to squeeze every last bit from the hardware (in which case, why are you using JSON at all?) - just make a new property.

Prepare for change

No matter how hard you try, how long you work, how much you analyse and survey and workshop - there's going to be something you need to change in the structure of the schema.
Now you can either live in constant fear of this day, or you can work out your strategy ahead of time.

Embed the version of the schema in your JSON documents and have a process in place for running migration scripts.
Whether your strategy is a 'big-bang' during downtime or an ongoing background process.
Just have your migration process ready.
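
The core of such a process can be sketched in a few lines (the version-keyed registry is my own convention):

```javascript
// Sketch: run migration functions keyed by the document's schema
// version until no migration is registered for the current version.
function migrate(doc, migrations) {
  while (migrations[doc.version]) {
    doc = migrations[doc.version](doc);
  }
  return doc;
}
```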

Use "allOf" to create schema mixins

I said this wasn't about how to write a JSON Schema, just how to design it - this is the exception.

I gave the example earlier of prefixing property names rather than nesting objects - 'name' to 'nameFirst' and 'nameLast'.
But what if you have 5 different objects that need to have a 'name'?
Am I supposed to duplicate that in the schema 5 times too?


The JSON Schema standard defines a number of keywords that can be used to combine schemas: "oneOf", "allOf", and "anyOf".
"oneOf" is good for applying "switch" logic branching.
"anyOf" is for when you're a lot more forgiving.
"allOf" is great for constructing a schema from sub-schemas.
For example you can create a sub-schema for an object with "nameFirst" and "nameLast" properties, and include it as part of the schema for a completely different object.
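
Sketched out, that mixin pattern might look like this (shown as JS objects for brevity; the property names follow the earlier example):

```javascript
// A reusable 'name' sub-schema mixed into another schema via "allOf".
var nameSchema = {
  type: 'object',
  properties: {
    nameFirst: { type: 'string' },
    nameLast: { type: 'string' }
  }
};

var employeeSchema = {
  allOf: [
    nameSchema,
    {
      type: 'object',
      properties: {
        employeeNumber: { type: 'integer' }
      }
    }
  ]
};
```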

Jason Stone

21 January 2015

Using ES6 with your AngularJS project

Earlier this month Glen Maddern (aka. The <x-gif> guy) posted an article, Javascript in 2015, with a YouTube video giving "A brief tour of JSPM, SystemJS & ES6 features".
If you haven't seen it already, go take a look.

In a short time he has a front-end application written in ES6 running, with modules being loaded and transpiled (via Traceur) by SystemJS.  And at the end he creates a self-executing, minified bundle of that application code, including source maps.

These tools make it easy to start using ES6 now with your existing AngularJS applications - which will almost certainly be step 1 on the migration plan to Angular v2.0. Even without that, the goodies from ES6 are too good to pass up.

As a demonstration, we're going to take angular-seed, update it to ES6 using SystemJS, bundle the app for production with systemjs-builder, and configure Karma to run unit tests using the karma-systemjs plugin.

SystemJS Setup

OK - lets get started.
We're going to clone an angular-seed repo, switch to a new es6 branch, then install dependencies from npm and bower. The dependencies for angular-seed are a bit old, so we've updated them too:

git clone https://github.com/angular/angular-seed.git
cd angular-seed
git checkout -b es6
npm install --save-dev systemjs-builder karma-systemjs karma@~0.12 karma-chrome-launcher karma-firefox-launcher karma-jasmine
bower install -f -S angular#1.3.x angular-route#1.3.x angular-mocks#~1.3.x system.js

Next thing we need is a config file for SystemJS: `app/system.config.js`. This is where we tell SystemJS how to find certain modules.  In this case, we're going to map the module names 'angular' and 'angular-route' to their long paths:
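
Something along these lines (the exact paths are assumptions based on a default bower_components layout; check your own install):

```javascript
// app/system.config.js - minimal sketch mapping module names to paths
System.config({
  map: {
    'angular': 'bower_components/angular/angular',
    'angular-route': 'bower_components/angular-route/angular-route'
  }
});
```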

Next we'll change `app/index-async.html` to use SystemJS rather than angular-loader:

Last thing to do is add `import` statements to `app/app.js` to include the rest of the application, as it's what will be loaded by `System.import('app')`:

Now if you load up `app/index-async.html` using a local webserver (the one included with angular-seed starts with `npm start`) you should see the angular-seed application running as normal, only it's been loaded as a set of ES6 files.

This is just the bare minimum required to get existing code running as ES6 with SystemJS.
You can add more ES6 syntax and rearrange the modules as you like.

Here are my suggestions:

  • Use classes for Controllers (Controller as), Services (.service()), and Providers
  • Use ES6 modules over angular modules
  • Use arrow functions - cause they're sugary sweet!
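
For instance, a controller written as an ES6 class might look like this (the names are invented for illustration):

```javascript
// Sketch: an ES6 class used as a 'controller as' controller. Its public
// properties and methods are exposed to the template via the instance.
class GreetingController {
  constructor() {
    this.name = 'world';
  }
  greet() {
    return `Hello, ${this.name}!`;
  }
}
```

You'd register it with something like `.controller('GreetingController', GreetingController)` and bind it with `ng-controller="GreetingController as vm"`.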

Bundling for Production

With our SystemJS config file already in place, we only need to pass that into systemjs-builder with a slight tweak to bundle our code for production. This is a simple node script you can run with `node bundle.js` that will bundle all the modules imported by `app/app.js` and output the result to `app/bundle.js`:

And finally we change `app/index.html` to include `traceur-runtime.js` and the `bundle.js` file we just created:

Unit Testing

Karma works like a typical browser application: load all <script/> tags, then start the application `onload`. In this case, we want Karma to let SystemJS handle loading everything, then start the test runner once everything is ready.  This is essentially what karma-systemjs does for us.

First we need to adjust `karma.conf.js`:
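
Roughly along these lines (the option names are from the karma-systemjs README at the time and may have changed, so treat this as a sketch):

```javascript
// karma.conf.js - sketch of a karma-systemjs setup
module.exports = function (config) {
  config.set({
    frameworks: ['systemjs', 'jasmine'],
    systemjs: {
      configFile: 'app/system.config.js',
      serveFiles: ['app/**/*.js', 'bower_components/**/*.js']
    },
    files: ['app/**/*_test.js'],
    browsers: ['Chrome', 'Firefox']
  });
};
```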

Next we update our test suites to act like ES6 modules - importing the code to be tested, along with any dependency libraries.

Note how `module()` has been changed to `angular.mock.module()`.

And that's it.
An existing AngularJS project converted over to using ES6 with SystemJS, with working unit tests and a means of generating bundles for production.
You can find the complete code here: https://github.com/rolaveric/angular-seed/tree/es6

Jason Stone

5 November 2014

ng-europe Retrospective Part 2: RIP

In the last post we went through some of the new Javascript syntax that the Angular team are trialling through AtScript, a superset of TypeScript, which is a superset of ES6...

In this post we'll look at what's being removed between v1.x and v2.0 - The victims from Igor Minar and Tobias Bosch's talk at ng-europe.

First, it's worth mentioning some of the great material that's come up since the last post.
And no doubt there'll be more good stuff published in the coming days.  Keep an eye on #AngularJS and the AngularJS team on Twitter and Google+.

Now then.  Lets take a look at the kill list.


First, a little disclaimer: I don't know for sure what Angular v2.0 will look like.
Nobody does for sure.  Not even the Angular team, because they haven't finished it yet.
I'm merely extrapolating from the ng-europe presentations, and what's currently available in AngularDart.


Controllers

Controllers, as we know them, are going.
This has already been trialled in AngularDart.  Controllers were originally a sub-class of directives, but were eventually deprecated and then removed for the release of v1.0.

Instead you will have a way to expose the properties and methods of a class with a @Directive annotation onto your templates.  Igor and Tobias' talk mentioned some of the potential Directive subclasses that will make things easier for you.

The closest equivalent in v1.x I can think of would be a directive with a 'controller' and a 'controllerAs', like so:
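
Roughly like this (the directive and property names are invented for illustration):

```javascript
// Sketch: directive logic in a controller 'class', exposed to the
// template through controllerAs - the nearest v1.x gets to components.
function WidgetController() {
  this.label = 'Hello';
}

function widgetDirective() {
  return {
    restrict: 'E',
    controller: WidgetController,
    controllerAs: 'widget',
    template: '<span>{{widget.label}}</span>'
  };
}
```

Registered with something like `angular.module('app').directive('widget', widgetDirective)`.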

Directive Definition Object (DDO)

The old DDO syntax that we all know and... know, for writing directives will be gone.
The heart of your directive logic will be a class, with annotations that describe how the directive will be used.

Once again, to get the best idea of how this will work, look no further than AngularDart.


$scope

Now this is an interesting one - the classic $scope object gone for good.
How is that even possible?

Well, $scope provides 3 main functions:
  • exposing properties to the template
  • creating watchers
  • and handling events
The first is somewhat redundant because every control-I mean, every component class will be exposing its public properties to the template.  As though "controllerAs" had become mandatory.

Watchers will be handled by their own new module: watchtower.js
This allows watchers to be grouped together in a way which doesn't depend on... well, anything really. But certainly not a scope hierarchy like in Angular v1.x

As for events... I'm not sure.
I haven't seen anything specific about events, so I can only speculate.
In saying that - my money is on using the native DOM API for events, as that's how Angular v2.0 is heading for DOM manipulation.

Speaking of which...


jqLite

The Angular team have found that jqLite has become too much of a performance bottleneck for them.
And since Angular v2.0 is for evergreen browsers, they have no fear of relying on native implementations of the DOM API.

DOM traversal is easily handled by querySelector() and querySelectorAll(), events with addEventListener(), and good old createElement(), setAttribute(), and appendChild() for manipulation.

But in the end, if you want to use jQuery, there's nothing stopping you from using it.
It just won't be part of the Angular Core.


angular.module()

angular.module() has 2 somewhat overlapping jobs:

  • Register different types of components (eg. Directives, controllers, filters, etc.)
  • and register injectables and their dependencies.
The first has been replaced by annotations, like @Directive.
The second has been split into its own module: di.js

The workings of di.js are pretty simple.
You declare the dependencies for a class using the @Inject annotation, specifying the class for each dependency (the actual class - not just its name as a string, like Angular v1.x uses), and then create an Injector for the classes you want.
Check out the example on the di.js github: kitchen-di

The great thing about this is that it doesn't make any assumptions about your application.  It's totally separated from Angular.
You can even create multiple injectors, maybe one for each instance of a directive.


ng-app

This wasn't listed with the other tombstones, but you'll notice it was missing from the slide with the experimental template syntax.
It's also not used by AngularDart applications.
Instead they create their own application object, which you then register classes with, and then run the application.  Not unlike "angular.bootstrap()".
I think Angular v2.0 will do something similar.

Final Thoughts

These changes are about making your code less "Angular" and more "Javascript".
But nothing is really set in stone yet for Angular v2.0:
In my next post I'm going to make some wild assumptions about Angular v2.0 and come up with some things we can do with our v1.x code to make the transition easier.
(Spoiler alert: It's going to involve ES6)

31 October 2014

ng-europe Retrospective Part 1: New Syntax

ng-europe, the European AngularJS conference, was held last week in Paris (France - not that other Paris), and the videos from the sessions were uploaded to Youtube earlier this week:

The main topic around the conference was, to no one's surprise, AngularJS v2.0
When will it be released? What will it bring? What will it break?
The answers to which break down to: Soon ™; The same but faster and using future standards; And a lot less than some people are panicking about.

If I were to sum up what I've seen of AngularJS v2.0: It will have everything that AngularDart got, with some of the crust cut off.
In fact, the Angular team have organised the source code so that they can build AngularJS and AngularDart from the same code base, which is an impressive feat.

New Syntax

Wait... but how can they do that?
AngularDart uses classes and type reflection to handle dependency injection, and annotations for marking classes as directives or web components.
These things aren't in Javascript...
But they are in ES6, TypeScript, and AtScript.

That's where this slide comes in handy.


Classes

The class syntax is being standardised in ES6.
And when it's compiled down to ES5, it uses the good old Javascript prototype chain: Effective but ugly.
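
For illustration, the same structure written both ways (the example names are mine):

```javascript
// ES6 class syntax...
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet() {
    return 'Hello ' + this.name;
  }
}

// ...and a rough ES5 prototype-chain equivalent.
function GreeterES5(name) {
  this.name = name;
}
GreeterES5.prototype.greet = function () {
  return 'Hello ' + this.name;
};
```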


Types

For dependency injection to work we need to know what type of object each dependency is.  Today this is achieved by identifying dependencies with strings.
For Angular v2.0, the Angular team wants to move away from this idea that everything must be registered explicitly with Angular (ie. module.factory(), module.value(), etc.) and instead just use classes.  Same way it's been done for AngularDart.

The trouble is that ES6 still has no syntax for declaring that a parameter should be a particular type.
If I want my function to only accept parameter "meep" if it came from "MyClass", then I'm going to have to do my own manual "meep instanceof MyClass" check.
This is where the "name:type" syntax comes in.  It provides both type assertions at runtime, and documentation.
It's currently being used by TypeScript, and a proposal was drafted for ES6, but it looks like it will be deferred till ES7 at least.

Type Introspection

What TypeScript does is great for doing type checking on values, but it doesn't actually help with things like dependency injection.  It's not about passing in the correct values to a function - It's about knowing what the function wants in the first place.
That's where the need for type introspection (or type reflection) comes in.

Looking at the code generated by traceur, AtScript's solution is to attach the classes or some equivalent to the function as a property called "parameters".  Simple, but effective.


Annotations

The last piece is annotations.  Metadata which declares something about a class or function without directly interfering with it.
Annotations become a property of the function called "annotations", similar to parameters.

Final Thoughts

All of these new syntaxes are going to be added to Javascript, sooner or later, and I believe they'll be a welcome addition to the language.

Creating class-like structures right now requires either a third-party library or some very ugly looking uses of the 'prototype' property.  It's about time it got standardised.

Fact: Types are useful.
If you don't want to use them, fine.  The type system is optional.
But they can help stop trivial bugs, make IDEs more useful, and refactoring a lot easier.

And I think the way Angular uses annotations proves just how useful they can be.

Next Time

Next time I'm going to take a look through the things Igor and Tobias mentioned will be killed off in Angular v2.0, and what will replace them.
After that I'll take a look at a few things we could do in v1.x that might make migrating to v2.0 a little less jarring.

Till then.

Jason Stone

16 May 2014

Guide to Javascript on Classic ASP

Disclaimer: As the title says, this is for Classic ASP with "Javascript".  If your project is using Visual Basic, you may be able to glean some information from this article, but it's not written with you in mind.

I've had some recent experience with a legacy system using Javascript (technically "JScript") on Classic ASP, and the thing I found most frustrating was the lack of coherent documentation available. Despite its reputation, w3schools is still one of the best references available on the web.

I guess this shouldn't be a surprise.  It is a deprecated platform, and they don't call it "Classic" for nothing. But there are still legacy systems out there which need to be maintained. If the system is purely in maintenance mode, you can probably get along just by reading the existing code and holding your nose. But if you need to add features and make significant changes, it's worth knowing some of Classic ASP's secrets so you can take advantage of the modern Javascript ecosystem.

Note: This is meant to be a reference for anyone who's forced to work with Classic ASP. In no way am I condoning using it by choice.  But if you've got to use it - use it right.

Javascript in Classic ASP is ECMAScript 3

First thing to be aware of is that the code you're writing, at its core, is ECMAScript 3.  Another way to think of it: If your code runs in IE8 (minus the DOM API, obviously) it'll run in Classic ASP.

I've seen some developers approach Classic ASP code like it's some ancient writing which only the old masters knew how to interpret.  It's not.  There are only 6 things which are not, strictly speaking, Javascript: Request, Response, Server, Application, Session, and ASPError.
Your biggest challenge is learning to live without features from ES5, or finding appropriate shims.

The global scope cannot be directly manipulated

This is the biggest WTF to get your head around if you're used to coding in browsers or NodeJS. Trying to work with the global scope object as "this" will cause errors, giving developers the false impression that modern libraries won't work with Classic ASP.

Say you've got a third party library (like UnderscoreJS) that declares itself like so:

(function () {
  this.myExport = {};
})();

If I try to run this in Classic ASP, it will throw an error.
You can easily work around it, though it is somewhat tedious:

var surrogate = {};
(function () {
  this.myExport = {};
}).call(surrogate);
var myExport = surrogate.myExport;

Use <script src="" runat="server"> to include code without tags

Most Classic ASP I know uses the #include directive provided by IIS to compose source files.  The directive works by essentially pasting the entire file content into the directive location, like so:

include.asp:

<script runat="server">
var myInclude = {};
</script>

main.asp:

<!-- #include file="include.asp" -->
<script runat="server">
var myProgram = {include: myInclude};
</script>

Results in:

<script runat="server">
var myInclude = {};
</script>
<script runat="server">
var myProgram = {include: myInclude};
</script>

The downside to this is that you can't use any Javascript code quality tools.  They'll start to parse your file, find the tags, and throw syntax errors. This prevents you from doing style checks, code coverage, and code metrics. It can also cause headaches for editors and IDEs.

Instead you can use <script src="" runat="server"> to include files into your tags, just like any ".js" file into HTML:

include.js:

var myInclude = {};

main.asp:

<script src="include.js" runat="server"></script>
<script runat="server">
var myProgram = {include: myInclude};
</script>

Using this method, there's noth... very little to stop you using the same tools enjoyed by NodeJS developers. Though you obviously need to configure IIS so that it doesn't expose your code files as static resources.
That would be bad...

Code in <% %> tags is parsed before <script runat="server"></script> tags

This was a bit of a head scratcher when I first discovered it.  But sure enough, code in <% %> tags executes before <script runat="server"></script> tags (stackoverflow).

<% Response.Write("second"); %>
<script runat="server">Response.Write("first");</script>

Result: second, first

My recommendation is: Don't use <% %> tags.

It's too easy to create tag soup, and you're better off using <script src="" runat="server"> anyway for JS code tooling.

Core ASP objects don't produce Javascript primitives

var param = Request.QueryString('param');
param == "test"; // true
param === "test"; // false
String(param) === "test"; // true

This means if you tried to call "param.substring(1)", it would throw an error saying that "substring" is undefined.  So you need to make sure you wrap results from core ASP objects in String(), Number(), or Boolean() before you try to use them.

That about does it.
To all the poor bastards out there stuck working on Classic ASP: This is for you.


E2E testing AngularJS with Protractor

In the beginning, there was JSTestDriver.
It was a dark time, with much wailing and gnashing of teeth.

Then came Testacular: The spectacular test runner.
For a time, once everyone stopped sniggering like teenagers, it was good.
Unit tests ran quick as lightning on any browser that could call a web page.

Finally, to please the squeamish who were too embarrassed to speak of Testacular to colleagues and managers, the creator moved heaven and earth to rename it Karma.
And it was, and still is, good.

But there was still unrest.
While unit tests were as quick as the wind, E2E (end-to-end) tests were constrained from within the Javascript VM.
"Free me from this reverse proxy! Treat me as though I were a real user!"
And thus Protractor was born.

Protractor is the official E2E testing framework for AngularJS applications, working as a wrapper around Web Driver (ie. Selenium 2.0) which is a well established and widely used platform for writing functional tests for web applications.  What makes it different from Karma is that Karma acts as a reverse proxy in front of your live AngularJS code, while Web Driver accesses the browser directly. So your tests become more authentic in regards to the user's experience.

One cool thing about Web Driver, which I didn't realise till recently, is that its API is currently being drafted as a W3C standard.  We're also seeing a number of services appear for running your selenium tests on their VMs, which is useful for doing CI and performance testing without taking on the operational overhead yourself.

Let's go!

The App

I've created a simple application to write tests for.  So first we'll clone the application from github, install the local NodeJS modules, and then install the required bower components:

git clone https://github.com/rolaveric/protractorDemo
cd protractorDemo
npm install
node node_modules/bower/bin/bower install

Now you should have a copy of the application with node modules 'bower' and 'protractor' installed, and AngularJS installed as a bower component.

The application is dead simple.  It has a button with the label "Click to reverse".  When you click it, it (you guessed it) reverses the label. So our tests should look something like this:
  • Load App
  • Click button
  • Assert that label is now reversed
  • Click button again
  • Assert that label is now back to normal

Installing Selenium

Protractor comes with a utility program for installing and managing a selenium server locally: webdriver-manager
Calling it with "update" will download a copy of the selenium standalone server to run.

node node_modules/protractor/bin/webdriver-manager update

Setting up for tests

First thing we need is a configuration file for protractor.  It tells protractor everything it needs to know to run your tests:  Where to find or how to start Selenium, where to find the web application, and where to find the tests.

Since we're using webdriver-manager to run selenium server, we'll tell it the default address to find it: http://localhost:4444/wd/hub
Optionally you could give it the location of the selenium server JAR file to start itself, or a set of credentials to use SauceLabs.

The tests we'll place in "test/e2e", and npm start spins up a local web server at "http://localhost:8000/".  So the basic configuration file stored in "config/protractor.conf.js" looks like this:

exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['../test/e2e/*.js'],
  baseUrl: 'http://localhost:8000/'
};

There's actually a lot more you can do with the configuration file, but this is all we need to get going. Check out the reference config in protractor's github for more options.  You can do things like pass parameters to selenium, your testing framework, and even to your tests (eg. login details).

Writing tests

In "test/e2e/click.js" is a simple test for the "Click to reverse" behaviour:

The process behind writing E2E tests is pretty simple: Perform an action, get some data, then test that data.  Generally each action or query also involves finding a particular element on the page, either by CSS selector, ng-model name, or template binding.

First it opens the "index.html" file (which it finds relative to the baseUrl in the configuration file), finds the button by its binding, clicks the button, then gets the button's text value and tests it.  Then we click the button again, get its text value, and test that it's changed back to normal.

Running tests

Now for the payoff - the running of the tests.

First we'll start up the web and webdriver servers:

npm start
node node_modules/protractor/bin/webdriver-manager start

Then we tell protractor to run the tests according to our configuration file:

node node_modules/protractor/bin/protractor config/protractor.conf.js

If everything's gone well, you should soon be rewarded with the following result:

Finished in 2.061 seconds
2 tests, 2 assertions, 0 failures

And there you have it.  Webdriver tests for your AngularJS application with minimum pain.  If you already have angular-scenario based tests, converting them to Protractor should be a trivial "search & replace" exercise with the right regular expressions.