Category Archives: Code

Executing Gatling tests from Node.js

So, I’ve been playing with Gatling quite a bit recently. It’s a really neat open source load testing tool.

Anyway, what I wanted to do was to have a remote instance that I could trigger a Gatling test on and then get the results back, all over HTTP. Node.js seemed the obvious lightweight solution for this.

Now, how to trigger a Gatling test from Node.js? Well, there are a few complexities, but generally it is not too bad.

Executing a Test

Gatling runs as a command line tool, so the first step is to use an npm package called “sh”. This package allows execution of shell commands and executes a callback on completion.

By default Gatling runs in an interactive mode, awaiting user input to determine the tests that should be run. Obviously this is not viable when running headless, so we need to add some switches to the base command:

-s class.classname [where class.classname is the class of the test you wish to run]
-sf path/to/script/folder [this can be avoided if scripts are stored in default Gatling directory]

Executing the Gatling command with these two switches via the following lines of Node will successfully execute a test and run the callback on completion:

var command = this.config.gatlingRoot + ' -sf ' + this.config.rootfolder + ' -s ' + this.config.testClass;
sh(command).result(function(){test.completeTest()});

Capturing the Results

Executing a test, though, is no use without being able to gather the results, and this was slightly more of a challenge. Gatling creates a new folder for every test execution; by default this will be another folder within its standard results folder. The issue this created was that there was no way, as far as I could tell, of getting the name of that folder from the command line response.

What you can do, however, is define the root folder for the results on the command line by adding the -rf switch:

-rf path/to/results/folder

Gatling, however, will still create a subfolder within that for every test run. This is where Node comes to the rescue again with the built-in “fs” module, which allows monitoring of a folder and raises an event on any changes. Therefore you can create a folder specifically to hold your test results, then execute a test and be confident that the next event on that folder will be the creation of the results folder. The fs callback includes the name of that folder.

function setupResultFolder(test) {
    var resultfolder = test.config.rootfolder + "results/";
    fs.mkdirSync(resultfolder);
    fs.watch(resultfolder, { persistent: true }, function (event, filename) {
        test.setResultFile(filename);
    });
    return resultfolder;
}

Then, to get to the results, you can just access the relevant files within that folder. I was only interested in the raw results, so I was looking at simulation.log.
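If you want to do more than just read the raw file: simulation.log is a tab-separated file, so each record is trivial to split into fields. Note that the exact column layout varies between Gatling versions, so the sample record below is purely illustrative:

```javascript
// Split one tab-separated simulation.log line into its fields.
// The column layout differs between Gatling versions; treat the
// sample record below as illustrative only.
function parseLogLine(line) {
    return line.split('\t');
}

var fields = parseLogLine('REQUEST\tmy_request\t1341014400000\t1341014400123\tOK');
console.log(fields[0]); // "REQUEST"
```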

Notifying of test completion

To finish off, I just added a simple event that is raised when the test is complete:

this.completeTest = function () {
    this.complete = true;
    this.emit("testComplete");
};

Complete code

The complete code comes in at under 50 lines:

var fs = require('fs');
var sh = require('sh');
var events = require('events');
var eventEmitter = new events.EventEmitter();

var GattlingTest = function (id, config) {
    events.EventEmitter.call(this);

    this.id = id;
    this.complete = false;
    this.resultfile = "";
    this.resultfolder = "";
    this.config = config;

    this.start = function () {
        var test = this;
        this.resultfolder = setupResultFolder(this);
        var command = this.config.gatlingRoot + ' -sf ' + this.config.rootfolder + ' -s ' + this.config.testClass + ' -nr -rf ' + this.resultfolder;
        console.log(command);
        sh(command).result(function () { test.completeTest(); });
    };
    this.setResultFile = function (resultfile) {
        console.log("resultfile for " + id + " set to " + resultfile);
        this.resultfile = this.resultfolder + resultfile + config.resultFileName;
        console.log(this.resultfile);
    };
    this.completeTest = function () {
        console.log("complete");
        this.complete = true;
        this.emit("testComplete");
    };
    this.results = function () {
        return fs.readFileSync(this.resultfile, "utf-8");
    };
};

GattlingTest.prototype.__proto__ = events.EventEmitter.prototype;
module.exports = GattlingTest;

function setupResultFolder(test) {
    var resultfolder = test.config.rootfolder + "results/";
    fs.mkdirSync(resultfolder);
    console.log("Tracking ... " + resultfolder);
    fs.watch(resultfolder, { persistent: true }, function (event, filename) {
        console.log(event + " event occurred on " + filename);
        test.setResultFile(filename);
    });
    return resultfolder;
}


Filed under Code

Positive Technical Debt part 3: Investor demanding payout

This is a slightly different problem related to Technical Debt and one that is more difficult to manage.

In financial terms I would liken this to having an initial investor who gives the business capital with no expectation of repayment. This capital drives expansion, but at a certain point the investor demands full repayment.

From a technical perspective this is when a technology selection is made because it offers benefits in terms of business progression that allow you to get to where you need to be in the most beneficial way (e.g. because of speed of development, easy availability of staff, or availability of other technologies/partners to integrate with). The system can be developed in a correct manner with no technical debt, but at some point the technology becomes unviable and the only way forward is a fundamental re-architecture of the system.

The sort of events that could lead to this are:

  • You reach the limits of the technology in terms of capacity for the level of usage that you now have. An example of this would be Twitter and Rails, or Facebook and PHP. On a smaller scale, I have worked with companies who wrote their first system using Access as a database and, as the company grew, hit the capacity of that platform and had to re-architect to use another database.
  • The platform/technology used is no longer available. This is a scenario that has become more likely now that people are using more third-party cloud-based services. It is possible to wake up one morning and discover that a core part of your system is actually no longer available.

Again this is a positive form of technical debt, as it is taken on board with a view to getting benefit from a system as quickly as possible.

The way to deal with this sort of technical debt is largely down to good system architecture. Systems should be architected in such a way as to abstract away the connection with specific technologies as much as possible, making swapping them out as simple a job as possible. Systems should be built to scale horizontally, allowing capacity problems to be dealt with by throwing tin at the problem. It can also largely be dealt with through technology selection; the majority of systems we write have a predictable growth plan (not many of us write Twitter!) and we can make a reasonable estimate of the longevity of technologies and technology providers.
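To make the abstraction point concrete, here is a minimal sketch (all names are illustrative, not from any real system) of hiding a storage technology behind a small interface so it can be swapped out later:

```javascript
// Callers depend only on save/find, never on the underlying technology.
function InMemoryStore() {
    var data = {};
    this.save = function (key, value) { data[key] = value; };
    this.find = function (key) { return data[key]; };
}

// When the current technology hits its limits, swapping it out means
// writing one new object with the same two methods; calling code is untouched.
function createStore(config) {
    // e.g. return a hypothetical SqlStore(config) once Access runs out of road
    return new InMemoryStore();
}

var store = createStore({});
store.save('user:1', { name: 'Alice' });
console.log(store.find('user:1').name); // "Alice"
```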



Filed under Code, Technical Debt

Json Dates: Don’t cross the serializers!

With Json becoming the lingua franca of web data these days we are all spending a lot of time serializing and de-serializing objects across varying platforms.

While doing this recently I came across an interesting gotcha that is worth mentioning in case anyone else sees similar problems.

The issue is caused when serializing date objects into Json.

By choice I will always use Json.net (http://james.newtonking.com/projects/json-net.aspx) for serializing and de-serializing Json. It is many times faster than the default Microsoft option, so it is the obvious choice for dealing with objects of any reasonable size. However, with smaller objects it is often easier to just use the default Microsoft objects.

Therefore I created an API call that returned Json using the simple call:

return Json(o, JsonRequestBehavior.AllowGet);

and de-serialized this using a simple Json.net call:

JsonConvert.DeserializeObject<IDictionary<int, Promotion>>(s);

This should have been straightforward; however, the data returned included some date objects. The difficulty arose from how the two serializers handled daylight saving: dates that were serialized as “30/6/2012 0:00” were being de-serialized as “29/6/2012 23:00”.
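For anyone curious what is happening on the wire: the default Microsoft serializer emits dates in the “\/Date(milliseconds)\/” format, and the two libraries make different assumptions about local time versus UTC when converting, which is where the hour can go missing. A hedged JavaScript sketch of reading the Microsoft format as an unambiguous UTC instant:

```javascript
// Parse Microsoft's "\/Date(ms)\/" JSON date format into a Date.
// The millisecond value counts from the Unix epoch in UTC, so reading
// it this way sidesteps any local-time/daylight-saving ambiguity.
function parseMsJsonDate(value) {
    var match = /\/Date\((-?\d+)\)\//.exec(value);
    if (!match) {
        return null;
    }
    return new Date(parseInt(match[1], 10));
}

var d = parseMsJsonDate('/Date(1341014400000)/');
console.log(d.toISOString()); // "2012-06-30T00:00:00.000Z"
```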

I didn’t have time to dig into the whys and wherefores of which was right, and seeing as I had control of both ends of the transaction it was an easy fix to change the API call to:

return new ContentResult { Content = JsonConvert.SerializeObject(d), ContentType = "application/json" };

The dates are now in sync across both applications.

I’m interested in hearing if anyone has a better solution to this problem; leave your comments below.


Filed under Code

Persistent Navigation in JQuery Mobile

I have been doing quite a bit of work in mobile application development with JQM recently, and generally finding it a very useful product once you get your head around its idiosyncrasies.
One of the things I needed to do was to have a persistent navigation across the bottom of the page, to mimic the standard iPhone navbar.
Out of the box JQM offers a facility for persistent navbars. The documentation says…
“To tell the framework to apply the persistent behaviour, add a data-id attribute to the footer of all HTML pages in the navigation set to the same ID. It’s that simple: if the page you’re navigating to has a header or footer with the same data-id, the toolbars will appear fixed outside of the transition.”
And gives the following code example:

<div data-role="footer" data-id="foo1" data-position="fixed">
    <div data-role="navbar">
        <ul>
            <li><a href="a.html">Friends</a></li>
            <li><a href="b.html">Albums</a></li>
            <li><a href="c.html">Emails</a></li>
            <li><a href="d.html">Info</a></li>
        </ul>
    </div><!-- /navbar -->
</div><!-- /footer -->

And to set the current active button you add the “ui-btn-active” class to its link:

<li><a href="d.html" class="ui-btn-active">Info</a></li>

This sounded a bit confusing, so I looked at the example code, and it looked like they had simply pasted the same code into the bottom of every page and altered the active link. Thinking that I must be misreading the code, I searched the internet for more information, and it turns out that I wasn’t. The JQM solution to this problem was actually to mirror the footer on every page, but with a different active link. The only way in which these were actually persistent was that if JQM saw there was another footer on the incoming page with the same id, it would exclude that footer from the page transition.

The programmer in me would not let me even consider that as a solution. Even as early in development as I currently am, I have five files, some containing up to four pages. So that’s a minimum of ten places I’d have to maintain the navigation, and that figure is only going to grow as development progresses. The thought of that made me actually feel sick. I simply wasn’t going to do it… simple as that.

Instead I decided to dynamically generate the navbar on first load and then inject it into every page. The end solution is actually relatively straightforward.

In my index page I created a hidden div that contained the actual navbar html:

<div id="hiddenFooter" style="display:none">
    <div data-role="footer" data-id="nav" data-position="fixed">
        <div data-role="navbar">
            <ul>
                <li><a href="page1.html">Page 1</a></li>
                <li><a href="page2.html">Page 2</a></li>
                <li><a href="page3.html">Page 3</a></li>
                <li><a href="page4.html">Page 4</a></li>
                <li><a href="page5.html">Page 5</a></li>
            </ul>
        </div><!-- /navbar -->
    </div><!-- /footer -->
</div>

The way JQM works is to load any subsequent pages using Ajax and keep them in a shared Javascript context. This means that events can be bound on the load of the first page and will apply to any subsequent pages called using JQM links.

Therefore I could read the contents of the hidden div into a Javascript variable:

$("#index").live("pagebeforecreate", function () {
    Navigation.NavigationHtml = $("#hiddenFooter").html();
});

And this would be available to all subsequent pages loaded.

All that was needed then was to inject this html at the end of any pages that are created. To do that, simply bind a pagebeforecreate event for all pages using the live rather than the bind method (bind only binds events to items that exist at that time; live will also bind to items that come into existence later).

$(":jqmData(role='page')").live("pagebeforecreate", function (event) {
    $("#" + event.target.id).append(Navigation.NavigationHtml);
});

Now, when you navigate to any page (providing you have visited the index page first) it will display a fixed navigation bar at the foot of the page.
However there is one small problem: the button you click is not correctly marked as the active tab. It is if you click it twice, but to me that is less than optimal; in fact, that’s worse than it never being selected at all. The solution turned out to be trickier than expected.
The simple solution I came up with was to identify the url of the page and add the class “ui-btn-active” to the matching link. However, getting the url of the current page was pretty awkward. Because JQM uses Ajax to load all the pages, the current url (window.location) is always that of the first loaded page, and because each file can contain multiple pages, the JQM page object is only aware of those pages, not its parent file.

The solution I came up with was to use the pagebeforeload event, which exposes a url; save that url and make it available to the pagebeforecreate events defined earlier:

$(document).bind("pagebeforeload", function (event, data) {
    Navigation.CurrentPageFilename = $.mobile.path.parseUrl(data.url).filename;
});

This meant that we needed to amend our earlier event to:

$(":jqmData(role='page')").live("pagebeforecreate", function (event) {
    $("#" + event.target.id).append(Navigation.NavigationHtml);
    $('a[href="' + Navigation.CurrentPageFilename + '"]').addClass("ui-btn-active");
});

I wrapped it all up in one Javascript object:

Navigation = {
    CurrentPageFilename: "",
    NavigationHtml: "",
    Initialise: function () {
        $("#index").live("pagebeforecreate", function () {
            Navigation.NavigationHtml = $("#hiddenFooter").html();
        });
        $(":jqmData(role='page')").live("pagebeforecreate", function (event) {
            $("#" + event.target.id).append(Navigation.NavigationHtml);
            $('a[href="' + Navigation.CurrentPageFilename + '"]').addClass("ui-btn-active");
        });
        $(document).bind("pagebeforeload", function (event, data) {
            Navigation.CurrentPageFilename = $.mobile.path.parseUrl(data.url).filename;
        });
    }
};

Then I just needed to call

Navigation.Initialise();

from my index page.

As mentioned, this solution only works if you go to the index page first. That was OK for my app, as it should always happen, so all I did for the sake of completeness was add the following code to every other page:

if (typeof Navigation === 'undefined') {
    window.location = "index.html";
}

To me this is an infinitely better solution than the original JQM suggestion, as it means I am managing my navigation in one place. It also gives me scope to dynamically manage the navigation contents if needed in future.

One last gotcha on this subject: be careful that you don’t have any pages across multiple files that have the same id. JQM keeps the DOM of some previous pages within its current DOM, and duplicate page ids can confuse it.
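A cheap safety net for this is to collect the id of every page across your files (for instance from a build script, or at runtime with $(":jqmData(role='page')")) and check the list for repeats. The checking itself is just:

```javascript
// Return any ids that appear more than once in the list.
// How you gather the ids is up to you; the list here is illustrative.
function findDuplicateIds(ids) {
    var seen = {};
    var dupes = [];
    ids.forEach(function (id) {
        if (seen[id] && dupes.indexOf(id) === -1) {
            dupes.push(id);
        }
        seen[id] = true;
    });
    return dupes;
}

console.log(findDuplicateIds(['home', 'about', 'home', 'contact'])); // [ 'home' ]
```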


Filed under Code