Tag Archives: development

How (and why) to move performance testing earlier in the development cycle [WOPR22]

I recently attended the WOPR22 conference in Malmö, which focussed on discussions around how to move performance testing earlier in the development process. This is a big subject and there is clearly no magic bullet, but I thought I’d share some of the key takeaways from the discussions.

Performance testing needs to be thought of as more than just load testing

A traditional approach of “finish development, go through a load testing process and approve/reject for go-live” really doesn’t work in a modern development environment. The feedback loop is just too slow.

We have to provide feedback earlier. Load testing, though, is not ideal for this – it has a lot of drawbacks: scripts have to be maintained, environments kept available, datasets managed, load models understood. These are surmountable, but quicker feedback can be provided by simpler processes that can be added to a standard CI solution or executed manually but regularly.

Examples include:

  • waterfall chart analysis of pages being returned
  • unit test timings
  • parallelisation of unit tests to get some idea of concurrency impacts
  • using perf monitors/APM on CI environments

These will not find true problems that only occur under load, but they will give you some early indications that problems may be there.

Performance Testers need tighter integration with the development team

The performance testing team cannot be too distant from the development team – for practical and political reasons. There was a lot of discussion about whether a performance team works best as a distinct entity or as individuals integrated across the teams. There are pros and cons for each argument.

What is clear though is that when issues are identified there must be co-operation between performance testers and developers to share their knowledge to resolve the problem. Performance testers should not be people who just identify problems – they must be people who are part of the team that solves the problem.

Mais Tawfik has the policy of physically pairing the performance tester who has identified the problem with the developer who is working on the fix until a resolution is found.

Performance Testers still need space for analysis

One of the downsides to pushing performance testing earlier is that it often results in an additional demand for testing without provision of appropriate space for analysis. Performance testing is an area where analysis of the data is important – it is not based on black and white results.

It is often overlooked that data is not information. Human intelligence is required to convert data into information. An important role of the performance tester in improving any process is to ensure that there is not an acceptance of data over information because data can be provided more regularly. We must ensure that there is sufficient quality, not just quantity of performance testing during the development process.

Environmental advances can make this process easier

Cloud and other virtualised environments, as well as automation tools for creating environments (e.g. Chef, Puppet, CloudFormation), have been game changers for earlier and more regular performance testing. Environments can be reliably created on demand. To move testing earlier we must take advantage of these technologies.

Use automation to simplify the process

Automate the capture of metrics during the test to speed up the entire process. Using APM tooling helps in this respect. Automating this reduces the overhead associated with the process of running a test and analysing results.

Attendees of WOPR22 in Malmö, Sweden.


Based on discussion with all WOPR22 attendees:

Fredrik Fristedt, Andy Hohenner, Paul Holland, Martin Hynie, Emil Johansson, Maria Kedemo, John Meza, Eric Proegler, Bob Sklar, Paul Stapleton, Neil Taitt, and Mais Tawfik Ashkar.



Filed under Opinions

Executing Gatling tests from Node.js

So, I’ve been playing with Gatling quite a bit recently. It’s a really neat open source load testing tool.

Anyway, what I wanted was a remote instance that I could trigger a Gatling test on and then get the results back, all over HTTP. Node.js seemed the obvious lightweight solution for this.

Now, how to trigger a Gatling test from Node.js. Well, there are a few complexities but generally it is not too bad.

Executing a Test

Gatling runs as a command line tool, so the first step is to use an NPM package called “sh”, which executes shell commands and runs a callback on completion.

By default Gatling will run in an interactive mode, awaiting user input to determine the tests that should be run. Obviously this is not viable when running headless, so we need to add some switches to the base command:

-s class.classname [where class.classname is the class of the test you wish to run]
-sf path/to/script/folder [this can be avoided if scripts are stored in default Gatling directory]

Executing the Gatling command with these two switches via the following line of Node will successfully execute a test and run the callback on completion:

var command = this.config.gatlingRoot + ' -sf ' + this.config.rootfolder + ' -s ' + this.config.testClass;

Capturing the Results

Executing a test, though, is no use without being able to gather the results, and this was slightly more of a challenge. Gatling creates a new folder for every test execution; by default this will be another folder within its standard results folder. The issue this created was that there was no way, as far as I could tell, of getting the name of that folder from the command line response.

What you can do however is to define the root folder for the results in the command line by adding the -rf switch.

-rf path/to/results/folder

Gatling, however, will still create a subfolder within that for every test run. This is where Node comes to the rescue again, with the built-in “fs” module, which allows monitoring of a folder and raising of an event on any changes. You can therefore create a folder specifically for holding your test results, then execute a test and be confident that the next event on that folder will be the creation of the results subfolder. The fs callback includes the name of that folder.

function setupResultFolder(test){
    var resultfolder = test.config.rootfolder + "results/";
    fs.watch(resultfolder, {
        persistent: true
    }, function(event, filename) {
        // the next change event will be Gatling creating its results subfolder
        test.setResultFile(filename);
    });
    return resultfolder;
}
Then, to get to the results, you can just access the relevant files within that folder. I was only interested in the raw results, so I was looking at simulation.log.

Notifying of test completion

To finish off, I just added a simple event that is raised when the test is complete:

this.completeTest = function(){
    this.complete = true;
    this.emit('complete', this.id);
};

Complete code

The complete code comes in at <50 lines.

var fs = require('fs');
var sh = require('sh');
var events = require('events');

var GattlingTest = function(id, config) {

    this.id = id;
    this.complete = false;
    this.resultfile = "";
    this.resultfolder = "";
    this.config = config;

    this.start = function(){
        var test = this;
        this.resultfolder = setupResultFolder(this);
        var command = this.config.gatlingRoot + ' -sf ' + this.config.rootfolder + ' -s ' + this.config.testClass + ' -nr -rf ' + this.resultfolder;
        // sh runs the command and fires the callback when Gatling exits
        sh(command).result(function() {
            test.completeTest();
        });
    };

    this.setResultFile = function(resultfile){
        console.log("resultfile for " + id + " set to " + resultfile);
        this.resultfile = this.resultfolder + resultfile + config.resultFileName;
    };

    this.completeTest = function(){
        this.complete = true;
        this.emit('complete', this.id);
    };

    this.results = function(){
        return fs.readFileSync(this.resultfile, "utf-8");
    };
};

GattlingTest.prototype.__proto__ = events.EventEmitter.prototype;
module.exports = GattlingTest;

function setupResultFolder(test){
    var resultfolder = test.config.rootfolder + "results/";
    console.log("Tracking ... " + resultfolder);
    // the next change event on this folder will be Gatling creating its results subfolder
    fs.watch(resultfolder, {
        persistent: true
    }, function(event, filename) {
        console.log(event + " event occurred on " + filename);
        test.setResultFile(filename);
    });
    return resultfolder;
}


Filed under Code

It’s not just about being the fastest…

First published on Performance Calendar on 21st December 2013

I have been doing a lot of work this year on creating a performance culture within a company. This is an essential step on the route to good performance in your products: only when you start treating performance as a first class citizen will you get buy-in for the time and effort needed to create performant systems, both from the developers and from the business as a whole.

This is a costly process, requiring an investment of time and effort from the company to fully implement, and the business will expect to see benefits in return for this investment.

I would like to address two common problems that cause the value of this investment to be undermined.

1) Solving the technical challenge not the business problem

When I have talked to developers who are getting into performance and struggling to get buy-in from their business, I often hear the same complaint – “why don’t they realise that they want as fast a website as possible?”.

To this I always answer – “because they don’t!”. The business in question does not want a fast website. The business wants to make as much money as possible. Only if having a fast website is a vehicle for them doing that do they want a fast website.

The key point I am making here is that it is easy as a techie to get excited by the challenge of setting arbitrary targets and putting time and effort into continually bettering them when more business benefit could be gained from solving other performance problems.

To address this we need to take a step back and identify exactly the performance problems that are being seen and how they are impacting the business.

These may be slow page loads when not under load. Equally likely, the system suffers slowdowns under adverse conditions, suffers intermittent slowdowns under normal load, or uses excessive resources on the server, necessitating an excessively large platform – among many other potential problems.

All of these examples can be boiled down to a direct financial impact on the business.

As an example, one company we worked with determined that their intermittent slowdowns cost them 350 sales on average, which worked out at £3.36m per year. This gives you instant business buy-in to solve the problem, a direct problem for developers to work on and a trackable KPI with which to measure achievement and know when you are done, after which you can move on to the next performance problem.

Another company I worked with had a system that performed perfectly adequately but was very memory hungry. Their business objective was to release some of the memory being used by the servers to be used on alternative projects (i.e. reduce the hosting cost for the application). Again a direct business case, a problem developers can get their teeth into and a trackable KPI.

To sum up – start your performance optimisation with a business impact, put it into financial terms and provide the means to justify the value of the development efforts to the business.

2) Over optimisation results in technical debt

The second issue I would like to address is the idea that we should always build the most ultra performant system.

No – we should always build an APPROPRIATELY performant system.

Over-optimising a system can be just as negative as under-optimising it. Building an ultra-performant, scalable web application comes at a cost in several ways…


Building highly performant systems just takes longer


Highly optimised systems tend to have a lot more moving parts. Elements such as caching layers, NoSQL databases, sharded databases, cloned data stores, message queues, remote components, and multiple languages, technologies and platforms may be introduced to ensure that your system can scale and remain performant. All these things take management, testing, development expertise and hardware.


Building an ultra-performant website is hard; it takes clever people to devise intelligent solutions, often operating at the limits of the technologies being used. These kinds of solutions can lead to areas of your system being unmaintainable by the rest of the team. Some of the worst maintenance situations I have seen were caused by the introduction of an unnecessarily complicated piece of coding designed to solve a potential performance problem that never materialised.


These systems require financial support in terms of hardware, software and development/testing time and effort to build and support.


Solving performance issues is often done at the expense of good practice or functionality elsewhere. This may be as simple as compromising on the timeliness of data by introducing caching, but it can also mean accepting architectural compromises, or even compromises on good coding practice, to achieve performance.


The warning I want to give here is to understand your performance landscape: set your KPIs, define your performance non-functional requirements, set performance acceptance targets – whatever method you use to determine how your application is expected to perform.

This action is essential to allow developers to be able to make reasonable assessments of the levels of optimisation that are appropriate to perform on the system they are developing.


Filed under Opinions

Progvember – a month of coding in November

A year ago I came across the National Novel Writing Month which is basically a challenge to aspiring authors to dedicate themselves to complete the writing of a novel within a month. This seemed like a good challenge; after all, we all have a novel within us somewhere. However, what I like doing better than writing a novel is computer programming. Therefore my novel kept getting pushed back by bits of development I was doing.

At the same time I was writing an iPhone app for my son for his Christmas present. This was good for two reasons – 1) it was a present for him and 2) it was a chance to finally do some different things I had been wanting to try out in Objective C. So, I had the project, I had the motivation and most of all I had a fixed deadline – Christmas was not going to be pushed back because of an incomplete piece of software.

Like most developers I have a list of new technologies, languages and patterns that I want to try out and just like the great novel that is inside us all there are always more important things to be done. There is no defined project and no defined deadline so nothing really gets done.

What I liked about the “nanowrimo” concept was that it created a focus, it created a deadline to work towards and it created a sense of community of other people all working towards the same goal. So why not create something like that for development?

Hence, I came up with the concept of Progvember (programming in November – get it?!). This would be a month where developers set themselves the challenge to define a project and complete it – to take some of the ideas that have been there for a while and set a deadline to get them done. November seemed a good time of year to do it – long dark nights, cold, wet weather (apart from our southern hemisphere Progvemberers) and all that most people are doing is sitting in, or maybe growing a moustache for Movember.

On the back of this I wanted to also create a sense of community of other developers who are also setting themselves the same challenge, and also to create a forum where people can ask/offer to help on other people’s projects.

And so I have created Progvember.com. Anyone is welcome to come and sign up.


Filed under Progvember

Treat Performance as a First Class Citizen

Steve Souders wrote a very interesting blog post recently (http://www.stevesouders.com/blog/2013/08/27/web-performance-for-the-future/) about treating “performance as a discipline”.

The premise of this article was that performance is such a fundamental issue that a separate team should be created to focus purely on performance.

Seeing this view put in writing by one of the leaders in the performance arena was very refreshing to me. At Intechnica we have been pushing this message for a number of years and it is nice to finally see it gaining some traction within the industry. We formed the company to deliver this capability into other companies.

For me the battle that we face day to day is the battle to get people to treat performance as a First Class Citizen within the development industry. There does often still seem to be a sense that good performance is just something that developers should be able to achieve with more time or more kit.

The reality, of course, is that this is true up to a point. If you are developing an average-complexity website with moderate usage and moderate data levels, then you should be able to develop something that performs to an acceptable level. As soon as these factors start to ramp up, performance will suffer and will require expertise to solve the problems. This does not reflect on the competency of the developer; it is just a reflection that a specialised skill is required.

The analogy I would make to this would be to look at the security of a website. For a standard brochureware or low usage site then a competent developer should be able to deliver a site with sufficient security in place. However when the site ramps up to a banking site you would no longer expect the developer to implement the security, there would be an expectation that security specialists would be involved and would be looking beyond the code to the system as a whole. This is no negative reflection on the developer, just that the nature of security is so important to the system and so complex that only a specialist can fully understand the solution that is required. This is acceptable because security is regarded as a First Class Citizen in the development world.

Performance issues often require such a breadth of knowledge beyond simply looking at the code (APM tooling, load generation tools, network setup, system interaction, concurrency effects, threading, database optimisation etc.) that specialists are required to be able to solve them.

These specialists are not better than developers, they just have different skills.

At Intechnica we have run projects with a dedicated performance scrum team: developers deliver functionality following usual performance best practice but with no specific defined KPIs; acceptance KPIs are then applied afterwards and failures passed onto the performance scrum team’s backlog.

We have also developed projects with performance engineers within each scrum team and applied KPIs as part of the feature we were developing and to the system as a whole.

Both are valid approaches. There are other valid approaches. As long as performance is treated as a First Class Citizen then you will be on the right track to performance success.


Filed under Opinions

The Joy of Performance Improvements

I think that most developers get into development for one of two reasons – they like solving problems, or they like building things. For me, when I got into it, it was the latter. I loved the fact that I would build things that people would then use (admittedly, I have often also built things that people didn’t use).

However as my career has progressed I have realised that by far a more enjoyable and satisfying element of development is working on performance improvements.

Since I moved into management I don’t get to work on development as much as I would like (which would be all the time), but recently I have built a system from scratch to delivery while also working on some performance improvements to an area of a site that had been declared “not fit for purpose”.

Comparing the two experiences, there was just no comparison. I’m talking about the experience of taking a failing system, investigating, manipulating, testing, making amendments, re-testing, assessing the impact of tiny changes, assessing the likely impact of large changes, finding out why elements that perform independently do not perform in combination, re-testing again, making another small improvement, re-testing, re-investigating and repeating it until performance is acceptable – then doing a few more rounds until performance is awesome. This was soooo much more enjoyable and rewarding than just building a system from scratch.

I’d recommend any developers to try to get involved in this side of development. That’s why I set up a performance management and improvement company!

By the way, if anyone is interested, the outcome of the performance improvements was taking a process that was taking 45 minutes to run down to running in 3 minutes. The target was 10 minutes.


Filed under Opinions

Technical Debt Management 101

First of all, make sure you read my previous technical debt management posts (how technical debt can be both good and bad) – this post is designed to address the issues described therein.

What can be done to prevent negative technical debt and ensure you are in control of positive technical debt? How do you prevent it becoming a problem?

The simple answer is that you can’t. All systems will have some planned and unplanned technical debt. Software development is a complex, fast moving business and developers are only human. Decisions and technology choices made in the best of faith will turn out to be wrong.

However there are some methods that can be used to minimize the risk and impact of technical debt. Here are some necessary elements of technical debt management:

  • Good system architecture – abstract systems, so that switching out technologies or areas of the system can be done with minimal impact
  • Robust system management – enough testing (automated or manual) and documentation to enable low-risk re-engineering if needed, and shared responsibilities so that the system is not too dependent on individuals
  • Technical debt management – awareness of the areas of technical debt within the system, and of the longer-term impact of the compromises or decisions you are making now

Paying off debt

So, having followed all the good practices specified and having understood the nature of my technical debt, how should I pay it off when the time comes?

Well, there are three ways and I will do my best to stretch the financial metaphor one last time to illustrate them…

Ongoing small payments

Make a commitment that with every feature introduced an area of technical debt will also be addressed. Make technical debt everyone’s ongoing problem.

Lump Sum Payment

Take everyone out of feature production for a set period and resolve the main areas of technical debt. Make technical debt everyone’s problem for a short time.

Dedicated payment plan

Create a dedicated technical debt management team who do nothing but resolve technical debt issues and run this alongside the development program. Create some people who have technical debt management as their only problem.

MVP vs Future Proofing

One of the buzzwords at the moment is MVP (Minimum Viable Product) – the concept of only building the bare minimum that is needed to deliver the current requirement – like the XP concept of YAGNI (You Aren’t Gonna Need It).

Back in the mists of time when I first started developing, it was all about future proofing – building the framework into your software to allow for future changes to be introduced in a simpler way.

So which approach is right?

I often come across developers advocating elements of future proofing. The reasons they give are:

  • Future proofing builds a system based on solid foundations
  • Cutting costs by not doing this piece of work now will lead to increased costs later
  • The major impact of certain future events can be mitigated with a small amount of effort now

However, for me these advantages are outweighed by the advantages of the MVP approach:

  • The business will see benefit from the work much sooner
  • Future proofing is spending money now in case you need to spend it in future – the vast majority of future proofing efforts end up going unused
  • Future proofing can be made obsolete by non-technical issues:
      • Changing business requirements
      • New technologies – how often have we spent days coding round an issue that was then included in the next release of the language?
      • Failure of third parties your system depends on

That said, I would still advocate good development practices as a way of minimizing the impact of change later on. Good practices like separation of concerns, loosely coupled objects, modular design, reasonable levels of abstraction, dependency injection etc. are all the building blocks that will enable you to create a system that is well placed for change.
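To make that last point concrete, here is a tiny, hypothetical sketch of the sort of loose coupling meant here – constructor injection keeps a component replaceable without touching its consumers:

```javascript
// Constructor injection: ReportGenerator depends only on the shape of the
// store it is given (anything with a fetch() method), not on a concrete
// implementation, so either side can be swapped out with minimal impact.
function ReportGenerator(store) {
    this.store = store;
}

ReportGenerator.prototype.summary = function () {
    var rows = this.store.fetch();
    return rows.length + ' rows processed';
};

// swapping in a different store requires no change to ReportGenerator
var inMemoryStore = { fetch: function () { return [1, 2, 3]; } };
var report = new ReportGenerator(inMemoryStore);
console.log(report.summary()); // → "3 rows processed"
```

A database-backed store, a caching store or a test double can each be dropped in later – the kind of foundation that positions a minimal product well for change without speculative future proofing.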

Reduced functionality is no excuse for bad code.


Filed under Opinions, Technical Debt