I recently attended the WOPR22 conference in Malmö. It focussed on how to move performance testing earlier in the development process. This is a big subject and there is clearly no magic-bullet solution, but I thought I’d share some of the key takeaways from the discussions.
Performance testing needs to be thought of as more than just load testing
A traditional approach of “finish development, go through a load testing process and approve/reject for go-live” really doesn’t work in a modern development environment. The feedback loop is just too slow.
We have to provide feedback earlier. Load testing, though, is not ideal for this; it has a lot of drawbacks: scripts have to be maintained, environments kept available, datasets managed, and load models understood. These are surmountable, but quicker feedback can be provided by simpler checks that can be added to a standard CI solution or executed manually but regularly.
Examples include:
- waterfall chart analysis of pages being returned
- unit test timings
- parallelisation of unit tests to get some idea of concurrency impacts
- using perf monitors/APM on CI environments
These will not find the true problems that only occur under load, but they will give you early indications that problems may be there.
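As a minimal sketch of two of the ideas above, the snippet below times a single call as a crude unit-test timing check, then runs the same call across a thread pool to get a rough feel for concurrency impact. The function `lookup_item` and the 0.5-second threshold are illustrative assumptions, not from the original discussion; in a real suite you would time calls into your own application code.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def lookup_item(key):
    """Hypothetical operation under test; stands in for real application code."""
    time.sleep(0.01)  # simulate a small amount of work
    return key * 2

def timed(fn, *args):
    """Return the wall-clock duration of a single call."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# 1. Unit-test timing: fail fast if a single call regresses badly.
single = timed(lookup_item, 21)
assert single < 0.5, f"single call too slow: {single:.3f}s"

# 2. Crude concurrency check: run the same call in parallel and compare
# per-call latency against the serial baseline. A large blow-up here
# hints at locking or contention problems worth investigating properly.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(lambda k: timed(lookup_item, k), range(8)))

print(f"serial {single:.3f}s, parallel median {median(parallel):.3f}s")
```

Checks like these are cheap enough to run on every CI build, which is exactly the kind of fast feedback loop that full load tests cannot give.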
Performance Testers need tighter integration with the development team
The performance testing team cannot be too distant from the development team – for practical and political reasons. There was a lot of discussion about whether a performance team works best as a distinct entity or as individuals integrated across the teams. There are pros and cons for each argument.
What is clear though is that when issues are identified there must be co-operation between performance testers and developers to share their knowledge to resolve the problem. Performance testers should not be people who just identify problems – they must be people who are part of the team that solves the problem.
Mais Tawfik has the policy of physically pairing the performance tester who has identified the problem with the developer who is working on the fix until a resolution is found.
Performance Testers still need space for analysis
One of the downsides to pushing performance testing earlier is that it often creates additional demand for testing without providing appropriate space for analysis. Performance testing is an area where analysis of the data is important; it is not based on black-and-white results.
It is often overlooked that data is not information: human intelligence is required to convert one into the other. An important role of the performance tester in improving any process is to ensure that data is not accepted in place of information simply because data can be produced more regularly. We must ensure sufficient quality, not just quantity, of performance testing during the development process.
Environmental advances can make this process easier
Cloud and other virtualised environments, along with automation tools for creating environments (e.g. Chef, Puppet, CloudFormation), have been game changers for earlier and more regular performance testing. Environments can be reliably created on demand. To move testing earlier we must take advantage of these technologies.
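As an illustration of what "environments on demand" looks like in practice, a CloudFormation template as small as the following can stand up a throwaway test host. This is a hedged sketch: the instance type and AMI ID are placeholders, and a real performance environment would declare considerably more (networking, monitoring agents, the application stack itself).

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal throwaway host for an early performance check (illustrative)
Resources:
  PerfTestInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.medium      # placeholder size
      ImageId: ami-0123456789abcdef0  # placeholder AMI for your region
```

Because the template is declarative, the same environment can be created before each test run and torn down afterwards, which removes the traditional bottleneck of waiting for a shared, long-lived performance environment.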
Use automation to simplify the process
Automate the capture of metrics during the test; APM tooling helps in this respect. This reduces the overhead associated with running a test and analysing its results, speeding up the entire process.
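The pattern can be sketched in a few lines: record timings automatically around each step, then summarise them without manual spreadsheet work. This is a minimal illustration, not a substitute for real APM tooling; the `measure` context manager and the "checkout" step name are assumptions for the example.

```python
import time
from contextlib import contextmanager
from statistics import quantiles

# Shared store of raw samples, keyed by step name.
metrics = {}

@contextmanager
def measure(name):
    """Record the wall-clock duration of the wrapped block under `name`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        metrics.setdefault(name, []).append(time.perf_counter() - start)

def report(name):
    """Summarise captured samples into count, median, and 95th percentile."""
    samples = metrics[name]
    cuts = quantiles(samples, n=100)  # 99 percentile cut points
    return {"count": len(samples), "p50": cuts[49], "p95": cuts[94]}

# Simulated test run: wrap each iteration of a step in `measure`.
for _ in range(50):
    with measure("checkout"):
        time.sleep(0.001)  # stand-in for the real transaction

summary = report("checkout")
print(summary)
```

With collection and summarisation automated, each test run produces comparable numbers by default, so the tester's time goes into interpreting trends rather than assembling them.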
Based on discussions with all WOPR22 attendees:
Fredrik Fristedt, Andy Hohenner, Paul Holland, Martin Hynie, Emil Johansson, Maria Kedemo, John Meza, Eric Proegler, Bob Sklar, Paul Stapleton, Neil Taitt, and Mais Tawfik Ashkar.