Continuing from part one, this performance testing post concentrates on how to test the System Under Test, having already decided (as covered in part one) what needs to be tested and at what level.
What are you going to use to do your testing? Will you be using third party tools, writing your own framework, or just writing to log files and then analysing them? In particular, will the performance monitoring be separate from the application or embedded within it? Even if you use third party tools it may still be necessary to embed hooks or functions within your code; alternatively, it may be possible to produce all the data you need to analyse your application using external tools alone. Some tools/alternatives to consider are:
- Write your own code/framework – how?
- Writing a simple log method, e.g. in Java, that can be called from anywhere (so a public static method) can work very well.
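As a minimal sketch of that idea, the following is one way such a public static log method might look in Java. The class name, file name, and line format are all illustrative assumptions, not from the original post:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical example of a simple static log method that can be
// called from anywhere in the code under test.
public final class PerfLog {

    private static final String LOG_FILE = "perf.log"; // assumed location

    private PerfLog() {}

    // Appends one timestamped line per event.
    public static synchronized void log(String event) {
        // try-with-resources opens, appends, flushes and closes on every
        // call, so entries survive even if the JVM dies mid-run.
        try (PrintWriter out = new PrintWriter(new FileWriter(LOG_FILE, true))) {
            out.printf("%d,%s%n", System.nanoTime(), event);
        } catch (IOException e) {
            // Never let logging failures break the application under test.
            System.err.println("perf log failed: " + e.getMessage());
        }
    }
}
```

A caller would then bracket the interesting work, e.g. `PerfLog.log("search.start"); ... PerfLog.log("search.end");`, and the timestamps can be diffed during analysis.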
You also need to consider the production and testing environments. Do they need to match exactly, is close enough good enough, or does it not matter? Do both the server and client(s) need to match? And the OS? Browser version(s)? Network? Other environmental variables?
What server will be used to host the application? Will this be a VM or a physical machine? Does it match the one that will be used in deployment? Is testing on a developer's local machine a reasonable test, or is a separate, custom-built performance testing machine required? Even if a separate performance testing machine is available and used, does it replicate the same constraints as the deployment hosting environment?
An important issue to consider is how much adding performance monitoring degrades the performance of the application under test. Is this another example of "the more you observe, the less you (can) know"? Even checking server logs can be inconsistent or produce ambiguous results. Sometimes the server has to be shut down in order to flush the logs to file so that you can read them. On Windows machines, simply looking at the log file can lock it and prevent it from being written to; and if you already have the file open and then open it again, you may be shown the already-open version rather than the updated one.
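The shut-down-to-flush problem usually comes down to buffering. A hedged illustration, with an invented file name and message: lines written through a `BufferedWriter` can sit in memory until `flush()` (or `close()`) is called, which is why a still-running server's log file can appear empty or stale:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// Illustrative sketch of log buffering, not code from the post.
public class FlushDemo {
    public static void main(String[] args) throws IOException {
        BufferedWriter log = new BufferedWriter(new FileWriter("server.log"));
        log.write("request handled");
        log.newLine();
        // Without this flush, the line above may not yet be on disk,
        // and an external reader would see an incomplete log.
        log.flush();
        // ... later, on shutdown, close() also flushes any remainder:
        log.close();
    }
}
```

If the logging framework in use supports it, flushing after each entry (or at short intervals) lets the logs be read while the server is still running, at some cost in I/O overhead.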
One area that has deliberately not been covered is the definition of acceptance criteria for performance tests. Like any other tests, we need to define what 'done' is; in broad terms, for performance testing this will be when the System Under Test is appropriately responsive. In practice this is likely to be converted into specific timings for specific functions. Exactly how long, and which functions, will necessarily depend on the particular System Under Test. Naturally, defining these criteria is left as an exercise for the reader.
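Once such a criterion exists, it can be made executable. A sketch, assuming a hypothetical criterion of "must complete within 200 ms"; the threshold, class, and method names are all invented for illustration:

```java
// Hypothetical acceptance check: fail if the function under test
// exceeds the agreed response-time limit.
public class ResponseTimeCheck {

    // Stand-in for a real function of the System Under Test.
    static void functionUnderTest() throws InterruptedException {
        Thread.sleep(50); // simulate some work
    }

    public static void main(String[] args) throws InterruptedException {
        long maxMillis = 200; // the agreed acceptance criterion
        long start = System.nanoTime();
        functionUnderTest();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMillis > maxMillis) {
            throw new AssertionError(
                "Took " + elapsedMillis + " ms, limit is " + maxMillis + " ms");
        }
        System.out.println("Within limit: " + elapsedMillis + " ms");
    }
}
```

A check like this can run alongside the functional test suite, though a single timing is noisy; in practice you would likely repeat the measurement and assert on a median or percentile.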
More to do
I am sure there are many more factors that could influence performance in particular circumstances, and this list is by no means meant to be exhaustive. Hopefully it will suggest areas that need to be considered when thinking about performance testing, and help to stimulate ideas for different ways of approaching it. There is also the question of how to incorporate this into regression testing for version 2.0 and onwards, once a System Under Test has been performance tested.