Seven Stages of Performance Testing Denial

As you may know, many of the ancient religions have doctrines such as the “5 pillars of wisdom” or the “4 noble truths” that lead humble pilgrims to true enlightenment. Although I am not suggesting we start a new Performance Test religion that perhaps worships the god “Mercury”, I have noticed that there are 7 levels of denial that developers/system architects/managers (aka pilgrims) seem to have to go through before they realise, or admit, that they have performance problems (i.e. true enlightenment).

Like following a religion, this is a personal journey: each pilgrim follows a unique path, no two making the same realisations or decisions at the same point in the project lifecycle, and some having to repeat parts of the journey several times (as I am writing this, Mike is explaining again to another set of pilgrims how LoadRunner works).

My experience suggests there are 7 levels of denial, as follows:

1) The load test tool must be wrong – you may be using the industry-standard performance test tool costing £100K, but the quick test the pilgrims did with the free tool downloaded from the internet was better. When you ask whether HTTP 500 status messages were trapped or if the data returned was validated, the pilgrims look confused. So, take a deep breath, explain the benefits of a proper tool and move on.

2) The performance scripts or workload model must be wrong – you have only been a performance tester for 5 years and worked on countless projects, so it’s nice to be told you are stupid. Take a deep breath, walk them through the code and enjoy their look of surprise as they suddenly realise what correlation is.

3) The system is not finished so doesn’t need to be tested – pilgrims can believe that the last 5% of functionality will improve performance, so a poorly performing system will obviously get better any minute now. Explain politely how adding new code won’t magically improve the performance of the old.

4) It’s not our system, it’s the network, etc. – blaming the supplier of the components is a common area of denial. To correct this misconception, feign a look of surprise and then arrange a test to show that the offending component runs super fast.

5) We JUST need to configure parameter X – the pilgrim often believes that the correct setting of a single magic parameter will solve any problem. What is annoying is the often condescending tone in which the pilgrim states that this is surely the problem and that you, the tester, must be a complete donkey for not setting this. Of course you smile politely, say let’s give it a go and, when nothing changes, be dutifully diplomatic. Often you will iterate on “number 5” as several pilgrims in the development team search for the “silver bullet” (or is it “rocking horse poo”). An obvious attraction to pilgrims of this approach is that the solution is “only a test cycle away”. Many a manager pilgrim has followed this route.

6) Throw hardware at the problem – although this does often have an effect, adding an extra processor to a DB server that is crippled by too many stored procedures doing table scans is as useful as a chocolate teapot. Take another breath, particularly when you are told the lead time to order the components and re-install the software. Just stay alert, because the pilgrim will be very happy believing they are solving the problem and can relax while they await the new hardware.

7) We JUST need to tune a small part of the system – there is often a hope that a single small stored procedure or code element, once tuned, will yield the magical performance improvement, or that all the performance problems can be found in one part of the architecture. At last the pilgrims are getting somewhere on their journey, allowing you to progress too – some measurements at least have to be taken to identify the offending item. You can smile now; the journey’s nearly over.

Journey’s End – Wow! We do have a performance problem – at last your bunch of pilgrims have made the journey to true enlightenment. Like any religious journey it is full of self-doubt and distractions along the way, but now you can finally start to solve the problem. Just hope that a new project manager doesn’t get involved and you have to start at step 1 again.

Some projects have fewer potential areas for deviating from the true path, and some have more, but each pilgrim has to find his own way. Your job is to guide and educate!

Performance Test Best Practice

An old colleague asked if there are any standards identifying best practice in performance testing. I could not think of any, but it started me thinking about what best practice actually is. Here are my thoughts on some areas of best practice in Performance Testing. They are NOT in any order of importance and the list is NOT exhaustive.

1.) Have a defined process and constantly refine it.
Before you start you should have a process defined, and you should make sure you review this process to add improvements. The process needs to be flexible in order to accommodate different types of projects, from benchmarking a core application through to making sure an e-commerce site can handle the Christmas rush.

2.) Define the Goals up front.
This seems obvious, but you need to understand why you are testing and what the performance goals of the system under test are. (Note I use the word goals, not requirements.) Here, the move to ITIL may help, as service design packages developed early on should include the performance requirements.

3.) Let Risk guide you.
The performance risk and consequences of failure should guide the type and amount of performance testing you do. Don’t just test what is easy to test.

4.) Don’t be afraid to say no.
If you are given responsibility for signing off on the performance of the system, you are the expert. If, subsequently, you are not given enough time or the correct tools then be prepared to say that you cannot test the system adequately. Remember that the caveats you place in your final report may never make it into the summary presented to the management board!

5.) Get the workload right.
If you don’t test the system with the correct workload it won’t matter if everything else is perfect – the results will be wrong. This means you need to understand user behaviours and their frequency. Don’t forget to include error scenarios as well.

6.) Develop Quality Scripts.
Make sure your scripts emulate user behaviour as much as possible and remember that users make mistakes, leave processes early and have comfort breaks! Also, make sure your scripts check that what is returned to the user is what is expected.

7.) Select an appropriate test environment.
Is a production-sized test environment best practice? I am not sure, but make sure your test environment is appropriately sized and up to the job. Also make sure you can collect the necessary data about the performance of that environment during the load test.

8.) Run your performance tests for long enough and often enough.
Make sure your tests are repeatable and that the results they produce are statistically valid.

9.) Participation.
Get all the people that need to be involved in the performance test working together. Unless you are superhuman and multi-skilled you will need DBAs, administrators, developers, Project Managers, etc., to assist in the test. Remember, for the best results get these stakeholders involved in the process early on.

10.) Remember, people want results, not data.
Don’t just present the canned report from the performance test tool; you need to analyse the results and present the key facts of the load test. And remember that different people will want different results from the load test – a manager will want to know if it passed whereas the DBA will want to know if the SGA is sized correctly, for example.

Small vs Large Scale Performance Test Environments

I have just added to the website a presentation that looks at sizing and extrapolation techniques for people considering building a small scale performance test environment instead of a large, full scale one. In the paper several approaches are considered.

Factoring – This is where the architecture is easily scaled and therefore the performance test can be undertaken on a subset of the hardware (a simple sketch of the factoring arithmetic appears at the end of this post).

Dimensioning – The architecture has known bottlenecks that drive the performance such as a central DB. The performance test environment must contain the bottleneck component but other components may not need to be representative of a full sized environment.

Modelling – This examines the use of modelling to take results from a small scale environment and predict the results for a larger scale environment.

Flipping – This looks at creating test environments that can have the correct amount of resources allocated to them for a “full scale” performance test (for example, during off hours) and then revert to a smaller scale performance test environment at other times.

Full Scale – Finally the advantages and disadvantages of a full scale performance test environment are discussed.

Finally, the caveat for all these techniques is that testing on a small scale performance test environment does not guarantee that all performance problems will be discovered, due to application/scalability constraints that may only appear in the full sized environment!

You can download the presentation from here.
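To give a feel for the factoring approach, here is a minimal Python sketch of the arithmetic. It assumes a tier that scales close to linearly, such as stateless web servers behind a load balancer; the server counts and user target are illustrative values I have made up, not figures from the presentation.

```python
def factored_load(target_users, production_servers, test_servers):
    """Scale a production workload target down to a subset of the hardware.

    Only valid where the tier scales close to linearly and any shared
    components (e.g. the central database) are still representative of
    the full sized environment.
    """
    return target_users * test_servers / production_servers


# Illustrative values only: 8 web servers in production, 2 in the test
# environment, and a peak target of 4000 concurrent users.
print(factored_load(4000, 8, 2))   # drive 1000 users against the test rig
```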

Calculating Concurrency from Performance Test Results

So you are on a performance test engagement and your boss asks how many people are concurrently executing certain transactions, like buying a book or doing a search. What he wants is a measure of active concurrency – how many people are doing a certain transaction. This should not be confused with passive concurrency, such as how many people are logged in. Before we go any further, let’s clarify that in this example a transaction is a request to the test system and the response back; it does not include any think time. Now, before you start getting out the virtual terminal server and incrementing counters at the start of each transaction and decrementing them at the end, there is an easier way.

You can work this all out from your performance test results, without the need for code, using a mathematical formula (it’s very simple, so don’t panic) called Little’s Law. Little’s Law was proven by John Little in 1961 and has long been used to analyse queueing systems such as telephone exchanges.
Little’s Law allows us to relate the mean number of items in the system (in our case, concurrent users) to the mean time each item spends in the system (the response time), as follows:

Number of Items in the system = Arrival Rate x Response Time

There is one rule to remember before you use Little’s Law: you must make sure the system is balanced, that is, the arrival rate into the system is the same as the exit rate.

I will begin with a non-computer example: the “Black Horse Pub” has a mean arrival rate of 5 customers per hour, and each customer stays for half an hour on average. Using Little’s Law we can calculate the mean number of customers in the pub as Arrival Rate x Response Time = 5 x 0.5 = 2.5 customers.

To apply Little’s Law to a performance test we must first make sure that we are taking measurements while the system under test is balanced. Remember, in a balanced system the rate of work entering the system matches the rate of work leaving it. For a typical load testing tool this is after the ramp-up period, when the number of virtual users remains constant, response times have stabilised and the transactions per second graph is level. To capture this period of time in LoadRunner, for example, you would need to select the time period in the Summary report filter or under Tools -> Options.
So record the average response time for the transaction of interest and the number of times per second the transaction is executed.

Performance Response Times

So from the example above the response time is 43.613 seconds. The arrival rate is the number of transactions executed divided by the duration. The duration for this example was a 10 minute period, as can be confirmed by the LoadRunner summary below.

LoadRunner Performance Test Duration

This gives you an arrival rate of 2.005 transactions per second, calculated by dividing the count (1203) by the duration (600 seconds).

So the concurrent number of users waiting for a search to return is 2.005 x 43.613 = 87.44.
There you go – from your performance test results you can easily calculate the concurrency for a particular transaction.
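If you would rather let a script do the arithmetic, below is a minimal Python sketch of the same calculation; the function name is my own, and the numbers are simply the pub example and the LoadRunner figures quoted above.

```python
def concurrency(transaction_count, duration, mean_response_time):
    """Estimate active concurrency for a transaction using Little's Law.

    arrival rate = transaction count / duration
    concurrency  = arrival rate x mean response time
    Only valid over a balanced, steady-state measurement window
    (constant virtual users, stable response times, level TPS graph).
    """
    arrival_rate = transaction_count / duration
    return arrival_rate * mean_response_time


# The pub example: 5 customers per hour, staying half an hour on average.
print(concurrency(5, 1.0, 0.5))                    # 2.5 customers

# The search transaction: 1203 executions in a 600 second window,
# with a mean response time of 43.613 seconds.
print(round(concurrency(1203, 600, 43.613), 2))    # 87.44 concurrent users
```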

How do you know if your Load Test has a bottleneck?

The bottleneck in a system may not be obvious. (Life would be easier, but less fun, if they were always easy to find.) This is because there are two types: “hard” and “soft”. Hard bottlenecks are the ones where a resource such as a CPU is working flat out, which limits the ability of the system to process more transactions. A soft bottleneck is some internal limit, such as the number of threads or connections, that once exhausted limits the ability to process more transactions. So how do you know if you have a bottleneck? If you are looking at the results from a single load test you may not be able to tell; you will need to run multiple load tests at different numbers of virtual users and then see if the number of transactions per second increases with each increase in virtual users. The results can be seen in the two graphs below. The first shows how the throughput (transactions per second) increases and then levels off when saturated, and the second shows the response time. You will probably have heard the expression “below the knee of the curve”; the knee is the point just to the left of the bend in the response time graph.

Throughput Graph

Response Time Graph

The graphs above were actually generated using a spreadsheet implementation of a closed-loop model. This is like LoadRunner and other testing tools, where there are a fixed number of users that use the system, then wait, and return to the system. In reality the performance graphs may look different from the expected norm. An example from a LoadRunner test is shown below: the first graph shows how the number of Vusers was increased during the test, and the second graph shows the increase in response times. In this case the jump in response time is dramatic. However, in some cases the increase in response time will be less dramatic, as the system will start to error at high loads, which will distort the response time figures.

Example LoadRunner VUser Graph

Example LoadRunner Graph Showing Increasing Response Times
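If you would like to experiment with curves like the ones at the top of this section without building a spreadsheet, below is a minimal Python sketch of a closed-loop model using exact Mean Value Analysis for a single queueing station plus think time; the service demand, think time and user counts are illustrative values I have invented, not measurements from the test above.

```python
def closed_loop_model(service_demand, think_time, max_users):
    """Exact Mean Value Analysis for one queueing station plus think time.

    service_demand : seconds of service each request needs at the station
    think_time     : seconds a virtual user waits between requests
    Returns (users, throughput, response_time) for 1..max_users users.
    """
    results = []
    queue_length = 0.0
    for n in range(1, max_users + 1):
        response = service_demand * (1 + queue_length)  # response time with n users
        throughput = n / (response + think_time)        # transactions per second
        queue_length = throughput * response            # Little's Law again
        results.append((n, throughput, response))
    return results


# Illustrative values only: 0.2 s service demand, 5 s think time, 50 users.
for users, tps, resp in closed_loop_model(0.2, 5.0, 50):
    if users % 10 == 0:
        print(f"{users:3d} users  {tps:5.2f} tps  {resp:5.2f} s")
```

Plot the throughput and response time columns against the number of users and you get the same shape: throughput climbs and then levels off as the station saturates, while response times take off beyond the knee.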

Having discovered there is a bottleneck in the system, you then have to start looking for it.

Scalability

Scalability can be defined in many ways. However, in general it is the relationship between a change in input and the resulting change in output. Typically we may think of how throughput changes as we increase the number of CPUs. In a perfect world we would like to have linear scaling. I came across a good example of non-linear scaling in a presentation by Peter Hughes. Imagine you are having a dinner party and have 1 metre square tables, each of which seats 4 people. As it is a dinner party you want to have everybody facing each other as much as possible. So with one table you can seat 4 people.

One Table Seating 4 Guests

To increase the number of guests you need 4 tables but you can now only sit 8 people.

 

Four Tables Seating 8 Guests

To increase the number of guests again you need 9 tables but you can now only sit 12 people.

  

Nine Tables Seating 12 Guests

 

If you plot the relationship between guests and tables on a graph it looks like the one below.

Guests vs Tables Scalability Graph
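For anyone who wants the arithmetic behind the graph: pushing the square tables together into an n-by-n block uses n x n tables but only seats the 4 x n guests around the outside edge. A tiny Python sketch of that relationship (my own rendering of Peter Hughes’ example) is below.

```python
# Seating when 1 metre square tables are pushed into an n-by-n block:
# n*n tables give only 4*n perimeter seats, so the guests-per-table
# ratio gets worse as the party grows - classic non-linear scaling.
for n in range(1, 6):
    tables = n * n
    guests = 4 * n
    print(f"{tables:2d} tables -> {guests:2d} guests "
          f"({guests / tables:.2f} guests per table)")
```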

 

 

 

Extrapolation of Load Test Results – is it worth it?

It is a common problem that performance testing is often carried out on smaller scale test environments, but project managers want to know that the system will scale and response times will not be degraded. So, can the performance test results be extrapolated? My view on extrapolation is that it is a great technique when used properly, but it does not guarantee that the system you tested will work well on the full sized production environment. The two main reasons for failure are:

1) You have made a mistake in the creation of your model. These mistakes could simply be a poorly built model or a bad assumption. However, with plenty of time and expertise you can overcome some of these limitations by building a good model.

2) There are “soft” bottlenecks in the system that are only detected at high load. A common example is a piece of software that is limited to a certain number of threads which, once all used, limit scalability. Some of these “soft” limits might be known by developers beforehand and can be investigated with the model and the test environment, but it is the unknown unknowns that will be the problem on go-live day.

However, this does not mean that extrapolation is bad or should be avoided. While it cannot guarantee that the system will work in production, it can be used to show that the system will fail, and as we all know, avoiding a costly failure is often worth the effort. Using modelling techniques you can estimate the hardware configuration needed for the production system, which can be compared to what is expected to be deployed; if the deployed hardware is undersized, you have made a friend of the project manager.
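As an example of the sort of back-of-envelope sizing a model lets you do, here is a minimal Python sketch based on the utilisation law (utilisation = throughput x service demand); the throughput target, per-transaction CPU cost and utilisation ceiling are illustrative assumptions, not figures from a real engagement.

```python
import math

def cpus_needed(target_tps, cpu_seconds_per_txn, max_utilisation=0.7):
    """Estimate the CPU cores needed to sustain a target throughput.

    Utilisation law: utilisation = throughput x service demand.
    Keeping each core below max_utilisation leaves headroom so that
    response times do not blow up as the cores approach saturation.
    """
    total_cpu_demand = target_tps * cpu_seconds_per_txn  # CPU-seconds per second
    return math.ceil(total_cpu_demand / max_utilisation)


# Illustrative assumptions: 200 transactions per second at peak and
# 0.05 CPU-seconds per transaction measured on the test environment.
print(cpus_needed(200, 0.05))   # -> 15 cores
```

If the production kit on order has fewer cores than the estimate suggests, that is a conversation worth having with the project manager well before go-live.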

Welcome to my Performance Engineering Blog

Hi, this is a blog that I have started for “fun” about my work as a performance engineer. For some, a performance engineer is a performance tester who can help fix performance problems. A wider definition of a performance engineer is one who can help achieve the performance goals of a project throughout its lifecycle, through development and into production. I suppose I like to feel I am more the latter type of performance engineer. I particularly like performance modelling and prediction. However, we must remember “performance prediction is easy, getting it right is the hard part” (thanks to Dr Ed Upchurch for that quote).

You may also wonder why I have called this blog 1202 Performance. This is because of the 1202 error code that was generated by the computer overload during the Apollo 11 descent to the moon. Want to know more? Try watching this: https://youtu.be/z4cn93H6sM0