
Performance Testing

Extracting data from a LoadRunner results DB

A colleague wanted to record the resource usage from the weekly performance test automatically into an Excel spreadsheet. So this is how you do it. From Excel choose Data -> Import External Data -> New Database Query and select an MS Access database from the dialogue box. Remember this is the Access database created when you run and save the analysis; the default output.mdb produced at the end of a load test is for errors. Next you will need to open the Access DB that was created when you saved the results analysis. After you have done this you will be able to use MS Query to build the query.

You will need to join the Host, Event_map and Monitor_meter tables to construct the query. The query used is shown below; it gives the average for each resource over one hour of the test, starting after the first 10 minutes.

SELECT Host.`Host Name`, Event_map.`Event Name`, Avg(Monitor_meter.Aminimum) AS 'Avg'
FROM `C:\….\filename`.Event_map Event_map, `C:\….\filename`.Host Host, `C:\….\filename`.Monitor_meter Monitor_meter
WHERE Host.`Host ID` = Monitor_meter.`Host ID` AND Event_map.`Event ID` = Monitor_meter.`Event ID` AND ((Monitor_meter.`End Time`>=600 And Monitor_meter.`End Time`<=4200))
GROUP BY Host.`Host Name`, Event_map.`Event Name`

Once this is working in the query editor you can return to Excel and the data will be added to the spreadsheet.
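
If you would rather skip Excel and pull the figures straight into a script, the same query can be run programmatically. Below is a minimal sketch using Python and pyodbc, assuming the Microsoft Access ODBC driver is installed; the path C:\results\results.mdb is a hypothetical stand-in for wherever you saved the analysis database, and the table and column names are the same ones used in the query above.

# Sketch: pull the average resource figures straight from the saved
# LoadRunner analysis database (an MS Access .mdb file) using pyodbc.
# Assumptions: the Access ODBC driver is installed and the Dbq path
# below is a placeholder for your real saved analysis database.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"Dbq=C:\results\results.mdb;"
)

sql = """
SELECT Host.[Host Name], Event_map.[Event Name], Avg(Monitor_meter.Aminimum) AS AvgValue
FROM (Monitor_meter
      INNER JOIN Host ON Host.[Host ID] = Monitor_meter.[Host ID])
      INNER JOIN Event_map ON Event_map.[Event ID] = Monitor_meter.[Event ID]
WHERE Monitor_meter.[End Time] >= 600 AND Monitor_meter.[End Time] <= 4200
GROUP BY Host.[Host Name], Event_map.[Event Name]
"""

# Print one row per host/counter pair; you could just as easily write
# the rows out with the csv module and paste them into the spreadsheet.
for host, counter, avg_value in conn.execute(sql):
    print(f"{host}\t{counter}\t{avg_value:.2f}")

conn.close()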

Why do performance tests fail?

I have been thinking recently about some of the reasons performance testing fails to stop performance problems occurring in the production environment. Below is a list of some of the reasons why performance testing can fail to spot these problems. Hopefully the list will serve as a reminder of things to check next time you have to write a performance test plan. However, we must remember that, like all testing, performance testing is about reducing the risk of failure and can never prove 100% that there will be no production performance problems. Indeed, it may be more cost effective for some problems to occur in production than to be found during testing, though your customer may not fully appreciate this approach.

So here is my list:

1) Ignoring the client processing time. Performance test tools are designed to test the performance of the backend servers by emulating the network traffic coming from clients. They do not account for delays introduced on the client itself, such as page rendering and script execution.

2) Ignoring the WAN. Test labs often inject the load across a LAN, ignoring any network delays outside the data centre. This is a particular problem for applications that are chatty on the network.

3) Load test scripts that do not check for valid responses. Performance testing is not functional testing, but it is important that the test scripts you write check they are receiving the correct responses back. The classic problem has been tools that only check that a valid HTTP status code is returned; the trouble is that the “We are busy” page comes back with the same valid code as the normal page (see the sketch after this list).

4) Poor workload modelling. If we cannot estimate the user workload correctly the load test will never be right. You might run a great test for 10,000 users, but that is no real help if 20,000 users arrive on day one. Don’t underestimate the need for a good workload model.

5) Assuming perfect users. Alas, users are not perfect: they make mistakes, cancel orders before committing and forget to log off. This produces a very different workload, and therefore a different load on the environment, than if all the users behaved perfectly.

6) Bad test environments. A test environment should be as representative of the production environment as possible. I have seen failures particularly when the test environment has been undersized, but also where it has not been configured in a similar fashion to production.

7) Neglecting go-live+10 days performance issues. Performance testing typically focuses on the peak hour and a soak test. What is difficult to represent in a performance test is how the system will behave after several days of operation. Systems can grind to a halt as logs build up and nobody gets round to running the clean-up scripts, or transactions slow down as the database cannot cope with the growing number of rows in its tables.

8) Unexpected user behaviour. Very difficult to mitigate this one, as it is unexpected! However, in many cases a lack of end-user training has resulted in users doing the unexpected, like the car parts salesman who didn’t know how to use the system, so he ran a wildcard search to return the complete parts catalogue and then scrolled through it manually to find each part. That caused a killer performance issue.

9) Lack of statistical rigour. You don’t need to be a statistical guru to run a performance test, but you should at least run the test for long enough, and enough times, to be confident that the results are repeatable (a simple check is sketched after this list).

10) Poor test data. Like the test environment, the test data should be as representative as possible. Logging all the virtual users in with the same user ID may put a different load on the system than if each had their own user ID.
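
On point 3, the sort of check I mean is content-based rather than status-based. The sketch below is a minimal, hypothetical example in Python using the requests library; the URL and the page text are stand-ins for whatever your application actually returns, and in a real load test tool you would use its own equivalent (for example a text check on the response body) rather than a separate script.

# Sketch: validate the response body, not just the HTTP status code.
# The URL and marker strings below are hypothetical stand-ins.
import requests

def place_order(session: requests.Session) -> bool:
    """Return True only if the real order page came back."""
    resp = session.get("https://example.com/order", timeout=30)

    # A 200 alone is not enough: the "We are busy" page is also a 200.
    if resp.status_code != 200:
        return False
    if "We are busy" in resp.text:
        return False

    # Positive check: insist on text that only the real page contains.
    return "Order confirmation" in resp.text

if __name__ == "__main__":
    with requests.Session() as s:
        ok = place_order(s)
        print("PASS" if ok else "FAIL - invalid response returned")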
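
And on point 9, a simple way to add a little rigour without becoming a statistician is to compare the run-to-run variation of a key metric, such as the average response time of one transaction, across repeated runs of the same test. The figures and the 10% threshold below are invented purely for illustration.

# Sketch: a crude repeatability check across several runs of the same test.
# The numbers are made up; replace them with your measured averages.
from statistics import mean, stdev

# Average response time (seconds) of one key transaction, per test run.
runs = [2.10, 2.25, 2.05, 2.40, 2.15]

avg = mean(runs)
spread = stdev(runs)          # sample standard deviation across runs
cv = spread / avg             # coefficient of variation

print(f"mean={avg:.2f}s stdev={spread:.2f}s cv={cv:.1%}")

# Illustrative rule of thumb only: if runs differ by more than ~10%
# relative to the mean, the results are probably not repeatable enough
# to draw conclusions from a single run.
if cv > 0.10:
    print("Run-to-run variation looks high - run more/longer tests.")
else:
    print("Results look repeatable.")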

Performance Test Best Practice

An old colleague asked if there are any standards identifying best practice in performance testing. I could not think of any, but it started me thinking about what best practice actually is. Here are my thoughts on some areas of best practice in performance testing. They are NOT in any order of importance and the list is NOT exhaustive.

1.) Have a defined process and constantly refine it.
Before you start you should have a process defined, and you should review that process regularly to add improvements. The process needs to be flexible in order to accommodate different types of projects, from benchmarking a core application through to making sure an e-commerce site can handle the Christmas rush.

2.) Define the Goals up front.
This seems obvious, but you need to understand why you are testing and what the performance goals of the system under test are. (Note I use the word goals, not requirements.) Here the move to ITIL may help, since service design packages developed early on should include the performance requirements.

3.) Let Risk guide you.
The performance risk and consequences of failure should guide the type and amount of performance testing you do. Don’t just test what is easy to test.

4.) Don’t be afraid to say no.
If you are given responsibility for signing off on the performance of the system, you are the expert. If, subsequently, you are not given enough time or the correct tools then be prepared to say that you cannot test the system adequately. Remember that the caveats you place in your final report may never make it into the summary presented to the management board!

5.) Get the workload right.
If you don’t test the system with the correct workload it won’t matter if everything else is perfect – the results will be wrong. This means you need to understand user behaviours and their frequency. Don’t forget to include error scenarios as well.

6.) Develop Quality Scripts.
Make sure your scripts emulate user behaviour as much as possible and remember that users make mistakes, leave processes early and have comfort breaks! Also, make sure your scripts check that what is returned to the user is what is expected.

7.) Select an appropriate test environment.
Is a production-sized test environment best practice? I am not sure, but make sure your test environment is appropriately sized and up to the job involved. Make sure you can collect the necessary data about the performance of that environment during the load test.

8.) Run your performance tests for long enough and often enough.
Make sure your tests are repeatable and that the results they produce are statistically valid.

9.) Participation.
Get all the people who need to be involved in the performance test working together. Unless you are superhuman and multi-skilled you will need DBAs, administrators, developers, project managers, etc., to assist in the test. Remember, for the best results get these stakeholders involved in the process early on.

10.) Remember, people want results not data.
Don’t just present the canned report from the performance test tool; you need to analyse the results and present the key facts of the load test. And remember that different people will want different results from the load test – a manager will want to know if it passed whereas the DBA will want to know if the SGA is sized correctly, for example.