"Could we have seen that coming?"

Micro Focus Frequent Contributor

In a previous blog post, I talked about the importance of using SLAs in your performance testing, which is great for making sure that your AUT performs within business requirements. The drawback is that SLAs only tell you whether the AUT is still performing as expected; they won't tell you anything about the long-term health of your AUT. Now imagine the following scenario…

If that question ("Could we have seen that coming?") was directed at you, and your answer would have been no, then that's where performance trending would have come into play.

Why Use Performance Trending?
Trend reports may be under-utilized by some performance testers, but they are a very powerful tool for keeping track of, and providing visibility into, performance trends over time. Re-running the same test after every code or infrastructure change not only shows short-term improvements or regressions, it also lets you back-track a long-term negative trend to the specific date, and build, where it started.

Trend reports should not only track, for example, transaction response times over time; they should also cover most of the other measurements monitored during a test run, e.g. CPU and memory usage.
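Conceptually, a trend is just the same measurement read across runs in date order. The sketch below illustrates that idea with made-up run IDs, dates and metric names; it is not the PC/LRE data model:

```python
from dataclasses import dataclass, field

@dataclass
class TrendRun:
    """One finished test run and the measurements it contributed."""
    run_id: int
    date: str                                     # e.g. "2024-01-15"
    metrics: dict = field(default_factory=dict)   # measurement name -> value

# Hypothetical run history, oldest first (values invented for illustration).
runs = [
    TrendRun(101, "2024-01-01", {"avg_resp_s": 1.2, "cpu_pct": 40}),
    TrendRun(102, "2024-01-08", {"avg_resp_s": 1.3, "cpu_pct": 44}),
    TrendRun(103, "2024-01-15", {"avg_resp_s": 1.7, "cpu_pct": 55}),
]

def trend(metric: str) -> list[float]:
    """A trend is simply one metric's value per run, in date order."""
    return [r.metrics[metric] for r in runs]

print(trend("avg_resp_s"))  # [1.2, 1.3, 1.7]
print(trend("cpu_pct"))     # [40, 44, 55]
```

Reading several metrics side by side like this is what makes a trend report more useful than a single run's report: the response-time climb and the CPU climb become visible together.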

Short-term and Long-term Benefits
Short-term, trend reports provide feedback on how the latest code push or patch installation has affected the AUT. This is especially useful when a CI/CD tool like Jenkins runs automatic overnight builds: the results are ready when people arrive in the morning, giving instant feedback on the impact of the changes and helping to detect performance issues early on.

Long-term, the benefits go beyond seeing how changes affect the AUT over a longer period and detecting positive or negative trends over time. If the test runs against a live environment, trending can also highlight the impact of a growing user base, or of a steady growth in data volume. These two cases are quite common, often because of how the application was written or deployed in the first place: there may be performance bugs that went undiscovered earlier simply because the data volumes were too low to have a negative impact. With performance trending, such cases can be spotted before the user experience or business goals take a major hit. These are issues that SLA checks alone might never detect, and once SLA checks start to break, it can be hard to back-track to exactly where or when the system started to perform worse.

Using growth over time as an example: let's say that six months ago our environment needed 100% of its hardware to keep up with demand. Since then, our user base has grown by 50%, but we needed a 100% increase in hardware to support that.


[Image: a trend graph showing response times growing faster than linearly over time]

So, if we are using trend reports, it may be possible to extrapolate future hardware needs, and we can spot much earlier (compared to not using trending) that simply adding more hardware to handle the user growth won't be realistic in the long run.
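That extrapolation can be sketched with the article's numbers (50% user growth needing 100% more hardware). The power-law model and the figures below are illustrative assumptions, not measured data:

```python
import math

# Observed ratios from the example above (assumed, for illustration only):
# users grew by a factor of 1.5 while hardware had to grow by a factor of 2.
user_growth = 1.5
hw_growth = 2.0

# If hardware scales as users**k, estimate k from the two observed ratios.
k = math.log(hw_growth) / math.log(user_growth)   # ~1.71, i.e. super-linear

def hardware_needed(future_user_factor: float) -> float:
    """Hardware multiple (relative to today) for a further user-growth factor."""
    return future_user_factor ** k

# Another 50% of users would need roughly double today's hardware again:
print(round(hardware_needed(1.5), 2))  # 2.0
```

Because the cost doubles for every 50% of user growth in this model, the compounding makes it clear well in advance that "just add hardware" stops being a realistic strategy.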

Let's use the above image again for another example. Say we suspect that the above-linear graph, representing an increase in response times, is due to a bug introduced in a build some time ago. We can then use trend reports to follow the trend back in time and narrow down the window of builds that could have introduced the bug.
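That back-tracking step can be sketched as a simple walk over the trended values. The build names, response times and the 10% tolerance below are all made up for illustration:

```python
# Hypothetical run history, oldest first: (build, avg response time in seconds).
# In practice these values come straight off the trend report.
history = [
    ("build-40", 1.00), ("build-41", 1.02), ("build-42", 1.01),
    ("build-43", 1.45), ("build-44", 1.50), ("build-45", 1.55),
]

def first_regressed(history, baseline: float, tolerance: float = 0.10):
    """Return the first build whose response time exceeds the baseline
    by more than `tolerance` (as a fraction), i.e. where the trend broke."""
    limit = baseline * (1 + tolerance)
    for build, resp in history:
        if resp > limit:
            return build
    return None

print(first_regressed(history, baseline=1.00))  # build-43
```

With the window narrowed to one build, the code diff between it and its predecessor is a much smaller haystack to search for the bug.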

Compare to Baseline or to Previous Run?
When setting up trend reports, there's the option to compare the latest test run either to a baseline run or to the previous run. Which one to choose depends on why you are comparing in the first place, and as mentioned above, we use trend reports to spot both short- and long-term issues.

“Compare to previous run” is often used when looking for immediate issues in new code builds, infrastructure changes or new patches, often when you know exactly what to look for and what the expected result should be. We can also use it when tuning the AUT where each new build is meant to improve performance over the last one.

"Compare to baseline" is generally for identifying long-term issues, which might have been introduced by the changes mentioned above, but also for spotting trends caused by an increase in users, transactions or data volume. The baseline run is often the final build of the last major version, but we can also keep multiple baselines in multiple reports, e.g. one showing the trend against each quarter's first build.
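The difference between the two comparison modes can be sketched as follows. The metric names and values are hypothetical; the point is only that small run-to-run deltas can hide a large drift from the baseline:

```python
# Three hypothetical runs (illustrative field names and values).
baseline = {"avg_resp_s": 1.00, "cpu_pct": 40}
previous = {"avg_resp_s": 1.30, "cpu_pct": 50}
latest   = {"avg_resp_s": 1.35, "cpu_pct": 52}

def pct_change(new: dict, ref: dict) -> dict:
    """Percentage change of each measurement versus a reference run."""
    return {m: round(100 * (new[m] - ref[m]) / ref[m], 1) for m in new}

# vs previous run: small deltas, so the latest build itself looks harmless...
print(pct_change(latest, previous))   # {'avg_resp_s': 3.8, 'cpu_pct': 4.0}
# ...vs baseline: the accumulated long-term drift is obvious.
print(pct_change(latest, baseline))   # {'avg_resp_s': 35.0, 'cpu_pct': 30.0}
```

This is why the two modes complement each other: "compare to previous run" answers "did this build break anything?", while "compare to baseline" answers "where have we drifted to since the last major version?".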

Usage in Performance Center (LoadRunner Enterprise)
Access to the trend reports in PC/LRE is under Performance Trending in the top menu. From there, you can add test runs that are in a "Finished" state, i.e. runs that have been analysed, to a report.

The trend reports in PC/LRE can include the following measurements:

  • Transaction Response Time - trends average and 90th-percentile response time values
  • Transaction Pass/Fail Summary - trends the number of passed, failed and stopped transactions
  • Transaction per Second - trends average transactions per second
  • Transaction Percentiles - trends median, 75th-, 90th- and 95th-percentile response time values
  • CPU Utilization - trends average CPU utilization of monitored AUT machines
  • Disk Utilization - trends average disk utilization of monitored AUT machines
  • Available Memory - trends available MBytes of monitored AUT machines
  • Web Resources - trends average hits per second and throughput, overlaid with maximum running Vusers
  • Errors Statistics - trends the average number of errors per second

Complete information on how to create and modify trend reports is available in the PC/LRE online Help Center.

Usage with Jenkins and PC/LRE
If you are using Jenkins as a build server to automatically run performance tests with new builds, then it is possible to automatically add the test run to an existing trend report in PC/LRE.

It is also possible to view the resulting trend report from Jenkins by using the Plot Plugin.

Usage in StormRunner Load (LoadRunner Cloud)
Trend reports are (for now) a bit more limited in SRL/LRC: a graph showing the transaction response time percentile trend is available on the Trends pane of a Load Test. Full instructions are available in the online Help Center.

Comments or questions? Please leave them below! 

