Repositioning Performance Testing Analytics with LoadRunner Cloud’s New Analysis


We are aware of the complexity involved in every step of performance testing, from planning the test and executing it to, of course, analyzing the results. This can be an extremely tough task, particularly because most of the effort goes into examining the details that determine our conclusions and the next steps to take.

The whole process can be tedious, and it starts with the need for a deep understanding of the business logic that makes up the framework of our test. Reaching this understanding is key to defining the correct logic to be tested and transforming it into well-built scripts that faithfully represent real user behavior.

There are several obstacles to creating and executing well-built performance tests, and the LoadRunner Cloud (LRC) team constantly invests in finding the best ways to provide our customers with innovative, cutting-edge solutions. We have already come a long way, observing our users’ needs from different angles and delivering solutions adapted to those needs, as well as to a constantly changing environment.

LoadRunner Cloud’s cloud-based capabilities let our customers forget about purchasing and maintaining their own load generators and leave it all to us. This not only reduces responsibility and hardware-maintenance costs on the customer’s side, but also makes it possible to reach huge scale in just a couple of minutes, without any installation or complicated configuration. But we don’t stop there!

In our experience, root-cause analysis can be regarded as the tip of the iceberg: a lot needs to happen “under the water” (such as scripting and research) before any conclusions can be reached. Frequently, this involves collaboration across teams, with all efforts directed at producing the insights that eventually help decision-makers do their job.

Once you have completed the preparation of your test, overcome any organizational blockers, and have the test running, it is time to analyze the data. To help you analyze your results efficiently and extract the test data as accurately as possible, we have implemented our New Dashboard and New Report, designed to make the whole analysis process, both during and after the run, easier, more comfortable, and rich in out-of-the-box functionality.

With our New Analysis, we aim to give you an enhanced user experience and make your whole performance testing process a pleasant journey. It has been available as a beta version for a while, and you can now use it as the default analysis tool for your tests.

The New Analysis is a huge leap from the previous version in terms of data visualization and manipulation. Its new layout lets you work with a colossal amount of data while still managing it all in a simple way. Let’s go over the main capabilities and new features that can make the difference the next time you sit down to analyze your test results with LoadRunner Cloud:

Expandable Metrics Tree

Your tests may be full of scripts and contain thousands of transactions that are run by millions of virtual users in different regions around the world, while emulating extreme network conditions, and you need to find the needle in the haystack. No worries!

As part of the New Dashboard we have built a tree that displays all metrics as separate branches, so all the layers are always well organized. All the different metrics, such as Throughput, Hits per Second, Running Vusers, and transaction data, are in the tree, and with just one click you can add any metric to a new or existing pane. If needed, several metrics can be displayed together in a single pane, which lets you easily compare and correlate them.

[Animation: adding multiple metrics to a dashboard pane]

Multiple customizable panes

The size of your test doesn’t really matter, since you can always customize your layout. In the New Dashboard you can add as many panes as you need and split, resize, maximize, and close them as necessary. In addition, you can add panes directly to the New Report from the dashboard itself.

Each pane contains a legend with the numeric values of each metric, which can also be customized by including or excluding columns. This helps you better understand what is displayed in each graph, especially when working with several metrics at the same time. And that is not all: the Time-picker makes things even easier by letting you zoom in and out of specific time frames, both during and after the run.

[Animation: managing the dashboard layout]

Transaction Summary table

While the dashboard’s flexibility lets you manage several metrics as graphs at run time, being able to centralize all essential transaction data in a single pane is a huge advantage. With this in mind, we have recently added the new ‘Transactions Summary Table’ to the New Dashboard, available both during and after the run. It can be opened as a whole new tab to take advantage of the entire screen, or, if you need to compare it with a particular graph, as a new pane from within the tree.

The table contains, for each transaction, metrics such as response time, passed and failed counts, success rate, transactions per second, and standard deviation.

[Animation: the Transactions Summary table]
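As a rough illustration of how these figures relate to each other, the following Python sketch computes the same kinds of statistics from an exported raw transactions file. It is not LRC code, and the column names (timestamp, status, duration) and the file name are assumptions about the export format, so adjust them to match your actual report.

```python
# Illustrative only: computes the kind of statistics shown in the
# Transactions Summary table. The CSV column names are assumptions.
import csv
import statistics

def summarize(path):
    durations, passed, failed = [], 0, 0
    start, end = None, None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = float(row["timestamp"])          # epoch seconds (assumed column)
            start = ts if start is None else min(start, ts)
            end = ts if end is None else max(end, ts)
            if row["status"] == "Passed":         # assumed column and value
                passed += 1
                durations.append(float(row["duration"]))
            else:
                failed += 1
    total = passed + failed
    elapsed = max((end or 0) - (start or 0), 1e-9)
    return {
        "passed": passed,
        "failed": failed,
        "success_rate_pct": 100.0 * passed / total if total else 0.0,
        "avg_response_time_s": statistics.mean(durations) if durations else 0.0,
        "std_dev_s": statistics.pstdev(durations) if durations else 0.0,
        "transactions_per_second": total / elapsed,
    }

print(summarize("transactions_raw_data.csv"))    # hypothetical file name
```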

Comparing between runs

Frequently, many runs are triggered for a particular test while fixing the script or making changes to the application under test. Run comparison lets you take a deep look at changes over time and understand trends across past runs, or against one particular benchmark run.

LoadRunner Cloud lets you compare your current run to a previous run or to the test benchmark. To do this, click the “Compare” button in the toolbar to open a dialog box. Select the run you want to compare your current run against (any past run or an already marked benchmark) and then click “Compare”.

The selected run is immediately added to the tree, and its results appear in each relevant graph alongside those of the main run.

[Animation: comparing runs]
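Conceptually, a run comparison boils down to lining up the same metric from two runs and looking at the delta. Here is a tiny, purely illustrative Python sketch (the transaction names, timings, and the 10% threshold are invented) that flags response-time regressions of a current run against a benchmark run:

```python
# Illustrative only: compares average response times of a current run
# against a benchmark run. All names and numbers below are made up.
benchmark = {"Login": 1.20, "Search": 0.85, "Checkout": 2.40}   # seconds
current   = {"Login": 1.35, "Search": 0.80, "Checkout": 3.10}   # seconds

for name in sorted(benchmark):
    base, now = benchmark[name], current.get(name)
    if now is None:
        print(f"{name:10s} missing from current run")
        continue
    change = 100.0 * (now - base) / base
    flag = "regression" if change > 10 else "ok"   # arbitrary 10% threshold
    print(f"{name:10s} {base:.2f}s -> {now:.2f}s ({change:+.1f}%) {flag}")
```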

Tab management for different layout configurations

Sometimes you may need different dashboard layouts for different stakeholders who access the test, or you may simply want a convenient arrangement of your graphs based on your own needs. By adding new tabs to the dashboard, you can better manage data and organize metrics for easier manipulation.

To add a new tab, click the plus sign (“+”) at the bottom of the screen. A new tab with a default configuration is added, which you can then configure from scratch based on your requirements. From the new tab you can still see your event notifications and the run summary, while the Time-picker applies to each tab individually.

[Animation: managing dashboard tabs]

Network breakdown metrics

Many variables can affect the responsiveness of an application, such as the database and the web server. In addition, it is extremely important to consider network quality and stability, as well as the web browser itself, since these can unquestionably affect the transaction measurements during a test. Consequently, LRC provides an out-of-the-box feature that helps you analyze the transaction network breakdown. This feature needs to be activated in the test configuration, before the run starts. Once activated, you will see a new branch in your tree: “Breakdown”. Clicking it adds a new graph to the dashboard with several metrics, such as DNS, SSL, Client TRT, Wait (the average time all HTTP(S) requests spent waiting for a response from the server), and more.

[Animation: network breakdown metrics]
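To make it clearer what these breakdown components actually measure, here is a small, standalone Python sketch using the third-party pycurl package. It is purely illustrative (it is not how LRC collects its data, and example.com is just a placeholder URL); it splits a single HTTPS request into DNS, TCP connect, SSL handshake, server wait, and download phases using libcurl's cumulative timing counters.

```python
# Illustrative only: breaking one HTTPS request into network phases.
# Requires: pip install pycurl
from io import BytesIO
import pycurl

buffer = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")        # placeholder URL
c.setopt(pycurl.WRITEFUNCTION, buffer.write)
c.perform()

# libcurl reports cumulative timings (in seconds) from the start of the request.
dns   = c.getinfo(pycurl.NAMELOOKUP_TIME)
conn  = c.getinfo(pycurl.CONNECT_TIME)
tls   = c.getinfo(pycurl.APPCONNECT_TIME)           # end of SSL/TLS handshake
pre   = c.getinfo(pycurl.PRETRANSFER_TIME)
first = c.getinfo(pycurl.STARTTRANSFER_TIME)        # first byte received
total = c.getinfo(pycurl.TOTAL_TIME)
c.close()

print(f"DNS lookup    : {dns:.3f}s")
print(f"TCP connect   : {conn - dns:.3f}s")
print(f"SSL handshake : {tls - conn:.3f}s")
print(f"Server wait   : {first - pre:.3f}s")        # waiting for the first byte
print(f"Download      : {total - first:.3f}s")
print(f"Total         : {total:.3f}s")
```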

Export rich reports

To let you analyze your test data in an external tool and make post-run data manipulation more convenient, we have included several reports that are generated with each run and can easily be obtained. Even for runs triggered from a CI tool such as Jenkins or Azure DevOps, reports are available immediately after each run ends.

In the run summary menu, click Export / Download to open a dropdown with several reports that can be exported or downloaded, such as the LG scripts and IPs report, the Logs report, and a run summary report, or, for even more detail, raw data reports such as the Transactions and Script Errors reports. For the raw data reports, be sure to activate the relevant option in your test configuration before the run.

[Animation: downloading reports]
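If you want to pull a report programmatically, for example at the end of a CI pipeline stage, a simple REST call is usually enough. The sketch below is only an assumption-laden illustration: the host, endpoint path, query parameter, token header, project and run IDs are all hypothetical placeholders, not the documented LoadRunner Cloud API, so consult the official API reference for the real names before using it.

```python
# Illustrative sketch only: every endpoint, parameter, and header name below
# is a hypothetical placeholder, not the documented LoadRunner Cloud API.
import requests

BASE_URL = "https://loadrunner-cloud.example.com"   # placeholder host
TOKEN = "YOUR_API_TOKEN"                             # placeholder credential
PROJECT_ID, RUN_ID = 1, 123                          # placeholder IDs

resp = requests.get(
    f"{BASE_URL}/v1/projects/{PROJECT_ID}/runs/{RUN_ID}/report",  # hypothetical path
    headers={"Authorization": f"Bearer {TOKEN}"},                 # hypothetical auth scheme
    params={"type": "transactions_raw"},                          # hypothetical parameter
    timeout=60,
)
resp.raise_for_status()

out_file = f"run_{RUN_ID}_transactions.csv"
with open(out_file, "wb") as f:
    f.write(resp.content)
print(f"Saved {out_file} ({len(resp.content)} bytes)")
```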

These are just some of the capabilities provided in our New Analysis. The rest are there for you to discover and leverage.

Our focus on transforming performance engineering from complex to simple makes us appreciate the importance of even the smallest details in analytics. This helps us continue building the most flexible, robust, and user-friendly analysis tool for our customers.

- Lior Urbani, LRC Product Owner
