
Network Emulation: Truly evaluating the impact of network conditions

by Micro Focus Employee in Application Delivery Management



Guest post by Ryan Hardoon, Software Engineer, StormRunner RnD, ALM RnD

Load testing is used to examine the behavior of a system under both normal and extreme load conditions. Site developers need to know how well their application handles load, so they run the system through this rigorous testing. However, there is another issue that needs to be addressed: how the network affects the application. Developers also need to understand how the user experience differs when the site is accessed from a mobile device, over Wi-Fi, or over a typical WAN network.

This is important to understand, because network conditions such as latency, limited bandwidth, and packet loss can have serious implications for the application. As a result, your application may receive poor reviews and be abandoned by users.

To address this, the StormRunner Load (SRL) cloud load testing tool supports adding emulations to each location, with the option of setting the distribution percentage for each type of emulation. To run the emulations, SRL uses Micro Focus Network Virtualization technology, which hooks into the network hardware interface of the machines running the scripts and transforms the traffic in a way that emulates the required network.

Currently SRL supports five different types of predefined emulations:

  • WAN-Good: A high-bandwidth WAN network with very low latency
  • WAN-Typical: A typical WAN network
  • WiFi: A Wi-Fi network with some packet loss and relatively high latency
  • Mobile-Typical: A typical mobile network
  • Mobile-Busy: A busy mobile network with high latency and low bandwidth

In the future, SRL will support customized emulations where performance engineers can define a custom emulation with specific latency, packet loss, and bandwidth parameters.
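To make the parameters concrete, here is a minimal sketch in Python of what such emulation profiles look like as latency/loss/bandwidth parameter sets. The specific values and the `netem_command` helper are illustrative assumptions, not SRL's or Network Virtualization's actual internals; the helper builds a Linux `tc netem` command of the kind commonly used to impose latency and packet loss on an interface.

```python
# Illustrative emulation profiles. The numbers are assumptions chosen to
# match the qualitative descriptions above, not SRL's actual parameters.
EMULATION_PROFILES = {
    "WAN-Good":       {"latency_ms": 10,  "loss_pct": 0.0, "bandwidth_kbps": 100_000},
    "WAN-Typical":    {"latency_ms": 80,  "loss_pct": 0.5, "bandwidth_kbps": 10_000},
    "WiFi":           {"latency_ms": 120, "loss_pct": 1.0, "bandwidth_kbps": 20_000},
    "Mobile-Typical": {"latency_ms": 150, "loss_pct": 1.0, "bandwidth_kbps": 5_000},
    "Mobile-Busy":    {"latency_ms": 400, "loss_pct": 3.0, "bandwidth_kbps": 1_000},
}

def netem_command(profile_name: str, device: str = "eth0") -> str:
    """Build a Linux `tc netem` command that would impose the profile's
    latency and packet loss on a network interface. (Bandwidth shaping
    would need an additional qdisc such as tbf and is omitted here.)"""
    p = EMULATION_PROFILES[profile_name]
    return (f"tc qdisc add dev {device} root netem "
            f"delay {p['latency_ms']}ms loss {p['loss_pct']}%")

print(netem_command("Mobile-Busy"))
```

A hypothetical custom emulation, as described above, would simply be another entry in this table with user-supplied latency, loss, and bandwidth values.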


Network emulation is configured per location in the Distribution tab.
For example, if you would like your load test to simulate users running from Virginia, where 50 percent of the users are on a typical mobile network and another 20 percent are on a typical WAN network, you can configure it as shown below.

Load Tests configuration.png

Note: The remaining 30 percent will not be emulated; that is, the load will run directly from the machines in Virginia without any additional network shaping.

Of course, you can set a different emulation distribution for each location, as shown below. The percentage of each emulation is relative to the total distribution of its region. For example, in the case below, if you are running a total of 1000 Vusers, 400 will run in Frankfurt, out of which 40 percent (160 Vusers) will run through a good WAN emulation.

Load tests edit location.png
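Because emulation percentages are relative to each location's share of the total load, the two percentages multiply. A small arithmetic sketch (the function name is mine, not part of SRL):

```python
def vusers_per_emulation(total_vusers: int, location_pct: float, emulation_pct: float) -> int:
    """Number of Vusers that run through a given emulation.

    The emulation percentage applies to the location's slice of the
    total load, so the two percentages compound.
    """
    return round(total_vusers * (location_pct / 100) * (emulation_pct / 100))

# Example from the text: 1000 Vusers total, 40% of them in Frankfurt,
# 40% of the Frankfurt load through the WAN-Good emulation.
print(vusers_per_emulation(1000, 40, 40))  # 160
```

The same formula gives the un-emulated remainder: with 50 percent mobile and 20 percent WAN in a location, the remaining 30 percent of that location's Vusers run without emulation.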

Web, mobile, and cloud network conditions are dynamic and vary by provider, location, and time of day. Your test environment must therefore accurately recreate multiple network scenarios in order to analyze application performance and the effect of network conditions on different user populations.


In the dashboard, all results can be split or filtered by the different emulations. In this way, you can visualize the different behavior of each type of emulation.
Below is an example of the dashboard for a test run whose distribution is set according to the example above (Frankfurt and Virginia).
With the split functionality of the dashboard graphs, each line represents a specific emulation/location combination.

Analysis Emulation.png


load tests hits per second.png

If you would like to look only at the data from mobile users running in Virginia, you would use the following filter:

Load test script filter.png

 The result would only show the mobile results:


hits per second.png

Final Note: The value of Network Virtualization

One of the biggest challenges of performance testing is simulating real-world scenarios as closely as possible, rather than just generating load. Network Virtualization is a step forward toward this goal.

In surveys we have conducted, we have seen a major reduction in the number of production incidents requiring remediation per month.
Before using Network Virtualization, over 50 percent of the respondents indicated the need to remediate at least four production incidents per month. 20 percent of respondents indicated the need to remediate between seven and 15 production incidents per month.
After using Network Virtualization, the average number of incidents occurring in production and requiring remediation is 3.7. Two-thirds (66 percent) of respondents indicated the need to remediate three or fewer incidents.

The greatest differential reported was a reduction of five network performance incidents per month.

 NV incidents.PNG



Performance Testing