Assume you have defined a workload with a certain number of virtual users and have a pool of agents to choose from. When it comes to selecting agents, it is often hard to tell how many of them you need to execute the specified number of virtual users. This is where Silk Performer's Evaluate Agent Capacity feature becomes useful: based on an evaluation run, it tells you how many virtual users a particular agent can handle.
In this blog article, we will discuss how the capacity is actually evaluated and which criteria the resulting number is based on. Then we'll walk you through a real-world example and find out how many agents we need to execute a load test with a certain number of virtual users.
How the capacity is evaluated
Before we walk through a real-world example, let's first cover the theory and learn how Silk Performer actually performs the evaluation. Take a look at the graphic above. It shows three charts: one for the virtual users (blue), one for the CPU usage (purple), and one for the responsiveness (yellow). When Silk Performer starts an evaluation run, the number of virtual users gradually increases, so the blue graph rises. At the same time, the two key metrics are measured: over time, the purple graph rises and the yellow graph declines. Note the dotted lines in the charts. They illustrate the stopping criteria for the evaluation run. The run stops ...
- if the number of virtual users reaches the maximum capacity (in the blue chart)
- if the CPU usage exceeds 95% (in the purple chart)
- if the responsiveness falls below 95% (in the yellow chart)
Once one of these criteria is reached, the evaluation run stops and the current number of virtual users, shown by the blue graph, is the estimation for the tested agent.
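The stopping logic above can be sketched as a small function. Note that this is only a simplified model of the behavior described in this article: the `measure` callback, the step size, and the exact thresholds are illustrative assumptions, not Silk Performer's actual implementation.

```python
def evaluate_agent_capacity(measure, max_users, step=1,
                            cpu_limit=95.0, responsiveness_floor=95.0):
    """Simplified model of the evaluation run's stopping criteria.

    `measure(users)` is a hypothetical callback that returns the pair
    (cpu_percent, responsiveness_percent) observed at the current load.
    """
    users = 0
    while users < max_users:
        users += step                        # gradually increase virtual users
        cpu, responsiveness = measure(users)
        if cpu > cpu_limit:                  # CPU usage exceeds 95%
            return users
        if responsiveness < responsiveness_floor:  # responsiveness falls below 95%
            return users
    return max_users                         # maximum capacity reached
```

Whichever of the three criteria triggers first determines the result: the number of virtual users reached at that moment is the agent's estimated capacity.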
Adjusting the workload
Now, let's jump into Silk Performer and actually execute such an evaluation run. We've prepared a browser-driven load test and recorded a short script using one of our demo websites. Browser-driven load tests are usually quite demanding, so we need to make sure that the agent we're planning to use can handle the specified number of virtual users. We click Adjust Workload on the Workflow bar and configure an Increasing load test (1) with a maximum number of 35 virtual users (2).
Assigning an agent
Now we're going to assign the agent we've prepared: We click Next to proceed to the Assign agents dialog, click Assign agents manually, and then click Configure agent pool. Take a look at the following screenshot: This (1) is the agent we've prepared. We currently do not have more agents available, so we need to make sure that this single agent can handle the 35 virtual users we specified just a minute ago.
To get a rough idea about the agent's capacity, we click Properties (2) and Capabilities (3). This dialog shows a list of capabilities for the various technologies. For browser-driven load tests, the agent can handle 40 virtual users (4). So, based on this list, the agent should be able to handle the 35 virtual users. But we have to be aware that this list shows maximum values, which can only be reached under ideal conditions. We certainly do not have an ideal situation, so we need a more realistic value for this agent.
Evaluating the agent capacity
We go back to the Assign agents dialog and click Evaluate Agent Capacity. Silk Performer opens a dialog that shows six charts in total. On the left side, we can see the three charts (1) we've already discussed in the theory section above: the virtual users, the CPU usage, and the responsiveness. Silk Performer actually shows three more metrics for the evaluation run (2): errors, memory usage, and the transaction busy time. While these are not crucial for the evaluation run, they can be of great value. If, for example, a number of errors occur during the evaluation run, you might want to stop it (3) and fix the problems in your script first.
Before we start the evaluation, we make sure the correct user type (4) and agent (5) are selected. Note that the dialog also displays the maximum capacity (6) we saw in the Properties dialog. Now we can click Start (7).
Discussing the results
Once the evaluation run is started, you can watch how the graphs evolve over time. After a little while the run in our example stops. Here is a quick analysis of the values and information the dialog presents: The run stopped because the responsiveness fell below 95% (1). The agent could handle 10 virtual users up to this point (2). This is considerably lower than the 35 virtual users we are planning to execute, so one agent will definitely not be sufficient. No errors occurred during the run (3), and both the CPU usage (4) and the memory usage (5) are reasonably low, which indicates that the run is pretty valid.
Note, however, that the result (10 virtual users) is only a first reference value. You may want to execute the evaluation several times, because the estimate can vary for a variety of reasons. If you get similar results after a couple of evaluation runs, you will have a good feeling for how valid the estimate is.
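If you do repeat the run, you need some rule for turning several results into one planning number. One conservative approach is sketched below; the function name and the 20% spread threshold are our own illustrative choices, not anything Silk Performer prescribes.

```python
def summarize_runs(estimates, max_spread=0.2):
    """Combine several evaluation-run results into one planning value.

    Returns (planning_value, stable): the minimum result as a cautious
    planning value, and a flag that is True when the spread between the
    runs stays within `max_spread` (20% by default) of the largest result.
    Threshold and names are illustrative assumptions.
    """
    if not estimates:
        raise ValueError("need at least one evaluation result")
    lo, hi = min(estimates), max(estimates)
    stable = (hi - lo) <= max_spread * hi
    return lo, stable
```

Planning with the minimum rather than the average keeps a safety margin; if `stable` comes back False, the runs disagree too much and you should investigate before trusting any single number.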
Adding cloud agents
Now that we know that one agent is not sufficient for the planned load test, we have three options:
- We can reduce the number of virtual users for the load test.
- We can use additional on-premise agents.
- We can use cloud agents using Silk Performer CloudBurst.
Since we have just one on-premise agent available, we're going for the cloud option. On this occasion, we will execute an evaluation run for a cloud agent as well. This is necessary anyway, since we don't know the capacity of a cloud agent for a browser-driven load test. So, on the Assign agents dialog, we click Evaluate Agent Capacity again.
Evaluating the capacity of a cloud agent
We select the cloud agent (1) from the list and click Start (2). Note that the cloud agent's base capacity is 20 virtual users (3). After a couple of minutes, the run stops, this time because the maximum number of users (4) was reached. Now we know that we need to start up two cloud agents. Combined with the one on-premise agent, which can handle 10 virtual users, the two cloud agents, which can handle 20 virtual users each, can execute a total of 50 virtual users. With this capacity we should be on the safe side for our load test.
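The agent math from the paragraph above generalizes to a one-line calculation. The helper below is a hypothetical sketch (the function and parameter names are ours); the capacity numbers are the ones measured in this article's evaluation runs.

```python
import math

def cloud_agents_needed(target_vus, onprem_capacity, cloud_capacity):
    """How many cloud agents are needed on top of the on-premise capacity.

    All capacity values should come from evaluation runs, not from the
    ideal-condition maximums in the Capabilities list.
    """
    remaining = max(0, target_vus - onprem_capacity)
    return math.ceil(remaining / cloud_capacity)

# The example from this article: 35 virtual users, one on-premise agent
# that handles 10, and cloud agents that handle 20 each.
agents = cloud_agents_needed(35, 10, 20)   # → 2 cloud agents
total = 10 + agents * 20                   # → 50 virtual users of capacity
```

Rounding up with `math.ceil` is what puts us "on the safe side": the combined capacity of 50 comfortably covers the planned 35 virtual users.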
Executing an evaluation run is a straightforward task. Just remember to execute the run a couple of times to get a good feeling for how valid the results are. Also keep an eye on the metrics the evaluation dialog provides: if the errors graph or the transaction busy time graph rises considerably, you definitely have to investigate further. Interpreting results always requires a certain amount of experience and knowledge.
Watch the Evaluating Agent Capacities video
Also watch the following video to better understand the workflow around evaluating agent capacities:
To learn more about all new features and enhancements Silk Performer 17.0 provides, take a look at the following blog posts:
- Released: Silk Performer 17.0
- HTTP Live Streaming
- Performance Explorer Enhancements
- Generating Clean Scripts
- Improved CloudBurst Workflow
The Silk Performer Help is another comprehensive source of information:
If this article was useful for you, leave us a comment or like it. We appreciate your feedback.