Server Engine connectionMax

[[Other Thread Pool Dispatcher Tuning|Back]]

The previous articles focused on the tuning of thread management properties to optimize performance and improve stability. However, another important factor that you must always consider during performance tuning is the connection management configuration. In the following articles, you will learn about VisiBroker's connection management properties.

First, let us see how the "vbroker.se.<se_name>.scm.<scm_name>.manager.connectionMax" property can be tuned to improve the stability of the application.

The "echo_service_cpp" [[Explanation of Example|example]] is used in this demonstration.

Scenario

  • Start 10 Clients which create 2 threads each to call 1 Server concurrently.
  • Each client thread calls the Server once.
  • Each invocation blocks for 10 seconds at the Server side.
  • Monitor the Server's resource consumption and invocation performance with the default configuration.
  • Observe the effect of setting Server Engine "connectionMax=5".

How many sockets are established concurrently at the Server side to service the concurrent invocations from the 10 Clients?

Preparation

Configure the Client by modifying c.sh:

  • Set the following properties:
    • server_sleep_time 10
    • vbroker.agent.enableLocator=false
  • Disable all the other properties.

Configure the Server by modifying s.sh:

  • Set the following properties:
    • vbroker.se.iiop_tp.scm.iiop_tp.dispatcher.threadMax=100
    • vbroker.agent.enableLocator=false
    • vbroker.se.default.local.manager.enabled=false
  • Disable all the other properties.

Execution

Make sure you have set up the necessary VisiBroker environment and built the example before running this demonstration.

  • Start the Server Monitor script:
    • mon_s.sh
  • Start the Server:
    • s.sh 1
  • Start the 10 Clients:
    • c.sh 10 1 2
  • Monitor the Server's resource consumption (printed by Server Monitor). E.g.:

MEMORY(KB) THREADS SOCKETS
12544      34      10

  • Note the total time taken by the last few Clients to complete all invocations (printed by Clients). E.g.:

. . . . . .

Total Time taken for 2 threads in PID 7193 to complete all invocations is 10.1308 seconds

Total Time taken for 2 threads in PID 7195 to complete all invocations is 10.0235 seconds

Total Time taken for 2 threads in PID 7196 to complete all invocations is 10.0265 seconds

  • Stop the 10 Clients and Server.
  • Set the following property at the Server side by modifying s.sh:

vbroker.se.iiop_tp.scm.iiop_tp.manager.connectionMax=5

  • Re-start the Server:
    • s.sh 1
  • Re-start the 10 Clients:
    • c.sh 10 1 2
  • Monitor the Server's resource consumption (printed by Server Monitor). E.g.:

MEMORY(KB) THREADS SOCKETS
12296      15      5

  • Note the total time taken by the last few Clients to complete all invocations (printed by Clients). E.g.:

. . . . . .

Total Time taken for 2 threads in PID 7455 to complete all invocations is 19.0479 seconds

Total Time taken for 2 threads in PID 7456 to complete all invocations is 19.0853 seconds

Total Time taken for 2 threads in PID 7458 to complete all invocations is 19.1963 seconds

Observations

Compare the resource consumption and invocation performance measurements taken before and after setting the "vbroker.se.iiop_tp.scm.iiop_tp.manager.connectionMax=5" property at the Server side.

              Server Memory (KB) Server Threads Server Sockets Avg Total Time Taken (sec)
Before Tuning 12544              34             10             10
After Tuning  12296              15             5              19

Key observations after tuning:

  • The number of sockets created by the Server is lower.
  • The Server's memory usage is lower.
  • The total time taken by the last few Clients to complete all invocations is longer.

Contrast the Server socket resource usage with an earlier [[Thread Pool Dispatcher unlimitedConcurrency|scenario]]. What is the main factor contributing to the difference?

Explanation

By default, the Server does not limit the number of incoming connections from Clients (i.e. vbroker.se.<se_name>.scm.<scm_name>.manager.connectionMax=0). That is why all 10 Clients can establish connections to the Server and have their requests serviced in parallel. After "connectionMax" is set to 5, however, only 5 Clients can connect to the Server in parallel. The remaining 5 Clients have to wait for the first 5 Clients to complete their invocations before they can connect, resulting in a delay in servicing their requests.

This also means that the inactive connections from the first 5 Clients must be closed to maintain the maximum connection limit of 5. What side effect does this have on the first 5 Clients when they make subsequent invocations to the Server? They must re-establish connections to the Server. The TCP connection establishment handshake may cause a noticeable delay, depending on network bandwidth and traffic, and this re-connection overhead can increase the latency of the Clients' invocations.
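The MEMORY/THREADS/SOCKETS figures printed by the Server Monitor can be approximated on Linux by inspecting /proc. The helper below is a hypothetical sketch, not the actual mon_s.sh (whose contents are not shown here), and it assumes a Linux /proc filesystem:

```shell
#!/bin/sh
# count_sockets -- hypothetical helper (NOT the actual mon_s.sh).
# Counts a process's open file descriptors and sockets on Linux by
# reading /proc/<pid>/fd, where socket descriptors appear as symlinks
# to "socket:[inode]".
count_sockets() {
    pid=$1
    fds=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
    socks=$(ls -l "/proc/$pid/fd" 2>/dev/null | grep -c 'socket:')
    echo "PID $pid: $fds open fds, $socks sockets"
}

# Example: inspect the current shell's own descriptors.
count_sockets $$
```

Running this against the Server's PID while the 10 Clients are connected lets you watch the socket count hit the "connectionMax" ceiling.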

In this simple demonstration, you may not fully appreciate the benefits of limiting the number of connections at the Server side, since you only see its downside rather than its advantage. However, this feature is very important in situations where a large number of Clients attempt to connect to the Server simultaneously. Without any connection limit, the Server may accept so many Client connections that it exceeds the resource limit of the process. This can destabilize the whole application or even crash the Server process.

A process's socket resources (or file descriptors) are shared with other non-VisiBroker modules, such as application-specific database queries, non-CORBA inter-process communication and flat file access. If the number of file descriptors used by the process reaches its limit, VisiBroker will not be able to accept any new connections. Furthermore, application-specific modules that need file descriptor resources may encounter errors too. To work around this issue, you can try increasing the process's file descriptor soft limit using the "ulimit" system utility. If this is not sufficient, most operating systems allow you to tune the file descriptor hard limit too.
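Since a process inherits its file descriptor limits from the shell that launches it, you can inspect and raise the soft limit before starting the Server. A minimal sketch (raising the soft limit all the way to the hard limit is just an example choice; raising the hard limit itself typically requires root privileges or an OS-specific setting):

```shell
#!/bin/sh
# Inspect the file descriptor limits of the current shell; any Server
# started from this shell (e.g. via s.sh) inherits them.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft limit: $soft, hard limit: $hard"

# Raise the soft limit as far as the hard limit allows. Going beyond
# the hard limit requires root or an OS-level change.
if [ "$hard" != "unlimited" ]; then
    ulimit -Sn "$hard"
fi
echo "new soft limit: $(ulimit -Sn)"
```

Run this in the same shell session before invoking s.sh so the Server process picks up the raised limit.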

To find the optimal "connectionMax" value for your application, you must estimate the potential Client load and know your system resource limitations. Setting it too small causes excessive re-connections and reduces invocation performance, but requires fewer resources. Setting it too large may improve invocation performance, but requires more resources. You should perform benchmark and stability tests to understand the overall application behavior under peak load conditions. Given finite system resources, some trade-off between resource usage (i.e. sockets and memory) and performance (i.e. throughput and stability) is inevitable.
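The timings measured above fit a simple model. Each Client process multiplexes its 2 threads over one connection (hence 10 sockets for 10 Clients), so with connectionMax=5 the Clients are admitted in ceiling(10/5) = 2 waves of roughly server_sleep_time seconds each. The sketch below encodes that estimate; it deliberately ignores dispatch and re-connection overhead, which is why the measured ~19 s falls slightly under the predicted 20 s:

```shell
#!/bin/sh
# Rough estimate of total invocation time when Clients are admitted in
# waves of at most connection_max concurrent connections, each wave
# blocking at the Server for sleep_time seconds.
estimate() {
    clients=$1; connection_max=$2; sleep_time=$3
    # connectionMax=0 means "unlimited": everyone connects in one wave.
    if [ "$connection_max" -eq 0 ] || [ "$connection_max" -ge "$clients" ]; then
        waves=1
    else
        # Ceiling division: number of waves of clients needed.
        waves=$(( (clients + connection_max - 1) / connection_max ))
    fi
    echo $(( waves * sleep_time ))
}

echo "connectionMax=0 (default): ~$(estimate 10 0 10)s"   # matches the ~10s run
echo "connectionMax=5:           ~$(estimate 10 5 10)s"   # matches the ~19s run
```

Plugging in your own Client load and service time gives a first-cut lower bound before you run full benchmark tests.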

The "connectionMax" property is also applicable to VBJ.

In the next article, you will learn how to tune the Server Engine "connectionMaxIdle" property to conserve system socket resources.

[[Other Thread Pool Dispatcher Tuning|Back]]  |  [[Server Engine connectionMaxIdle|Next]]

Last update: 2020-03-13 21:06