The last few articles provided some insights into some of the common Server-side and Client-side ORB connection management properties. In this article, we will briefly discuss a few other properties.
The "vbroker.ce.iiop.ccm.connectionMaxIdle" property specifies the duration (in seconds) for which an unused client connection can remain idle before it becomes eligible for closure. Its default value of 0 means that an unused connection can remain idle forever. This is another property that can be used to conserve file descriptor resources. It is applicable to both VBC and VBJ, but its behavior differs slightly between the two.
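For example, to make connections eligible for closure after five minutes of inactivity, the property can be supplied on the command line when launching the application (the application name below is a placeholder; the `-D` command-line form is the usual way to set VisiBroker properties):

```
# VBJ: close client connections that have been idle for more than 300 seconds
vbj -Dvbroker.ce.iiop.ccm.connectionMaxIdle=300 MyClientApp

# VBC: the equivalent command-line form for a C++ client
MyClientApp -Dvbroker.ce.iiop.ccm.connectionMaxIdle=300
```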
For VBJ, the routine that closes idle connections is triggered periodically by an internal garbage collector thread. This period is controlled by the "vbroker.orb.gcTimeout" property, which has a default timeout of 30 seconds. That is why the actual time taken to close an idle connection can be slightly longer than the "connectionMaxIdle" setting.
VBC does not have an internal garbage collector thread to periodically trigger the cleanup of idle cached connections. The cleanup operation is only triggered whenever an unused connection is being cached. The caching occurs when a CORBA Object is destroyed and its associated connection is no longer referenced by any other Object.
But what if you need to retain the CORBA Objects in memory while still closing the unused connections? To achieve this, the application code can call the following API from "vutil.h":
static CORBA::Long release_idle_connections(CORBA::ORB_ptr orb, CORBA::Long howmany);
For example, the following statement tries to release 10 unused connections from the ORB:
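The statement itself might look like this (a sketch: the `VISUtil` enclosing scope is an assumption based on typical VisiBroker C++ headers, so verify the exact class name against the "vutil.h" shipped with your installation):

```cpp
#include "vutil.h"

// Ask the ORB to release up to 10 idle connections. The return value
// is presumably the number actually closed, which may be fewer than 10
// if fewer idle connections exist at the time of the call.
CORBA::Long released = VISUtil::release_idle_connections(orb, 10);
```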
In general, closing idle connections will increase the latency of subsequent invocations to the same Servers because the Client Engine needs to re-establish the TCP connections before sending the requests. This is the trade-off you have to make to reduce file descriptor consumption.
When TCP No Delay is "true", TCP socket buffering is disabled so that all outgoing data packets are sent as soon as they are written to the socket. When it is "false", TCP socket buffering is enabled so that small data packets are combined into a larger packet before sending.
At the Server side ORB, this option can be configured using the Server Engine property "vbroker.se.<se_name>.scm.<scm_name>.connection.tcpNoDelay". The default value for VBC is "false", while the default value for VBJ is "true".
At the Client side ORB, this option can be configured using the Client Engine property "vbroker.ce.iiop.connection.tcpNoDelay". The default value for VBC is "false", while the default value for VBJ is "true".
When this option is set to "true", it may improve invocation performance if your application sends small messages frequently. The side effect is that it may generate more network traffic. So it is important to perform some benchmark tests and monitor the network traffic when changing this option.
When TCP Keep Alive is "true", the TCP socket will send Keep Alive probe packets to its peer at a predefined regular interval to verify whether the connection is still alive. The interval is usually configured at the Operating System level. When it is "false", this connection verification is not performed.
At the Server side ORB, this option can be configured using the Server Engine property "vbroker.se.<se_name>.scm.<scm_name>.connection.keepAlive". The default value for VBC is "true", while the default value for VBJ is "false".
At the Client side ORB, this option can be configured using the Client Engine property "vbroker.ce.iiop.connection.keepAlive". The default value for VBC is "true". This property is currently not applicable to VBJ, but the default TCP Keep Alive option is "false" in most JDK Socket implementations.
When this option is "true", it can help prevent a firewall from silently closing an idle connection. However, in most Operating Systems, the first Keep Alive probe packet is only sent after the TCP socket has been idle for a long period of time (e.g. 2 hours on Solaris OS). This period may be longer than your firewall's idle timeout setting, so the firewall may still close the connection silently. You may therefore want to implement your own application-level keep-alive strategy by pinging your Server periodically, or reconfigure the Operating System's TCP Keep Alive interval.