
(SM) Support Tip: Memory Sizing

Memory sizing

  

Sizing the memory pools of Service Manager is an important topic for reliability, capacity management, and scalability.

A single Service Manager process (or "servlet") can address up to 4 GB of virtual address space (VAS), which contains several memory pools:

  • The operating system allocates part of the VAS, though this is insignificant on a 64-bit operating system,
  • Files such as executables are loaded into it,
  • Shared memory is mapped into the VAS,
  • The Java heap is reserved in the VAS,
  • The rest is referred to as the native heap, from which each session requests memory.

Only shared memory and the Java heap can be sized by configuration: shared memory in sm.ini (shared_memory), the Java heap in sm.ini or sm.cfg (JVMOption<n>:-Xmx). The native heap is the only variable-sized pool remaining, so you actually increase the available native heap size when you reduce shared memory or the Java heap, and vice versa.
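As a hedged illustration of the two configurable pools, an sm.ini fragment might look like this (the values are placeholders, not recommendations; check your version's documentation for exact syntax):

```
# sm.ini -- illustrative values only
shared_memory:96000000      # shared memory pool size in bytes
JVMOption1:-Xmx256M         # Java heap maximum for servlets started with this file
```

Whatever remains of the 4 GB VAS after these two pools (and the OS and loaded executables) is the native heap.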

Whenever a request for memory inside one of these pools cannot be served, the requesting session terminates with a failure, so there are several distinct out-of-memory scenarios. Because these pools are shared by all sessions running on the same servlet (for shared memory, by all sessions running on the same host), a servlet may fail because of the misbehaviour of another session.

 

This is the reliability aspect:

  • None of the pools may be sized too small.
  • Sessions should not request too much memory in a short time (for example, extremely large XML requests in web services drive up Java heap requirements, so use web service paging where possible; likewise, memory-unaware JavaScript implementations cause high native heap allocation).

 

The native heap stores the sessions' private data, so the capacity management aspect is:

  • How many sessions can run concurrently on one servlet (parameter: threadsperprocess)?
  • How much memory do the sessions consume on average, and in typical scenarios (parameter: appthreadspersession, and implementation-dependent)?

 

The scalability aspect derives from that:

  • How many servlets do I require to run with the given capacity?
  • How many hosts do I require to run this amount of servlets?
  • How much physical memory do I require on each host?

The fewer SM processes I need to start, the better for my overall memory requirements: shared memory is allocated only once, and native heap allocation depends on the number of sessions started rather than the number of processes, but the Java heap is allocated for each SM process.

For this reason, it is memory-aware configuration to increase threadsperprocess and to pool multiple background schedulers into a single SM process by defining startup records in the info dbdict, instead of starting each scheduler as its own SM process from sm.cfg.
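A hypothetical sm.cfg fragment illustrating the difference (scheduler names are examples, and the `sm system.start` entry assumes the conventional startup-record mechanism; verify against your version's documentation):

```
# Memory-unfriendly: each scheduler is its own SM process,
# each paying for its own Java heap:
# sm -que:ir
# sm -que:event

# Memory-aware: schedulers defined in the startup record (info dbdict)
# are pooled into a single SM process:
sm system.start
```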

 

Right-sizing shared memory

 

The initial size required for shared memory can be calculated with this formula:

          48 MB + 1 MB per 10 users + IR cache size 

It is advisable to monitor shared memory usage at run time: free memory should always be between 25 and 75 percent of the shared memory size, excluding the IR cache size. 

Note:

The command sm -reportshm outputs the shared memory size and free space. The percentage calculated by the report is based on the total shared memory size and may therefore falsely indicate that a shared memory increase is required: it is advisable to calculate the 25 and 75 percent alert points beforehand and compare against those values.
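The sizing formula and the alert-point advice above can be turned into a small worked example. This is a sketch: the 48 MB base, 1 MB per 10 users, and the 25/75 percent bands come from the text; the function names and sample values are illustrative.

```python
def initial_shared_memory_mb(users: int, ir_cache_mb: int) -> int:
    """48 MB base + 1 MB per 10 users + IR cache size (per the formula above)."""
    return 48 + users // 10 + ir_cache_mb

def alert_points_mb(shared_memory_mb: int, ir_cache_mb: int) -> tuple[float, float]:
    """25% / 75% free-space alert points, excluding the IR cache size."""
    base = shared_memory_mb - ir_cache_mb
    return 0.25 * base, 0.75 * base

size = initial_shared_memory_mb(users=500, ir_cache_mb=50)   # 48 + 50 + 50 = 148 MB
low, high = alert_points_mb(size, ir_cache_mb=50)            # 24.5 MB and 73.5 MB
```

Comparing the free-space value reported by sm -reportshm against these precomputed points avoids being misled by the report's own percentage, which includes the IR cache.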

The IR cache size can be configured with the ir_max_shared parameter, and it is recommended to set it. The more of the IR index that can be held in the IR cache, the better the IR performance. For that reason, reducing the IR index size by specifying what IR indexes and by keeping the stop word list up to date is important to keep shared memory small.

https://docs.software.hpe.com/SM/9.52/Codeless/Content/performance/shared_memory/shared_memory_sizing.htm

  

 

Right-sizing Java Heap

 

The minimum Java heap size for Service Manager servlets is considered to be 96 MB. This is the default for servlets running background schedulers and system state reports. For servlets communicating with SM clients or external applications, the default is 256 MB, as these produce and process XML documents.

In Service Manager 9.x, reasons to size the Java heap differently from these defaults are rare. It is best practice to size the Java heap per servlet in the sm.cfg file rather than globally in sm.ini.
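A hedged sm.cfg sketch of per-servlet Java heap sizing (port numbers and values are placeholders; confirm the exact command-line syntax for your version):

```
# sm.cfg -- size the Java heap per servlet, not globally in sm.ini
sm -httpPort:13080 -JVMOption1:-Xmx256M   # client-facing servlet: XML-heavy
sm -que:ir -JVMOption1:-Xmx96M            # background scheduler: default minimum
```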

  

Native Heap

 

After shared memory and the Java heap are sized, the remaining space in the VAS is available as native heap, shared by all sessions running on this servlet.

One sizable part of the native heap is the RAD stack: a call stack of 32-byte frames required for RAD application processing. The number of frames allocated is configured by the agstackl parameter.

Recursive RAD implementations may cause failures because the RAD stack is exceeded. A RAD stack is created for each application thread, so up to threadsperprocess * appthreadspersession RAD stacks are generated.
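Based on the figures above (32-byte frames, one stack per application thread), the worst-case RAD stack footprint of a servlet can be sketched as follows; the function names and parameter values are illustrative, not SM defaults.

```python
def rad_stack_bytes(agstackl_frames: int) -> int:
    """One RAD stack: number of frames (agstackl) times 32 bytes per frame."""
    return agstackl_frames * 32

def max_rad_stack_bytes(agstackl_frames: int,
                        threadsperprocess: int,
                        appthreadspersession: int) -> int:
    """Up to threadsperprocess * appthreadspersession stacks per servlet."""
    return rad_stack_bytes(agstackl_frames) * threadsperprocess * appthreadspersession

# e.g. agstackl of 6400 frames, 50 sessions with 8 application threads each:
total = max_rad_stack_bytes(6400, 50, 8)   # 6400 * 32 * 50 * 8 = 81,920,000 bytes
```

This shows why increasing threadsperprocess to save Java heap trades against native heap consumption: every additional application thread can bring another full RAD stack with it.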

 

 

Memory monitoring

 

Service Manager implements memory monitoring for the Java heap, native memory, and the RAD stack, controlled by the sm.ini parameter memorypollinterval. By default, each servlet checks the available space in both pools every 15 seconds. When allocation rises above 90 percent, the servlet moves into low memory mode until allocation falls below 70 percent again. In low memory mode, the servlet does not accept new sessions and blocks the opening of additional application threads.
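The 90/70 percent thresholds form a simple hysteresis. A minimal sketch of that state logic (the class and method names are illustrative, not SM internals; only the thresholds and polling behaviour come from the text):

```python
class LowMemoryMonitor:
    """Hysteresis: enter low-memory mode above 90% used, leave below 70%."""
    ENTER, LEAVE = 0.90, 0.70

    def __init__(self) -> None:
        self.low_memory = False

    def poll(self, used: int, maximum: int) -> bool:
        """Called every memorypollinterval seconds; returns low-memory state."""
        ratio = used / maximum
        if not self.low_memory and ratio > self.ENTER:
            self.low_memory = True      # stop accepting sessions and app threads
        elif self.low_memory and ratio < self.LEAVE:
            self.low_memory = False     # back to the normal range
        return self.low_memory

m = LowMemoryMonitor()
m.poll(95, 100)   # True: allocation above 90%
m.poll(80, 100)   # still True: has not yet fallen below 70%
m.poll(60, 100)   # False: back to normal
```

The gap between the two thresholds prevents a servlet hovering near 90 percent from flapping in and out of low memory mode on every poll.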

https://docs.software.hpe.com/SM/9.52/Codeless/Content/serversetup/concepts/monitoring_memory_in_service_manager_processes.htm

 

Shared memory can be analysed by running the system report sm -reportshm. As this starts another process, it should not be executed too frequently (say, no more than once every 5 minutes).

 

Log review for memory allocation issues

 

Search log files for these strings:

"Process Low on Java Memory"

"JavaMemory"

"NativeMemory"

 Examples:

    JRTE D JavaMemory Max(123928576) Used(1011872) %Used(0.0)

    JRTE D NativeMemory Max(2147352576) Used(358121472) %Used(16.0)

    JRTE W Process Low on Java Memory. Max(123928576) Used(119603800) PercentUsed(96.0)

    JRTE W Send error response: Server is running low on memory try again.

    RTE I Process Java Heap Memory is back to normal range.

These messages come from memory monitoring; please refer to the documentation link above.

  

"-Memory"

Example:

    RTE I -Memory : S(20133571) O(6239881) MAX(26373452) - MALLOC's Total(37655779)

These messages are printed to the log file when native heap allocation exceeds specific limits. RTM:2 prints additional "-Memory" messages providing delta information. Not every "-Memory" message is relevant: look at the MAX() value, and if it exceeds 40 MB, it is worth analysing the root cause of the allocation.
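A small helper for scanning a log for "-Memory" lines and flagging sessions whose MAX() exceeds 40 MB; the regex follows the example line above and the threshold follows the advice above, but treat the whole thing as a sketch rather than an official parser.

```python
import re

# Matches e.g.: RTE I -Memory : S(20133571) O(6239881) MAX(26373452) - MALLOC's Total(37655779)
MEMORY_LINE = re.compile(r"-Memory\s*:.*?MAX\((\d+)\)")
THRESHOLD = 40 * 1024 * 1024   # 40 MB, per the guidance above

def max_allocation(line: str):
    """Return the MAX() value from a '-Memory' log line, or None if absent."""
    m = MEMORY_LINE.search(line)
    return int(m.group(1)) if m else None

def worth_analysing(line: str) -> bool:
    """True when the session's peak native heap allocation exceeds 40 MB."""
    value = max_allocation(line)
    return value is not None and value > THRESHOLD

sample = "RTE I -Memory : S(20133571) O(6239881) MAX(26373452) - MALLOC's Total(37655779)"
worth_analysing(sample)   # False: a peak of ~25 MB is below the 40 MB threshold
```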

  

"RAD Stack"

 Example: 

    RTE I RAD stack is 71% used, please exit out of current application.

This is the memory monitoring message for the RAD stack. As the RAD stack is typically sized sufficiently (agstackl parameter), this message indicates a recursive RAD implementation that should be reviewed.
