scuda20
Super Contributor.

Large 50+ Multi-Instance Steps java dump

OO 10.51/CSA4.6

Performing a simple task of selecting multiple NICs from VMs, matching them to an input, and then setting them to not connected. Providing a list of 43 VMs to the Multi-Instance step completes in 57 seconds; however, above a list of 50 VMs, OO's Java process dumps continuously, creating multiple 13 GB .hprof files and corrupting some database tables, mainly public.oo_execution_states.

Using a list iterator serially, this takes approximately 15 minutes, or about 11 seconds per VM, so I would like to figure out a solution to run them in parallel.

Has anyone had experience with large-scale multi-instance implementations, and if so, is there further Java tuning that can be performed to handle this?

 

anaik
New Member.

Re: Large 50+ Multi-Instance Steps java dump

Hello,

Have you tried tweaking the below config in central-wrapper.conf or ras-wrapper.conf?

-Dcloudslang.worker.numberOfExecutionThreads=20

-Dcloudslang.worker.inBufferCapacity=200

By default these are 20/200; you can try changing them to 40/400. Check the OO Tuning Guide.
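For reference, these properties are passed to the JVM as additional arguments in the Tanuki-style wrapper config. A sketch of what the entries might look like in central-wrapper.conf (the index numbers here are placeholders; continue from the highest `wrapper.java.additional.N` already present in your file):

```
# central-wrapper.conf -- .25/.26 are placeholder indexes,
# renumber to follow the existing wrapper.java.additional.N entries
wrapper.java.additional.25=-Dcloudslang.worker.numberOfExecutionThreads=40
wrapper.java.additional.26=-Dcloudslang.worker.inBufferCapacity=400
```

Restart the Central (or RAS) service after changing these for the new values to take effect.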

Thanks

scuda20
Super Contributor.

Re: Large 50+ Multi-Instance Steps java dump

Yes, it's

-Dcloudslang.worker.numberOfExecutionThreads=300

-Dcloudslang.worker.inBufferCapacity=500

as well as maxThreads="1600" in server.xml, but still the same results. I fell back to iterating through instead of multi-instance. Slower, but it works.

Micro Focus Expert

Re: Large 50+ Multi-Instance Steps java dump

Hi,

From the behavior you are describing, it seems that Central crashed with out-of-memory errors. Judging by the multiple 13 GB *.hprof files, I am guessing you have set the Java max heap to 13 GB. In order to bypass this limitation, there are two ways to go about it:

1) Increase the memory further

2) Throttle the multi-instance step to a value that worked.

If you throttle the multi-instance step to 43 executions, the change in run time from the original 57 seconds would be at worst another 57 seconds (still far less than the iterative 15-minute time). You can play with the throttle value to find the sweet spot between resource usage and execution time.
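The worst-case math above can be sanity-checked with a quick calculation. This is a pessimistic simplification that assumes every throttled batch costs the full measured 57 seconds, even a partial one:

```python
import math

BATCH_SECONDS = 57           # measured time for a 43-VM multi-instance run
THROTTLE = 43                # parallelism limit known to work
SERIAL_SECONDS_PER_VM = 11   # measured serial time per VM

def worst_case_runtime(total_vms: int) -> int:
    """Pessimistic estimate: every batch costs a full BATCH_SECONDS."""
    batches = math.ceil(total_vms / THROTTLE)
    return batches * BATCH_SECONDS

# 50 VMs -> 2 batches -> 114 s throttled, versus 50 * 11 = 550 s serially
print(worst_case_runtime(50), 50 * SERIAL_SECONDS_PER_VM)
```

Even at this pessimistic estimate, the throttled run beats the serial iterator by roughly a factor of five for 50 VMs.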

If you want, there is another alternative to the iterative model, in which you trigger the content of the multi-instance step as an external flow and get its results (if needed) back into the original flow. If you want to go this route, tell me and I can elaborate further.

Regards,

Vlad

scuda20
Super Contributor.

Re: Large 50+ Multi-Instance Steps java dump

I would be interested to understand what you mean by external flows; no return info/variables are expected. I already have my Java heap set to 16384 MB or higher, depending on the lab, as we will deploy 5000+ nodes for exercises. There are other ways to do it, but I was trying to utilize what was in OO. I could dump a list and then run a PowerShell script to cycle through it. In this case it's pretty error resistant, so I am firing and forgetting using the VMware API.

It's a known (to us) bottleneck above 49, so we engineered around it. The XML is small, but when you scale up it adds up.

Micro Focus Expert

Re: Large 50+ Multi-Instance Steps java dump

At a high level, the external-flow solution consists of packaging all the steps that would normally run inside the multi-instance step as their own flow. Then you take the list of inputs that splits the multi-instance step and feed it to an HTTP client call to the POST /executions API, or to the HPE Solutions Dynamically Launch Flow operation, and trigger the flows as their own entities. Once you've triggered the initial batch of flows, you monitor them to see if they are complete, and as soon as you get an opening you trigger some more. I will try to make a video of the whole process to show you (sadly I can't post content packs, only image files).
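A minimal sketch of the trigger side of this pattern, in Python with only the standard library. The endpoint path, payload field names (`flowUuid`, `runName`, `inputs`), hostname, and flow UUID below are assumptions based on the OO 10.x REST API and must be checked against your own Central's API documentation before use:

```python
import json
import urllib.request
from base64 import b64encode

# Assumed Central base URL -- replace with your own host/port.
OO_BASE = "https://central.example.com:8443/oo/rest/v2"

def build_execution_payload(flow_uuid: str, inputs: dict, run_name: str) -> dict:
    """Build the JSON body for POST /executions (one body per VM)."""
    return {"flowUuid": flow_uuid, "runName": run_name, "inputs": inputs}

def trigger_flow(payload: dict, user: str, password: str) -> str:
    """Fire one external flow run; returns Central's response (execution id)."""
    creds = b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{OO_BASE}/executions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {creds}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()  # keep this id for later status polling

# One payload per VM; the flow UUID here is a placeholder.
payloads = [build_execution_payload("0000-placeholder-uuid",
                                    {"vmName": vm}, f"disconnect-nic-{vm}")
            for vm in ["vm01", "vm02"]]
```

The monitoring half would then poll each returned execution id for completion (e.g. via an executions-summary endpoint) and launch further payloads as slots free up, which is what gives you the throttled-batch behavior described above.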

It sounds really complicated at first, but I have managed to do this successfully in the past (granted, the initial example I used this method for was just writing some server statuses with timestamps to a file).

Your solution of having PowerShell handle it is also quite valid. There are times when, even though there is a way to do it purely in OO, it is better (both performance-wise and time-wise) to have the targeted systems do the heavy lifting, since they are sometimes better suited to certain jobs than OO. In such cases, you just have OO tell the target what to do.

Regards,

Vlad

 
