Worker Group - File read Issues
I have two servers in a worker group. The flow I run reads a list from a file share, one line at a time. I found that the flow started on one worker in the group and, when it was almost done with the whole list, a second worker took over and restarted the list from the top. Thoughts or workarounds?
Thank you for contacting HPE Forum.
My name is Carlos, I am from the OO/CSA team.
The expected behavior would be for the second worker to continue the flow from the last step that was executed by the first worker. I have a question here: is there any particular reason for the change of worker? Is the first worker going down for any reason?
Moreover, there is not much information about your environment and the operations used, but my thinking is that when the first worker goes down, this specific step hasn't completed, and therefore the information about the last line read is not saved in the database; so when the second worker takes over this flow, it will start this step from the beginning.
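To make the failure mode concrete, here is a minimal sketch in plain Python (illustrative only, not OO APIs): if the read position lives only in the running worker's memory and is never committed before the step completes, a takeover has nothing to resume from and starts again at line 0.

```python
# Illustrative sketch: the cursor exists only in the worker's memory,
# so a mid-list failure loses it and the next worker starts over.
def run(lines, crash_at=None):
    processed = []
    cursor = 0                      # in-memory only: lost if the worker dies
    while cursor < len(lines):
        if crash_at is not None and cursor == crash_at:
            return processed, "crashed"   # step never completed: no commit
        processed.append(lines[cursor])   # stand-in for the real per-line work
        cursor += 1
    return processed, "done"

lines = ["a", "b", "c", "d"]
first, status = run(lines, crash_at=2)    # worker 1 dies partway through
second, _ = run(lines)                    # worker 2 restarts from line 0
# `first + second` now contains "a" and "b" twice: the duplicated work observed.
```

This matches Carlos's explanation: persistence happens when a step finishes, so anything an unfinished step knew about its position is gone.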
Carlos Roberto Rojas Chaves | SW Technical Support Consultant.
Operations Orchestration / Cloud Service Automation.
San Jose, Costa Rica
Hewlett Packard Enterprise
Customer Support Engineer
If you find that this or any other post resolves your issue, please be sure to mark it as an accepted solution.
If you are satisfied with anyone’s response please remember to give them a KUDOS by clicking on the STAR at the bottom left of the post and show your appreciation.
Carlos is perfectly right. There are other situations besides a worker going down in which you might end up with a worker change, but however we describe it, in the end it comes back to needing more information about your flow.
What I would suggest, though, is to stick with one worker if you want to write to a particular file. If that is not an option, then more information is needed and a discussion of the flow itself is required.
Thank you, Carlos and Andreal.

Unless it happens for a very short, undetectable interval, there is no evidence that the worker that started the flow is going down and triggering the second worker to pick it up and restart the file read from the top of the file.

Basically, we have two Centrals in two different data centers in a GLB configuration, along with two RASes on the same respective networks. The Centrals are in Central_Group1 and the RASes are in RAS_Operator_Path. Through a firewall I have two more standard RASes deployed and connected to Central via the GLB address. These RASes are members of RAS_Group2, intended to reach targets within their own network compartment.

The flow starts and uses a worker in RAS_Operator_Path to read the file from the file share and perform some other tasks, such as looking up info in the CMDB. Based on that info, it triggers a remote command shell to a target, passing the JRASOveride as RAS_Group2 to the subflow that contains the actual remote operation. When it's done, it returns its results to the parent flow, where a worker in RAS_Operator_Path writes the result to a file, and then it iterates to the next line read from the input file.

It can do this successfully several times in a row and then suddenly switch to the other RAS in RAS_Operator_Path for that read/write, where, as I stated, the file read will start at the top of the file and duplicate what was already done. So far the only way I have managed to change this behavior is to remove one of the RASes from RAS_Operator_Path. That is not ideal, as it defeats the HA/DR aspects of the RAS configuration.