Idea ID: 1641489

Multi-Instance steps should act like thread pools

Status : Delivered
over 3 years ago


If you use a multi-instance step and set the "throttling" value to something other than 0, OO kicks off a number of branches/instances equal to the throttling value.  For the purposes of this discussion, let's pretend we've set throttling = 10.

This means that 10 instances are created and kicked off.  OO then waits for that entire block of 10 instances to complete before it kicks off another block of 10.  For workloads with differing durations, this means any block of 10 instances is only as fast as the slowest branch in that block - which is very inefficient.

What should be happening is that when one instance completes, another should be spawned to take its place, so you get more work done concurrently.


  • Increased efficiency/reduced flow runtime


Make it work like a thread pool, not a batch system.
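The difference between the two scheduling models can be sketched in plain Python. This is a hypothetical illustration only, not OO's actual implementation; the task durations and throttle value are made up:

```python
import concurrent.futures
import time

# Made-up branch durations (seconds): one slow branch per block of 4.
durations = [0.1, 0.1, 0.1, 0.5] * 3   # 12 branches
THROTTLE = 4                            # stand-in for the throttling value

def run_branch(d):
    time.sleep(d)                       # placeholder for the real branch work

# Batch semantics (current behaviour): every block of THROTTLE branches
# must fully finish before the next block starts, so each block costs
# as much as its slowest branch.
start = time.monotonic()
for i in range(0, len(durations), THROTTLE):
    with concurrent.futures.ThreadPoolExecutor(max_workers=THROTTLE) as pool:
        list(pool.map(run_branch, durations[i:i + THROTTLE]))
batch_time = time.monotonic() - start

# Thread-pool semantics (requested behaviour): as soon as one branch
# finishes, the next queued branch starts on the freed worker.
start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=THROTTLE) as pool:
    list(pool.map(run_branch, durations))
pool_time = time.monotonic() - start

print(f"batch: {batch_time:.2f}s, pool: {pool_time:.2f}s")
```

With these durations the batch version costs three full blocks of 0.5s each, while the pool version keeps all four workers busy and finishes noticeably sooner - the same effect the idea asks for.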

  • This feature was delivered in OO 2019.11 release.

    Please find additional information related to the performance improvements achieved with this feature here: What's_new_in_OO_2019.11_version

    Thank you for raising it!

  • The idea received enough support from the community to be considered for prioritization in our future development planning.

    We also believe it would add value for OO users so we are moving the idea to Under Consideration.

    We will continue to monitor the idea so please expect further updates.

  • Yes, I agree!  It was like that in 9.x, where when one branch finished it would start another right away.  I am not sure why they went away from that model in 10.

    It needs to work like it did before!  We are finding a lot of differences like this in 10 that make it hard to adopt, and even harder on our customer base to manage their flows when they were used to things working a certain way.

  • I, too, tried to work around it by:

    1. Creating a master "controller" flow that queried records, iterated through them, and spawned subflows, then re-ran another instance of itself when it ran out of records.  The problem here was that the throttling mechanism didn't work: when you spawn a flow and say "wait for it" with a max timer, it doesn't actually wait the "max time" before just moving on.  I could never get consistent results here.  I also had it loop through and query OO for status, but that caused high load on the system and generated 2GB of logs a day.
    2. Creating a "Semaphore" content pack (to replace the one that's available in the marketplace), which actually worked really well.  No busy-wait; you could define the concurrency, and it only added about 0.2s of overhead to the flow execution time (to acquire/release the locks).  The issue with it was in resuming a flow.  I found that when OO was heavily loaded (200 flows in the in-buffer), sometimes OO would corrupt "something" in the DB and the flow could not be resumed - some of the data required to resume the flow was missing.  This was on SQL Server, and I suspect the transaction that stores the pause data failed because of a deadlock (and was subsequently rolled back).  Either way, I was left with a few flows that could never be resumed.

    In the end, my project opted to split the records that needed processing into 3 worker flows that each use multi-instance.  Basically, we ran out of money trying to architect/develop around this limitation, and we had to accept the performance hit that we are now getting from multiple flows + multi-instance.