
Use parallel execution

Process array items concurrently with For Each Parallel and control throughput with Throttle.

Sequential processing works fine for small datasets, but when you need to fetch data from 50 URLs, process 200 records, or check the status of 100 services, running each operation one at a time is slow. DRAGOPS provides For Each Parallel for concurrent processing and Throttle for controlling how many operations run at the same time.

For Each Parallel

The For Each Parallel node iterates over an array and runs the body logic for every item concurrently instead of one at a time.

Input pins

| Pin   | Type  | Description                    |
|-------|-------|--------------------------------|
| Array | Array | The collection to iterate over |

Output execution pins

| Pin       | Description                                        |
|-----------|----------------------------------------------------|
| Body      | Runs once for each item in the array, concurrently |
| Completed | Runs after all items have been processed           |

Output data pins

| Pin   | Type    | Description                                   |
|-------|---------|-----------------------------------------------|
| Item  | Any     | The current item being processed              |
| Index | Integer | The index of the current item (starting at 0) |

How it works

When For Each Parallel executes, it starts the Body branch for every item in the array at the same time. Each iteration runs independently with its own Item and Index values. When all iterations complete, the Completed branch fires.

This is different from the standard For Each node, which processes items one at a time in order.
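DRAGOPS nodes have no direct code form, but the behavior described above can be illustrated with a Python asyncio sketch (the `process` coroutine is a hypothetical stand-in for the Body branch):

```python
import asyncio

async def process(item: str, index: int) -> str:
    # Stand-in for the Body branch: each iteration gets its own
    # Item and Index values and runs independently.
    await asyncio.sleep(0.01)
    return f"{index}: {item}"

async def for_each_parallel(items: list[str]) -> list[str]:
    # All Body iterations are started at the same time;
    # gather() returns their results in input order.
    results = await asyncio.gather(
        *(process(item, i) for i, item in enumerate(items))
    )
    # Only here — after every iteration has finished — would the
    # Completed branch fire.
    return results

print(asyncio.run(for_each_parallel(["a", "b", "c"])))
```

A standard For Each would instead `await process(...)` inside a plain loop, finishing each item before starting the next.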

When to use parallel vs sequential

| Use parallel when...                                           | Use sequential when...                                                       |
|----------------------------------------------------------------|------------------------------------------------------------------------------|
| Operations are independent (fetching data from different URLs)  | Each iteration depends on the result of the previous one                     |
| Order of completion does not matter                             | Order of processing matters                                                  |
| You want faster execution for large arrays                      | You are modifying a shared resource (use variables with caution in parallel) |
| The external service can handle concurrent requests             | The API has strict rate limits that require one-at-a-time access             |

Important: Variables are not safe to modify concurrently inside a For Each Parallel body. If multiple iterations write to the same variable, the results are unpredictable. Use variables only for reading inside parallel loops, or use the Throttle node to limit concurrency to 1 (which effectively makes it sequential).
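The safe alternative to mutating a shared variable is to have each iteration produce its own result and combine them once, after all iterations finish. As a rough asyncio sketch (names are illustrative, not DRAGOPS APIs):

```python
import asyncio

async def check(item: int) -> int:
    # Stand-in for a Body iteration that computes a per-item result
    # instead of writing to a shared variable.
    await asyncio.sleep(0)
    return item * 2

async def safe_aggregate(items: list[int]) -> int:
    # Each iteration returns its own value; the combination happens
    # exactly once, after the parallel phase — the moment the
    # Completed branch would fire.
    results = await asyncio.gather(*(check(i) for i in items))
    return sum(results)

print(asyncio.run(safe_aggregate([1, 2, 3])))
```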

Example: Fetch data from multiple URLs in parallel

This pattern receives a list of URLs, fetches each one concurrently, and logs the results.

Step 1: Create the pattern

  1. Open the DRAGOPS dashboard and select New Pattern.
  2. Name it "Parallel URL Checker" and select Create.

Step 2: Set up the trigger

  1. Remove the default On Start node.
  2. Right-click on the canvas and search for "On Webhook". Add the On Webhook node.

Step 3: Extract the URL list

  1. Add a Get Property node. Set the Key to urls.
  2. Wire On Webhook's Body output pin to Get Property's Object input pin.
  3. Wire the execution flow from On Webhook to Get Property.

Step 4: Add For Each Parallel

  1. Right-click on the canvas and search for "For Each Parallel".
  2. Add it to the canvas, to the right of Get Property.
  3. Wire Get Property's Value output pin to For Each Parallel's Array input pin.
  4. Wire the execution flow from Get Property to For Each Parallel.

Step 5: Build the parallel body

For each URL, make an HTTP request and log the status:

  1. Add an HTTP Request node. Set the Method to GET.
  2. Wire For Each Parallel's Item output pin to HTTP Request's URL input pin.
  3. Add a Format node. Set the template to {0}: status {1}.
  4. Wire For Each Parallel's Item output pin to Format's first input.
  5. Wire HTTP Request's Status Code output pin to Format's second input.
  6. Add a Log node and wire Format's Result to Log's Message input pin.
  7. Wire the execution flow: For Each Parallel's Body pin to HTTP Request, then to Log.

Step 6: Log completion

  1. Add another Log node with the message All URLs checked.
  2. Wire For Each Parallel's Completed pin to this Log node.

Step 7: Test

  1. Select Run in the toolbar.
  2. Enter test data:

```json
{
  "body": {
    "urls": [
      "https://api.github.com",
      "https://httpbin.org/status/200",
      "https://httpbin.org/status/404"
    ]
  },
  "headers": {},
  "query": {},
  "method": "POST"
}
```

  3. The console shows the HTTP requests running concurrently. Each URL is fetched independently, and the completion log appears after all requests finish.
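The wiring in steps 3-6 corresponds roughly to this asyncio sketch. `fetch_status` is a fake stand-in for the HTTP Request node (a real version would use an HTTP client); the f-string plays the role of the Format node's `{0}: status {1}` template:

```python
import asyncio

async def fetch_status(url: str) -> int:
    # Hypothetical stand-in for the HTTP Request node: fakes a
    # status code instead of making a real network call.
    await asyncio.sleep(0.01)
    return 404 if url.endswith("/404") else 200

async def check_url(url: str) -> str:
    # One Body iteration: HTTP Request -> Format -> Log.
    status = await fetch_status(url)
    return f"{url}: status {status}"

async def check_all(urls: list[str]) -> list[str]:
    # For Each Parallel: all URLs are fetched concurrently.
    lines = await asyncio.gather(*(check_url(u) for u in urls))
    # Completed branch: runs only after every request finishes.
    lines.append("All URLs checked")
    return lines

for line in asyncio.run(check_all([
    "https://api.github.com",
    "https://httpbin.org/status/200",
    "https://httpbin.org/status/404",
])):
    print(line)
```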

Throttle node

The Throttle node limits how many concurrent operations run inside a For Each Parallel body. This is essential when calling APIs with rate limits or when you want controlled parallelism.

Input pins

| Pin            | Type    | Default | Description                                           |
|----------------|---------|---------|-------------------------------------------------------|
| Max Concurrent | Integer | 5       | Maximum number of operations running at the same time |

How to use Throttle

  1. Right-click on the canvas and search for "Throttle".
  2. Add it to the canvas, inside the For Each Parallel body — between the Body execution pin and the first operation node.
  3. Set the Max Concurrent value to the desired limit.

Example: Limit to 3 concurrent requests

With this configuration, For Each Parallel starts processing all items, but Throttle ensures only 3 HTTP requests are in flight at any given time. When one request completes, the next item begins processing.
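Conceptually, the Throttle node behaves like a semaphore guarding the loop body. A minimal asyncio sketch of "limit to 3 in flight" (the instrumentation with `active` only exists to observe the concurrency ceiling):

```python
import asyncio

MAX_CONCURRENT = 3  # corresponds to the Throttle node's Max Concurrent pin

async def limited_body(sem: asyncio.Semaphore, active: list[int]) -> int:
    async with sem:                  # at most MAX_CONCURRENT bodies at once
        active[0] += 1
        peak = active[0]             # how many are in flight right now
        await asyncio.sleep(0.01)    # simulated HTTP request
        active[0] -= 1
        return peak

async def run(n_items: int) -> int:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    active = [0]
    peaks = await asyncio.gather(*(limited_body(sem, active) for _ in range(n_items)))
    return max(peaks)                # observed peak concurrency, at most 3

print(asyncio.run(run(10)))
```

When one slot frees up, the next waiting iteration proceeds — exactly the "one completes, the next begins" behavior described above.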

Choosing the right concurrency limit

| Scenario                                       | Suggested limit                    |
|------------------------------------------------|------------------------------------|
| No rate limits, independent operations         | 10-20                              |
| API with known rate limits (e.g., 10 req/sec)  | Match the rate limit               |
| Writing to a shared service                    | 1-5                                |
| Large array (hundreds of items)                | 5-10 to avoid resource exhaustion  |

Start conservative and increase the limit after verifying the external service handles the load.

Error handling in parallel execution

Errors in one iteration do not stop other iterations. If an HTTP request fails for one URL, the other URLs continue processing. The For Each Parallel node collects all results (successes and failures) before firing the Completed branch.

To handle errors within each iteration, wrap the body logic in a Try / Catch node. This ensures that a failure in one iteration is caught and logged without affecting the others.
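Per-iteration error isolation can be sketched as follows — the `try`/`except` inside each iteration plays the role of the Try / Catch node, so a failing item yields a caught-error result instead of cancelling its siblings (all names here are illustrative):

```python
import asyncio

async def risky(item: str) -> str:
    # Stand-in for a Body operation that may fail (e.g. HTTP Request).
    await asyncio.sleep(0)
    if item == "bad":
        raise ValueError(f"failed on {item}")
    return f"ok: {item}"

async def guarded(item: str) -> str:
    # Try / Catch wrapped around the loop body: the error is caught
    # and turned into a logged result instead of escaping.
    try:
        return await risky(item)
    except ValueError as err:
        return f"caught: {err}"

async def run_all(items: list[str]) -> list[str]:
    # A failure in one iteration never stops the others; results for
    # every item (success or caught error) exist before Completed fires.
    return await asyncio.gather(*(guarded(i) for i in items))

print(asyncio.run(run_all(["good", "bad", "also-good"])))
```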
