DRAGOPS

Scheduled data pipeline

Build a pattern that fetches, filters, and transforms API data on an hourly schedule.

In this tutorial, you build a data pipeline that runs every hour, fetches records from an external API, filters out entries that do not meet a condition, transforms each remaining item into a formatted summary, and logs the results. This is a common pattern for syncing data, generating reports, and monitoring external systems.

What you will build

A single pattern with the following flow:

  1. On Schedule — fires every hour
  2. HTTP Request — fetches a list of to-do items from a public API
  3. JSON Parse — converts the response body from a string to an array
  4. For Each — iterates over each item in the array
  5. Branch — filters out completed items
  6. Format — builds a summary string for each incomplete item
  7. Log — writes each summary to the console

When the pattern runs, the console displays output like:

TODO #1 (userId 1): delectus aut autem
TODO #3 (userId 1): fugiat veniam minus

This tutorial uses the JSONPlaceholder public API, which returns sample data and requires no authentication.

Before you begin

Make sure you have:

  • Access to the DRAGOPS dashboard

Step 1: Create the pattern

  1. Open the DRAGOPS dashboard.
  2. Select New Pattern.
  3. Enter the name "Data Pipeline" and select Create.

Step 2: Add the schedule trigger

  1. Remove the default On Start node by selecting it and pressing Delete or Backspace.
  2. Right-click on the canvas to open the node search menu.
  3. Search for "On Schedule" and add it to the canvas.
  4. Select the On Schedule node to open the Inspector Panel on the left side.
  5. Set the Cron Expression to 0 * * * *.

This expression fires the pattern at the top of every hour. For reference on cron syntax, see Schedule a recurring task.
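This is not DRAGOPS's scheduler, just a minimal Python sketch of what the expression 0 * * * * means: the five cron fields are minute, hour, day-of-month, month, and day-of-week, and only the minute field is pinned.

```python
from datetime import datetime

def matches_hourly(ts: datetime) -> bool:
    """True when ts matches the cron expression '0 * * * *'.

    The five fields are minute, hour, day-of-month, month, day-of-week.
    '0' pins the minute to 0; the four '*' fields match any value.
    """
    return ts.minute == 0

# Fires at 09:00, 10:00, and so on, but not at 09:30.
assert matches_hourly(datetime(2024, 5, 1, 9, 0))
assert not matches_hourly(datetime(2024, 5, 1, 9, 30))
```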

Step 3: Fetch data from the API

  1. Right-click on the canvas and search for "HTTP Request". Add it to the canvas, to the right of On Schedule.
  2. Select the HTTP Request node and configure it in the Inspector Panel:
    • Method: GET
    • URL: https://jsonplaceholder.typicode.com/todos
  3. Wire the execution flow from On Schedule to HTTP Request.

This endpoint returns an array of 200 to-do items, each with an id, userId, title, and completed flag.
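As a reference for the later steps, here is the shape of one element from that array, written as a Python dict (this matches the first record the endpoint returns):

```python
# One element of the array returned by
# https://jsonplaceholder.typicode.com/todos
item = {
    "userId": 1,
    "id": 1,
    "title": "delectus aut autem",
    "completed": False,
}

# The pipeline reads all four of these fields in later steps.
assert set(item) == {"userId", "id", "title", "completed"}
```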

Step 4: Parse the response

The HTTP Request node returns the response body as a raw string. You need to parse it into an array before you can iterate over it.

  1. Right-click on the canvas and search for "JSON Parse". Add the node to the canvas.
  2. Wire HTTP Request's Response Body output pin to JSON Parse's String input pin.
  3. Wire the execution flow from HTTP Request to JSON Parse.

JSON Parse's Value output pin now holds the array of to-do items.
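What JSON Parse does is the equivalent of Python's json.loads: a string in, a structured value out. A minimal sketch with a one-item response body:

```python
import json

# The HTTP Request node hands JSON Parse the raw response body as a string.
response_body = (
    '[{"userId": 1, "id": 1, '
    '"title": "delectus aut autem", "completed": false}]'
)

# JSON Parse's Value pin corresponds to the parsed value: here, a list of dicts.
todos = json.loads(response_body)

assert isinstance(todos, list)
assert todos[0]["title"] == "delectus aut autem"
```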

Step 5: Iterate with For Each

Use a For Each node to process each item in the array individually.

  1. Right-click on the canvas and search for "For Each".
  2. Add the For Each node to the canvas, to the right of JSON Parse.
  3. Wire JSON Parse's Value output pin to For Each's Array input pin.
  4. Wire the execution flow from JSON Parse to For Each.

The For Each node has two execution output pins:

  • Body — fires once for each item in the array. The current item is available on the Element output pin.
  • Completed — fires once after all items have been processed.
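In plain Python terms, the two execution pins behave like a for loop followed by one extra statement. This sketch records the firing order (the list of tuples is purely illustrative):

```python
todos = [{"id": 1}, {"id": 2}, {"id": 3}]

fired = []

# Body: fires once per element; `element` plays the role of the Element pin.
for element in todos:
    fired.append(("Body", element["id"]))

# Completed: fires exactly once, after the final element.
fired.append(("Completed", None))
```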

Step 6: Filter completed items

Inside the loop, use a Branch node to skip items where completed is true.

  1. Add a Get Property node. Set the Key to completed.
  2. Wire For Each's Element output pin to Get Property's Object input pin.
  3. Add a Branch node.
  4. Wire Get Property's Value output pin to Branch's Condition input pin.
  5. Wire the execution flow: For Each's Body output pin to Get Property, then Get Property to Branch.

Branch's True path leads to completed items — you do not need to do anything with these. Branch's False path leads to incomplete items — this is where you add the formatting and logging logic.
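The filter step can be sketched as an if inside the loop. The sample titles here are placeholders, not real API data:

```python
todos = [
    {"id": 1, "title": "first task", "completed": False},
    {"id": 2, "title": "finished task", "completed": True},
]

kept = []
for element in todos:
    # Get Property reads the `completed` flag; Branch routes on its value.
    if element["completed"]:
        continue          # True path: completed item, nothing wired here
    kept.append(element)  # False path: incomplete item continues to Format/Log
```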

Step 7: Extract item fields

For each incomplete item, extract the fields you need for the summary.

  1. Add a Get Property node. Set the Key to id. Wire For Each's Element output pin to its Object input pin.
  2. Add a Get Property node. Set the Key to userId. Wire For Each's Element output pin to its Object input pin.
  3. Add a Get Property node. Set the Key to title. Wire For Each's Element output pin to its Object input pin.

All three nodes read from the same Element output pin on For Each. You do not need to chain them — each reads the current item independently.
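The parallel reads correspond to three independent dictionary lookups on the same object, with no chaining between them:

```python
element = {"userId": 1, "id": 1, "title": "delectus aut autem",
           "completed": False}

# Three independent Get Property reads from the same Element value.
item_id = element["id"]
user_id = element["userId"]
title = element["title"]
```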

Step 8: Format and log the summary

  1. Add a Format node. Set the Template to TODO #{0} (userId {1}): {2}.
  2. Wire the id Get Property's Value output pin to Format's first input.
  3. Wire the userId Get Property's Value output pin to Format's second input.
  4. Wire the title Get Property's Value output pin to Format's third input.
  5. Add a Log node. Wire Format's Result output pin to Log's Message input pin.
  6. Wire the execution flow: Branch's False output pin to Log.
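The template happens to use the same positional-placeholder syntax as Python's str.format, so the Format step can be sketched directly:

```python
element = {"userId": 1, "id": 1, "title": "delectus aut autem"}

# The Format node's template, with {0}, {1}, {2} filled by the three Value pins.
template = "TODO #{0} (userId {1}): {2}"
message = template.format(element["id"], element["userId"], element["title"])

print(message)  # TODO #1 (userId 1): delectus aut autem
```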

Step 9: Log a completion summary

After the loop finishes, log a summary of the pipeline run.

  1. Add a Log node after the For Each loop.
  2. Set its Message input to Pipeline complete — processed all items.
  3. Wire the execution flow from For Each's Completed output pin to this Log node.

Step 10: Review the full layout

Your canvas should now have this flow:

  • Main flow: On Schedule → HTTP Request → JSON Parse → For Each
  • Loop body (from For Each's Body pin): Get Property (completed) → Branch; Branch's False path feeds the three Get Property nodes (id, userId, title), which feed Format, which feeds Log
  • After the loop (from For Each's Completed pin): the completion-summary Log node
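The whole pattern, minus the schedule trigger and the live HTTP call, can be sketched end to end in plain Python. The run_pipeline function and its sample body are illustrative stand-ins for the node graph:

```python
import json

def run_pipeline(response_body: str) -> list:
    """Plain-Python sketch of the pattern built in Steps 3-9."""
    logged = []
    todos = json.loads(response_body)                   # JSON Parse
    for element in todos:                               # For Each: Body
        if element["completed"]:                        # Branch: True path
            continue
        logged.append("TODO #{0} (userId {1}): {2}".format(   # Format
            element["id"], element["userId"], element["title"]))
    # For Each: Completed
    logged.append("Pipeline complete — processed all items")
    return logged

body = json.dumps([
    {"userId": 1, "id": 1, "title": "delectus aut autem", "completed": False},
    {"userId": 1, "id": 2, "title": "sample finished item", "completed": True},
])
```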

Step 11: Test in the editor

  1. Select Run in the toolbar.
  2. The On Schedule trigger does not require event data, so select Run immediately.
  3. The console should display a series of formatted to-do summaries for incomplete items, followed by the pipeline completion message.

If the API is unreachable during testing, you see an error at the HTTP Request node. Consider wrapping it in a Try / Catch for production use (see Handle errors).
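The Try / Catch idea maps onto a try/except in Python. This sketch takes the fetch as a callable so the failure path can be exercised without a network; failing_fetch is a hypothetical stand-in for an unreachable API:

```python
import urllib.error

def safe_fetch(fetch):
    """Call `fetch` (e.g. a urllib request). On a network failure, return
    None instead of raising, so the error does not stop the pipeline.
    This plays the role of the Catch path."""
    try:
        return fetch()
    except urllib.error.URLError:
        return None  # Catch path: log the failure and skip this run

def failing_fetch():
    # Stand-in for an unreachable API during testing.
    raise urllib.error.URLError("network unreachable")

assert safe_fetch(failing_fetch) is None
assert safe_fetch(lambda: "ok") == "ok"
```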

Step 12: Deploy

  1. Select Deploy in the toolbar.
  2. DRAGOPS activates the schedule. The pattern now runs automatically every hour.

To verify, open the Deployments page from the dashboard, select the "Data Pipeline" deployment, and wait for the next hourly execution to appear. Select the execution to see the full console output.

Extending the pipeline

This tutorial builds a foundation you can customize for real-world use cases:

  • Change the data source. Replace the JSONPlaceholder URL with any REST API that returns JSON arrays — monitoring dashboards, CRM records, inventory systems.
  • Add authentication. Use Set Property to add API key or Bearer token headers. See Make HTTP requests for details.
  • Forward results. Instead of logging, send the processed data to another service with a second HTTP Request node — post to a Slack channel, write to a spreadsheet API, or trigger another pattern.
  • Add error handling. Wrap the HTTP Request in a Try / Catch to handle network failures gracefully.

What you learned

  • How to trigger patterns on a recurring schedule using On Schedule and cron expressions
  • How to fetch data from external APIs with HTTP Request
  • How to parse JSON response bodies into usable data structures
  • How to iterate over arrays with For Each and access each element
  • How to filter items with Branch based on a property value
  • How to extract multiple fields from the same object using parallel Get Property nodes
