Database triggers watch external database tables for changes and automatically execute functions when new or updated rows are detected. This is poll-based Change Data Capture (CDC); no setup is required on the source database.

How it works:
  1. A separate CDC service polls the configured table at a fixed interval
  2. It queries rows where poll_column > last_bookmark (e.g., updated_at > '2026-03-01T10:00:00')
  3. If new rows are found, they are batched into a single function call
  4. The bookmark advances to the highest value in the batch
  5. On first activation, the bookmark is set to MAX(poll_column) — no backfill of existing data
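The cycle above can be sketched in a few lines. This is a minimal illustration using SQLite and an auto-increment `id` as the poll column; the real CDC service's query shape and dispatch mechanism are assumptions, not the actual implementation.

```python
import sqlite3

def poll_once(conn, table, poll_column, bookmark, batch_size=100):
    """One CDC poll: fetch rows past the bookmark, return (rows, new_bookmark)."""
    # Step 2: query rows where poll_column > last_bookmark, oldest first.
    cur = conn.execute(
        f"SELECT * FROM {table} WHERE {poll_column} > ? "
        f"ORDER BY {poll_column} LIMIT ?",
        (bookmark, batch_size),
    )
    rows = [dict(r) for r in cur.fetchall()]
    if not rows:
        return [], bookmark  # zero rows: no function call, bookmark unchanged
    # Step 4: advance the bookmark to the highest value in the batch.
    return rows, max(r[poll_column] for r in rows)

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

# Step 5: on first activation the bookmark starts at MAX(poll_column),
# so pre-existing rows are never backfilled.
bookmark = conn.execute("SELECT COALESCE(MAX(id), 0) FROM orders").fetchone()[0]

conn.execute("INSERT INTO orders (status) VALUES ('paid'), ('pending')")
batch, bookmark = poll_once(conn, "orders", "id", bookmark)
# Step 3: both new rows arrive batched in a single call; bookmark is now 2.
```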
Key properties:
| Property | Description |
| --- | --- |
| `name` | Unique trigger name (per user) |
| `database_connection_id` | Which database connection to poll |
| `schema_name` | Database schema (default: `public`) |
| `table_name` | Table to watch |
| `operations` | `["INSERT"]`, `["UPDATE"]`, or both |
| `function_namespace` / `function_name` | Function to execute when changes are detected |
| `poll_column` | Monotonically increasing column used as the bookmark (e.g., `updated_at`, `id`) |
| `poll_interval_seconds` | How often to poll (1–3600, default: 10) |
| `batch_size` | Max rows per poll (1–10000, default: 100) |
| `is_active` | Enable/disable without deleting |
| `error_message` | Last error, if any (visible in the UI) |
| `last_poll_value` | Current bookmark (managed automatically) |
The poll_column must be a column whose value only increases: timestamps, auto-increment IDs, or sequences. The column type is detected automatically and comparisons are cast to the correct type.

Function input payload: When changes are detected, the target function receives all new rows in a single call:
```json
{
  "table": "public.orders",
  "operation": "CHANGE",
  "rows": [
    {"id": 123, "status": "paid", "amount": 99.50, "updated_at": "2026-03-02T10:30:00Z"},
    {"id": 124, "status": "pending", "amount": 45.00, "updated_at": "2026-03-02T10:30:01Z"}
  ],
  "poll_column": "updated_at",
  "count": 2,
  "timestamp": "2026-03-02T10:30:05Z"
}
```
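Because rows arrive batched, a target function only needs to handle one payload per poll. A hypothetical handler (the function name and the per-row work are illustrations, not part of the product) might look like:

```python
def process_customer(payload: dict) -> int:
    """Handle one CDC batch: payload carries all new rows from a single poll."""
    rows = payload["rows"]
    assert len(rows) == payload["count"]
    for row in rows:
        # Hypothetical per-row work; replace with real sync logic.
        print(f"{payload['table']}: row {row['id']} -> {row['status']}")
    return len(rows)

payload = {
    "table": "public.orders",
    "operation": "CHANGE",
    "rows": [
        {"id": 123, "status": "paid"},
        {"id": 124, "status": "pending"},
    ],
    "poll_column": "updated_at",
    "count": 2,
    "timestamp": "2026-03-02T10:30:05Z",
}
process_customer(payload)  # processes both rows in one call, returns 2
```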
If a poll returns zero rows, no function call is made.

Error handling: On failure, the trigger logs the error to error_message and retries with exponential backoff (up to 60 seconds). The trigger continues retrying until it is deactivated or the issue is resolved.

Limitations:
  • Cannot detect DELETE operations (poll-based limitation)
  • Changes are detected with a delay equal to the poll interval
  • The poll_column must never decrease — resetting it will cause missed or duplicate rows
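The retry behavior described under error handling, exponential backoff capped at 60 seconds, can be sketched as follows. Only the 60-second cap comes from the text; the base delay and doubling factor are assumptions:

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay in seconds before retry `attempt` (0-based): doubles each time, capped."""
    return min(base * (2 ** attempt), cap)

# Delays grow 1, 2, 4, ... and then stay pinned at the cap.
delays = [backoff_delay(a) for a in range(8)]
# → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```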
Endpoints:
```
POST   /api/v1/database-triggers              # Create trigger
GET    /api/v1/database-triggers              # List triggers
GET    /api/v1/database-triggers/{name}       # Get trigger
PATCH  /api/v1/database-triggers/{name}       # Update trigger
DELETE /api/v1/database-triggers/{name}       # Delete trigger
```
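For example, creating a trigger via the POST endpoint would send a JSON body built from the properties above. The sketch below only constructs the request; the host, auth, and exact request schema are assumptions (field names are taken from the properties table), and nothing is sent over the network:

```python
import json
from urllib import request

body = {
    "name": "customer_changes",
    "database_connection_id": "prod_database",  # assumed to accept a connection name/id
    "schema_name": "public",
    "table_name": "customers",
    "operations": ["INSERT", "UPDATE"],
    "function_namespace": "sync",
    "function_name": "process_customer",
    "poll_column": "updated_at",
    "poll_interval_seconds": 10,
    "batch_size": 100,
}

req = request.Request(
    "https://example.com/api/v1/database-triggers",  # hypothetical host
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would submit it (not executed here).
```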
Declarative configuration:
```yaml
databaseTriggers:
  - name: "customer_changes"
    connectionName: "prod_database"
    tableName: "customers"
    operations: ["INSERT", "UPDATE"]
    functionName: "sync/process_customer"
    pollColumn: "updated_at"
    pollIntervalSeconds: 10
    batchSize: 100
```