Build on top

Drain pipeline

createDrainPipeline wraps any drain in batching, retries, and buffer-overflow protection. It is effectively required at non-trivial production volume, and it supports fanout to multiple drains in parallel.

Every drain in production should be wrapped in createDrainPipeline(). It batches events, retries on transient failures, and drops the oldest events when the buffer overflows. Without it, every emitted event triggers its own HTTP request, which doesn't scale beyond local dev.
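The drop-oldest overflow behavior can be sketched in plain TypeScript. This is a conceptual illustration of the buffering semantics described above, not evlog's actual implementation:

```typescript
// Conceptual sketch (not evlog's code): a bounded buffer that drops the
// oldest event on overflow, as the pipeline does at maxBufferSize.
type Event = { name: string }

function makeBuffer(maxBufferSize: number) {
  const buf: Event[] = []
  let dropped = 0
  return {
    push(e: Event) {
      if (buf.length >= maxBufferSize) {
        buf.shift() // drop the oldest buffered event
        dropped++
      }
      buf.push(e)
    },
    // Hand the whole batch to the drain and empty the buffer
    flush(): Event[] {
      return buf.splice(0, buf.length)
    },
    get dropped() {
      return dropped
    },
  }
}

const buffer = makeBuffer(3)
for (const name of ['a', 'b', 'c', 'd']) buffer.push({ name })
console.log(buffer.flush().map((e) => e.name)) // → [ 'b', 'c', 'd' ]
console.log(buffer.dropped) // → 1
```

The oldest event (`a`) is sacrificed rather than blocking the app or growing memory without bound.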

[Interactive demo: the app emits events without ever blocking the response; events accumulate in an in-memory buffer until the batch size or the 5s flush interval is reached, then flush as a single HTTP POST. On overflow (maxBufferSize), the oldest events are dropped via onDropped; drain.flush() forces a flush. Counters track events emitted, batches sent, and events dropped.]


Canonical guide

Full reference at Drain pipeline:

  • createDrainPipeline() API + options (batch size / interval, retry attempts / backoff, buffer overflow)
  • Wrapping a single drain
  • Fanout to multiple drains in one pipeline
  • Lifecycle (flush(), dispose()) on shutdown

This page exists in the build-on-top section as a pointer — same content, classified by axis.

Quick example

import { createDrainPipeline } from 'evlog/pipeline'
import { createAxiomDrain } from 'evlog/axiom'

// DrainContext is your app's drain context type (see the full reference)
const pipeline = createDrainPipeline<DrainContext>({
  batch: { size: 50, intervalMs: 2000 },
  retry: { maxAttempts: 3 },
})

const drain = pipeline(createAxiomDrain())

// Use `drain` wherever you'd register a drain (Nitro hook, initLogger, etc.)
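The retry option above caps attempts on transient failures. Conceptually, retry with backoff looks like the following sketch. This is a plain illustration of the pattern, not evlog's internal code, and `retryFlush`/`backoffMs` are names invented here:

```typescript
// Conceptual sketch (not evlog's implementation): retry a flush up to
// maxAttempts times, backing off between transient failures.
async function retryFlush(
  send: () => Promise<void>,
  maxAttempts: number,
  backoffMs = 100,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await send()
      return true // batch delivered
    } catch {
      if (attempt === maxAttempts) return false // give up; caller may drop the batch
      // Linear backoff: wait longer after each failed attempt
      await new Promise((r) => setTimeout(r, backoffMs * attempt))
    }
  }
  return false
}
```

After the final attempt fails, the batch is surrendered to the overflow policy rather than retried forever.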

Fanout pattern

const drain = pipeline(
  createAxiomDrain(),
  createDatadogDrain(),
  createSentryDrain(),
  createFsDrain(),
)

All four destinations receive every event in the same batch. See Fanout & multi-drain for the full recipe.

Common pitfalls

  • Don't forget drain.flush() on shutdown — buffered events are lost otherwise
  • Tune batch.size to match your provider's recommended payload — too small wastes overhead, too big risks rejection
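A hedged sketch of the first pitfall's fix: flush buffered events before the process exits. It assumes `drain.flush()` returns a Promise (per the lifecycle notes in the full reference); `registerShutdownFlush` is a helper invented here, not an evlog API:

```typescript
// Assumption: the pipeline-wrapped drain exposes flush(): Promise<void>.
type FlushableDrain = { flush(): Promise<void> }

// Register signal handlers that drain the buffer before exit.
// Returns the handler so it can also be invoked manually (e.g. in tests).
function registerShutdownFlush(drain: FlushableDrain): () => Promise<void> {
  const handler = () => drain.flush()
  process.once('SIGTERM', handler)
  process.once('SIGINT', handler)
  return handler
}
```

Without this, anything still sitting in the in-memory buffer at shutdown is silently lost.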

Going further