Background jobs on the same platform as your APIs
Return in milliseconds; let pipelines and the job queue own the long tail.
Situation
Request-thread overload
Long HTTP calls hit gateway timeouts and frustrate users.
Retry storms duplicate side effects when handlers are not idempotent.
Why workarounds fail
Ad hoc queues hurt
When every service invents its own Redis consumer group, operations diverge.
Without shared observability, async work becomes a black box.
How Inquir fits
Unified surface
Functions become steps; the platform tracks executions the same way it tracks synchronous invokes.
Reuse secrets and networking decisions across online and offline work.
Capabilities
Patterns
Fan-out
Split one event into many tasks with clear ownership.
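A minimal fan-out sketch: one inbound event becomes several independently retryable tasks. `enqueue` here is a hypothetical stand-in for your queue client, and the task types are illustrative.

```javascript
// Stand-in queue: a real enqueue would publish to your queue API.
const queued = [];
async function enqueue(task) {
  queued.push(task);
}

// One event, many tasks -- each with a clear owner and its own retry policy.
async function fanOut(event) {
  const tasks = [
    { type: 'send_welcome_email', userId: event.userId },
    { type: 'provision_workspace', userId: event.userId },
    { type: 'sync_crm_contact', userId: event.userId },
  ];
  await Promise.all(tasks.map((task) => enqueue(task)));
  return tasks.length;
}
```

Because each task is enqueued separately, one slow or failing consumer does not block the others.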
Compensation
Model rollback or alerting paths for partial failures.
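One way to model compensation, sketched without any platform APIs: pair each step with an undo action and replay the undos in reverse order on partial failure. The step shape is an assumption, not an Inquir construct.

```javascript
// Run steps in order; if one throws, compensate the completed ones
// in reverse order (last completed step is undone first).
async function runWithCompensation(steps) {
  const done = [];
  try {
    for (const step of steps) {
      await step.run();
      done.push(step);
    }
    return { ok: true };
  } catch (err) {
    for (const step of done.reverse()) {
      // Swallow undo errors so one failed rollback cannot hide the rest;
      // a real system would log or alert here.
      try { await step.compensate(); } catch (_) {}
    }
    return { ok: false, error: err.message };
  }
}
```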
Backpressure
Tune concurrency when downstream systems are fragile.
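A dependency-free sketch of backpressure: cap in-flight calls to a fragile downstream with a small concurrency limiter. `limit` is the knob you would tune; nothing here is Inquir-specific.

```javascript
// Returns a `run(task)` wrapper that allows at most `limit` tasks in flight.
function makeLimiter(limit) {
  let active = 0;
  const waiting = [];
  const next = () => {
    if (active < limit && waiting.length) {
      active++;
      waiting.shift()(); // release the next queued caller
    }
  };
  return async function run(task) {
    await new Promise((resolve) => { waiting.push(resolve); next(); });
    try {
      return await task();
    } finally {
      active--;
      next();
    }
  };
}
```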
Steps
How to design background jobs on Inquir Compute
Define payload
Version schemas so upgrades do not break in-flight jobs.
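To make versioning concrete, here is a hedged sketch: stamp every payload with a version and normalize old shapes before handling. The field names (`docId` splitting into `userId`/`documentId`) are invented for illustration.

```javascript
// Current (v2) payload shape.
function makeJob(userId, documentId) {
  return { version: 2, type: 'render_pdf', userId, documentId };
}

// Normalize any in-flight payload to the current shape before handling it,
// so jobs enqueued before a deploy still process cleanly after it.
function upgradePayload(payload) {
  if (payload.version === 1) {
    // Hypothetical v1 shape used a single `docId` field.
    return {
      version: 2,
      type: payload.type,
      userId: payload.userId,
      documentId: payload.docId,
    };
  }
  return payload;
}
```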
Make idempotent
Guard writes with stable keys.
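A minimal idempotency sketch, assuming at-least-once delivery: derive a stable key from the job's identity (never from retry metadata) and refuse duplicates. The in-memory `Set` stands in for durable storage such as a unique database constraint or conditional write.

```javascript
// Stand-in for durable dedup storage.
const processed = new Set();

function chargeOnce(job, charge) {
  // Stable key derived from the job's identity, so every redelivery
  // of the same job maps to the same key.
  const key = `charge:${job.orderId}`;
  if (processed.has(key)) return { charged: false, reason: 'duplicate' };
  processed.add(key);
  charge(job); // the guarded side effect
  return { charged: true };
}
```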
Observe
Alert on dead-letter-queue (DLQ)-like states if your deployment exposes them.
Code example
Handoff
HTTP handler reads JSON from event.body (string on gateway routes), then returns 202.
```javascript
// `enqueue` is assumed to be your queue client's publish helper.
export async function handler(event) {
  const body = JSON.parse(event.body || '{}');
  await enqueue({ type: 'render_pdf', userId: body.userId });
  return { statusCode: 202, body: JSON.stringify({ accepted: true }) };
}
```
Fit
Choose async when…
When to use
- Work that takes more than a few seconds
- Spiky workloads
- External APIs with variable latency
When not to use
- Truly instantaneous reads that fit comfortably in SLA
FAQ
Is exactly-once delivery realistic for background jobs?
Aim for idempotent handlers and deduplication keys; true exactly-once across networks and storage is rare—design for at-least-once with safe replays.
When should HTTP return 202 Accepted?
When the user-facing work is enqueued and you can point to a job or execution ID—better than holding a socket open until a long export finishes.
How do pipelines relate to schedules and webhooks?
Pipelines can start from schedule, HTTP, manual, or event triggers. A webhook handler can return quickly and enqueue async jobs or start a pipeline—different entry points, same orchestration code.
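The entry-point idea above can be sketched as follows. `startPipeline` is hypothetical; substitute whatever trigger call your deployment exposes. The handler returns 202 with an execution ID the caller can poll, while the pipeline does the work.

```javascript
// Stand-in for the real pipeline trigger API.
const started = [];
async function startPipeline(name, input) {
  started.push({ name, input });
  return { executionId: `exec-${started.length}` };
}

// Webhook entry point: parse, hand off, return immediately.
async function webhookHandler(event) {
  const body = JSON.parse(event.body || '{}');
  const { executionId } = await startPipeline('export_report', { userId: body.userId });
  return { statusCode: 202, body: JSON.stringify({ executionId }) };
}
```

A schedule or manual trigger could call `startPipeline` with the same input shape, which is the point: different entry points, same orchestration code.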