Use case

Background jobs on the same platform as your APIs

Return in milliseconds; let pipelines and the job queue own the long tail.

Request-thread overload

Long HTTP calls hit gateway timeouts and frustrate users.

Without idempotency, retry storms duplicate side effects.

Ad hoc queues hurt

Every service inventing its own Redis consumer group diverges operationally.

Without shared observability, async work becomes a black box.

Unified surface

Functions become steps; the platform tracks executions just as it tracks synchronous invocations.

Reuse secrets and networking decisions across online and offline work.

Patterns

Fan-out

Split one event into many tasks with clear ownership.
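A minimal fan-out sketch: one inbound event becomes one task per item, each tagged with its owning step. The `enqueue` function, field names, and the array standing in for a real queue client are all illustrative assumptions.

```javascript
// Illustrative in-memory queue; a real deployment would use a queue client.
const queue = [];
async function enqueue(task) { queue.push(task); }

// Fan-out: split one event into many tasks with clear ownership.
async function fanOut(event) {
  await Promise.all(event.items.map((item) =>
    enqueue({ type: 'reserve_stock', orderId: event.orderId, sku: item.sku })
  ));
  return queue.length;
}
```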

Compensation

Model rollback or alerting paths for partial failures.
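One way to sketch compensation, assuming each step exposes hypothetical `run()` and `undo()` functions: run steps in order and, on a failure, replay the completed steps' undo actions in reverse.

```javascript
// Saga-style compensation: on partial failure, undo completed steps
// in reverse order, then report which step failed.
async function runWithCompensation(steps) {
  const done = [];
  for (const step of steps) {
    try {
      await step.run();
      done.push(step);
    } catch {
      for (const s of done.reverse()) await s.undo();
      return { ok: false, failedAt: step.name };
    }
  }
  return { ok: true };
}
```

The `failedAt` field gives an alerting path a concrete step name to report.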

Backpressure

Tune concurrency when downstream systems are fragile.
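A backpressure sketch with no dependencies: a small semaphore that caps how many tasks run against a fragile downstream at once. The `limit` value and limiter shape are illustrative, not a platform API.

```javascript
// Cap in-flight work: at most `limit` tasks run concurrently; the rest
// wait in FIFO order until a slot frees up.
function makeLimiter(limit) {
  let active = 0;
  const waiting = [];
  const release = () => {
    active -= 1;
    if (waiting.length) { active += 1; waiting.shift()(); }
  };
  return async function run(task) {
    if (active >= limit) await new Promise((resolve) => waiting.push(resolve));
    else active += 1;
    try { return await task(); } finally { release(); }
  };
}
```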

How to design background jobs on Inquir Compute

1. Define payload

Version schemas so upgrades do not break in-flight jobs.
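One way to version payloads, under assumed field names (`user` renamed to `userId` between versions): tag every job with a schema version and upgrade old in-flight payloads instead of rejecting them.

```javascript
// Upgrade a job payload to the current schema version.
// Unversioned payloads are treated as v1; v2 renamed `user` to `userId`.
function upgradePayload(payload) {
  const v = payload.v ?? 1;
  if (v === 1) {
    const { user, ...rest } = payload;
    return { ...rest, v: 2, userId: user };
  }
  return payload; // already current
}
```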

2. Make idempotent

Guard writes with stable keys.
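A sketch of guarding a write with a stable key: derive the key from the job itself so retries map to the same entry. The `Map` stands in for durable storage (for example, a unique index in your database); the field names are illustrative.

```javascript
// Stand-in for durable dedup storage keyed by job identity.
const processed = new Map();

// Run the side effect at most once per stable key; replays return
// the stored result instead of re-executing the write.
async function handleOnce(job, sideEffect) {
  const key = `${job.type}:${job.orderId}`; // stable across retries
  if (processed.has(key)) return processed.get(key);
  const result = await sideEffect(job);
  processed.set(key, result);
  return result;
}
```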

3. Observe

Alert on DLQ-like states if your deployment exposes them.

Handoff

The HTTP handler parses JSON from event.body (a string on gateway routes), enqueues the job, and returns 202.

http.mjs
// `enqueue` is assumed to come from your own queue client module.
import { enqueue } from './queue.mjs';

export async function handler(event) {
  // event.body arrives as a string on gateway routes; parse it first.
  const body = JSON.parse(event.body || '{}');
  await enqueue({ type: 'render_pdf', userId: body.userId });
  // 202 Accepted: the work is queued, not finished.
  return { statusCode: 202, body: JSON.stringify({ accepted: true }) };
}

Choose async when…

When to use

  • Work that takes more than a few seconds
  • Spiky workloads
  • External APIs with variable latency

When not to use

  • Truly instantaneous reads that fit comfortably in SLA

FAQ

Is exactly-once delivery realistic for background jobs?

Aim for idempotent handlers and deduplication keys; true exactly-once across networks and storage is rare. Design for at-least-once delivery with safe replays.

When should HTTP return 202 Accepted?

When the user-facing work is enqueued and you can point to a job or execution ID. That beats holding a socket open until a long export finishes.

How do pipelines relate to schedules and webhooks?

Pipelines can start from schedule, HTTP, manual, or event triggers. A webhook handler can return quickly and enqueue async jobs or start a pipeline—different entry points, same orchestration code.

Inquir Compute

The simplest way to run AI agents and backend jobs without infrastructure.

Contact info@inquir.org

© 2025 Inquir Compute. All rights reserved.