AI agents backed by Inquir functions—not a mystery monolith
Each tool is its own Lambda-style function: gateway auth, workspace-bound secrets, layers for heavy SDKs, warm pools for burst traffic, pipelines when work outlasts HTTP.
Situation
What breaks without a real backend
Demos collapse a whole agent into one process. Production needs authenticated tool calls, rate limits, secrets that never touch the model context, and a clear story when step seven fails and step eight should not run.
Stuffing every side effect into one giant synchronous LLM round-trip does not scale. Small functions with explicit inputs and outputs are easier to test, easier to retry, and easier to explain to security.
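A minimal sketch of what "explicit inputs and outputs" looks like in practice. The refund-lookup domain and every name here (orderId, the stubbed lookup) are illustrative assumptions, not Inquir APIs; the point is that the contract is testable without the model in the loop.

```javascript
// Hypothetical refund-lookup tool: one explicit input shape, one output shape.
export async function handler(event) {
  const body = JSON.parse(event.body ?? '{}');
  if (typeof body.orderId !== 'string') {
    // Validation failures are a structured response, not a crash.
    return { statusCode: 400, body: JSON.stringify({ error: 'orderId is required' }) };
  }
  // A real tool would call a payments API here; stubbed so the sketch is self-contained.
  const refund = { orderId: body.orderId, status: 'pending' };
  return { statusCode: 200, body: JSON.stringify({ refund }) };
}
```

Because the whole contract is JSON in, JSON out, a retry or a security review only has to reason about this one function.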
Sharp edges
Where lightweight stacks struggle
Notebooks and one-off scripts rarely give you durable deploys, structured logs, and a shared secret model with the rest of your API surface.
A generic cron job on a VM can call a script, but you still own packaging, rollback, and isolation between “low-risk housekeeping” and “touches customer money”.
How Inquir fits
What Inquir Compute adds
Each tool is a function with a real HTTP contract on the gateway, running in an isolated container—so heavy or untrusted dependencies do not share memory with unrelated features.
Warm pools help when the model calls tools in quick succession; pipelines absorb work that genuinely cannot finish before the gateway times out.
Capabilities
Capabilities agents rely on
Isolated tool execution
Run untrusted or heavy dependencies in separate containers instead of sharing one brittle process.
HTTP surface for tools
Expose typed endpoints the agent can call with predictable auth and routing.
Scheduling and async work
Trigger periodic sync jobs or kick off background pipelines when user-facing latency matters.
Observability
Trace executions when a model picked the wrong tool or an upstream API misbehaved.
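The scheduling capability above can be sketched as an ordinary handler; the trigger wiring lives in the product, not in code. The upstream client (fetchChangedRecords) and the cursor shape are assumptions for illustration.

```javascript
// Sketch of a periodic sync job. The schedule itself is configured outside
// the code; the handler just receives an event and returns a summary.
export async function handler(event) {
  const since = event?.since ?? '1970-01-01T00:00:00Z';
  const changed = await fetchChangedRecords(since);
  // ...upsert `changed` into your index here...
  return { statusCode: 200, body: JSON.stringify({ synced: changed.length }) };
}

// Stub so the sketch is self-contained; replace with a real upstream client.
async function fetchChangedRecords(since) {
  return [{ id: 'r1', updatedAt: since }];
}
```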
Steps
How to run agent tools as Inquir Compute functions
Model proposes an action
Your orchestration layer maps the action to a function ID and input payload.
Function executes with secrets
The runtime injects environment configuration and returns structured JSON to the caller.
Continue or compensate
On failure, retry, branch, or enqueue a cleanup step using the same platform primitives.
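The three steps above can be sketched as one small orchestration loop. The tool registry, the stubbed invoker, and enqueueCleanup are assumptions standing in for your gateway call and queue, not Inquir APIs.

```javascript
// Function ID -> invoker. In production each entry would be an HTTP call
// to the gateway; stubbed here so the flow is self-contained.
const tools = {
  search_index: async (input) => ({ ok: true, results: [input.q] }),
};

async function runAction(action) {
  const invoke = tools[action.functionId]; // step 1: map action to a function ID
  if (!invoke) throw new Error(`unknown tool: ${action.functionId}`);
  try {
    return await invoke(action.input); // step 2: execute with the input payload
  } catch (err) {
    await enqueueCleanup(action, err); // step 3: compensate instead of cascading
    return { ok: false, error: String(err) };
  }
}

async function enqueueCleanup(action, err) {
  // Push to a queue or pipeline here; intentionally a no-op in this sketch.
}
```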
Code example
Tool endpoint shape
The gateway invokes your handler with API Gateway–style fields: queryStringParameters and pathParameters (or null when empty), plus body as a string for POST requests. Return { statusCode, body } or a plain JSON value; both are accepted.
export async function handler(event) {
  // Query parameters arrive API Gateway–style; fall back when the key is absent.
  const q = event.queryStringParameters?.q ?? '';
  const results = await searchIndex(q);
  // An API Gateway–shaped response; returning a plain JSON value also works.
  return { statusCode: 200, body: JSON.stringify({ results }) };
}
Fit
Fit check
When to use
- You already separate “orchestration brain” from “tool execution”.
- You need multi-step async flows with production logging.
When not to use
- You only call one third-party API with no isolation or scheduling requirements.
FAQ
Frequently asked questions
Do agents have to use HTTP?
HTTP is a simple contract for tools; your orchestrator can wrap local calls during dev and remote calls in production.
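One way to keep that contract while swapping transports, sketched below. TOOL_BASE_URL, the route shape, and makeTool are illustrative assumptions: unset, the tool is a direct local call; set, it becomes a fetch to the gateway.

```javascript
// Wrap a tool so dev calls the function directly and production goes over HTTP.
function makeTool(name, localImpl) {
  const base = process.env.TOOL_BASE_URL; // unset in dev, gateway URL in prod
  if (!base) return localImpl;
  return async (input) =>
    (await fetch(`${base}/${name}`, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(input),
    })).json();
}

// Same call site either way; only the transport changes.
const search = makeTool('search', async ({ q }) => ({ results: [q] }));
```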
How are secrets handled?
Bind secrets to the workspace or function in the product UI. They appear as environment variables at runtime, so API keys never belong in prompts, client bundles, or committed files.
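Inside a handler that looks like the sketch below, with the variable name PAYMENTS_API_KEY as a placeholder for whatever you bound in the UI:

```javascript
// Read a workspace-bound secret at runtime; it was injected as an env var.
export async function handler(event) {
  const apiKey = process.env.PAYMENTS_API_KEY;
  if (!apiKey) {
    // Fail fast with a structured error instead of leaking details upstream.
    return { statusCode: 500, body: JSON.stringify({ error: 'missing secret' }) };
  }
  // Use apiKey in an Authorization header here; never echo it in the response.
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
}
```

The key exists only in the runtime environment, so it never appears in the model context or the response body.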
Can I mix languages per tool?
Yes. Different functions can target Node.js, Python, or Go depending on library support.
What about long-running jobs?
Return quickly from the tool’s HTTP handler when you can, then continue with a pipeline or async job so the user-facing path stays responsive and retries stay predictable.