This guide shows how to wire up an AI agent with a persistent workspace disk and a bash tool backed by serverless execution. The agent can ls, cat, grep, write files, run scripts — anything bash can do — against data that lives in your S3 bucket. The whole thing is about 30 lines.

1. Create a workspace disk

import { Archil } from "@archildata/client/api";

const archil = new Archil();

const { disk } = await archil.disks.create({
  name: "agent-workspace",
  mounts: [{ type: "s3", bucketName: "agent-bucket" }],
});
The disk is immediately usable — the file system grows and shrinks with your bucket and survives every agent run.

2. Wrap disk.exec as a tool

Using the Vercel AI SDK, an exec tool is a handful of lines:
import { generateText, tool } from "ai";
import { z } from "zod";

const bash = tool({
  description: "Run a shell command inside the workdir.",
  inputSchema: z.object({
    command: z.string(),
  }),
  execute: async ({ command }) => {
    const { stdout, stderr, exitCode } = await disk.exec(command);
    return `exit ${exitCode}\n${stdout}${stderr ? `\n${stderr}` : ""}`;
  },
});
A few things worth calling out about the tool shape:
  • Keep the description minimal. The agent doesn’t need to know it’s on Archil, or that this is a “workspace disk” — those are implementation details that cost reasoning tokens. "Run a shell command inside the workdir." is enough.
  • Return a single string, not an object. Prefix with exit N and let the model read the output. Splitting stdout and stderr into fields forces the model to think about which field to read first; most of the time it doesn’t matter.
  • Don’t over-describe the argument. z.string() without a .describe() is fine — the model knows what a shell command is.
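The single-string return shape from the second bullet is easy to unit-test if you factor it out. A minimal sketch (formatExecResult is a hypothetical helper name; the body mirrors the tool above):

```typescript
// Hypothetical helper mirroring the tool's return shape:
// "exit N", then stdout, then stderr only when it is non-empty.
function formatExecResult(exitCode: number, stdout: string, stderr: string): string {
  return `exit ${exitCode}\n${stdout}${stderr ? `\n${stderr}` : ""}`;
}

// A clean run reads naturally...
console.log(formatExecResult(0, "app.log  notes.md\n", ""));
// ...and a failing one carries its diagnostics in the same single string.
console.log(formatExecResult(1, "", "grep: app.log: No such file or directory\n"));
```

Either way the model reads one string top to bottom, with the exit code first so it immediately knows whether the command worked.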
Under the hood, every call spins up a container with the file system as the working directory, runs the command, and returns the result.

3. Run the agent

import { stepCountIs } from "ai";

const result = await generateText({
  model: "openai/gpt-5.2",
  prompt:
    "List everything on the disk, then find every line in app.log that mentions ERROR.",
  tools: { bash },
  stopWhen: stepCountIs(10), // replaces maxSteps in AI SDK 5
});

console.log(result.text);
The agent will:
  1. Call bash({ command: "ls" }) to discover what’s there.
  2. Call bash({ command: "grep ERROR app.log" }) to find the errors.
  3. Summarize the result.
Because the disk persists, running the same agent tomorrow sees the same files plus anything it wrote. You can also mount the same disk from a local laptop (archil mount ...) to inspect what the agent has been doing.
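The steps the agent took are also visible on the result object after the run. A sketch of pulling out the commands it issued — the Step and ToolCall types here are trimmed-down assumptions about the shape of the AI SDK's result.steps, which carries more fields in practice:

```typescript
// Trimmed-down shapes; the real result.steps objects carry more fields.
type ToolCall = { toolName: string; input: { command: string } };
type Step = { toolCalls: ToolCall[] };

// Collect every bash command the agent issued, in order.
function commandsRun(steps: Step[]): string[] {
  return steps.flatMap((s) => s.toolCalls.map((c) => c.input.command));
}
```

Logging commandsRun(result.steps) after each run is a cheap audit trail for what the agent actually did on the disk.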

Patterns

Persistent agent memory

Write to a known path and the next agent run sees it:
await disk.exec("echo 'learned fact' >> memory.jsonl");
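The read side can be sketched the same way — loadMemory and the system-prompt wording below are made up here, and the only thing assumed about exec is the { stdout, stderr, exitCode } shape this guide already uses:

```typescript
type ExecResult = { stdout: string; stderr: string; exitCode: number };
type ExecFn = (command: string) => Promise<ExecResult>;

// Read memory.jsonl if it exists; "|| true" keeps the exit code 0 on a fresh disk.
async function loadMemory(exec: ExecFn): Promise<string | undefined> {
  const { stdout } = await exec("cat memory.jsonl 2>/dev/null || true");
  return stdout.trim() ? `Notes from previous runs:\n${stdout.trim()}` : undefined;
}
```

Pass the result as the system prompt (or prepend it to the prompt) so each run starts from what earlier runs learned.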

Fan-out across the bucket

A map-reduce across every file in a directory:
const files = (await disk.exec("ls logs")).stdout.trim().split("\n");

const results = await Promise.all(
  // quote the path so filenames with spaces don't break the command
  files.map((f) => disk.exec(`grep -c ERROR "logs/${f}"`)),
);
Each exec gets its own container — this scales horizontally without touching your local compute.
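The reduce half of the map-reduce is then plain TypeScript. A sketch (totalErrors is a hypothetical name), keeping in mind that grep -c prints one integer per file and exits 1 when a file has no matches:

```typescript
// Sum the per-file counts; tolerate empty or non-numeric stdout
// from files with zero matches.
function totalErrors(results: { stdout: string }[]): number {
  return results.reduce((sum, r) => sum + (parseInt(r.stdout.trim(), 10) || 0), 0);
}
```

Because of that non-zero exit code, a fan-out like this should key off stdout rather than treating exit 1 as a failure.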

Running a local agent, too

If the agent runs on your laptop but wants the same workspace, mount the disk alongside:
archil mount aws-us-east-1/agent-workspace /mnt/data
Reads and writes from the local mount are read-after-write consistent with anything the exec-backed bash tool does.

Next steps