
What are sub-agents?

Workshop’s main agent can spin up helper agents to handle specific tasks inside the same workflow. These are called sub-agents. Each sub-agent runs in its own context window: it reads files, runs tools, and does its work without consuming the main agent’s context, and when it’s done, only a short summary comes back to your main conversation. Sub-agents don’t all need to use the same model. They can use different models from different providers — and in Workshop Desktop, they can even run in different environments, including local models on your device.

Three sub-agents researching weather APIs, React patterns, and Tailwind layouts in parallel, each running on Claude Haiku 4.5 with a live progress indicator.

Why this matters

You no longer need one model to do everything. Different parts of a workflow can be delegated to the models best suited for those parts.

Better performance

Some models are better at certain types of tasks than others. By delegating subtasks to better-fit models, the overall workflow produces better results than any single model could.

Lower cost

Not every task needs a top-tier model. Simpler or narrower subtasks can be routed to more affordable models, reducing overall cost without sacrificing quality where it matters.

Faster workflows

Some tasks benefit more from speed than from maximum intelligence — reading large volumes of text, scanning files, repetitive processing. These can be delegated to faster models to keep things moving.

More usable context over time

Sub-agents do their reading and tool-running in their own context, so only their summaries land in the main agent’s window. You can keep building in the same conversation without hitting limits as quickly.

Privacy and control

Workshop can combine cloud models with local models. That means you can keep your main workflow powered by a frontier cloud model, but route a sensitive task to a local model running on your device. This is especially useful when a task involves:
  • Private data or sensitive internal files
  • Content you don’t want sent to an external inference provider
  • Compliance requirements around data residency
Local model routing requires Workshop Desktop. Workshop Cloud uses cloud-hosted models only.

The real differentiator

It’s not just “multiple models.” Workshop lets you orchestrate multiple models, from multiple providers, across cloud and local environments, in the same workflow — frontier cloud models, open-source models, and local models working together in one product experience. Use the best AI for each part of the job, instead of forcing one model to do everything.

How sub-agents work

As conversations grow longer, the agent’s performance gradually declines — it may lose track of earlier decisions, repeat work, or miss details it would have caught earlier. This happens because every AI conversation has a context window: the total amount of information the agent can work with at once. Everything counts — your messages, files it reads, tool outputs, its own responses. You can see the context window filling up as a percentage in the UI. As it fills, quality drops. Eventually, you’ll need to either start fresh or use /compact to summarize and continue.

Sub-agents solve this by working outside your main conversation: each one gets its own fresh context window, does its work, and returns only a short summary. That keeps your main conversation lean and focused.

Why sub-agents matter

Take on bigger projects

Without sub-agents, every file read and tool call fills your context window, limiting how complex a task can be. Sub-agents handle the heavy lifting in their own space, so your main conversation stays focused on the big picture and stays sharp for longer.

Get higher quality results

A sub-agent with a single clear objective — “review this diff for security issues” or “explore how the payment flow works” — dedicates its full attention to that one job, without being distracted by the rest of your conversation.

Run work in parallel

Up to 3 sub-agents run simultaneously, each potentially on a different model. A quick search uses a fast model; a thorough code review uses a more capable one. You get speed and specialization without managing any of it.

Built-in agents

Workshop includes six specialized agents, each with a focused toolset:
| Agent | What it does | Default model | Why | Best for |
|-------|--------------|---------------|-----|----------|
| explore | Searches code, reads files, traces call chains. Read-only. | Haiku 4.5 | Fast read-only searches don’t need a heavy model | “Where is this defined?”, “How does auth work?”, pattern searches |
| plan | Analyzes the codebase, produces structured implementation plans with steps, critical files, and risks. | Inherits yours | Benefits from the same model you’re reasoning with | Planning before coding — especially multi-step features |
| verification | Adversarial checker that reviews your changes for regressions, missing tests, and edge cases. Returns a PASS/FAIL verdict. | Inherits yours | Should match the reasoning depth of your main workflow | Sanity check after implementation, before you move on |
| reviewer | Code review specialist. Hunts for bugs, security issues, and correctness problems. Can run linters and tests. | GPT-5.4 | Strong at code review and reasoning about diffs | PR-style review of a diff or changed files |
| web-search | Searches the web for docs, best practices, API references, and error solutions. | Haiku 4.5 | Speed matters more than depth for lookups | Looking up documentation, debugging error messages |
| frontend | Full-stack frontend dev. Can read, write, and edit files and run dev servers/build tools. | GLM-5.1 | Optimized for frontend and long-horizon UI tasks | Building or fixing UI components, CSS, frontend code |
You can override the default model for any built-in agent mid-conversation — see Choosing models below.

How to trigger sub-agents

Tell Workshop to use sub-agents in your prompt. The key phrase is “use sub-agents” or “delegate” — Workshop will recognize the intent and spin up the right agents:
Build me a landing page. Use sub-agents to research color palette
trends, look up hero section best practices, and find open-source
icon libraries — all in parallel before you start coding.
You can also name specific agents:
Have the reviewer agent check the changes we just made.
Use the explore agent to trace how the payment flow works.
Workshop may also delegate on its own when it judges a task benefits from specialization — but being explicit gives you more control over what gets delegated and how. Multiple sub-agents can run simultaneously. Each one appears as a collapsible card in your chat with a live progress indicator.

All three research agents finished in 73.9 seconds — the main agent immediately synthesizes the results into a plan and starts building.

Limits

  • Sub-agents cannot spawn their own sub-agents (depth limit of 1)
  • Each sub-agent starts fresh — it cannot see your conversation history, so include everything it needs in the delegation prompt
  • Sub-agents complete autonomously — they cannot ask you questions mid-task

Choosing models for sub-agents

Each built-in agent has a default model chosen for its task type — shown in the Built-in agents table above. You can override any of them mid-conversation by just asking.

Changing a model mid-conversation

You don’t need to configure anything upfront. Just tell Workshop which model you want a sub-agent to use, and it will apply that for the delegation:
Have the reviewer agent check these changes, but use Gemini 3.1 Pro.
Run the explore agent on Sonnet for this one — I need deeper analysis.
Use Claude Opus for the plan agent on this refactor.
Workshop passes the model assignment through to the sub-agent when it delegates. The available models are:
| Model | Provider | Best for | Speed | Cost |
|-------|----------|----------|-------|------|
| haiku | Anthropic | Fast, focused tasks — searches, file scans, checklists | Fastest | Low |
| sonnet | Anthropic | Balanced — code review, refactoring, documentation | Fast | Medium |
| opus | Anthropic | Complex reasoning — architecture, multi-file analysis | Slower | Highest |
| gpt-5.4 | OpenAI | Backend coding, code review, general-purpose reasoning | Fast | Medium |
| gpt-5.4-mini | OpenAI | Lighter tasks at lower cost | Fast | Low |
| gemini-3.1-pro | Google | Frontend dev, long-context analysis (1M token window) | Fast | Medium |
| glm-5.1 | Z.ai | Frontend dev, long-horizon tasks, cost-efficient coding | Fast | Lowest |

Create your own agents

Workshop’s six built-in agents handle most workflows out of the box — you don’t need to create custom agents to use sub-agents. This section is for teams who want to define their own reusable agents for specialized or repeated tasks.
Beyond the built-in agents, you can define custom agents — reusable markdown files that give Workshop specialized personas, toolsets, and instructions for tasks you run repeatedly. Workshop implements the Agent Skills open standard — the same format used by Claude Code, Cursor, GitHub Copilot, Gemini CLI, OpenAI Codex, and many other tools. Agents you create for Workshop work in any compatible client, and vice versa.

Quick start

Create an agents/ directory in one of the supported paths and add a markdown file:
your-project/
├── .workshop/agents/    # Workshop-native (recommended)
├── .agents/agents/      # Cross-client standard path (works in all compatible tools)
└── .claude/agents/      # Claude Code compatible
Pick whichever path fits your team. If you want your agents to work across Workshop, Claude Code, Cursor, and other tools, use .agents/agents/. If you only use Workshop, .workshop/agents/ is the simplest choice. Each file is one agent. The filename (minus .md) becomes the agent name.
---
description: Reviews code changes for quality, test coverage, and common pitfalls.
model: haiku
tools:
  - Read
  - Grep
  - Bash
---

You are a QA reviewer. When given a branch or set of changes:

1. Read every changed file
2. Check for missing error handling, edge cases, and test coverage gaps
3. Verify naming conventions match the surrounding code
4. Report findings as a numbered list with file:line references
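The quick start above can also be scripted. A minimal sketch in Python — the `qa-reviewer` filename and the abbreviated instructions are illustrative, not a required name:

```python
from pathlib import Path

# Create the Workshop-native agents directory at the project root.
agents_dir = Path(".workshop/agents")
agents_dir.mkdir(parents=True, exist_ok=True)

# One file per agent; the filename (minus .md) becomes the agent name.
agent_file = agents_dir / "qa-reviewer.md"
agent_file.write_text(
    "---\n"
    "description: Reviews code changes for quality, test coverage, and common pitfalls.\n"
    "model: haiku\n"
    "tools:\n"
    "  - Read\n"
    "  - Grep\n"
    "  - Bash\n"
    "---\n"
    "\n"
    "You are a QA reviewer. Report findings as a numbered list\n"
    "with file:line references.\n"
)
```

After this runs, asking Workshop to “use the qa-reviewer agent” should resolve to the new file.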

Agent file format

Each agent file uses YAML frontmatter followed by markdown instructions:
---
description: What this agent does (required — this is how Workshop decides when to use it)
model: haiku
tools:
  - Read
  - Grep
  - Bash
maxTurns: 10
---

Your detailed instructions go here. This is the agent's system prompt —
tell it what role to play, what steps to follow, and what output format to use.
Frontmatter fields:

| Field | Required | Description |
|-------|----------|-------------|
| description | Yes | What the agent does. Workshop uses this to match your requests to the right agent. Be specific. |
| model | No | Model to use (e.g., haiku, sonnet, opus — see the model table above for the full list). Defaults to your conversation’s model. Use haiku for fast, focused tasks. |
| tools | No | List of tools the agent can use (e.g., Read, Bash, Grep, Edit). Defaults to all tools. |
| maxTurns | No | Maximum number of turns before the agent stops. Useful for keeping agents focused. |
| disallowedTools | No | Tools the agent cannot use, even if otherwise available. |
| hidden | No | If true, the agent won’t appear in the agent list but can still be used by name. |
| memory | No | Memory scope: user, project, or local. Controls where the agent stores learned context. |
The description field is crucial — it’s how Workshop decides which agent to delegate to. Write it like a job title and responsibility summary, not a vague label.
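To make the format concrete, here is a rough sketch of how a client might split such a file into frontmatter and instructions. This is an illustration, not Workshop’s actual implementation — it hand-rolls flat `key: value` fields and simple `- item` lists, where a real parser would use a YAML library:

```python
def parse_agent_file(text):
    """Split an agent file into a frontmatter dict and an instruction body.

    Handles only flat `key: value` fields and simple `- item` lists —
    enough for the fields in the table above.
    """
    _, frontmatter, body = text.split("---", 2)
    meta, current_list = {}, None
    for raw in frontmatter.strip().splitlines():
        line = raw.strip()
        if line.startswith("- ") and current_list is not None:
            meta[current_list].append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:
                meta[key] = value
                current_list = None
            else:  # a key with no value starts a list (e.g. tools:)
                meta[key] = []
                current_list = key
    return meta, body.strip()
```

Feeding it the agent file above would yield a dict with `description`, `model`, and `tools` keys, plus the markdown body that becomes the agent’s system prompt.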

Where to put agents

Place agents in .workshop/agents/ at your project root. These are shared with your team via version control — great for project-specific workflows like deployment checks, code review standards, or domain-specific analysis.
your-project/
└── .workshop/
    └── agents/
        ├── deploy-check.md
        ├── api-reviewer.md
        └── db-migration-validator.md
Project agents take precedence over personal agents when names conflict.

Cross-client compatibility

Workshop implements the Agent Skills open standard, which means your custom agents are portable across a growing ecosystem of AI development tools — including Claude Code, Cursor, GitHub Copilot, Gemini CLI, OpenAI Codex, Roo Code, and more. Workshop discovers agents from three directory paths in priority order:
| Path | Purpose |
|------|---------|
| .workshop/agents/ | Workshop-native (recommended) |
| .agents/agents/ | Cross-client standard path — works in any compatible tool |
| .claude/agents/ | Claude Code compatibility |
If the same agent name exists in multiple directories, the first match wins.
If your team uses multiple AI tools, put your agents in .agents/agents/ — they’ll be discovered by Workshop, Claude Code, Cursor, and every other tool that supports the Agent Skills standard. One set of agents, every tool.
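The priority order amounts to a first-match lookup. A sketch of that resolution logic — again an illustration of the rule, not Workshop’s internals:

```python
from pathlib import Path

# Discovery paths in priority order, per the table above.
AGENT_DIRS = [".workshop/agents", ".agents/agents", ".claude/agents"]

def resolve_agent(name, project_root="."):
    """Return the first matching agent file for `name`, or None."""
    for rel in AGENT_DIRS:
        candidate = Path(project_root) / rel / f"{name}.md"
        if candidate.is_file():
            return candidate  # first match wins
    return None
```

With `deploy-check.md` present in both `.workshop/agents/` and `.claude/agents/`, `resolve_agent("deploy-check")` returns the `.workshop/agents/` copy.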

Writing effective agents

The quality of your agent depends on three things: a precise description, the right model, and clear instructions.

Descriptions that trigger reliably

The description field is how Workshop decides which agent to delegate to. Front-load the key action and context:
# Too vague — when would Workshop use this?
description: Helps with code

# Too broad — matches everything
description: A general purpose assistant for development tasks

# Missing the "when" — what triggers it?
description: Knows about our API conventions

# Specific action plus trigger — reliable
description: Reviews API route handlers for missing auth checks. Use after changing endpoint code.

Choosing the right model

Workshop supports models from multiple providers. Refer to the full model table above for the complete list with speed and cost guidance. When setting the model field in your agent file:
  • Use haiku for fast, focused tasks (searches, file scans, checklists)
  • Use sonnet for balanced tasks (code review, refactoring, documentation)
  • Use opus for complex reasoning (architecture decisions, multi-file analysis)
  • For frontend-heavy agents, glm-5.1 or gemini-3.1-pro are strong choices
  • For backend code review, gpt-5.4 excels
If you omit model, the agent inherits your conversation’s current model.

Structuring instructions

Write instructions like you’re briefing a capable colleague who has no context. Include:
  1. Role — Who the agent is and what lens it uses
  2. Steps — Numbered sequence of what to do
  3. Constraints — What to avoid or what boundaries to respect
  4. Output format — How to structure the response
---
description: Reviews PRs for security issues including injection, auth bypass, and data exposure.
model: haiku
tools:
  - Read
  - Grep
  - Bash
maxTurns: 12
---

You are a security reviewer. For each changed file:

1. Read the diff with `git diff main...HEAD`
2. Check for:
   - SQL injection (string concatenation in queries)
   - XSS (unescaped user input in templates)
   - Auth bypass (missing permission checks)
   - Secret exposure (hardcoded keys, tokens, passwords)
3. For each finding, cite the exact file and line

## Output

Return findings as:

| Severity | File:Line | Issue | Fix |
|----------|-----------|-------|-----|
| HIGH | src/api.ts:42 | Raw SQL interpolation | Use parameterized query |

If no issues found, say "No security issues detected" — don't fabricate findings.

Agents vs skills

Workshop supports both agents and skills. They serve different purposes:
| | Agents | Skills |
|---|--------|--------|
| What they are | Autonomous sub-conversations with their own model, tools, and context window | Instruction documents loaded into your current conversation |
| How they run | In a separate context — can’t see your chat history | Inline — become part of your conversation |
| Best for | Delegated tasks: “review this”, “check that”, “research X” | Persistent knowledge: conventions, patterns, reference material |
| File format | Single .md file with frontmatter | Directory with SKILL.md and optional supporting files |
| Location | .workshop/agents/, .agents/agents/, .claude/agents/ | .workshop/skills/, .agents/skills/, .claude/skills/ |
Use an agent when you want to hand off a complete task. Use a skill when you want Workshop to know something while working with you.

Example agents

An agent that runs pre-deployment safety checks with a structured PASS/FAIL verdict:
---
description: Runs pre-deployment safety checks including test status, migration review, and environment variable validation.
model: haiku
tools:
  - Read
  - Grep
  - Bash
maxTurns: 15
---

You are a deployment readiness checker. Run through this checklist
and report a PASS/FAIL verdict for each item:

## Checklist

1. **Tests passing** — Run the test suite and confirm 0 failures
2. **No debugger statements** — Grep for `debugger`, `binding.pry`,
   `console.log` in changed files
3. **Migration safety** — If any new migrations exist, verify they
   have a rollback path
4. **Environment variables** — Check that any new env vars referenced
   in code are documented in `.env.example`
5. **No secrets in code** — Scan for hardcoded API keys, tokens,
   or passwords

## Output format

Return a summary table:

| Check | Status | Details |
|-------|--------|---------|
| Tests | PASS/FAIL | ... |
| ... | ... | ... |

End with a clear **DEPLOY** or **DO NOT DEPLOY** recommendation.
An agent that reads your code and generates API documentation:
---
description: Generates API documentation from source code. Reads route handlers, extracts endpoints, parameters, and response shapes, and produces markdown documentation.
model: sonnet
tools:
  - Read
  - Grep
maxTurns: 20
---

You are an API documentation writer. Given a codebase:

1. Find all route/endpoint definitions (look for decorators like
   @app.get, @router.post, or framework-specific patterns)
2. For each endpoint, extract:
   - HTTP method and path
   - Request parameters (path, query, body)
   - Response shape and status codes
   - Authentication requirements
3. Organize by resource (e.g., /users/*, /projects/*)

## Output format

For each endpoint:

### `METHOD /path`

> Brief description

**Auth:** Required / Public
**Parameters:**
| Name | In | Type | Required | Description |
|------|----|------|----------|-------------|

**Response:** `200 OK`
```json
{ "example": "response" }
```

Sort endpoints alphabetically within each resource group.
A read-only agent that helps new team members understand a codebase:
---
description: Explores and explains a codebase for new team members. Maps architecture, identifies key patterns, and answers questions about how things work.
model: haiku
tools:
  - Read
  - Grep
maxTurns: 25
---

You are a codebase guide helping a new team member get oriented.

Start by understanding the project structure:
1. Read the top-level directory and key config files
2. Identify the tech stack (languages, frameworks, build tools)
3. Map the main modules and their responsibilities
4. Find the entry points (main files, route definitions, etc.)

Then answer the user's specific questions with:
- File paths and line numbers for every claim
- Diagrams (ASCII) for complex relationships
- "I don't know" when you genuinely can't determine something

Never guess. If you're unsure, say so and suggest where to look.
An agent that generates tests for existing code:
---
description: Writes tests for existing code. Reads the implementation, identifies edge cases, and generates comprehensive test files following the project's testing conventions.
model: sonnet
tools:
  - Read
  - Grep
  - Edit
  - Bash
maxTurns: 20
---

You are a test engineer. When asked to write tests:

1. Read the target file and understand its public interface
2. Grep for existing test files to learn the project's testing
   conventions (framework, assertion style, file naming)
3. Identify test cases:
   - Happy path for each public function/method
   - Edge cases (empty input, null, boundary values)
   - Error cases (invalid input, missing dependencies)
4. Write the test file following the project's existing patterns
5. Run the tests to verify they pass

## Rules

- Match the existing test style exactly (don't introduce new patterns)
- Use real implementations over mocks when possible
- Each test should be independent — no shared mutable state
- Name tests descriptively: what's being tested and what's expected

Best practices

Keep agents focused

One agent, one job. An agent that “reviews code, writes tests, and deploys” will do all three poorly. Create separate agents for each concern.

Use haiku for speed

Most custom agents don’t need the most powerful model. haiku is fast, cheap, and capable enough for grep-and-report tasks. Save sonnet and opus for complex reasoning.

Limit turns

Set maxTurns to prevent agents from spiraling. A focused agent should finish in 10-15 turns. If it needs more, the instructions probably aren’t specific enough.

Restrict tools

Only give agents the tools they need. A read-only reviewer doesn’t need Edit or Bash. Fewer tools means fewer ways to go wrong.

Specify output format

Tell the agent exactly how to structure its response — tables, checklists, JSON. Without this, you’ll get inconsistent free-form text.

Version control your agents

Commit project agents to your repo. They’re documentation of your team’s workflows, reviewable in PRs, and automatically available to anyone who clones the project.