Workshop has two categories of settings: Agent Settings control how the AI behaves during conversations, and App Settings configure platform-level capabilities and integrations.

Agent Settings

Access Agent Settings by clicking the wrench icon in the interface. These affect how Workshop operates during conversations.

Custom instructions

Custom Instructions set consistent guidelines that Workshop follows across all your conversations. They act as extensions to Workshop’s system prompt, reflecting your personal preferences. Examples:
  • Every visualization should have a transparent background and white labels.
  • When building web applications, prioritize clean, accessible design and responsive layouts.
  • Use TypeScript for all JavaScript projects.
  • Always commit working code after each major milestone.
Best practices:
  • Be specific but not restrictive. "Include proper error handling in APIs" is good. "Only use Express 4.18.2 with these exact middleware packages" is too rigid.
  • Focus on consistent preferences. Instructions that apply across conversations work best — project-specific rules belong in .workshop/rules.md.
  • Match your skill level. Beginners: "Explain technical concepts in simple terms." Advanced: "Focus on efficiency and assume familiarity with development tools."

Model selector

Choose which AI model powers your conversations. The model selector appears as a chip in the message input toolbar. Models are grouped by provider. Each model shows interactive ratings for speed, intelligence, reliability, and cost so you can pick the right tradeoff for your task.

Anthropic (Claude)

Model | Best for | Relative cost
Claude Haiku 4.5 | Fast tasks, rapid iteration | $
Claude Sonnet 4.6 | Everyday development — the default | $$$$
Claude Opus 4.7 | Complex problems requiring deep reasoning | $$$$$
Claude Opus 4.6 | Deep reasoning with high reliability | $$$$$
OpenAI (GPT)

Model | Best for | Relative cost
GPT-5.4 Nano | Lightweight, low-cost text tasks | $
GPT-5.4 Mini | Balanced speed and capability | $$
GPT-5.4 | Complex tasks requiring high intelligence | $$$$

Google (Gemini)

Model | Best for | Relative cost
Gemini 3.1 Flash-Lite | Fastest option, great for simple tasks | $
Gemini 3.1 Pro | Capable reasoning with strong intelligence | $$$$

Open Source

Model | Best for | Relative cost
GLM-5.1 | High-intelligence open-source (text only) | $
GLM-5 | Capable open-source alternative (text only) | $
GLM-5 Turbo | Fast open-source option (text only) | $
GLM-5V Turbo | Open-source with image understanding | $
Kimi K2.5 | Multimodal open-source — attach screenshots | $$
Open Source models are significantly cheaper than premium models — 4–8× lower cost per turn. They’re a good choice for straightforward tasks where you want to conserve credits.
You can switch models at any time during a conversation. A common pattern: Pro/Fast for exploration → Balanced for building → Genius for hard problems → Pro/Fast for follow-ups.

Code execution control

Control when Workshop executes code on your system. With manual approval, Workshop asks before running each code block; with automatic execution, it runs code blocks without asking. Automatic execution is recommended once you’re comfortable with Workshop’s behavior.

Max turns

Limit how many autonomous actions Workshop takes before stopping for your input. By default, Workshop continues working until it completes your request. When enabled, use the slider to set a limit from 1 to 25 turns.
Use case | Turns | Why
Learning / reviewing | 1–3 | See each step of the implementation
Moderate oversight | 5–10 | Regular check-ins without micromanagement
Full automation (default) | Off | Let Workshop complete tasks without interruption

Thinking mode

When enabled, Workshop works through problems step-by-step before jumping to solutions, showing its reasoning process.
Level | Label | Description
1 | Light | Simple problem breakdown, quick decision explanations
2 | Balanced | Thorough analysis, multiple approaches considered
3 | Deep | Comprehensive exploration, detailed trade-off analysis
4 | Max | Extensive reasoning, multiple solution paths explored
Despite using more tokens per message, Thinking Mode often reduces overall credit consumption by producing more accurate solutions on the first attempt.
For complex problems, pairing Genius model with Thinking Level 3–4 produces exceptional results, though at higher credit cost.

Local model setup (Desktop only)

Connect Workshop Desktop to a local Anthropic Messages API-compatible server, such as llama.cpp or Ollama with an adapter.
1. Enter base URL: Provide the URL of your local model server (e.g., http://127.0.0.1:8080).
2. Set model name (optional): Enter a display name for the model (e.g., qwen3-coder-30b). This is for your reference only.
3. Test connection: Click Test Connection to verify Workshop can reach your local server. On success, the local model is automatically enabled.
For a full guide on setting up and using local models, see Local Models.
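Before pointing Workshop at the server, you can sanity-check it yourself by sending a minimal Anthropic Messages API request. The sketch below builds such a request in Python; the base URL and model name echo the examples above, and the actual send is left commented out since it assumes a server is already running at that address.

```python
import json
import urllib.request

def build_messages_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a minimal Anthropic Messages API request body."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(base_url: str, body: dict) -> dict:
    """POST the request to the server's /v1/messages endpoint."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/v1/messages",
        data=json.dumps(body).encode(),
        headers={"content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

body = build_messages_request("qwen3-coder-30b", "Reply with the word 'ok'.")
# send("http://127.0.0.1:8080", body)  # uncomment once your local server is running
```

If the server replies with a well-formed Messages API response here, Workshop’s Test Connection step should succeed too.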

Skills

Skills are a cross-platform core concept in Workshop. In Desktop, use the Skills tab in Settings to install, enable, disable, and import skills that extend the agent’s capabilities.
For the complete skills guide (concept + Desktop workflows), see Core Concepts → Skills.

Task Management

When enabled, Workshop creates and maintains structured task lists when implementing project plans. Workshop automatically breaks down plans into discrete, trackable tasks, updating their status as work progresses. Task lists are stored as JSON files in your project’s .workshop/tasks/ directory, so you can review progress at any time — even outside of Workshop.
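Because task lists live as plain JSON under .workshop/tasks/, a few lines of scripting can summarize progress outside of Workshop. The exact schema isn’t documented here, so the field name `status` below is an assumption for illustration:

```python
import json
from pathlib import Path

def summarize_tasks(tasks_dir: str) -> dict:
    """Count tasks by status across all task-list JSON files in a directory."""
    counts: dict = {}
    for path in Path(tasks_dir).glob("*.json"):
        for task in json.loads(path.read_text()):
            status = task.get("status", "unknown")  # assumed field name
            counts[status] = counts.get(status, 0) + 1
    return counts

# Example: print(summarize_tasks(".workshop/tasks"))
```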

Quick reference by experience level

Beginners:
  • Custom Instructions: "Explain technical concepts clearly and provide step-by-step guidance"
  • Code Execution: Manual approval for learning
  • Max Turns: 3–5 for oversight
  • Thinking Mode: Level 2 for educational benefit
Experienced developers:
  • Custom Instructions: Focus on tech stack preferences and coding standards
  • Code Execution: Automatic for speed
  • Max Turns: 10+ or off
  • Thinking Mode: Level 3–4 for complex problems only
Teams:
  • Custom Instructions: Include team coding standards and review practices
  • Code Execution: Manual for shared environments
  • Max Turns: 5–7 for reviewable progress
  • Context management: Workshop automatically compacts when your context window fills up, but you can also /compact proactively at milestones

App Settings

Access App Settings through the main Settings menu (gear icon). These apply across all projects and conversations.

Secrets management

Secrets are sensitive values like API keys, database passwords, or access tokens that your applications need. Workshop provides a built-in system to store and use them securely. How secrets work:
  • Secure storage: Workshop uses your operating system’s native keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service) to store values. Raw values are never stored in Workshop’s own files.
  • Local access: Secrets remain on your machine and are only accessed when needed by processes Workshop initiates on your behalf.
  • Key tracking: Workshop maintains a list of secret names you’ve stored, but actual values are encrypted and managed by your system’s secure storage.
Managing secrets:
1. Open Settings: Go to Settings (gear icon) and select the Secrets tab.
2. Add a secret: Enter a descriptive key name (e.g., OPENAI_API_KEY, DATABASE_PASSWORD), enter the value, and click Add Secret.
3. Use in conversations: Reference secrets by their key name in your prompts:
Use the API key stored as 'STRIPE_API_KEY' to configure the payment gateway.
Security details:
  • Case insensitive: Workshop finds secrets regardless of case — OPENAI_API_KEY, openai_api_key, and OpenAI_API_Key all work.
  • Permission prompts: The first time Workshop accesses a secret, your OS may ask for permission. You can approve permanently or per-session.
  • Scope: Secrets are available to all projects within Workshop on that machine.

MCP servers (Desktop only)

The Model Context Protocol is a standard for connecting AI models to tools, data sources, and services. Workshop supports both using and building MCP servers. Managing MCP servers: Go to Settings (gear icon) > MCP tab to browse three sections:

Section | Description
Configured Servers | View, enable/disable, and manage your installed MCP servers
Server Directory | Browse and install curated, Workshop-tested servers (Neon, Netlify, GitHub, Context7, and more)
Add Custom Server | Install any MCP server from the broader ecosystem via GitHub URL or local path
Using MCP in conversations: Workshop integrates with enabled MCP servers naturally — often requiring no special instructions:
Use playwright to check if this website loads correctly and take a screenshot.
Use the Neon MCP to create a new table for user profiles in our database.
Building MCP servers: Workshop can build MCP servers from natural language descriptions, test them, and add them to your available tools — all in a single conversation. Best practices:
  • Only enable servers you’re actively using to reduce context overhead
  • Test new servers in non-critical projects first
  • Use Secrets for server authentication credentials
  • Disable unused tools within servers to keep context focused
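Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages; invoking a tool uses the protocol’s tools/call method. The sketch below builds such a request; the tool name and arguments are invented for illustration, not taken from any real server:

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical call to a screenshot tool on a browser-automation MCP server.
print(build_tool_call(1, "take_screenshot", {"url": "https://example.com"}))
```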

Control Center

The Control Center provides quick-action buttons for common development tasks, accessible from within your project.
  • Version control: Initialize Git, make commits, and manage your repository with single clicks. Workshop often handles this automatically during development; the Control Center lets you trigger it at specific moments.
  • Application management: Start, stop, and monitor your applications. Workshop analyzes your project structure and creates custom startup scripts, handling environment activation, dependency loading, and complex startup sequences.
  • Documentation generation: Generate or improve your project’s README.md from the current project structure with one click.
  • Interactive Terminal: Workshop handles command-line tools that require interactive input — scaffolding wizards, interactive installers, deployment setup. Terminal sessions are visible in the Control Center.

Privacy Mode

Your code is never stored anywhere other than your machine. With Privacy Mode enabled, your conversations are also never stored externally. When disabled, Workshop may collect usage and telemetry data to improve the product. Privacy Mode is available to Build and Scale plan users. Manage it from Settings > Other.

Sound settings

Toggle whether Workshop plays a sound notification when the agent finishes a task. Useful for knowing when a long-running operation completes while you’re doing other work.