Overview

Agent Settings give you fine-grained control over Workshop’s behavior. Access them by clicking the wrench icon in the interface. These settings affect how Workshop operates during conversations and can be adjusted at any time.

Custom instructions

Custom Instructions set consistent guidelines that Workshop follows across all your conversations. They act as extensions to Workshop’s system prompt, reflecting your personal preferences and requirements.

What they do

Custom Instructions ensure Workshop behaves consistently without you repeating the same guidance in every conversation. They apply globally across all projects in both Workshop Cloud and Workshop Desktop.

Examples

Every visualization should have a transparent background and white labels for presentations.
When transcribing audio files, default to MLX Whisper rather than Base Whisper for performance.
Always commit working code after each major milestone and explain what was accomplished.
When building web applications, prioritize clean, accessible design and responsive layouts.
Use TypeScript for all JavaScript projects.
Keep me updated on progress using humor and casual language. Don't be overly formal.

Best practices

  • Be specific but not restrictive. "Include proper error handling in APIs" is good. "Only use Express 4.18.2 with these exact middleware packages" is too rigid.
  • Focus on consistent preferences. Instructions that apply across conversations work best — project-specific rules belong in .memex/rules.md.
  • Match your skill level. Beginners: "Explain technical concepts in simple terms and provide step-by-step instructions." Advanced: "Focus on efficiency and assume familiarity with development tools."
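
Project-specific rules mentioned above belong in .memex/rules.md rather than in Custom Instructions. A hypothetical example of what such a file might contain (the contents are entirely up to you):

```markdown
# Project rules
- This repo uses pnpm, not npm: run `pnpm install` and `pnpm test`.
- All API routes live in `src/routes/`; follow the existing file naming.
- Target Node 20; do not add new dependencies without asking first.
```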

How to set them

  1. Click the wrench icon to open settings
  2. Navigate to the Custom Instructions section
  3. Enter your guidelines in the text area
  4. Click Save to apply across all future conversations

Model selector

Choose which AI model powers your conversations, balancing speed, capability, and cost.
Workshop currently uses the Claude model family to power all tiers. The specific models behind each tier are upgraded over time to always use the best available fit.

Available tiers

| Tier | Model | Best for | Credit cost |
|------|-------|----------|-------------|
| Fast | Claude Haiku | Quick tasks, simple questions, rapid iteration | $ |
| Balanced | Claude Sonnet | Most everyday development work | $$ |
| Genius | Claude Opus | Complex problems requiring deep reasoning | $$$ |

Switching models

The model selector appears as a chip in the message input toolbar (CPU icon with the model name). Click it to open the dropdown and select a different tier. You can switch models at any time during a conversation. A common pattern:
  1. Fast for initial exploration and brainstorming
  2. Balanced for building features and standard development
  3. Genius for complex architectural decisions or challenging bugs
  4. Fast for quick follow-up questions

Code execution control

Control when Workshop executes code on your system.
In Manual mode, Workshop asks for your approval before running each code block, which is useful while learning or in shared environments. In Automatic mode, Workshop executes code blocks without asking; this is the fastest workflow and is recommended once you're comfortable with Workshop's behavior.
Toggle between Manual and Automatic using the Code Execution switch in Agent Settings.

Max turns

Limit how many autonomous actions Workshop takes before stopping for your input.

How it works

By default, Workshop continues working autonomously until it completes your request. When Max Turns is enabled, Workshop stops after the specified number of turns and waits for your guidance.
  • Toggle Max Turns On or Off with the switch
  • When enabled, use the slider to set the limit: 1 to 25 turns
  • The counter resets when you send a new message
| Use case | Turns | Why |
|----------|-------|-----|
| Learning / reviewing | 1–3 | See each step of the implementation |
| Moderate oversight | 5–10 | Regular check-ins without micromanagement |
| Full automation (default) | Off | Let Workshop complete tasks without interruption |
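
Conceptually, a turn cap works like the toy loop below (an illustrative sketch, not Workshop's actual implementation; `take_turn` and the task dict are stand-ins for real agent actions):

```python
def take_turn(task):
    # Stand-in for one autonomous action (running code, editing a file, ...).
    task["steps_left"] -= 1
    task["done"] = task["steps_left"] <= 0

def run_agent(task, max_turns=None):
    """Act until the task is done, or pause early when a turn cap is set."""
    turns = 0
    while not task["done"]:
        if max_turns is not None and turns >= max_turns:
            return ("paused", turns)  # stop and wait for the user's guidance
        take_turn(task)
        turns += 1
    return ("complete", turns)

# A task needing 8 actions, capped at 5 turns: pauses for input.
print(run_agent({"done": False, "steps_left": 8}, max_turns=5))  # ('paused', 5)
# The same task with the cap off runs to completion.
print(run_agent({"done": False, "steps_left": 8}))               # ('complete', 8)
```

Sending a new message would reset the counter to 0, matching the reset behavior described above.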

Thinking mode

Enable extended reasoning for better problem-solving and transparency.

How it works

When enabled, Workshop works through problems step-by-step before jumping to solutions, showing its reasoning process. This includes problem analysis, consideration of different approaches, and reasoning behind technical decisions.
Toggle Thinking On or Off with the switch. When enabled, select a Thinking Level with the slider:
| Level | Label | Description |
|-------|-------|-------------|
| 1 | Light | Simple problem breakdown, quick decision explanations |
| 2 | Balanced (default) | Thorough analysis, multiple approaches considered |
| 3 | Deep | Comprehensive exploration, detailed trade-off analysis |
| 4 | Max | Extensive reasoning, multiple solution paths explored |

Why use thinking mode

  • Better outcomes: Structured reasoning leads to more thoughtful solutions
  • Learning opportunity: See how expert-level thinking approaches technical problems
  • Quality assurance: Transparent reasoning helps you evaluate the approach before implementation
  • Debugging aid: Reasoning traces help identify where issues occurred
Despite using more tokens per message, Thinking Mode often reduces overall credit consumption by producing more accurate solutions on the first attempt.
For complex problems, pairing Genius model with Thinking Level 3–4 produces exceptional results, though at higher credit cost.

Long Context Mode

Extend the context window from the standard 200k tokens to 1M tokens for longer conversations and larger projects.

When to enable

  • Working on large projects with extensive codebases
  • Planning extended development sessions
  • Projects that require maintaining detailed context across many turns
Usage beyond the standard 200k context window may result in higher credit usage per turn, in alignment with Claude Sonnet 4.6 long context pricing.
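
A quick way to gauge whether a project is approaching the standard window is the rough 4-characters-per-token heuristic (an approximation only; real tokenizers vary by content and language):

```python
def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

STANDARD_WINDOW = 200_000   # tokens in the standard context window
LONG_WINDOW = 1_000_000     # tokens with Long Context Mode enabled

history = "x" * 1_200_000   # stand-in for a ~1.2M-character conversation
tokens = estimate_tokens(history)
print(tokens > STANDARD_WINDOW)  # True: consider enabling Long Context Mode
print(tokens > LONG_WINDOW)      # False: still fits in the 1M window
```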

Local model setup (Desktop only)

Connect Workshop Desktop to a local Anthropic Messages API-compatible server, such as llama.cpp or Ollama with an adapter.
  1. Enter base URL: Provide the URL of your local model server (e.g., http://127.0.0.1:8080).
  2. Set model name (optional): Enter a display name for the model (e.g., qwen3-coder-30b). This is for your reference only.
  3. Test connection: Click Test Connection to verify Workshop can reach your local server. On success, the local model is automatically enabled.
Once configured, you’ll see a Configured badge next to the Local Model Setup section.
For a full guide on setting up and using local models, see Local Models.
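
To sanity-check a server outside Workshop, you can assemble a request in the Anthropic Messages API shape by hand. The sketch below only builds the request; the field names follow Anthropic's published Messages API, but whether your local adapter honors every field is an assumption worth verifying:

```python
import json

def build_messages_request(base_url, model, prompt, max_tokens=256):
    """Assemble a Messages API-style request for a local server."""
    url = base_url.rstrip("/") + "/v1/messages"
    headers = {
        "content-type": "application/json",
        "anthropic-version": "2023-06-01",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_messages_request(
    "http://127.0.0.1:8080", "qwen3-coder-30b", "Say hello")
print(url)  # http://127.0.0.1:8080/v1/messages
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=payload)
```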

Task Management (Beta)

An experimental feature that enables Workshop to create and maintain structured task lists when implementing project plans. When enabled, Workshop automatically breaks down plans into discrete, trackable tasks.
This feature is in beta. For more reliable activation, explicitly ask Workshop to “use task management” in your prompt after enabling it.

Quick reference by experience level

Beginners
  • Custom Instructions: "Explain technical concepts clearly and provide step-by-step guidance"
  • Code Execution: Manual approval for learning
  • Max Turns: 3–5 for oversight
  • Thinking Mode: Level 2 for educational benefit

Experienced developers
  • Custom Instructions: Focus on tech stack preferences and coding standards
  • Code Execution: Automatic for speed
  • Max Turns: 10+ or off
  • Thinking Mode: Level 3–4 for complex problems only

Teams
  • Custom Instructions: Include team coding standards and review practices
  • Code Execution: Manual for shared environments
  • Max Turns: 5–7 for reviewable progress
  • Long Context: Enabled for project continuity