ComfyUI MCP Server
- ComfyUI MCP Server
- Let Your AI Install This For You
- What is This?
- Key Features
- Quick Start Guide
- Installation
- Tools Reference
- Configuration
- How It Works
- Example Conversations
- Development
- Troubleshooting
- Contributing
- License
- Acknowledgments
An MCP (Model Context Protocol) server that enables AI assistants like Claude to interact with ComfyUI for generating images, audio, video, and more.
Let Your AI Install This For You
Copy and paste this prompt to your AI assistant (Claude, Cursor, etc.) to have it set everything up:
I want to generate images using ComfyUI. Please help me set up the ComfyUI MCP server.
1. First, add the ComfyUI MCP server to my configuration. The Docker config is:
- Command: docker
- Args: run -i --rm --pull always -e COMFYUI_URL=http://host.docker.internal:8000 ghcr.io/shawnrushefsky/comfyui-mcp:latest
2. Once configured, use get_status to check if ComfyUI is running and connected.
3. If ComfyUI isn't installed, use get_install_guide to help me install it.
4. Use list_models to see what models I have available.
5. Use search_templates to find the right workflow for my model.
6. Use get_prompting_guide to learn the correct prompting style for my model.
7. Use get_template to build a workflow and run_workflow to generate a test image.
Tip: If you're using the ComfyUI Desktop app, it runs on port 8000. If you installed ComfyUI manually, change the port to 8188.
What is This?
This MCP server acts as a bridge between AI assistants and ComfyUI, the powerful node-based interface for Stable Diffusion and other generative AI models. It allows Claude and other MCP-compatible AI assistants to:
- Run complex workflows with full control over every node and parameter
- Compose custom workflows using node discovery and building tools
- Create videos using AnimateDiff, Stable Video Diffusion, and other video models
- Generate audio using Stable Audio and other audio models
- Manage your queue - view, cancel, and interrupt jobs
- Help you set up - download models, install ComfyUI, and configure everything
Key Features
Self-Configuring
The server automatically discovers your ComfyUI installation and detects what models and features are available. No manual configuration of capabilities required.
Works Without ComfyUI Running
Even if ComfyUI isn't installed or running, the server provides tools to:
- Guide you through installation
- Download models directly
- Fetch example workflows from documentation
Workflow-First Architecture
All generation happens through run_workflow, giving you full control over the ComfyUI workflow. The server provides comprehensive tools for:
- Templates: Pre-built workflows for common tasks
- Node composition: Build custom workflows node by node
- Validation: Check workflows before running
70+ Example Workflows
Comprehensive library of example workflows from the official ComfyUI documentation, split into easily discoverable entries:
- Flux: Dev, Schnell, Checkpoint variants, Kontext, Fill, Redux, Canny, Depth, ControlNet
- SDXL: Base, Refiner, ReVision (image-guided)
- SD3.5: Separate encoders, Checkpoint, Medium, Turbo, ControlNet
- ControlNet: Scribble, Depth, T2I-Adapter, Pose, Multiple combined
- Inpainting/Outpainting: Basic, dedicated models, various techniques
- Video: SVD, Mochi, LTX-Video, Hunyuan Video, Cosmos, Wan
- And more: Stable Cascade, HiDream, Qwen Image, Audio generation
Template System
Three sources of workflow templates:
- Built-in templates: Standard txt2img for SD1.5, SDXL, and Flux
- Example workflows: 70+ from official ComfyUI docs
- Custom templates: Save and reuse your successful workflows
Workflow Composition Tools
Build custom workflows programmatically:
- build_node: Generate valid node JSON with proper defaults
- get_node_info: Detailed node inputs/outputs with examples
- find_nodes_by_type: Discover nodes by what they accept/produce
- validate_workflow: Check validity before running
Quick Start Guide
Step 1: Install ComfyUI
Option A: Desktop App (Recommended for most users)
- Go to comfy.org/download
- Download for your platform (macOS, Windows, or Linux)
- Install and launch the application
- ComfyUI will automatically set up Python and dependencies
Option B: Manual Installation
# Clone the repository
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
# Install dependencies (using a virtual environment is recommended)
pip install -r requirements.txt
# Run ComfyUI
python main.py
Step 2: Download a Model
You need at least one checkpoint model. Here are popular options:
SDXL (Recommended - high quality, 1024x1024)
- Download sd_xl_base_1.0.safetensors from HuggingFace
- Place in ComfyUI/models/checkpoints/
Flux (State of the art quality)
- Download flux1-schnell.safetensors from HuggingFace
- Place in ComfyUI/models/unet/
- Also need the CLIP text encoders from flux_text_encoders
SD 1.5 (Classic, fast, many LoRAs available)
- Download v1-5-pruned-emaonly.safetensors from HuggingFace
- Place in ComfyUI/models/checkpoints/
Step 3: Configure Your AI Assistant
Add the ComfyUI MCP server to your AI assistant's configuration.
Claude Desktop (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json, Windows: %APPDATA%\Claude\claude_desktop_config.json):
{
"mcpServers": {
"comfyui": {
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"-e", "COMFYUI_URL=http://host.docker.internal:8000",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
}
Note: The ComfyUI Desktop app uses port 8000. If you're running ComfyUI manually, change the port to 8188.
Step 4: Start Generating!
- Make sure ComfyUI is running (http://localhost:8188 should show the UI)
- Restart Claude Desktop
- Ask Claude to generate an image:
Generate an image of a sunset over mountains using Flux
Claude will automatically:
- Connect to your ComfyUI instance
- Search for the right template
- Build and validate a workflow
- Execute and display the result
Installation
Prerequisites
- ComfyUI (desktop app recommended) or manual installation
- One or more checkpoint/model files
- Docker (recommended) or Node.js 18+
Option 1: Docker (Recommended)
Works with any MCP-compatible AI assistant. The Docker image automatically pulls updates.
Claude Desktop
Config file location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"comfyui": {
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"-e", "COMFYUI_URL=http://host.docker.internal:8000",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
}
Claude Code (CLI)
Add to .mcp.json in your project root:
{
"mcpServers": {
"comfyui": {
"type": "stdio",
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"-e", "COMFYUI_URL=http://host.docker.internal:8000",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
}
Or add globally via CLI:
claude mcp add comfyui --transport stdio -- docker run -i --rm --pull always -e COMFYUI_URL=http://host.docker.internal:8000 ghcr.io/shawnrushefsky/comfyui-mcp:latest
Cursor
Add to Cursor's MCP settings (Settings → MCP Servers):
{
"comfyui": {
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"-e", "COMFYUI_URL=http://host.docker.internal:8000",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json:
{
"mcpServers": {
"comfyui": {
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"-e", "COMFYUI_URL=http://host.docker.internal:8000",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
}
Cline (VS Code Extension)
Add to Cline's MCP settings in VS Code:
{
"comfyui": {
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"-e", "COMFYUI_URL=http://host.docker.internal:8000",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
Linux (Any Client)
On Linux, use --network=host instead of host.docker.internal:
{
"mcpServers": {
"comfyui": {
"command": "docker",
"args": [
"run", "-i", "--rm", "--pull", "always",
"--network=host",
"ghcr.io/shawnrushefsky/comfyui-mcp:latest"
]
}
}
}
Port Configuration
- ComfyUI Desktop app (macOS/Windows): uses port 8000 by default
- Manual ComfyUI installation: uses port 8188 by default
Adjust the COMFYUI_URL environment variable accordingly:
- Desktop app: http://host.docker.internal:8000
- Manual install: http://host.docker.internal:8188
Option 2: From Source
git clone https://github.com/shawnrushefsky/comfyui-mcp.git
cd comfyui-mcp
npm install
npm run build
Then configure your MCP client to use the built server:
Claude Desktop:
{
"mcpServers": {
"comfyui": {
"command": "node",
"args": ["/path/to/comfyui-mcp/dist/index.js"]
}
}
}
Claude Code (.mcp.json):
{
"mcpServers": {
"comfyui": {
"type": "stdio",
"command": "node",
"args": ["/path/to/comfyui-mcp/dist/index.js"]
}
}
}
Tools Reference
Setup & Status Tools
get_status
Get the current status of ComfyUI connection and installation.
What's the status of ComfyUI?
get_install_guide
Get platform-specific installation instructions. Recommends the desktop app for most users.
| Parameter | Type | Description |
|---|---|---|
platform | "auto" | "macos" | "windows" | "linux" | Target platform |
How do I install ComfyUI on my Mac?
get_model_guide
Get detailed guidance on downloading and installing models.
| Parameter | Type | Description |
|---|---|---|
modelType | "all" | "checkpoint" | "flux" | "sdxl" | "sd15" | "lora" | "controlnet" | "vae" | Type of model |
How do I set up Flux models?
Template & Workflow Tools
search_templates
Search for workflow templates across built-in, example, and custom sources.
| Parameter | Type | Description |
|---|---|---|
modelType | "sd15" | "sdxl" | "sd3" | "flux" | "any" | Filter by model type |
taskType | "txt2img" | "img2img" | "inpaint" | ... | Filter by task type |
category | string? | Filter by category |
query | string? | Free text search |
includeBuiltIn | boolean? | Include built-in templates (default: true) |
includeExamples | boolean? | Include example workflows (default: true) |
includeCustom | boolean? | Include saved custom templates (default: true) |
Find templates for Flux txt2img
get_template
Build a workflow from a template with your parameters.
| Parameter | Type | Description |
|---|---|---|
templateId | string | Template ID from search_templates |
parameters | object? | Parameters to apply (prompt, model, etc.) |
Get the flux_schnell_txt2img template with prompt "a sunset over mountains"
save_template
Save a workflow as a reusable custom template. Use descriptive names!
| Parameter | Type | Description |
|---|---|---|
name | string | Descriptive template name |
description | string | What this template does |
workflow | object | The workflow JSON |
modelType | string? | Model type (sd15, sdxl, flux, etc.) |
taskType | string? | Task type (txt2img, img2img, etc.) |
category | string? | Category for organization |
tags | string[]? | Tags for searching |
Save this workflow as "portrait_lighting_studio"
delete_template
Delete a custom saved template.
| Parameter | Type | Description |
|---|---|---|
id | string | Template ID to delete |
list_examples
List official ComfyUI example workflows. Over 70 workflows organized by model and use case.
| Parameter | Type | Description |
|---|---|---|
category | string? | Filter by category (basics, sdxl, flux, video, audio, etc.) |
Show me example workflows for Flux
get_example_workflow
Fetch an example workflow from the ComfyUI documentation.
| Parameter | Type | Description |
|---|---|---|
name | string | Example name (e.g., "Flux Schnell Checkpoint") |
variant | number? | Variant index if multiple (default: 0) |
Get the Flux Schnell Checkpoint workflow
extract_workflow
Extract workflow JSON from a ComfyUI-generated PNG image.
| Parameter | Type | Description |
|---|---|---|
source | string | Path to PNG file or URL |
Extract the workflow from this image: /path/to/comfyui_output.png
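ComfyUI embeds the API-format prompt in the PNG's tEXt metadata chunks (keyed "prompt", with the UI-format graph under "workflow"). A rough sketch of where that data lives, assuming a standard PNG layout and skipping CRC verification; this is an illustration, not the server's implementation:

```python
import json
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def extract_png_text(data: bytes) -> dict[str, str]:
    """Return all tEXt chunk keyword/value pairs from a PNG byte string."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    chunks: dict[str, str] = {}
    pos = 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt data is keyword, NUL separator, then Latin-1 text
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks
```

The extract_workflow tool handles this (plus URLs) for you; the sketch only shows where the workflow JSON is stored.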
get_download_url
Get download URL for a model by name.
| Parameter | Type | Description |
|---|---|---|
modelName | string | Model name to look up |
Where can I download flux1-schnell?
Prompting Guide Tools
get_prompting_guide
Get prompting best practices for different model architectures.
| Parameter | Type | Description |
|---|---|---|
modelType | "sd15" | "sdxl" | "sd3" | "flux" | "all" | Model type (default: all) |
Returns detailed guidance on:
- Prompt structure and style for each model
- Recommended keywords and techniques
- Negative prompt usage (or lack thereof for Flux)
- Common mistakes to avoid
How should I write prompts for Flux?
Generation Tools
run_workflow
Run a ComfyUI workflow (API format JSON). This is the primary generation tool.
| Parameter | Type | Description |
|---|---|---|
workflow | object | ComfyUI workflow JSON |
outputMode | "base64" | "file" | "auto" | Output mode |
name | string? | Descriptive name for later retrieval (e.g., "sunset_portrait_v2") |
sync | boolean? | Wait for completion (default: false, async) |
timeout | number? | Timeout in ms (default: 300000) |
Run this workflow: [paste JSON]
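For reference, a minimal SDXL txt2img workflow in API format looks like this. The checkpoint filename is a placeholder and must match a model you actually have installed (check with list_models); node IDs are arbitrary strings, and two-element lists like ["1", 0] are connections to another node's output slot:

```json
{
  "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
  "2": {"class_type": "CLIPTextEncode", "inputs": {"text": "a sunset over mountains", "clip": ["1", 1]}},
  "3": {"class_type": "CLIPTextEncode", "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
  "4": {"class_type": "EmptyLatentImage", "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
  "5": {"class_type": "KSampler", "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0], "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0, "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
  "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
  "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "mcp_example"}}
}
```

In practice you rarely write this by hand: get_template produces a complete workflow from a template ID and parameters.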
validate_workflow
Validate a workflow before running. Checks node types, connections, and required inputs.
| Parameter | Type | Description |
|---|---|---|
workflow | object | The workflow to validate |
Returns:
- valid: Whether the workflow is valid
- errors: Critical issues that will cause failures
- warnings: Non-critical issues to be aware of
- info: Helpful information about the workflow
Check if this workflow is valid before I run it
Workflow Composition Tools
build_node
Generate valid node JSON with proper defaults. Includes tips for certain nodes (e.g., SaveImage filename guidance).
| Parameter | Type | Description |
|---|---|---|
nodeType | string | Node class_type (e.g., "KSampler") |
nodeId | string | ID for this node in the workflow |
inputs | object? | Input values to set |
Returns:
- node: The node JSON to add to your workflow
- outputs: Output references for connecting to other nodes
- missingConnections: Inputs that need to be connected
- tips: Best practices for this node type
Build a SaveImage node with ID "9"
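In ComfyUI's API format, the SaveImage node from that request would look something like the following (the node ID "9" comes from the request; the images connection to node "8" is illustrative and would appear in missingConnections until you wire it):

```json
{
  "9": {
    "class_type": "SaveImage",
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    }
  }
}
```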
get_node_info
Get detailed information about a node including inputs, outputs, example JSON, and tips.
| Parameter | Type | Description |
|---|---|---|
node | string | Node class_type (e.g., "KSampler", "CheckpointLoaderSimple") |
Returns:
- Input specifications with types, defaults, and valid options
- Output types and slot indices
- Example JSON showing how to use the node
- Connection guide for each input type
- Tips for certain node types
What are the inputs for KSampler?
find_nodes_by_type
Find nodes by their input or output types. Useful for workflow composition.
| Parameter | Type | Description |
|---|---|---|
inputType | string? | Find nodes that accept this type (e.g., "MODEL", "LATENT") |
outputType | string? | Find nodes that produce this type |
What nodes can output a MODEL?
list_nodes
List available ComfyUI nodes.
| Parameter | Type | Description |
|---|---|---|
category | string? | Filter by category |
search | string? | Search term |
What ControlNet nodes are available?
Discovery Tools
get_capabilities
Get the detected capabilities of the connected ComfyUI instance.
What can this ComfyUI do? What models does it have?
list_models
List available models in ComfyUI.
| Parameter | Type | Description |
|---|---|---|
type | "all" | "checkpoints" | "loras" | ... | Model type filter |
What checkpoints do I have installed?
Task & Queue Management
get_task
Get the status of an async generation task.
| Parameter | Type | Description |
|---|---|---|
taskId | string | The task ID |
get_task_result
Get the result of a completed generation task.
| Parameter | Type | Description |
|---|---|---|
taskId | string | The task ID |
list_tasks
List all generation tasks, optionally filtered by status.
| Parameter | Type | Description |
|---|---|---|
status | "working" | "completed" | "failed" | "cancelled"? | Filter by status |
name_generation
Assign a descriptive name to a generation for easy retrieval.
| Parameter | Type | Description |
|---|---|---|
taskId | string | The task ID to name |
name | string | Descriptive name (e.g., "landscape_sunset_warm") |
get_generation_by_name
Retrieve a generation by its assigned name.
| Parameter | Type | Description |
|---|---|---|
name | string | The name assigned to the generation |
get_queue
Get the current ComfyUI queue status.
What's in the generation queue?
cancel_job
Cancel a queued or running job.
| Parameter | Type | Description |
|---|---|---|
promptId | string? | Job ID (empty = cancel all) |
Cancel the current job
interrupt
Interrupt the currently running job.
Stop the current generation
get_history
Get generation history.
| Parameter | Type | Description |
|---|---|---|
promptId | string? | Specific job ID |
limit | number? | Max entries (default: 10) |
Show recent generations
Agent Memory Tools
These tools help AI agents remember learnings across sessions.
save_note
Save a note about something learned during image generation.
| Parameter | Type | Description |
|---|---|---|
topic | string | Topic/category (e.g., "flux-models", "prompting-tips") |
content | string | The note content |
tags | string[]? | Optional tags for searching |
Remember that Flux works best with natural language prompts
get_notes
Retrieve saved notes, optionally filtered by topic.
| Parameter | Type | Description |
|---|---|---|
topic | string? | Filter by topic |
limit | number? | Max notes to return |
search_notes
Search notes using full-text search.
| Parameter | Type | Description |
|---|---|---|
query | string | Search query |
limit | number? | Max notes to return |
Configuration
Environment Variables
| Variable | Description |
|---|---|
COMFYUI_URL | Override ComfyUI URL (skips auto-discovery) |
Config File
Location:
- macOS: ~/Library/Application Support/comfyui-mcp/config.json
- Windows: %APPDATA%/comfyui-mcp/config.json
- Linux: ~/.config/comfyui-mcp/config.json
{
"comfyui": {
"url": "http://localhost:8188",
"apiKey": null
},
"outputDir": "./outputs",
"workflowsDir": "./workflows",
"outputSizeThreshold": 1048576
}
How It Works
Auto-Discovery
The server discovers ComfyUI in this order:
1. COMFYUI_URL environment variable
2. Config file URL
3. ComfyUI Desktop app configuration files
4. Port scanning: localhost:8188, 8189, 8190
Capability Detection
On connection, the server queries ComfyUI's /object_info endpoint to detect:
- Model Architectures: SD 1.5, SDXL, SD3, Flux, Cascade (based on available checkpoints/UNETs)
- Extensions: LoRA, ControlNet, IP-Adapter, AnimateDiff, etc. (based on available nodes)
- Features: Video generation, audio generation, upscaling, inpainting
- Samplers & Schedulers: Reads available options from KSampler node
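A simplified sketch of how such heuristics can work on an /object_info-style response. LoraLoader, ControlNetApply, and ImageUpscaleWithModel are standard ComfyUI node class names, but the server's real detection logic covers far more cases than this:

```python
def detect_capabilities(object_info: dict, checkpoints: list[str]) -> dict:
    """Infer capabilities from available node types and model filenames."""
    nodes = set(object_info)
    return {
        "architectures": {
            # Filename heuristics: crude, but typical model names are descriptive
            "sdxl": any("xl" in name.lower() for name in checkpoints),
            "flux": any("flux" in name.lower() for name in checkpoints),
        },
        "extensions": {
            # Node presence implies the extension is usable
            "lora": "LoraLoader" in nodes,
            "controlnet": "ControlNetApply" in nodes,
        },
        "features": {
            "upscaling": "ImageUpscaleWithModel" in nodes,
        },
    }
```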
Workflow Execution
When you call run_workflow, the server:
- Validates the workflow structure
- Queues the workflow via WebSocket for real-time progress
- Tracks the task (async by default, or waits if sync=true)
- Retrieves and returns the output images
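The structural validation in step 1 can be sketched as follows. In API format, any two-element list input is a [node_id, slot] connection, so every such reference must point at a node that exists. This is a simplified illustration, not the server's implementation:

```python
def check_references(workflow: dict) -> list[str]:
    """Return an error string for each connection to a nonexistent node."""
    errors = []
    for node_id, node in workflow.items():
        for name, value in node.get("inputs", {}).items():
            if isinstance(value, list) and len(value) == 2:
                src, _slot = value  # connection: [source node id, output slot]
                if str(src) not in workflow:
                    errors.append(f"{node_id}.{name} references missing node {src}")
    return errors
```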
Example Conversations
First-Time Setup
User: I want to use ComfyUI but I don't have it installed
Claude: [Uses get_install_guide] Here's how to install ComfyUI...
Claude: [Uses get_model_guide] Here's how to download and set up models...
Generate Images with Templates
User: Generate a pirate husky with Flux
Claude: [Uses list_models] Found flux1-schnell-fp8.safetensors...
Claude: [Uses search_templates] Found "flux_schnell_txt2img" template...
Claude: [Uses get_prompting_guide('flux')] Flux uses natural language prompts...
Claude: [Uses get_template with parameters] Built workflow...
Claude: [Uses validate_workflow] Workflow is valid...
Claude: [Uses run_workflow] Generated image!
[Image displayed]
Custom Workflow Composition
User: I want to build a custom workflow with ControlNet
Claude: [Uses list_nodes(category="controlnet")] Here are the ControlNet nodes...
Claude: [Uses get_node_info("ControlNetApply")] Here's how to use it...
Claude: [Uses build_node] Building each node...
Claude: [Uses validate_workflow] Checking the workflow...
Claude: [Uses run_workflow] Running your custom workflow...
Development
Building
npm install
npm run build
Running Locally
npm start
Docker Build
docker build -t comfyui-mcp .
docker run -it --network=host comfyui-mcp
Testing with MCP Inspector
npm run inspector
Troubleshooting
ComfyUI not detected
- Make sure ComfyUI is running
- Check if it's accessible at http://localhost:8188
- Set the COMFYUI_URL environment variable if using a non-default port
Models not found
- Ensure models are in the correct ComfyUI subdirectory
- Restart ComfyUI after adding new models
- Use list_models to see what's detected
Generation fails
- Use validate_workflow to check for issues
- Check get_queue for error messages
- Verify the model exists with list_models
- Try simpler parameters (smaller size, fewer steps)
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT
Acknowledgments
- ComfyUI by comfyanonymous
- Model Context Protocol by Anthropic
