# Polybrain MCP Server

An MCP (Model Context Protocol) server for connecting AI agents to multiple LLM models. Supports conversation history, model switching, and seamless Claude Code integration.
## Features
- Multi-model support (OpenAI, OpenRouter, custom endpoints)
- Conversation history management
- Switch models mid-conversation
- Extended thinking/reasoning support (configurable by provider)
- Pure MCP protocol (silent by default)
- Automatic server management
## Installation

```bash
npm install -g polybrain
# or
pnpm add -g polybrain
```
## Quick Setup

### 1. Configure Models

**Option A: YAML (recommended)**

Create `~/.polybrain.yaml`:
```yaml
models:
  - id: "gpt-4o"
    modelName: "gpt-4o"
    baseUrl: "https://api.openai.com/v1"
    apiKey: "${OPENAI_API_KEY}"
    provider: "openai"
  - id: "gpt-5.1"
    modelName: "openai/gpt-5.1"
    baseUrl: "https://openrouter.ai/api/v1"
    apiKey: "${OPENROUTER_KEY}"
    provider: "openrouter"
```
Set the environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export OPENROUTER_KEY="sk-or-..."
```
**Option B: Environment variables**

```bash
export POLYBRAIN_BASE_URL="https://api.openai.com/v1"
export POLYBRAIN_API_KEY="sk-..."
export POLYBRAIN_MODEL_NAME="gpt-4o"
```
### 2. Coding Agent Integration

#### Claude Code

Run the following command to add Polybrain to Claude Code:

```bash
claude mcp add -s user -t stdio polybrain -- polybrain
```
#### OpenAI Codex

Open `~/.codex/config.toml` and add:

```toml
[mcp_servers.polybrain]
command = "polybrain"
```
## Usage

You can now ask your coding agent to consult specific models. For example:

> "Ask deepseek what's the best way to install python on mac"

Or:

> "What models are available from polybrain?"
## Configuration Reference

### Environment Variables

- `POLYBRAIN_BASE_URL` - LLM API base URL
- `POLYBRAIN_API_KEY` - API key
- `POLYBRAIN_MODEL_NAME` - Model name
- `POLYBRAIN_HTTP_PORT` - Server port (default: 32701)
- `POLYBRAIN_LOG_LEVEL` - Log level (default: info)
- `POLYBRAIN_DEBUG` - Enable debug logging to stderr
- `POLYBRAIN_CONFIG_PATH` - Custom config file path
### YAML Config Fields

```yaml
httpPort: 32701                      # Optional
truncateLimit: 500                   # Optional
logLevel: info                       # Optional
models:                              # Required
  - id: "model-id"                   # Internal ID
    modelName: "actual-model-name"   # API model name
    baseUrl: "https://api.url/v1"    # API endpoint
    apiKey: "key or ${ENV_VAR}"      # API key
    provider: "openai"               # Optional: provider type for reasoning support
```
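The `${ENV_VAR}` placeholder syntax above implies a substitution pass when the config is loaded. A minimal sketch of how such expansion can work (the function name `resolveEnvVars` and the empty-string fallback for unset variables are illustrative, not Polybrain's actual implementation):

```typescript
// Replace ${VAR_NAME} placeholders in a config value with values from
// process.env. Unset variables resolve to an empty string in this sketch;
// a real loader might raise an error instead.
function resolveEnvVars(value: string): string {
  return value.replace(/\$\{([A-Z0-9_]+)\}/g, (_match, name: string) => {
    return process.env[name] ?? "";
  });
}

// Example: with OPENAI_API_KEY set in the environment,
// "${OPENAI_API_KEY}" expands to the raw key.
process.env.OPENAI_API_KEY = "sk-demo";
console.log(resolveEnvVars("${OPENAI_API_KEY}")); // → "sk-demo"
```

Values without placeholders (e.g. a literal key) pass through unchanged.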
### Supported Providers
The `provider` field enables provider-specific features like extended thinking/reasoning. If it is not specified, reasoning parameters are not passed to the API (the safe default).
| Provider | Reasoning Support | Valid Values |
|---|---|---|
| OpenAI | Yes | `"openai"` |
| OpenRouter | Varies by model | `"openrouter"` |
Examples:

- Use `provider: "openai"` for OpenAI API models (GPT-4, o-series)
- Use `provider: "openrouter"` for the OpenRouter proxy service (supports 400+ models)
- Omit the `provider` field if your endpoint doesn't support reasoning parameters
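The gating described above amounts to a small branch when the request body is built: reasoning parameters are only attached when a provider is declared. A hedged sketch, assuming illustrative field names (`reasoning_effort`, `reasoning.enabled`); the actual wire formats and Polybrain's real client may differ:

```typescript
type Provider = "openai" | "openrouter";

interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  // Illustrative reasoning fields; real provider APIs vary.
  reasoning_effort?: string;        // OpenAI-style parameter
  reasoning?: { enabled: boolean }; // OpenRouter-style parameter
}

// Attach reasoning parameters only when a provider is declared;
// omitting `provider` leaves the request untouched (the safe default).
function buildRequest(
  model: string,
  messages: ChatRequest["messages"],
  wantReasoning: boolean,
  provider?: Provider,
): ChatRequest {
  const req: ChatRequest = { model, messages };
  if (!wantReasoning || provider === undefined) return req;
  if (provider === "openai") req.reasoning_effort = "medium";
  if (provider === "openrouter") req.reasoning = { enabled: true };
  return req;
}
```

With no `provider`, a reasoning request degrades gracefully to a plain chat completion instead of sending parameters the endpoint might reject.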
Example with reasoning:

```yaml
models:
  - id: "gpt-o1"
    modelName: "o1"
    baseUrl: "https://api.openai.com/v1"
    apiKey: "${OPENAI_API_KEY}"
    provider: "openai"       # Enables reasoning support
  - id: "gpt-5.1"
    modelName: "openai/gpt-5.1"
    baseUrl: "https://openrouter.ai/api/v1"
    apiKey: "${OPENROUTER_KEY}"
    provider: "openrouter"   # Enables reasoning support
```
To use reasoning, set `reasoning: true` in the `chat` tool call. If the model and provider support it, you'll receive both the response and the reasoning content.
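For instance, such a tool call might carry arguments like the following (the `model` and `message` argument names are assumptions for illustration; only the `chat` tool name and the `reasoning` flag come from this document):

```json
{
  "name": "chat",
  "arguments": {
    "model": "gpt-o1",
    "message": "Compare approach A and approach B for this refactor",
    "reasoning": true
  }
}
```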
## Development

### Setup

```bash
pnpm install
```

### Build

```bash
pnpm build
```

### Lint & Format

```bash
pnpm lint
pnpm format
```

### Type Check

```bash
pnpm type-check
```

### Development Mode

```bash
pnpm dev
```
### Project Structure

```
src/
├── bin/polybrain.ts          # CLI entry point
├── launcher.ts               # Server launcher & management
├── http-server.ts            # HTTP server
├── index.ts                  # Main server logic
├── mcp-tools.ts              # MCP tool definitions
├── conversation-manager.ts
├── openai-client.ts
├── config.ts
├── logger.ts
└── types.ts
```
## Debugging

Enable debug logs to stderr:

```json
{
  "mcpServers": {
    "polybrain": {
      "command": "polybrain",
      "env": {
        "POLYBRAIN_DEBUG": "true"
      }
    }
  }
}
```
## Restart Server

After changing configuration in `~/.polybrain.yaml`, restart the HTTP backend server:

```bash
polybrain --restart
```

This kills the background HTTP server. The next time you use Polybrain, it will automatically start a fresh server with the updated configuration.
## License

MIT
