MCP Server for Splunk
Enable AI agents to interact seamlessly with Splunk environments through the Model Context Protocol (MCP)
Transform your Splunk instance into an AI-native platform. Our community-driven MCP server bridges Large Language Models and Splunk Enterprise/Cloud with 20+ tools, 16 resources (including CIM data models), and production-ready security—all through a single, standardized protocol.
🌟 Why This Matters
- 🔌 Universal AI Connection: One protocol connects any AI to Splunk data
- ⚡ Zero Custom Integration: No more months of custom API development
- 🛡️ Production-Ready Security: Client-scoped access with no credential exposure
- 🤖 AI-Powered Workflows: Intelligent troubleshooting agents that work like experts
- 🤝 Community-Driven: Extensible framework with contribution examples
🚀 NEW: AI-Powered Troubleshooting Workflows - Transform reactive firefighting into intelligent, systematic problem-solving with specialist AI workflows.
📋 Table of Contents
- 🚀 Quick Start
- 🎯 What You Can Do
- 📚 Documentation Hub
- 🔧 Available Tools & Capabilities
- 🌐 Client Integration Examples
- 🤝 Community & Contribution
- 🚀 Deployment Options
- 🆘 Support & Community
- 📈 Project Stats
- 🎯 Ready to Get Started?
🚀 Quick Start
Prerequisites
- Python 3.10+ and UV package manager
- Node.js (optional; used for the MCP Inspector)
- Docker (optional but recommended for full stack)
- Splunk instance with API access (or use included Docker Splunk)
📖 Complete Setup Guide: Installation Guide
Configuration
Before running the setup, configure your Splunk connection:
```bash
# Copy the example configuration
cp env.example .env

# Edit .env with your Splunk credentials
# - Use your existing Splunk instance (local, on-premises, or Splunk Cloud)
# - OR use the included Docker Splunk (requires Docker)

# Optional HTTP transport defaults (local runs)
# - Stateless HTTP avoids sticky-session requirements
# - JSON responses improve compatibility with some clients
# These are already the defaults for local runs via `mcp-server --local`
echo "MCP_STATELESS_HTTP=true" >> .env
echo "MCP_JSON_RESPONSE=true" >> .env
```
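For reference, a minimal `.env` might look like the sketch below. The variable names here are assumptions; `env.example` in the repository is the authoritative list.

```bash
# Hypothetical .env sketch -- check env.example for the exact variable names
SPLUNK_HOST=your-splunk.example.com
SPLUNK_PORT=8089
SPLUNK_USERNAME=admin
SPLUNK_PASSWORD=changeme
SPLUNK_VERIFY_SSL=false
MCP_STATELESS_HTTP=true
MCP_JSON_RESPONSE=true
```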
One-Command Setup
Windows:
```powershell
git clone https://github.com/deslicer/mcp-for-splunk.git
cd mcp-for-splunk

# Start the MCP Server (project script)
uv run mcp-server --local --detached

# Verify the server
uv run mcp-server --test

# Optional: show detailed tools/resources and health output
uv run mcp-server --test --detailed
```
macOS/Linux:
```bash
git clone https://github.com/deslicer/mcp-for-splunk.git
cd mcp-for-splunk

# (Recommended) Preview what would be installed
./scripts/smart-install.sh --dry-run

# Install missing prerequisites (base: Python, uv, Git, Node)
./scripts/smart-install.sh

# Start the MCP Server (project script)
# Local runs default to HTTP stateless mode + JSON response
uv run mcp-server --local --detached

# Verify the server
uv run mcp-server --test

# Optional: show detailed tools/resources and health output
uv run mcp-server --test --detailed
```
💡 Deployment Options: The `mcp-server` command will prompt you to choose:
- Docker (Option 1): Full stack with Splunk, Traefik, MCP Inspector - recommended if Docker is installed
- Local (Option 2): Lightweight FastMCP server only - for users without Docker
Stopping services:
`uv run mcp-server --stop` stops only this project's compose services (dev/prod/splunk). It does not stop the Docker engine.
Note on Splunk licensing: When using the `so1` Splunk container, you must supply your own Splunk Enterprise license if required. The compose files include a commented example mount: `# - ./lic/splunk.lic:/tmp/license/splunk.lic:ro`. Create a `lic/` directory and mount your license file, or add the license via the Splunk Web UI after startup.
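Uncommented, that mount might look like the following in the compose file. The service name `so1` comes from the note above; the exact compose structure may differ, so treat this as a sketch.

```yaml
services:
  so1:
    volumes:
      # Mount your own Splunk Enterprise license read-only
      - ./lic/splunk.lic:/tmp/license/splunk.lic:ro
```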
🎯 What You Can Do
🤖 AI-Powered Troubleshooting (NEW!)
Transform your Splunk troubleshooting from manual procedures to intelligent, automated workflows using the MCP server endpoints:
```python
# Discover and execute intelligent troubleshooting workflows
result = await list_workflows.execute(ctx, format_type="summary")
# Returns: missing_data_troubleshooting, performance_analysis, custom_workflows...

# Run AI-powered troubleshooting with a single command
result = await workflow_runner.execute(
    ctx=ctx,
    workflow_id="missing_data_troubleshooting",
    earliest_time="-24h",
    latest_time="now",
    focus_index="main",
)
# → Parallel execution, expert analysis, actionable recommendations
```
🚀 Key Benefits:
- 🧠 Natural Language Interface: "Troubleshoot missing data" → automated workflow execution
- ⚡ Parallel Processing: Multiple diagnostic tasks run simultaneously for faster resolution
- 🔧 Custom Workflows: Build organization-specific troubleshooting procedures
- 📊 Intelligent Analysis: AI agents follow proven Splunk best practices
📖 Read the Complete AI Workflows Guide → for detailed examples, workflow creation, and advanced troubleshooting techniques.
📚 Documentation Hub
| Document | Purpose | Audience | Time |
|---|---|---|---|
| 🤖 AI-Powered Troubleshooting | Intelligent workflows powered by the workflow tools | All users | 5 min |
| Getting Started | Complete setup guide with prerequisites | New users | 15 min |
| Integration Guide | Connect AI clients | Developers | 30 min |
| Deployment Guide | Production deployment | DevOps | 45 min |
| Workflows Guide | Create and run workflows (OpenAI env vars) | Developers | 10 min |
| API Reference | Tool documentation | Integrators | Reference |
| Resources Reference | Access CIM data models and Splunk docs | All users | Reference |
| Contributing | Add your own tools | Contributors | 60 min |
| 📖 Contrib Guide | Complete contribution framework | Contributors | 15 min |
| Architecture | Technical deep-dive | Architects | Reference |
| Tests Quick Start | First success test steps | Developers | 2 min |
| Plugins | Extend with entry-point plugins (separate package) | Integrators | 5 min |
🔧 Available Tools & Capabilities
🤖 AI Workflows & Specialists (NEW!)
- `list_workflows`: Discover available troubleshooting workflows (core + contrib)
- `workflow_runner`: Execute any workflow with full parameter control and progress tracking
- `workflow_builder`: Create custom troubleshooting procedures for your organization
- Built-in Workflows: Missing data troubleshooting, performance analysis, and more
- 📖 Complete Workflow Guide →
🔍 Search & Analytics
- Smart Search: Natural language to SPL conversion
- Real-time Search: Background job management with progress tracking
- Saved Searches: Create, execute, and manage search automation
📊 Data Discovery
- Metadata Exploration: Discover indexes, sources, and sourcetypes
- Schema Analysis: Understand your data structure
- Usage Patterns: Identify data volume and access patterns
👥 Administration
- App Management: List, enable, disable Splunk applications
- User Management: Comprehensive user and role administration
- Configuration Access: Read and analyze Splunk configurations
🏥 Health Monitoring
- System Health: Monitor Splunk infrastructure status
- Degraded Feature Detection: Proactive issue identification
- Alert Management: Track and analyze triggered alerts
🌐 Client Integration Examples
💪 Multi-Client Configuration Strength: One of the key advantages of this MCP Server for Splunk is its ability to support multiple client configurations simultaneously. You can run a single server instance and connect multiple clients with different Splunk environments, credentials, and configurations - all without restarting the server or managing separate processes.
🔄 Multi-Client Benefits
Session-Based Isolation: Each client connection maintains its own Splunk session with independent authentication, preventing credential conflicts between different users or environments.
Dynamic Configuration: Switch between Splunk instances (on-premises, cloud, development, production) by simply changing headers - no server restart required.
Scalable Architecture: A single server can handle multiple concurrent clients, each with their own Splunk context, making it ideal for team environments, CI/CD pipelines, and multi-tenant deployments.
Resource Efficiency: Eliminates the need to run separate MCP server instances for each Splunk environment, reducing resource consumption and management overhead.
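The header-based isolation described above can be sketched in plain Python: two clients target the same server URL but carry independent `X-Splunk-*` header sets. The hostnames and credentials below are placeholders taken from the configuration examples that follow.

```python
# Two clients share one MCP server URL; each carries its own Splunk context.
MCP_URL = "http://localhost:8002/mcp/"

def splunk_headers(host: str, username: str, password: str,
                   scheme: str = "https", session_id: str = "default") -> dict:
    """Build the per-client header set the server uses to scope Splunk access."""
    return {
        "X-Splunk-Host": host,
        "X-Splunk-Port": "8089",
        "X-Splunk-Username": username,
        "X-Splunk-Password": password,
        "X-Splunk-Scheme": scheme,
        "X-Session-ID": session_id,
    }

dev = splunk_headers("so1", "admin", "Chang3d!", scheme="http", session_id="dev")
prod = splunk_headers("myorg.splunkcloud.com", "admin@myorg.com", "Chang3d!Cloud",
                      session_id="prod")

# The two header sets are fully independent -- no shared credentials or sessions.
print(dev["X-Splunk-Host"], prod["X-Splunk-Host"])
```

Because the server resolves the Splunk connection per session, both header sets can be in flight against the same process at the same time.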
Cursor IDE
Single Tenant
```json
{
  "mcpServers": {
    "splunk": {
      "command": "fastmcp",
      "args": ["run", "/path/to/src/server.py"],
      "env": {
        "MCP_SPLUNK_HOST": "your-splunk.com",
        "MCP_SPLUNK_USERNAME": "your-user"
      }
    }
  }
}
```
Client Specified Tenant
```json
{
  "mcpServers": {
    "splunk-in-docker": {
      "url": "http://localhost:8002/mcp/",
      "headers": {
        "X-Splunk-Host": "so1",
        "X-Splunk-Port": "8089",
        "X-Splunk-Username": "admin",
        "X-Splunk-Password": "Chang3d!",
        "X-Splunk-Scheme": "http",
        "X-Splunk-Verify-SSL": "false",
        "X-Session-ID": "splunk-in-docker-session"
      }
    },
    "splunk-cloud-instance": {
      "url": "http://localhost:8002/mcp/",
      "headers": {
        "X-Splunk-Host": "myorg.splunkcloud.com",
        "X-Splunk-Port": "8089",
        "X-Splunk-Username": "admin@myorg.com",
        "X-Splunk-Password": "Chang3d!Cloud",
        "X-Splunk-Scheme": "https",
        "X-Splunk-Verify-SSL": "true",
        "X-Session-ID": "splunk-cloud-session"
      }
    }
  }
}
```
Google Agent Development Kit
```python
# Import paths follow the Google ADK examples; they may vary by ADK version
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

splunk_agent = LlmAgent(
    model='gemini-2.0-flash',
    tools=[MCPToolset(connection_params=StdioServerParameters(
        command='fastmcp',
        args=['run', '/path/to/src/server.py'],
    ))],
)
```
🤝 Community & Contribution
Quick links: Contributing · Code of Conduct · Security Policy · Governance · License
🛠️ Create Your Own Tools & Extensions
🚀 Quick Start for Contributors:
```bash
# Interactive tool generator (project script)
uv run generate-tool

# Browse existing tools for inspiration
./contrib/scripts/list_tools.py

# Validate your tool implementation (project script)
uv run validate-tools

# Test your contribution
./contrib/scripts/test_contrib.py
```
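To get a feel for the shape of a contrib tool before running the generator, here is a minimal, self-contained sketch. The real base class, metadata fields, and `execute` signature come from the project's contrib framework, so every name here is a placeholder.

```python
# Hypothetical contrib tool sketch; the actual base class and metadata
# conventions are defined by the project's contrib framework.
class ListSourcetypesTool:
    name = "list_sourcetypes"
    description = "List sourcetypes present in a given index."
    category = "examples"

    def execute(self, index: str = "main") -> dict:
        # A real implementation would run this SPL via the Splunk SDK;
        # here we only build and return the query it would execute.
        spl = f"| metadata type=sourcetypes index={index}"
        return {"status": "success", "query": spl}

tool = ListSourcetypesTool()
print(tool.execute("main")["query"])  # → | metadata type=sourcetypes index=main
```

Running `uv run validate-tools` against a real tool checks that the metadata and interface match what the framework expects.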
📖 Complete Contributing Guide → - Everything you need to know about creating tools, resources, and workflows for the MCP Server for Splunk.
Contribution Categories
- 🛡️ Security Tools: Threat hunting, incident response, security analysis
- ⚙️ DevOps Tools: Monitoring, alerting, operations, SRE workflows
- 📈 Analytics Tools: Business intelligence, reporting, data analysis
- 💡 Example Tools: Learning templates and patterns for new contributors
- 🔧 Custom Workflows: AI-powered troubleshooting procedures for your organization
🚀 Deployment Options
Development (Local)
- Startup Time: ~10 seconds
- Resource Usage: Minimal (single Python process)
- Best For: Development, testing, stdio-based AI clients
- HTTP Defaults: Local runs enable `MCP_STATELESS_HTTP=true` and `MCP_JSON_RESPONSE=true` by default for compatibility with official MCP clients (no sticky sessions; JSON over SSE)
- Endpoint: `http://localhost:8003/mcp/`
- Required client headers:
  - `Accept: application/json, text/event-stream`
  - `MCP-Session-ID: <uuid>` (preferred; `X-Session-ID` optional)
  - `X-Splunk-*` headers (host, port, username, password, scheme, verify-ssl), or set via `.env`
Production (Docker)
- Features: Load balancing, health checks, monitoring
- Includes: Traefik, MCP Inspector, optional Splunk
- Best For: Multi-client access, web-based AI agents
- Session Routing: Traefik is configured with sticky sessions for streamable HTTP; alternatively, enable stateless HTTP for development scenarios.
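As a rough illustration, sticky sessions in Traefik are typically enabled with service labels like the following. The label names follow Traefik v2 conventions, the service name `mcp` and cookie name are placeholders, and the project's compose files are authoritative.

```yaml
labels:
  - "traefik.http.services.mcp.loadbalancer.sticky.cookie=true"
  - "traefik.http.services.mcp.loadbalancer.sticky.cookie.name=mcp_session"
```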
Enterprise (Kubernetes)
- Scalability: Horizontal scaling, high availability
- Security: Pod-level isolation, secret management
- Monitoring: Comprehensive observability stack
🆘 Support & Community
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📖 Documentation: Complete guides and references
- 🔧 Interactive Testing: MCP Inspector for real-time testing
Windows Support
Windows users get first-class support with PowerShell scripts and comprehensive troubleshooting guides. See our Windows Setup Guide.
📈 Project Stats
- ✅ 20+ Production Tools - Comprehensive Splunk operations
- ✅ 16 Rich Resources - System info, documentation, and CIM data models
- ✅ Comprehensive Test Suite - 170+ tests passing locally
- ✅ Multi-Platform - Windows, macOS, Linux support
- ✅ Community-Ready - Structured contribution framework
- ✅ Enterprise-Proven - Production deployment patterns
🎯 Ready to Get Started?
Choose your adventure:
- 🚀 Quick Start - Get running in 15 minutes
- 💻 Integration Examples - Connect your AI tools
- 🏗️ Architecture Guide - Understand the system
- 🤝 Contribute - Add your own tools
Learn More: Model Context Protocol | FastMCP Framework
