🚀 OneMCP - Generic MCP Aggregator
OneMCP is a universal Model Context Protocol (MCP) aggregator. It combines multiple external MCP servers into a single unified interface with progressive tool discovery, cutting the heavy token consumption that comes from exposing many MCP servers directly and making interactions between LLMs and MCP servers more efficient and extensible.
Version 0.2.0 - a production-ready generic aggregator built on a meta-tool architecture.
Built with the official MCP Go SDK from the Anthropic/Google collaboration.
✨ Features
What is OneMCP?
OneMCP is a generic MCP aggregator with the following features:
- It aggregates tools from multiple external MCP servers.
- Supports custom internal tools with type-safe registration.
- Exposes a unified meta-tool interface to reduce token usage.
- Supports progressive tool discovery (search before loading schemas).
- Works with any MCP-compliant server.
Why OneMCP?
When working with many MCP servers, exposing hundreds of tools directly to an LLM consumes an enormous number of tokens and much of the context window. As explained in Anthropic's Code Execution with MCP article, the meta-tool pattern solves this problem in the following ways:
- Reducing token overhead: Instead of loading 50+ tool schemas (tens of thousands of tokens), it exposes just 2 meta-tools.
- Enabling progressive discovery: LLMs search for relevant tools only when needed.
- Preserving context: It leaves more room for actual conversation and code, and less for tool definitions.
- Scaling gracefully: New servers can be added without increasing the baseline token usage.
OneMCP implements this pattern as a universal aggregator that is compatible with:
- Any LLM that supports MCP (Claude, OpenAI, Gemini, local models via Claude Desktop, etc.).
- Any MCP-compliant server.
- Any deployment scenario (local development, production APIs, agent frameworks).
🔧 Technical Details
Architecture
OneMCP Aggregator
├── Meta-Tools (2)
│ ├── tool_search - Discover available tools
│ └── tool_execute - Execute a single tool
│
├── Internal Tools (optional)
│ └── Custom Go-based tools with type-safe handlers
│
└── External MCP Servers (configured via .onemcp.json)
├── Playwright (21 tools) - Browser automation
├── Filesystem (N tools) - File operations
└── Your Server (N tools) - Any MCP-compliant server
Benefits
- Token Efficiency: Exposing 2 meta-tools instead of hundreds of individual tool schemas cuts baseline token usage by roughly 99%.
- Progressive Discovery: It searches first and loads schemas only for the needed tools.
- Universal: It works with any MCP-compliant server.
- Flexible: It supports both external servers (config) and internal tools (Go code).
- Type-Safe: Built-in tools leverage Go's type system with automatic schema inference.
Performance Optimizations
OneMCP includes several optimizations for token efficiency and speed:
- Configurable Result Limit: It returns 5 tools per search by default (configurable via .onemcp.json).
- LLM-Powered Semantic Search: It uses Claude, Codex, or Copilot to intelligently match queries to tools.
- Progressive Discovery: It has four detail levels (names_only → summary → detailed → full_schema).
- Schema Caching: External tool schemas are cached at startup, eliminating repeated fetching.
- Lazy Loading: Schemas are only sent when explicitly requested via detail_level.
Token Usage Examples (default 5 tools):
names_only search: ~50 tokens total
summary search: ~200-400 tokens total
full_schema search: ~2000-5000 tokens total
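The detail levels trade token cost for completeness. The idea can be sketched in Go; this is an illustration only, not OneMCP's actual internals, and the type and field names are invented for the example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolInfo holds everything known about a tool; the detail level
// controls how much of it is serialized into a search result.
type ToolInfo struct {
	Name        string
	Category    string
	Description string
	Schema      map[string]any
}

// Render returns the JSON payload for one tool at the given detail level.
func Render(t ToolInfo, level string) ([]byte, error) {
	switch level {
	case "names_only":
		return json.Marshal(map[string]any{"name": t.Name, "category": t.Category})
	case "summary":
		return json.Marshal(map[string]any{"name": t.Name, "category": t.Category, "description": t.Description})
	default: // "detailed" and "full_schema" also include the argument schema
		return json.Marshal(map[string]any{"name": t.Name, "category": t.Category, "description": t.Description, "schema": t.Schema})
	}
}

func main() {
	t := ToolInfo{
		Name:        "playwright_browser_navigate",
		Category:    "browser",
		Description: "Navigate to a URL",
		Schema:      map[string]any{"type": "object"},
	}
	// Each level produces a strictly larger payload than the one before it.
	for _, lvl := range []string{"names_only", "summary", "full_schema"} {
		out, _ := Render(t, lvl)
		fmt.Printf("%s: %d bytes\n", lvl, len(out))
	}
}
```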
LLM-Powered Semantic Search
OneMCP uses LLM-powered semantic search to intelligently match queries to the right tools. Instead of exact keyword matching, it understands intent and context using AI models.
Example: Query "take a picture of the page" → finds browser_screenshot
You can choose from 3 LLM providers based on your needs:
1. Claude (Anthropic, Default)
- Best for: Highest quality semantic understanding with Claude models.
- Speed: ~3-5 seconds per search.
- Quality: Excellent - Claude Haiku/Sonnet/Opus reason about tool descriptions.
- Memory: <10MB RAM.
- Requirements: Claude CLI (brew install anthropics/claude/claude-code).
- Cost: Uses local Claude CLI.
{
  "settings": {
    "searchProvider": "claude",
    "claudeModel": "haiku"
  }
}
2. Codex (OpenAI GPT-5)
- Best for: OpenAI's latest Codex models for tool search.
- Speed: ~3-5 seconds per search.
- Quality: Excellent - GPT-5 Codex reasoning.
- Memory: <10MB RAM.
- Requirements: Codex CLI.
- Cost: Uses Codex CLI.
{
  "settings": {
    "searchProvider": "codex",
    "codexModel": "gpt-5-codex-mini"
  }
}
3. Copilot (GitHub Copilot)
- Best for: GitHub Copilot integration for tool discovery.
- Speed: ~3-5 seconds per search.
- Quality: Excellent - Uses Claude Haiku 4.5 via GitHub Copilot.
- Memory: <10MB RAM.
- Requirements: GitHub CLI with Copilot (gh copilot).
- Cost: Requires GitHub Copilot subscription.
{
  "settings": {
    "searchProvider": "copilot",
    "copilotModel": "claude-haiku-4.5"
  }
}
How it works: For each search, OneMCP sends your query + all tool schemas to the LLM, which ranks tools by semantic relevance. The LLM understands context, synonyms, and intent far better than traditional keyword search.
Performance Comparison:
| Provider | Latency | Memory | Quality | Requirements |
| --- | --- | --- | --- | --- |
| Claude (haiku) | ~3s | <10MB | ⭐⭐⭐⭐⭐ | Claude CLI |
| Codex (gpt-5-codex-mini) | ~3s | <10MB | ⭐⭐⭐⭐⭐ | Codex CLI |
| Copilot | ~3s | <10MB | ⭐⭐⭐⭐⭐ | GitHub CLI + Copilot |
Recommendation: Use Claude with haiku (default) for the best balance of speed and quality.
Technology
OneMCP is built with the following technologies:
- Official MCP Go SDK v1.1.0 - A collaboration between Anthropic and Google.
- Go 1.25 - A modern, efficient, and type-safe programming language.
- JSON-RPC 2.0 - The standard protocol for MCP communication.
- Multiple Transports - Stdio (command), HTTP (SSE), and more.
The official SDK provides the following features:
- Type-safe tool registration with automatic schema inference.
- Multiple transport options (stdio via CommandTransport, HTTP via SSE, StreamableHTTP, in-memory for testing).
- A built-in client for connecting to external servers.
- Full support for MCP protocol features.
Supported Transports:
- Command (stdio): Execute local commands and communicate via stdin/stdout using JSON-RPC - most common for local tools.
- Streamable HTTP: Connect to remote HTTP-based MCP servers using JSON-RPC over HTTP with optional SSE streaming (MCP spec 2025-03-26+) - ideal for cloud services.
- In-Memory: Direct in-process communication - useful for testing.
Protocol Details:
- All MCP communication uses JSON-RPC 2.0 for message encoding.
- Stdio transport: JSON-RPC messages over stdin/stdout.
- Streamable HTTP transport: JSON-RPC via HTTP POST/GET with optional Server-Sent Events (SSE) for streaming responses.
- Single endpoint (no dual endpoint complexity).
- Supports both request/response and streaming.
- Session management via Mcp-Session-Id header.
- Automatic reconnection with Last-Event-ID for resilience.
📦 Installation
1. Build the aggregator
GOOS=darwin GOARCH=amd64 go build -o one-mcp ./cmd/one-mcp
GOOS=linux GOARCH=amd64 go build -o one-mcp-linux ./cmd/one-mcp
2. Configure OneMCP
Create .onemcp.json:
{
  "settings": {
    "searchResultLimit": 5,
    "searchProvider": "claude"
  },
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp"],
      "category": "browser",
      "enabled": true
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "category": "filesystem",
      "enabled": true
    }
  }
}
3. Run the aggregator
./one-mcp
ONEMCP_CONFIG=/path/to/config.json ./one-mcp
MCP_SERVER_NAME=my-aggregator MCP_SERVER_VERSION=0.2.0 ./one-mcp
MCP_LOG_LEVEL=debug ./one-mcp
4. Use with MCP Clients
Add to your MCP client config. For example, Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):
{
  "mcpServers": {
    "onemcp": {
      "command": "/path/to/one-mcp",
      "env": {
        "MCP_SERVER_NAME": "my-aggregator",
        "MCP_LOG_FILE": "/tmp/onemcp.log"
      }
    }
  }
}
💻 Usage Examples
Meta-Tools API
1. tool_search
Discover available tools with optional filtering. It uses LLM-powered semantic search to intelligently match your query to the most relevant tools. It returns 5 tools per query by default (configurable via .onemcp.json).
Arguments:
query (optional) - Search query in natural language (e.g., "take a screenshot", "navigate to webpage", "read files").
category (optional) - Filter by category (e.g., "browser", "filesystem").
detail_level (optional) - Level of detail to return:
"names_only" - Just tool names and categories (minimal tokens).
"summary" - Name, category, and description (default).
"detailed" - Includes argument schema.
"full_schema" - Complete schema with all details.
offset (optional) - Number of results to skip for pagination (default: 0).
Semantic Search: The LLM understands natural language queries, context, and intent. It matches your query to tool descriptions semantically, not just by keywords.
Schema Caching: External tool schemas are cached at startup for fast repeated searches.
Hybrid Approach: Search returns 5 tools inline by default (configurable) plus a schema_file path (/tmp/onemcp-tools-schema.json) containing ALL executable tools with full schemas (external and internal tools only, excluding meta-tools which are already exposed via MCP's tools/list). For comprehensive tool exploration, search the schema file using filesystem tools instead of paginating through search results. This reduces token usage while maintaining access to complete tool information.
Example - Basic search:
{
  "tool_name": "tool_search",
  "arguments": {
    "query": "navigate",
    "detail_level": "summary"
  }
}
Example - Paginated search:
{
  "tool_name": "tool_search",
  "arguments": {
    "query": "browser",
    "category": "browser",
    "detail_level": "detailed",
    "offset": 5
  }
}
Returns:
{
  "total_count": 21,
  "returned_count": 5,
  "offset": 0,
  "limit": 5,
  "has_more": true,
  "schema_file": "/tmp/onemcp-tools-schema.json",
  "message": "Showing 5 of 21 tools. For complete tool list with full schemas, search with filesystem tools in: /tmp/onemcp-tools-schema.json",
  "tools": [
    {
      "name": "playwright_browser_navigate",
      "category": "browser",
      "description": "Navigate to a URL",
      "schema": {...}
    },
    {
      "name": "playwright_browser_click",
      "category": "browser",
      "description": "Click an element",
      "schema": {...}
    }
  ]
}
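The pagination fields in the response (total_count, returned_count, offset, limit, has_more) follow a standard offset/limit scheme. A small Go sketch of the arithmetic, with struct and function names invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SearchResult mirrors the tool_search response fields shown above;
// the struct shape is an illustration, not OneMCP's internal type.
type SearchResult struct {
	TotalCount    int       `json:"total_count"`
	ReturnedCount int       `json:"returned_count"`
	Offset        int       `json:"offset"`
	Limit         int       `json:"limit"`
	HasMore       bool      `json:"has_more"`
	SchemaFile    string    `json:"schema_file"`
	Tools         []ToolHit `json:"tools"`
}

type ToolHit struct {
	Name        string `json:"name"`
	Category    string `json:"category"`
	Description string `json:"description"`
}

// page computes the pagination fields for a given total, offset, and limit.
func page(total, offset, limit int) (returned int, hasMore bool) {
	remaining := total - offset
	if remaining < 0 {
		remaining = 0
	}
	returned = remaining
	if returned > limit {
		returned = limit
	}
	return returned, offset+returned < total
}

func main() {
	r, more := page(21, 0, 5)
	fmt.Println(r, more) // 5 true
	out, _ := json.Marshal(SearchResult{TotalCount: 21, ReturnedCount: r, Limit: 5, HasMore: more})
	fmt.Println(string(out))
}
```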
2. tool_execute
Execute a single tool by name.
Arguments:
tool_name (required) - Name of the tool (e.g., playwright_browser_navigate).
arguments (required) - Tool-specific arguments.
Example:
{
  "tool_name": "tool_execute",
  "arguments": {
    "tool_name": "playwright_browser_navigate",
    "arguments": {
      "url": "https://example.com"
    }
  }
}
📚 Documentation
Configuration
OneMCP uses .onemcp.json for configuration. The configuration file supports JSON with comments (JSONC) format - add // for line comments or /* */ for block comments to document your configuration.
See .onemcp.json.example for a complete example with comments.
Settings
Configure OneMCP behavior:
{
  "settings": {
    "searchResultLimit": 5,
    "searchProvider": "claude"
  }
}
Available Settings:
searchResultLimit (number) - Number of tools to return per search query. Default: 5. Lower values reduce token usage but require more searches for discovery.
searchProvider (string) - LLM provider for semantic search. Options: "claude" (default), "codex", "copilot". See the "LLM-Powered Semantic Search" section above for details.
claudeModel (string) - Claude model to use when searchProvider is "claude". Options: "haiku" (default), "sonnet", "opus".
codexModel (string) - Codex model to use when searchProvider is "codex". Options: "gpt-5-codex-mini" (default), "gpt-5-codex".
copilotModel (string) - Copilot model to use when searchProvider is "copilot". Default: "claude-haiku-4.5".
External Server Configuration
Define external MCP servers in the mcpServers section. OneMCP supports multiple transport types:
1. Command Transport (stdio) - Most common, runs a local command:
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp"],
      "env": {
        "DEBUG": "1"
      },
      "category": "browser",
      "enabled": true
    }
  }
}
2. HTTP Transport (Streamable HTTP) - Connect to a remote MCP server via HTTP:
{
  "mcpServers": {
    "remote-server": {
      "url": "https://api.example.com/mcp",
      "category": "api",
      "enabled": true
    }
  }
}
Note: OneMCP uses the Streamable HTTP transport (MCP spec 2025-03-26+) for all HTTP connections. This is the modern standard that replaces the deprecated SSE transport.
Configuration Fields:
command (string) - Command to execute (for stdio transport).
args (array) - Command arguments (stdio only).
url (string) - HTTP endpoint URL (for Streamable HTTP transport).
env (object) - Environment variables (stdio only).
category (string) - Category for grouping tools.
enabled (boolean) - Whether to load this server.
Note: Provide either command or url, not both.
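The fields above map naturally onto a Go struct. The following sketch is illustrative (the ServerConfig type and Validate helper are not OneMCP's actual code); it also shows one way to enforce the command/url exclusivity rule:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ServerConfig mirrors one entry of mcpServers; the JSON keys match
// the fields documented above, but the struct itself is an illustration.
type ServerConfig struct {
	Command  string            `json:"command,omitempty"`
	Args     []string          `json:"args,omitempty"`
	URL      string            `json:"url,omitempty"`
	Env      map[string]string `json:"env,omitempty"`
	Category string            `json:"category"`
	Enabled  bool              `json:"enabled"`
}

// Validate enforces the "provide either command or url, not both" rule.
func (c ServerConfig) Validate() error {
	switch {
	case c.Command != "" && c.URL != "":
		return fmt.Errorf("provide either command or url, not both")
	case c.Command == "" && c.URL == "":
		return fmt.Errorf("one of command or url is required")
	}
	return nil
}

func main() {
	raw := `{"command":"npx","args":["-y","@playwright/mcp"],"category":"browser","enabled":true}`
	var cfg ServerConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Category, cfg.Enabled, cfg.Validate() == nil)
}
```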
Environment Variables
ONEMCP_CONFIG - Configuration file path (default: ".onemcp.json").
MCP_SERVER_NAME - Server name (default: "one-mcp-aggregator").
MCP_SERVER_VERSION - Server version (default: "0.2.0").
MCP_LOG_FILE - Log file path (default: "/tmp/one-mcp.log").
MCP_LOG_LEVEL - Log level: "debug" or "info" (default: "info").
Tool Naming Convention
External tools are automatically prefixed with their server name:
browser_navigate from playwright → playwright_browser_navigate.
take_screenshot from chrome → chrome_take_screenshot.
This prevents naming conflicts when aggregating multiple servers.
Progressive Discovery Workflow
The recommended workflow for LLMs:
- Search for tools: Use tool_search with filters to find relevant tools.
- Get detailed schemas: Use detail_level: "full_schema" for tools you plan to use.
- Execute tools: Use tool_execute with validated arguments.
Example conversation:
User: "Take a screenshot of example.com"
LLM: Let me search for screenshot tools...
→ tool_search(query="screenshot", detail_level="full_schema")
LLM: Found playwright_browser_navigate and playwright_browser_take_screenshot.
Let me navigate first...
→ tool_execute(tool_name: "playwright_browser_navigate", arguments: {url: "https://example.com"})
LLM: Now taking screenshot...
→ tool_execute(tool_name: "playwright_browser_take_screenshot", arguments: {filename: "example.png"})
Logging
Logs are written to the file specified by MCP_LOG_FILE (default: /tmp/one-mcp.log):
time=2025-11-11T10:00:00.000+00:00 level=INFO msg="Starting OneMCP aggregator server over stdio..." name=one-mcp-aggregator version=0.2.0
time=2025-11-11T10:00:01.000+00:00 level=INFO msg="Loaded external MCP server" name=playwright tools=21 category=browser
time=2025-11-11T10:00:02.000+00:00 level=INFO msg="Registered tool" name=playwright_browser_navigate category=browser
time=2025-11-11T10:00:03.000+00:00 level=INFO msg="Executing tool" name=playwright_browser_navigate
time=2025-11-11T10:00:04.000+00:00 level=INFO msg="Tool execution successful" name=playwright_browser_navigate execution_time_ms=245
Troubleshooting
External server fails to start
- Check that the command path is correct in .onemcp.json.
- Verify that the required environment variables are set.
- Check the logs in MCP_LOG_FILE for startup errors.
- Test the server command manually: command args....
Tool not found
- Use tool_search to verify the tool exists.
- Check that tool names include the server prefix (e.g., playwright_browser_navigate).
- Verify that the external server is enabled in .onemcp.json.
Tool execution fails
- Use tool_search with detail_level: "full_schema" to see the required arguments.
- Check that the argument types match the schema.
- Review the logs for detailed error messages.
Development
Project Structure
.
├── cmd/
│ └── one-mcp/
│ └── main.go # Entry point
├── internal/
│ ├── mcp/
│ │ └── server.go # Aggregator server with meta-tools
│ ├── tools/
│ │ ├── types.go # Tool type definitions
│ │ └── registry.go # Tool registry and dispatcher
│ └── mcpclient/
│ └── client.go # External MCP server client
├── .onemcp.json # Configuration (settings + external servers)
├── go.mod
└── README.md
Adding External Servers
Simply add to the mcpServers section in .onemcp.json - no code changes are required:
{
  "settings": {
    "searchResultLimit": 5,
    "searchProvider": "claude"
  },
  "mcpServers": {
    "your-server": {
      "command": "/path/to/your-mcp-server",
      "args": ["--config", "config.json"],
      "env": {
        "API_KEY": "your-key"
      },
      "category": "custom",
      "enabled": true
    }
  }
}
OneMCP will automatically:
- Start the external server.
- Fetch its tool list.
- Prefix tool names with your-server_.
- Make tools discoverable via tool_search.
- Route tool_execute calls to the external server.
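The registration steps above can be sketched as a loop over the configured servers. This is a simplified illustration that skips connection handling and schema fetching; the function names are invented:

```go
package main

import "fmt"

// registerAll sketches what OneMCP does per configured server:
// skip disabled entries, then register each remote tool under its
// server-name prefix so tool_execute calls can be routed back.
func registerAll(enabled map[string]bool, toolsOf func(string) []string) map[string]string {
	routes := map[string]string{} // prefixed tool name -> owning server
	for name, on := range enabled {
		if !on {
			continue // disabled servers are never started
		}
		for _, tool := range toolsOf(name) {
			routes[name+"_"+tool] = name
		}
	}
	return routes
}

func main() {
	toolsOf := func(s string) []string {
		if s == "playwright" {
			return []string{"browser_navigate", "browser_click"}
		}
		return nil
	}
	routes := registerAll(map[string]bool{"playwright": true, "legacy": false}, toolsOf)
	fmt.Println(len(routes), routes["playwright_browser_navigate"])
}
```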
Adding Internal Tools
Note: Adding internal tools requires modifying the OneMCP source code. You'll need to:
- Clone this repository: git clone https://github.com/radutopala/onemcp.git.
- Make your changes (see steps below).
- Rebuild the binary: go build -o one-mcp ./cmd/one-mcp.
To add custom internal tools directly to the OneMCP aggregator:
1. Define your tool struct with input/output types
package tools
type CalculatorInput struct {
	A int `json:"a" jsonschema:"First number"`
	B int `json:"b" jsonschema:"Second number"`
}

type CalculatorOutput struct {
	Result int `json:"result" jsonschema:"Calculation result"`
}
2. Implement the tool handler
func (s *AggregatorServer) handleCalculate(ctx context.Context, req *mcp.CallToolRequest, input CalculatorInput) (*mcp.CallToolResult, any, error) {
	result := CalculatorOutput{
		Result: input.A + input.B,
	}
	resultJSON, err := json.Marshal(result)
	if err != nil {
		return nil, nil, err
	}
	return &mcp.CallToolResult{
		Content: []mcp.Content{
			&mcp.TextContent{Text: string(resultJSON)},
		},
	}, nil, nil
}
3. Register the tool in the server
func (s *AggregatorServer) registerCustomTools(server *mcp.Server) error {
	mcp.AddTool(server, &mcp.Tool{
		Name:        "calculate",
		Description: "Add two numbers together",
	}, s.handleCalculate)
	return nil
}
4. Call the registration function
if err := aggregator.registerCustomTools(server); err != nil {
	return nil, fmt.Errorf("failed to register custom tools: %w", err)
}
Key Points
- Type Safety: The official SDK automatically infers schemas from your Go structs.
- Struct Tags: Use jsonschema:"description" to document arguments.
- Handler Signature: func(ctx, *CallToolRequest, InputType) (*CallToolResult, any, error).
- Response Format: Always return JSON in TextContent for consistency with meta-tools.
- No Schema Required: If you don't provide inputSchema in the Tool struct, it's inferred from your input type.
Example: Echo Tool
type EchoInput struct {
	Message string `json:"message" jsonschema:"Message to echo back"`
}

func (s *AggregatorServer) handleEcho(ctx context.Context, req *mcp.CallToolRequest, input EchoInput) (*mcp.CallToolResult, any, error) {
	return &mcp.CallToolResult{
		Content: []mcp.Content{
			&mcp.TextContent{Text: input.Message},
		},
	}, nil, nil
}

mcp.AddTool(server, &mcp.Tool{
	Name:        "echo",
	Description: "Echo back a message",
}, s.handleEcho)
Internal tools are directly exposed via tools/list alongside the 2 meta-tools, making them immediately available without needing tool_search.
When to use internal tools vs external servers:
- Use external servers (recommended): For most use cases - no code changes are needed, just configuration.
- Use internal tools: Only when you need tight integration with OneMCP's core logic or want Go's type safety for custom business logic.
📄 License
This project is licensed under the MIT License. See the LICENSE file for details.
Contributing
Contributions are welcome! Please open an issue or submit a pull request on GitHub.