MCP LLM Bridge
A bridge connecting Model Context Protocol (MCP) servers with OpenAI-compatible Large Language Models (LLMs). It primarily supports the OpenAI API and is also compatible with local endpoints that implement the OpenAI API specification.
The implementation provides a bidirectional protocol conversion layer between MCP and OpenAI's function-calling interface. It translates MCP tool specifications into OpenAI function schemas and maps the resulting function calls back to MCP tool execution. This lets any OpenAI-compatible language model use MCP tools through a standardized interface, whether it is a cloud-based model or a local one (such as Ollama).
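For illustration, the sketch below shows the kind of translation involved. The helper name mcp_tool_to_openai and the example tool dictionary are hypothetical and not part of the project's API; they only demonstrate how an MCP tool specification (name, description, inputSchema) lines up with OpenAI's function-calling schema:

# Hypothetical sketch of the MCP -> OpenAI translation the bridge performs
def mcp_tool_to_openai(mcp_tool: dict) -> dict:
    """Map an MCP tool specification onto OpenAI's function-calling schema."""
    return {
        "type": "function",
        "function": {
            "name": mcp_tool["name"],
            "description": mcp_tool.get("description", ""),
            # MCP describes tool input with JSON Schema, which maps directly
            # onto OpenAI's "parameters" field
            "parameters": mcp_tool.get("inputSchema", {"type": "object", "properties": {}}),
        },
    }

# Example: render a (made-up) MCP tool spec as an OpenAI tool definition
print(mcp_tool_to_openai({
    "name": "query_database",
    "description": "Run a read-only SQL query",
    "inputSchema": {"type": "object", "properties": {"sql": {"type": "string"}}},
}))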
For more information about MCP, see the Model Context Protocol documentation: https://modelcontextprotocol.io
Quick Start
Installation
# Installation
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/bartolli/mcp-llm-bridge.git
cd mcp-llm-bridge
uv venv
source .venv/bin/activate
uv pip install -e .
# Create a test database
python -m mcp_llm_bridge.create_test_db
Configuration
OpenAI (primary support)
Create a .env file:
OPENAI_API_KEY=your_key
OPENAI_MODEL=gpt-4o # Or other OpenAI models supporting tools
Important Note
If you need to use the key in .env, reactivate the environment:
source .venv/bin/activate
Then configure the bridge in src/mcp_llm_bridge/main.py:
config = BridgeConfig(
mcp_server_params=StdioServerParameters(
command="uvx",
args=["mcp-server-sqlite", "--db-path", "test.db"],
env=None
),
llm_config=LLMConfig(
api_key=os.getenv("OPENAI_API_KEY"),
model=os.getenv("OPENAI_MODEL", "gpt-4o"),
base_url=None
)
)
Additional Endpoint Support
The bridge also works with any endpoint implementing the OpenAI API specification:
Ollama
llm_config=LLMConfig(
api_key="Not required",
model="mistral-nemo:12b-instruct-2407-q8_0",
base_url="http://localhost:11434/v1"
)
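Before starting the bridge, you can check that Ollama's OpenAI-compatible endpoint is reachable and serving the model you intend to use. This is only a sanity check, not part of the bridge itself, and assumes Ollama is running locally on the default port:

# List the models served by Ollama's OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-required")
print([model.id for model in client.models.list().data])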
Usage Tip
After testing, I found that mistral-nemo:12b-instruct-2407-q8_0 is better at handling complex queries.
LM Studio
llm_config=LLMConfig(
api_key="Not required",
model="local-model",
base_url="http://localhost:1234/v1"
)
I haven't tested this configuration, but it should theoretically work.
Usage Examples
Basic Usage
python -m mcp_llm_bridge.main
# Example question: What is the most expensive product in the database?
# Enter 'quit' or press Ctrl+C to exit
Running Tests
Install the package with test dependencies:
uv pip install -e ".[test]"
Then run the tests:
python -m pytest -v tests/
License
Contribution Guidelines
If you'd like to contribute, please refer to the project repository: https://github.com/bartolli/mcp-llm-bridge