Dolphin MCP is a flexible Python library and command-line tool that lets users interact with multiple Model Context Protocol (MCP) servers through natural language. Users can connect to several MCP servers at once, use the tools those servers expose, and query and process data conversationally. The project supports multiple model providers, including OpenAI, Anthropic, Ollama, and LMStudio.
The main features include:
1. **Multi-vendor support**: Works with several language-model providers, so users can choose the model that best suits their task.
2. **Modular architecture**: Each functional module has a clearly defined responsibility, making the codebase easier to maintain and extend.
3. **Dual interfaces**: Use it as a Python library or as a command-line tool.
4. **MCP server integration**: Connects to multiple MCP servers simultaneously, combining the capabilities they expose.
5. **Tool discovery**: Automatically discovers the tools each MCP server provides, so no manual wiring is needed.
6. **Flexible configuration**: Model and server settings are configured through a JSON configuration file.
7. **Environment variable support**: API keys can be stored in environment variables rather than in the configuration file, improving security.
8. **Comprehensive documentation**: Detailed usage examples and API documentation help users get started quickly.
9. **Easy installation**: Installable via pip, with the command-line tool ready to use after installation.
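To make the configuration point concrete, a JSON file for a setup like this might look as follows. This is only an illustrative sketch: the key names (`mcpServers`, `models`, `apiKeyEnv`) and the server entry follow common MCP client conventions rather than Dolphin MCP's documented schema, so check the repository's README for the authoritative format.

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "./example.db"]
    }
  },
  "models": [
    {
      "provider": "openai",
      "model": "gpt-4o",
      "apiKeyEnv": "OPENAI_API_KEY"
    }
  ]
}
```

Keeping server launch commands and model choices in one file like this is what allows swapping providers or adding servers without touching code.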
Dolphin MCP makes interacting with data simple and efficient: the user enters a natural-language query, and the system routes it through the configured model and MCP tools, then returns the result. Whether for scientific research, data analysis, or other domains, Dolphin MCP provides a solid foundation.
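The environment-variable approach from the feature list can be sketched with the Python standard library alone. The helper below is illustrative, not part of Dolphin MCP's API; `OPENAI_API_KEY` is simply the conventional variable name for OpenAI credentials.

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment instead of a config file.

    Keeping secrets in environment variables (or a .env file loaded at
    startup) keeps them out of version-controlled JSON configuration.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it before running the tool."
        )
    return key

# Example: set the variable (normally done in the shell), then read it back.
os.environ["OPENAI_API_KEY"] = "sk-example-not-a-real-key"
print(load_api_key())
```

Failing fast with a clear error when the variable is missing avoids confusing authentication failures deeper in a model call.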
Product link: [Dolphin MCP GitHub page](https://github.com/cognitivecomputations/dolphin-mcp)