Lian MCP LLM Agent
A local multi-expert intelligent agent scheduling system that supports automatic expert generation, tool calls, and knowledge base expansion, built as a graduation project.
Rating: 2.5 points
Downloads: 3.9K
What is the Lian-MCP-LLM Agent System?
This is a local intelligent agent system developed for a graduation project. It works like an 'intelligent agent factory': when you give it a complex task, the system automatically creates multiple 'expert' agents (such as a data analysis expert, a file processing expert, or a web scraping expert) and has them divide the work and collaborate. Finally, an 'administrator' agent summarizes the results and gives you a complete answer.
How to use the Lian-MCP-LLM Agent System?
You can use it in two ways: 1. Enter instructions in the terminal as if you were chatting; 2. Open a web interface and interact through a more polished graphical window. The system automatically analyzes your needs, calls the appropriate tools (such as reading and writing files or accessing the web), and coordinates different expert agents to work for you.
Applicable Scenarios
Suitable for complex tasks that require multiple steps and a combination of skills. For example: analyzing a report and generating a summary, collecting information from multiple web pages and organizing it into a table, or managing files and folders on your local computer.
Main Features
Multi-expert Agent Collaboration
The core of the system. The administrator agent can automatically spawn expert agents with specific roles (such as researchers, programmers, analysts) according to the task type, direct them to work together, and finally integrate all of their results.
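A minimal sketch of this administrator/expert pattern, using hypothetical class names (`AdminAgent`, `ExpertAgent`) rather than the project's actual code: the administrator splits a task, hands sub-tasks to role-specific experts, and merges their outputs.

```python
# Illustrative sketch of the administrator/expert pattern (hypothetical names,
# not the project's actual classes). The administrator decomposes a task,
# spawns role-specific experts, and merges their results.
from dataclasses import dataclass


@dataclass
class ExpertAgent:
    role: str  # e.g. "researcher", "programmer", "analyst"

    def work(self, subtask: str) -> str:
        # In the real system this would be an LLM call plus tool use.
        return f"[{self.role}] finished: {subtask}"


class AdminAgent:
    def decompose(self, task: str) -> dict[str, str]:
        # Placeholder decomposition; the real administrator asks the LLM
        # to split the task and choose expert roles.
        return {
            "researcher": f"collect material for: {task}",
            "analyst": f"summarize findings for: {task}",
        }

    def run(self, task: str) -> str:
        subtasks = self.decompose(task)
        results = [ExpertAgent(role).work(sub) for role, sub in subtasks.items()]
        # Integrate every expert's output into one final answer.
        return "\n".join(results)


print(AdminAgent().run("analyze this week's server logs"))
```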
Unified Tool Call Server (MCP Server)
A built-in tool center that encapsulates functions such as file operations, directory management, and web access as standardized tools. Agents can call them safely and conveniently, like ordering from a menu.
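A rough sketch of the "tools as a menu" idea, assuming a simple in-process registry; the project's real MCP Server exposes tools over the MCP protocol rather than a local dictionary, and the tool names below are illustrative.

```python
# Minimal sketch of a tool registry in the spirit of the MCP Server
# (illustrative only; the real server exposes tools over MCP).
import os
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}


def tool(name: str):
    """Register a function as a callable tool under a given name."""
    def decorator(fn: Callable[..., str]):
        TOOLS[name] = fn
        return fn
    return decorator


@tool("read_file")
def read_file(path: str) -> str:
    with open(path, "r", encoding="utf-8") as f:
        return f.read()


@tool("list_dir")
def list_dir(path: str = ".") -> str:
    return "\n".join(sorted(os.listdir(path)))


def call_tool(name: str, **kwargs) -> str:
    # Agents call tools by name with keyword arguments, like ordering from a menu.
    return TOOLS[name](**kwargs)


print(call_tool("list_dir", path="."))
```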
Flexible LLM Client and Interaction Interface
Supports connecting to large AI models such as DeepSeek. It provides two interaction methods: a simple command-line terminal and a web interface (Web UI) that visualizes the tool call process, so users with different preferences can pick whichever suits them.
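For illustration, DeepSeek exposes an OpenAI-compatible API, so a client call can be sketched with the `openai` package as below. The project ships its own client; the base URL and model name here follow DeepSeek's public API conventions, not this repository's code.

```python
# Hedged sketch: calling DeepSeek through its OpenAI-compatible API with the
# `openai` package. This only shows the idea behind the LLM client layer.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                     # your DeepSeek API key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are the administrator agent."},
        {"role": "user", "content": "List the steps needed to summarize a log folder."},
    ],
)
print(response.choices[0].message.content)
```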
Self-developed Lightweight Database Layer (LianORM)
To enable agents to remember information and states, the system has a built-in small but fully functional database management tool. It can handle data intelligently without developers writing complex SQL statements.
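To show the kind of convenience such a layer provides, here is a hypothetical miniature table wrapper over the standard-library `sqlite3` module; it is not LianORM's actual API.

```python
# Hypothetical sketch of what a lightweight ORM-style layer can offer (not
# LianORM's real API): store and query dict-shaped records without hand-written SQL.
import sqlite3


class TinyTable:
    def __init__(self, db_path: str, name: str, columns: list[str]):
        self.conn = sqlite3.connect(db_path)
        self.name = name
        self.columns = columns
        cols = ", ".join(f"{c} TEXT" for c in columns)
        self.conn.execute(f"CREATE TABLE IF NOT EXISTS {name} ({cols})")

    def insert(self, **record):
        placeholders = ", ".join("?" for _ in self.columns)
        values = [record.get(c) for c in self.columns]
        self.conn.execute(f"INSERT INTO {self.name} VALUES ({placeholders})", values)
        self.conn.commit()

    def all(self) -> list[dict]:
        rows = self.conn.execute(f"SELECT * FROM {self.name}").fetchall()
        return [dict(zip(self.columns, row)) for row in rows]


# Example: an agent remembering a piece of state in an in-memory database.
memories = TinyTable(":memory:", "agent_memory", ["agent", "note"])
memories.insert(agent="analyst", note="log summary finished")
print(memories.all())
```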
Rich Basic Tool Library (Kit)
The system's underlying layer contains a series of self-developed tools, such as colored terminal output, state machines, text parsers, etc., which provide reliable support for the complex logic of upper-level agents.
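Two illustrative helpers in the spirit of such a kit (hypothetical, not the project's actual modules): colored terminal output via ANSI escape codes and a tiny table-driven state machine.

```python
# Kit-style helpers, sketched for illustration only.
GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"


def ok(msg: str) -> None:
    print(f"{GREEN}[OK]{RESET} {msg}")


def fail(msg: str) -> None:
    print(f"{RED}[FAIL]{RESET} {msg}")


class StateMachine:
    """Minimal state machine: transitions map (state, event) -> next state."""

    def __init__(self, initial: str, transitions: dict[tuple[str, str], str]):
        self.state = initial
        self.transitions = transitions

    def fire(self, event: str) -> str:
        self.state = self.transitions[(self.state, event)]
        return self.state


sm = StateMachine("idle", {("idle", "start"): "running", ("running", "done"): "idle"})
ok(f"state -> {sm.fire('start')}")
ok(f"state -> {sm.fire('done')}")
```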
Advantages
Modular design: Each layer (database, tools, agents) is clearly separated, making it easy to understand and expand.
Automated scheduling: Users only need to propose the final goal, and the system automatically decomposes tasks and schedules experts without manual intervention in the process.
Localization and privacy: The core logic and tools run locally, making it safer to process sensitive data.
Unified tool integration: All tools are centrally managed through the MCP Server, with standardized calls and high security.
Diverse interaction methods: It supports both a geeky command line and a friendly graphical web interface.
Limitations
Academic project stage: It is currently a graduation project prototype and has not been fully tested for stability under extreme conditions or large-scale concurrency.
Configuration dependency: Users need to configure API keys (such as DeepSeek) and the local environment by themselves.
Function scope: The current toolset focuses mainly on file, directory, and basic network operations; more specialized tools (such as database connections or graphics processing) would need to be added.
Learning curve: Although the interface is friendly, understanding the underlying principle of its multi-agent collaboration requires a certain technical background.
How to Use
Environment Preparation and Startup
Make sure Python and the uv package manager are installed. Clone the project code to your local machine.
Start the Tool Server (MCP Server)
Open a terminal window and start the tool center; it is the foundation that agents rely on to call functions.
Configure the AI Model
In the `mylib/llm/llm_config.toml` file, fill in your DeepSeek API key and make sure the MCP Server address is correct.
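As a quick sanity check, the configuration file can be loaded with the standard-library `tomllib` (Python 3.11+). The key names below (`api_key`, `mcp_server_url`) are assumptions for illustration; consult the project's configuration guide for the real schema.

```python
# Hedged sketch: load and sanity-check llm_config.toml with tomllib.
# The key names are hypothetical placeholders, not the project's documented schema.
import tomllib

with open("mylib/llm/llm_config.toml", "rb") as f:
    config = tomllib.load(f)

assert config.get("api_key", "").startswith("sk-"), "fill in your DeepSeek API key"
assert config.get("mcp_server_url"), "set the MCP Server address"
print("LLM config looks complete")
```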
Choose a Way to Interact with the System
Open another terminal and choose your preferred way to start the client.
Start Conversations and Tasks
Enter your requirements in the client or the web page. For example: “Please list all Python files in the current directory and tell me how many lines of code each file has.” The system will automatically schedule experts and tools to complete the task.
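Under the hood, that example request boils down to something like the following file-tool work (a sketch, not the system's actual code):

```python
# Roughly what the file-handling expert ends up doing for the example request:
# list every Python file in the current directory and count its lines.
from pathlib import Path

for py_file in sorted(Path(".").glob("*.py")):
    lines = len(py_file.read_text(encoding="utf-8", errors="ignore").splitlines())
    print(f"{py_file.name}: {lines} lines")
```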
Usage Examples
Example 1: File Content Analysis and Summarization
You have a folder containing multiple log files and want to quickly understand the overall situation.
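A sketch of the mechanical part of this task, assuming a hypothetical `./logs` folder: tally error and warning lines so the LLM can turn the counts into a readable summary.

```python
# Hedged sketch of the work Example 1 triggers: scan every .log file in a
# placeholder ./logs folder and count error/warning/info lines.
from collections import Counter
from pathlib import Path

counts: Counter[str] = Counter()
for log_file in Path("./logs").glob("*.log"):
    for line in log_file.read_text(encoding="utf-8", errors="ignore").splitlines():
        if "ERROR" in line:
            counts["error"] += 1
        elif "WARN" in line:
            counts["warning"] += 1
        else:
            counts["info"] += 1

print(dict(counts))  # e.g. {'error': 12, 'warning': 40, 'info': 3100}
```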
Example 2: Information Collection and Organization
You need to quickly obtain information on a certain topic from the Internet and organize it into a list.
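A standard-library sketch of the collection step, with placeholder URLs: fetch each page and pull its `<title>` into a simple two-column table.

```python
# Hedged sketch of Example 2: fetch a few pages (placeholder URLs) and collect
# their <title> tags into a two-column table, using only the standard library.
import re
import urllib.request

urls = ["https://example.com", "https://example.org"]  # placeholders

rows = []
for url in urls:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    rows.append((url, match.group(1).strip() if match else "(no title)"))

for url, title in rows:
    print(f"{url}\t{title}")
```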
Example 3: Local Project Structure Overview
When taking over a new project, you want to quickly understand the code structure.
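A sketch of what the directory-tools expert effectively does here: walk the project folder (the path is a placeholder) and print an indented tree, skipping hidden entries.

```python
# Hedged sketch of Example 3: print an indented tree of folders and files.
from pathlib import Path


def print_tree(root: Path, depth: int = 0) -> None:
    for entry in sorted(root.iterdir()):
        if entry.name.startswith("."):
            continue  # skip hidden entries such as .git
        print("  " * depth + entry.name)
        if entry.is_dir():
            print_tree(entry, depth + 1)


print_tree(Path("."))
```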
Frequently Asked Questions
Do I need to pay for this system?
What's the difference between it and directly using ChatGPT?
Is my data safe?
How to add new tools, such as operating Excel?
What should I do if I encounter a port conflict or connection error when starting?
Related Resources
Detailed Documentation of MCP Server API
Deeply understand the interfaces, tool lists, and call methods of the tool server.
Usage Documentation of LianORM Database Layer
Understand the design and usage methods of the self-developed lightweight database management tool.
Agent Design Documentation
Understand the core design philosophy, identities, goals, and memory mechanisms of agents in the system.
User Guide for Configuring the System
Check how to manage and modify various configurations of the system.

Markdownify MCP
Markdownify is a multi-functional file conversion service that supports converting multiple formats such as PDFs, images, audio, and web page content into Markdown format.
TypeScript
27.6K
5 points

Notion Api MCP
Certified
A Python-based MCP Server that provides advanced to-do list management and content organization functions through the Notion API, enabling seamless integration between AI models and Notion.
Python
17.6K
4.5 points

Duckduckgo MCP Server
Certified
The DuckDuckGo Search MCP Server provides web search and content scraping services for LLMs such as Claude.
Python
55.3K
4.3 points

Gitlab MCP Server
Certified
The GitLab MCP server is a project based on the Model Context Protocol that provides a comprehensive toolset for interacting with GitLab accounts, including code review, merge request management, CI/CD configuration, and other functions.
TypeScript
18.6K
4.3 points

Unity
Certified
UnityMCP is a Unity editor plugin that implements the Model Context Protocol (MCP), providing seamless integration between Unity and AI assistants, including real-time state monitoring, remote command execution, and log functions.
C#
24.3K
5 points

Figma Context MCP
Framelink Figma MCP Server is a server that provides access to Figma design data for AI programming tools (such as Cursor). By simplifying the Figma API response, it helps AI more accurately achieve one-click conversion from design to code.
TypeScript
52.4K
4.5 points

Gmail MCP Server
A Gmail automatic authentication MCP server designed for Claude Desktop, supporting Gmail management through natural language interaction, including complete functions such as sending emails, label management, and batch operations.
TypeScript
17.2K
4.5 points

Minimax MCP Server
The MiniMax Model Context Protocol (MCP) is an official server that supports interaction with powerful text-to-speech, video/image generation APIs, and is suitable for various client tools such as Claude Desktop and Cursor.
Python
35.7K
4.8 points
