PyRunner MCP

PyRunner MCP is an MCP server optimized for Python script development. It provides a persistent kernel that retains variables across executions, offering a command-line experience similar to a Jupyter Notebook.
2.5 points
8.1K

What is PyRunner MCP?

PyRunner MCP is a Model Context Protocol (MCP) server that enables AI assistants (such as Gemini CLI) to execute code in a persistent Python kernel. This means that the variables, functions, and imported modules you define will be retained across multiple code executions without the need to reload them each time. It is particularly suitable for data analysis, machine learning experiments, and interactive script development.

How to use PyRunner MCP?

First, install Python and the Gemini CLI. Then configure PyRunner MCP as an MCP server for the Gemini CLI. Once configuration is complete, start a Gemini chat; the AI assistant can then use the tools provided by PyRunner to execute Python code and manage scripts and packages, all in a persistent environment.

Applicable scenarios

PyRunner MCP is well suited to tasks that require repeated interaction and state retention, such as exploratory data analysis (load a large dataset once, analyze it many times), machine learning model debugging (load a model once, test it many times), and writing and testing standalone utility scripts. For large-scale development that requires a full project structure or team collaboration, native development tools may be more appropriate.

Main features

Persistent Kernel
The core feature. When Python code is executed, all variables, functions, and imported modules are retained in memory for subsequent executions. It is like an always-on Jupyter Notebook kernel: there is no need to re-run initialization code.
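As an illustration of the idea (this is a minimal sketch, not PyRunner's actual API; `run_code` and `KERNEL_NS` are hypothetical names), a persistent kernel amounts to one namespace dict that survives across exec() calls:

```python
# Minimal persistent-kernel sketch: one shared namespace dict lives for
# the lifetime of the server process, so later snippets see earlier state.
KERNEL_NS = {}

def run_code(source: str) -> dict:
    """Execute a code snippet in the shared namespace and return it."""
    exec(source, KERNEL_NS)
    return KERNEL_NS

# First call defines state; the second reuses it without reloading.
run_code("data = [1, 2, 3]")
run_code("total = sum(data)")
print(KERNEL_NS["total"])  # `data` was still in memory from the first call
```

Each call to `run_code` mutates the same dict, which is exactly why variables, functions, and imports persist between executions.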
Intelligent script management
Saved scripts carry descriptions and tags (metadata). You can find relevant scripts with a natural-language query (such as 'PTT crawler') rather than relying on file names alone.
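A hedged sketch of what metadata-based lookup can look like (the index structure and function names here are assumptions, not PyRunner's internals): a query matches descriptions and tags, not just file names.

```python
# Hypothetical script index: each entry stores a description and tags
# alongside the file name, so search is not limited to the name itself.
SCRIPT_INDEX = [
    {"name": "ptt_spider.py",
     "description": "PTT crawler for board posts",
     "tags": ["crawler", "ptt"]},
    {"name": "clean_dl.py",
     "description": "clean the download folder",
     "tags": ["cleanup", "filesystem"]},
]

def find_scripts(query: str) -> list:
    """Return script names whose description or tags contain the query."""
    q = query.lower()
    return [s["name"] for s in SCRIPT_INDEX
            if q in s["description"].lower()
            or any(q in tag for tag in s["tags"])]

print(find_scripts("ptt"))  # matched via description/tag, not file name
```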
Fast package management
Checks whether a Python package is installed extremely quickly (in microseconds) and supports one-click installation of missing dependencies. All checks run inside the MCP server process, without spawning a slow subprocess.
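An in-process check like this can be done with the standard library's `importlib.util.find_spec`, which consults the import system directly instead of spawning a `pip` or `python -c "import ..."` subprocess. Whether PyRunner uses exactly this call is an assumption; the sketch shows the general technique:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """True if the top-level package is importable, without importing it."""
    return importlib.util.find_spec(package) is not None

print(is_installed("json"))         # stdlib module, always present
print(is_installed("no_such_pkg"))  # not installed
```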
Structured long-term memory
Remembers user preferences, project context, and commonly used commands. Memories are stored as structured JSON and can be retrieved by category (such as 'preference' or 'project') and by keyword.
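A minimal sketch of such a store, assuming JSON-serializable entries with a category and free-text content (the field names and `recall` function are illustrative, not PyRunner's schema):

```python
import json

# Hypothetical memory store: plain dicts that serialize cleanly to JSON.
memories = [
    {"category": "preference", "content": "user prefers pandas over polars"},
    {"category": "project", "content": "data lives in ./datasets/sales.csv"},
]

def recall(category=None, keyword=None) -> list:
    """Filter memories by optional category and keyword."""
    hits = memories
    if category:
        hits = [m for m in hits if m["category"] == category]
    if keyword:
        hits = [m for m in hits if keyword.lower() in m["content"].lower()]
    return hits

# Retrieval by category plus keyword, as described above.
print(json.dumps(recall(category="preference", keyword="pandas")))
```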
Safe Shell command execution
When executing system commands (such as git clone or pip install), a sanitized set of environment variables is applied automatically to prevent interactive prompts from stalling the command, and the working directory is unified.
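The general technique can be sketched as follows. The specific environment variables shown (`GIT_TERMINAL_PROMPT`, `PIP_NO_INPUT`) are real knobs of git and pip, but whether PyRunner sets exactly these is an assumption:

```python
import os
import subprocess

def run_shell(cmd: str, workdir: str = "."):
    """Run a shell command with prompts disabled and a fixed working dir."""
    env = dict(os.environ,
               GIT_TERMINAL_PROMPT="0",  # git fails fast instead of prompting
               PIP_NO_INPUT="1")         # pip never waits for user input
    result = subprocess.run(cmd, shell=True, cwd=workdir, env=env,
                            capture_output=True, text=True, timeout=60)
    return result.returncode, result.stdout

code, out = run_shell("echo hello")
print(code, out.strip())
```

Because the child inherits a non-interactive environment, commands that would otherwise block on a username/password prompt fail quickly with a clear error instead of hanging the session.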
Non-blocking output processing
When running tasks that produce large amounts of output or run for a long time (such as web crawlers or SSH commands), standard output and standard error are redirected to temporary files, so the process cannot get stuck on a full pipe buffer.
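The underlying trick is to hand the child process a file instead of a pipe: a file never fills up the way a pipe buffer does, so the parent cannot deadlock. A sketch under that assumption (the function name and file handling are illustrative):

```python
import subprocess
import sys
import tempfile

def run_to_logfile(args: list) -> str:
    """Run a command with stdout/stderr captured in a temp file on disk."""
    out = tempfile.NamedTemporaryFile(mode="w+", delete=False, suffix=".log")
    proc = subprocess.Popen(args, stdout=out, stderr=subprocess.STDOUT)
    proc.wait()  # a real server would poll here instead of blocking
    out.close()
    return out.name

# A chatty child: ~100 KB of output lands on disk, not in a pipe buffer.
log_path = run_to_logfile([sys.executable, "-c", "print('x' * 100000)"])
with open(log_path) as f:
    print(len(f.read()))
```

With `subprocess.PIPE` instead, a child writing this much output without the parent reading it can block forever once the OS pipe buffer fills; the temp-file approach sidesteps that entirely.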
Advantages
State persistence: Variables are retained across executions, greatly improving the efficiency of interactive development.
Smooth execution: Long-running tasks and large outputs are handled intelligently, so the AI chat interface does not freeze.
Search-friendly: Manage scripts with descriptions and tags, making it easier to find than just using file names.
Fast startup: Operations such as package checking are completed within the process, much faster than starting a subprocess.
Structured memory: Long-term memory is categorized, enabling more accurate retrieval and clearer context management.
Limitations
Not project-oriented: More suitable for independent scripts and experiments. For complete projects with complex directory structures (src/, tests/), it is not as good as the @workspace function of VS Code Copilot.
Single-language focus: Mainly serves the Python ecosystem and has limited support for multi-language mixed projects.
Requires configuration: The MCP server must be added manually to the Gemini CLI configuration, which involves some learning curve.
Kernel state management: Users must actively manage kernel state (e.g., by resetting it); otherwise accumulated variables can consume memory.

How to use

Install prerequisite software
Ensure that Python 3.8 or later is installed on your computer, and install the Gemini CLI following its official guide.
Obtain and install PyRunner MCP
Clone the PyRunner MCP project from the code repository and install its required Python dependency packages.
Configure the MCP server
Edit the configuration file of Gemini CLI and add PyRunner MCP as an MCP server. You need to specify the path of the Python interpreter and the full path of the PyRunner main script.
Start and use
In the PyRunner MCP project directory, start the Gemini CLI chat interface. If configuration succeeded, you will see a prompt indicating a connection to the PyRunner MCP server. You can now use all of the features through the AI assistant.

Usage examples

Case 1: Interactive data analysis
You want to analyze a large CSV file of sales data. Traditionally, each time you ask the AI to analyze a different dimension, it must reload the entire file, which is time-consuming. With PyRunner MCP, you load it once, and all subsequent analyses operate directly on the DataFrame in memory.
Case 2: Managing a utility script library
You often write some small utility scripts, such as cleaning the download folder, monitoring website status, etc. Over time, it's difficult to remember what each script does. PyRunner MCP allows you to add descriptions and tags to scripts.
Case 3: Executing long-running network tasks
You need a script that pings a server continuously for 5 minutes and records the results. Such a task produces a large amount of output and can easily cause ordinary execution methods to hang.

Frequently Asked Questions

What's the difference between PyRunner MCP and running Python directly in the terminal?
What's the difference between it and Jupyter Notebook?
Will the kernel running all the time consume a lot of memory?
Where are the scripts I wrote saved?
How can I make the AI assistant use PyRunner better?

Related resources

Model Context Protocol (MCP) official website
Understand the background, specifications, and design concepts of the MCP protocol.
Gemini CLI GitHub repository
Get the installation guide, usage documentation, and the latest news of Gemini CLI.
FastMCP library
The Python MCP server framework on which PyRunner MCP is built.

Installation

Copy the following configuration into your client:
{
  "mcpServers": {
    "PyRunner_MCP": {
      "command": "python",
      "args": ["C:/path/to/PyRunner_MCP/PyRunner_MCP.py"],
      "env": {
        "MCP_BASE_DIR": "C:/path/to/PyRunner_MCP"
      }
    }
  }
}

Alternatives

Vestige
Vestige is an AI memory engine based on cognitive science. It implements 29 neuroscience-inspired modules, such as prediction-error gating, FSRS-6 spaced repetition, and memory dreaming, to provide long-term memory for AI. It includes a 3D visualization dashboard and 21 MCP tools, runs entirely locally, and requires no cloud.
Rust
10.6K
4.5 points
Moltbrain
MoltBrain is a long-term memory layer plugin designed for OpenClaw, MoltBook, and Claude Code, capable of automatically learning and recalling project context, providing intelligent search, observation recording, analysis statistics, and persistent storage functions.
TypeScript
9.2K
4.5 points
Bm.md
A feature-rich Markdown typesetting tool that supports multiple style themes and platform adaptation, providing real-time editing preview, image export, and API integration capabilities
TypeScript
16.0K
5 points
Security Detections MCP
Security Detections MCP is a server based on the Model Context Protocol that allows LLMs to query a unified security detection rule database covering Sigma, Splunk ESCU, Elastic, and KQL formats. The latest version 3.0 is upgraded to an autonomous detection engineering platform that can automatically extract TTPs from threat intelligence, analyze coverage gaps, generate SIEM-native format detection rules, run tests, and verify. The project includes over 71 tools, 11 pre-built workflow prompts, and a knowledge graph system, supporting multiple SIEM platforms.
TypeScript
7.8K
4 points
Paperbanana
Python
9.1K
5 points
Finlab Ai
FinLab AI is a quantitative financial analysis platform that helps users discover excess returns (alpha) in investment strategies through AI technology. It provides a rich dataset, backtesting framework, and strategy examples, supporting automated installation and integration into mainstream AI programming assistants.
10.0K
4 points
Better Icons
An MCP server and CLI tool that provides search and retrieval of over 200,000 icons, supports more than 150 icon libraries, and helps AI assistants and developers quickly obtain and use icons.
TypeScript
9.7K
4.5 points
Assistant Ui
assistant-ui is an open-source TypeScript/React library for quickly building production-grade AI chat interfaces, providing composable UI components, streaming responses, accessibility, and more, with support for multiple AI backends and models.
TypeScript
10.0K
5 points
Markdownify MCP
Markdownify is a multi-functional file conversion service that supports converting multiple formats such as PDFs, images, audio, and web page content into Markdown format.
TypeScript
38.2K
5 points
Notion Api MCP
Certified
A Python-based MCP Server that provides advanced to-do list management and content organization functions through the Notion API, enabling seamless integration between AI models and Notion.
Python
24.1K
4.5 points
Duckduckgo MCP Server
Certified
The DuckDuckGo Search MCP Server provides web search and content scraping services for LLMs such as Claude.
Python
80.7K
4.3 points
Gitlab MCP Server
Certified
The GitLab MCP server is a project based on the Model Context Protocol that provides a comprehensive toolset for interacting with GitLab accounts, including code review, merge request management, CI/CD configuration, and other functions.
TypeScript
28.6K
4.3 points
Unity
Certified
UnityMCP is a Unity editor plugin that implements the Model Context Protocol (MCP), providing seamless integration between Unity and AI assistants, including real-time state monitoring, remote command execution, and logging.
C#
38.6K
5 points
Figma Context MCP
Framelink Figma MCP Server provides AI programming tools (such as Cursor) with access to Figma design data. By simplifying Figma API responses, it helps AI convert designs to code more accurately in one click.
TypeScript
70.0K
4.5 points
Minimax MCP Server
The MiniMax Model Context Protocol (MCP) is an official server that supports interaction with powerful text-to-speech, video/image generation APIs, and is suitable for various client tools such as Claude Desktop and Cursor.
Python
56.6K
4.8 points
Context7
Context7 MCP is a service that provides real-time, version-specific documentation and code examples for AI programming assistants. It is directly integrated into prompts through the Model Context Protocol to solve the problem of LLMs using outdated information.
TypeScript
106.6K
4.7 points
AIBase
Zhiqi Future, Your AI Solution Think Tank
© 2026 AIBase