Gemini Researcher

A lightweight, stateless MCP server that lets development agents (such as Claude Code and GitHub Copilot) delegate in-depth codebase analysis tasks to the Gemini CLI. By reading files locally and returning structured JSON results, it reduces agents' context consumption and model usage.
2 points
6.5K

What is Gemini Researcher?

Gemini Researcher is a Model Context Protocol (MCP) server that allows AI assistants (such as Claude Code and GitHub Copilot) to delegate complex codebase analysis tasks to Google's Gemini CLI tool. When an AI assistant needs to analyze a large codebase, instead of copying entire file contents into its own context (which consumes a large number of tokens and clutters the conversation), it can use Gemini Researcher to have the Gemini CLI read and analyze local files directly, then return structured analysis results.

How to use Gemini Researcher?

Using Gemini Researcher takes three steps:
1. Install the necessary dependencies (Node.js and the Gemini CLI).
2. Configure your AI assistant (Claude, Cursor, or VS Code) to connect to this MCP server.
3. Restart the AI assistant and start using it.
Once configured, you can ask the AI assistant questions about the codebase directly, and it will automatically use Gemini Researcher to obtain detailed analysis results.

Applicable scenarios

Gemini Researcher is particularly suitable for the following scenarios: analyzing the architecture of large codebases, reviewing code for security issues, understanding complex business logic, getting up to speed on new projects quickly, and performing multi-file cross-analysis. It is especially useful when you need an AI assistant to understand code deeply without consuming a large number of tokens.

Main features

Quick query
Use Gemini's fast model to quickly analyze specific files or small code snippets, suitable for simple questions and code explanations.
In-depth research
Use Gemini's professional model for complex multi-file analysis, suitable for in-depth tasks such as architecture review and security auditing.
Directory analysis
Generate a mapping of the project directory structure to help quickly understand unfamiliar codebases and generate a project overview.
Path validation
Pre-check whether the file path exists before performing expensive queries to avoid wasted operations.
Health check
Diagnose the status of the server and the Gemini CLI to help troubleshoot connection and configuration issues.
Chunked response
Large responses are automatically transmitted in chunks (about 10 KB each), with a one-hour cache, improving the efficiency of large-file processing.
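Under the hood, each feature above is exposed as an MCP tool that the client invokes over JSON-RPC. As a rough sketch, a quick_query call might look like the following (the argument names here are illustrative assumptions, not the package's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "quick_query",
    "arguments": {
      "prompt": "Explain what this module does",
      "file": "src/index.ts"
    }
  }
}
```

The server reads the file locally and returns the analysis as structured JSON, so the file's contents never pass through the assistant's own context.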
Advantages
Save token usage of AI assistants: Avoid copying a large amount of code into the context of AI assistants.
Improve analysis depth: Utilize Gemini's large context window for more comprehensive code analysis.
Keep AI assistants focused: Let AI assistants focus on high - level decision - making and delegate detailed analysis to specialized tools.
Read-only operation is safe: The server is read-only and will not modify any files, keeping your code safe.
Structured output: Return results in JSON format for easy programmatic processing by AI assistants.
Limitations
Requires additional installation: Node.js and Gemini CLI need to be installed.
Depends on Gemini API: A Gemini API key or Google account authentication is required.
Path limitation: Can only analyze files within the project root directory.
Response time: In-depth analysis may take a long time.
Quota limitation: Subject to Gemini API quota limitations, and heavy usage may trigger restrictions.

How to use

Environment preparation
Ensure that Node.js 18+ and the Gemini CLI are installed on your system, and verify both installations from a terminal.
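A minimal verification sketch (this assumes the Gemini CLI installs a binary named gemini; check the CLI's own documentation if yours differs):

```shell
# Check that each prerequisite is on PATH and print its version.
# Node.js must report v18 or newer.
for cmd in node gemini; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $("$cmd" --version 2>/dev/null | head -n 1)"
  else
    echo "$cmd: not found"
  fi
done
```

If either line prints "not found", install the missing dependency before continuing.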
Initialization settings
Run the initialization wizard to verify that the Gemini CLI is correctly installed and authenticated.
Configure AI assistant
Add the MCP server configuration for the AI assistant you are using; general configurations are listed in the Installation section below.
Restart and test
Restart your AI assistant (Claude Code, Cursor, or VS Code) and then test the connection.
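One way to test the connection, assuming your client is Claude Code (whose CLI provides an "mcp list" subcommand), is a sketch like this:

```shell
# List configured MCP servers and their connection status in Claude Code.
# Falls back to a hint if the claude CLI is not on PATH.
if command -v claude >/dev/null 2>&1; then
  claude mcp list
else
  echo "claude CLI not found; check your client's MCP settings UI instead"
fi
```

In any client, you can also simply ask the assistant to call the server's health check tool and confirm it reports that the Gemini CLI is reachable.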

Usage examples

Security vulnerability analysis
When you need to check the code for security vulnerabilities, you can use the deep_research tool for a comprehensive security audit.
Code understanding and explanation
When you need to quickly understand the logic of a complex code segment, you can use quick_query to get a concise explanation.
Project structure exploration
When you first start working on a new project, you can use analyze_directory to quickly understand the project structure.
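In practice you simply phrase the request to your assistant, which translates it into a tool call. For the security-audit scenario above, the underlying call might resemble the following (the tool name comes from this document; the argument names are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "deep_research",
    "arguments": {
      "prompt": "Audit authentication and input validation for security vulnerabilities",
      "paths": ["src/auth/", "src/api/"]
    }
  }
}
```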

Frequently Asked Questions

Why do I need to install the Gemini CLI?
Will this tool modify my code?
Which AI assistants are supported?
How do I set a different project root directory?
What should I do if I encounter the "GEMINI_CLI_NOT_FOUND" error?
Will the analysis results be cached?

Related resources

NPM package page
View the latest version and download statistics
GitHub repository
Source code and issue tracking
Gemini CLI documentation
Learn the detailed usage of the Gemini CLI
MCP protocol documentation
Understand the technical specifications of the Model Context Protocol
Docker image
Pre-built Docker container image

Installation

Copy the following configuration into your client:
Standard configuration (npx):
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "gemini-researcher"
      ]
    }
  }
}

With an explicit stdio transport (for clients that require a "type" field):
{
  "mcpServers": {
    "gemini-researcher": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "gemini-researcher"
      ]
    }
  }
}

With a custom project root (via the PROJECT_ROOT environment variable):
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": [
        "gemini-researcher"
      ],
      "env": {
        "PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}

Docker (mounts your project and passes the API key):
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GEMINI_API_KEY",
        "-v", "/path/to/your/project:/workspace",
        "capybearista/gemini-researcher:latest"
      ],
      "env": {
        "GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
Note: Your API key is sensitive information; do not share it with anyone.

Alternatives

V
Vestige
Vestige is an AI memory engine based on cognitive science. By implementing 29 neuroscience modules such as prediction-error gating, FSRS-6 spaced repetition, and memory dreaming, it provides long-term memory capabilities for AI. It includes a 3D visualization dashboard and 21 MCP tools, runs completely locally, and does not require the cloud.
Rust
9.6K
4.5 points
M
Moltbrain
MoltBrain is a long-term memory layer plugin designed for OpenClaw, MoltBook, and Claude Code, capable of automatically learning and recalling project context, providing intelligent search, observation recording, analysis statistics, and persistent storage functions.
TypeScript
10.2K
4.5 points
B
Bm.md
A feature-rich Markdown typesetting tool that supports multiple style themes and platform adaptation, providing real-time editing preview, image export, and API integration capabilities
TypeScript
14.9K
5 points
S
Security Detections MCP
Security Detections MCP is a server based on the Model Context Protocol that allows LLMs to query a unified security detection rule database covering Sigma, Splunk ESCU, Elastic, and KQL formats. The latest version 3.0 is upgraded to an autonomous detection engineering platform that can automatically extract TTPs from threat intelligence, analyze coverage gaps, generate SIEM-native format detection rules, run tests, and verify. The project includes over 71 tools, 11 pre-built workflow prompts, and a knowledge graph system, supporting multiple SIEM platforms.
TypeScript
8.8K
4 points
P
Paperbanana
Python
10.0K
5 points
F
Finlab Ai
FinLab AI is a quantitative financial analysis platform that helps users discover excess returns (alpha) in investment strategies through AI technology. It provides a rich dataset, backtesting framework, and strategy examples, supporting automated installation and integration into mainstream AI programming assistants.
10.0K
4 points
B
Better Icons
An MCP server and CLI tool that provides search and retrieval of over 200,000 icons, supports more than 150 icon libraries, and helps AI assistants and developers quickly obtain and use icons.
TypeScript
9.7K
4.5 points
A
Assistant Ui
assistant-ui is an open-source TypeScript/React library for quickly building production-grade AI chat interfaces, providing composable UI components, streaming responses, accessibility, and more, with support for multiple AI backends and models.
TypeScript
10.0K
5 points
G
Gitlab MCP Server
Certified
The GitLab MCP server is a project based on the Model Context Protocol that provides a comprehensive toolset for interacting with GitLab accounts, including code review, merge request management, CI/CD configuration, and other functions.
TypeScript
28.5K
4.3 points
M
Markdownify MCP
Markdownify is a multi-functional file conversion service that supports converting multiple formats such as PDFs, images, audio, and web page content into Markdown format.
TypeScript
38.2K
5 points
D
Duckduckgo MCP Server
Certified
The DuckDuckGo Search MCP Server provides web search and content scraping services for LLMs such as Claude.
Python
80.5K
4.3 points
N
Notion Api MCP
Certified
A Python-based MCP Server that provides advanced to-do list management and content organization functions through the Notion API, enabling seamless integration between AI models and Notion.
Python
24.9K
4.5 points
U
Unity
Certified
UnityMCP is a Unity editor plugin that implements the Model Context Protocol (MCP), providing seamless integration between Unity and AI assistants, including real-time state monitoring, remote command execution, and logging.
C#
37.5K
5 points
F
Figma Context MCP
Framelink Figma MCP Server is a server that provides access to Figma design data for AI programming tools (such as Cursor). By simplifying the Figma API response, it helps AI more accurately achieve one-click conversion from design to code.
TypeScript
70.8K
4.5 points
C
Context7
Context7 MCP is a service that provides real-time, version-specific documentation and code examples for AI programming assistants. It is directly integrated into prompts through the Model Context Protocol to solve the problem of LLMs using outdated information.
TypeScript
106.4K
4.7 points
M
Minimax MCP Server
The MiniMax Model Context Protocol (MCP) is an official server that supports interaction with powerful text-to-speech, video/image generation APIs, and is suitable for various client tools such as Claude Desktop and Cursor.
Python
55.5K
4.8 points
AIBase
Zhiqi Future, Your AI Solution Think Tank
© 2026 AIBase