Gemini Researcher
A lightweight, stateless MCP server that lets development agents (such as Claude Code and GitHub Copilot) delegate in-depth codebase analysis tasks to the Gemini CLI. By reading files locally and returning structured JSON results, it reduces agents' context consumption and model usage.
Rating: 2 points
Downloads: 7.3K
What is Gemini Researcher?
Gemini Researcher is a Model Context Protocol (MCP) server that lets AI assistants (such as Claude Code and GitHub Copilot) delegate complex codebase analysis tasks to Google's Gemini CLI. When an AI assistant needs to analyze a large codebase, instead of copying entire files into its own context (which consumes a large number of tokens and clutters the conversation), it can use Gemini Researcher to have the Gemini CLI read and analyze local files directly, then return structured analysis results.
How to use Gemini Researcher?
Using Gemini Researcher takes three steps: 1) install the necessary dependencies (Node.js and the Gemini CLI); 2) configure your AI assistant (Claude, Cursor, or VS Code) to connect to this MCP server; 3) restart the AI assistant and start using it. Once configured, you can ask the AI assistant questions about the codebase directly, and it will automatically use Gemini Researcher to obtain detailed analysis results.
Applicable scenarios
Gemini Researcher is particularly well suited to the following scenarios: analyzing the architecture of large codebases, reviewing code for security issues, understanding complex business logic, getting up to speed on new projects quickly, and performing multi-file association analysis. It is especially useful when you need an AI assistant to understand code deeply without consuming a large number of tokens.
Main features
Quick query
Use Gemini's fast model to quickly analyze specific files or small code snippets, suitable for simple questions and code explanations.
In-depth research
Use Gemini's professional model for complex multi-file analysis, suitable for in-depth tasks such as architecture reviews and security audits.
Directory analysis
Generate a mapping of the project directory structure to help quickly understand unfamiliar codebases and generate a project overview.
Path validation
Pre-check whether the file path exists before performing expensive queries to avoid invalid operations.
Health check
Diagnose the status of the server and the Gemini CLI to help troubleshoot connection and configuration issues.
Chunked response
Large responses are automatically transmitted in chunks (about 10KB each) and cached for one hour, making large files more efficient to process.
Advantages
Save AI assistant tokens: Avoid copying large amounts of code into the assistant's context.
Improve analysis depth: Utilize Gemini's large context window for more comprehensive code analysis.
Keep AI assistants focused: Let AI assistants focus on high - level decision - making and delegate detailed analysis to specialized tools.
Read-only and safe: The server only reads files and never modifies them, keeping your code safe.
Structured output: Return results in JSON format for easy programmatic processing by AI assistants.
Limitations
Requires additional installation: Node.js and Gemini CLI need to be installed.
Depends on Gemini API: A Gemini API key or Google account authentication is required.
Path limitation: Can only analyze files within the project root directory.
Response time: In-depth analysis may take a long time.
Quota limitation: Subject to Gemini API quota limitations, and heavy usage may trigger restrictions.
How to use
Environment preparation
Ensure that Node.js 18+ and Gemini CLI are installed on your system. Run the following commands to verify the installation:
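The exact commands are not reproduced on this page; assuming a standard setup, version checks along these lines should confirm the prerequisites (the npm package name `@google/gemini-cli` in the hint is an assumption based on the Gemini CLI's published package):

```shell
# Verify Node.js is installed and is version 18 or newer
node --version

# Verify the Gemini CLI is on PATH; print an install hint if it is missing
command -v gemini || echo "Gemini CLI not found; try: npm install -g @google/gemini-cli"
```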
Initialization settings
Run the initialization wizard to verify that the Gemini CLI is correctly installed and authenticated:
Configure AI assistant
Add MCP server configuration according to the AI assistant you are using. The following is the general configuration:
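The concrete snippet is not shown on this page; a typical MCP server entry has the following shape (the package name `gemini-researcher` and the `npx` invocation are assumptions — use the actual name from the NPM package page in the related resources):

```json
{
  "mcpServers": {
    "gemini-researcher": {
      "command": "npx",
      "args": ["-y", "gemini-researcher"]
    }
  }
}
```

In Claude Code and Cursor this block goes into the MCP settings file; VS Code uses an equivalent `mcp.json` entry.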
Restart and test
Restart your AI assistant (Claude Code, Cursor, or VS Code) and then test the connection:
Usage examples
Security vulnerability analysis
When you need to check security vulnerabilities in the code, you can use the deep_research tool for a comprehensive security audit.
Code understanding and explanation
When you need to quickly understand the logic of a complex code segment, you can use quick_query to get a concise explanation.
Project structure exploration
When you first start working on a new project, you can use analyze_directory to quickly understand the project structure.
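Under the hood, the assistant invokes these tools through standard MCP `tools/call` requests. A sketch of what a `quick_query` call might look like on the wire (the argument names `question` and `paths` are assumptions, not taken from the server's published schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "quick_query",
    "arguments": {
      "question": "What does this module's retry logic do?",
      "paths": ["src/http/retry.ts"]
    }
  }
}
```

The `deep_research` and `analyze_directory` tools are called the same way, with their own argument sets.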
Frequently Asked Questions
Why do I need to install the Gemini CLI?
Will this tool modify my code?
Which AI assistants are supported?
How do I set a different project root directory?
What should I do if I encounter the "GEMINI_CLI_NOT_FOUND" error?
Will the analysis results be cached?
Related resources
NPM package page
View the latest version and download statistics
GitHub repository
Source code and issue tracking
Gemini CLI documentation
Learn the detailed usage of the Gemini CLI
MCP protocol documentation
Understand the technical specifications of the Model Context Protocol
Docker image
Pre-built Docker container image

Gitlab MCP Server
Certified
The GitLab MCP server is a project based on the Model Context Protocol that provides a comprehensive toolset for interacting with GitLab accounts, including code review, merge request management, CI/CD configuration, and other functions.
TypeScript
24.4K
4.3 points

Notion Api MCP
Certified
A Python-based MCP Server that provides advanced to-do list management and content organization functions through the Notion API, enabling seamless integration between AI models and Notion.
Python
20.4K
4.5 points

Markdownify MCP
Markdownify is a multi-functional file conversion service that supports converting multiple formats such as PDFs, images, audio, and web page content into Markdown format.
TypeScript
35.3K
5 points

Duckduckgo MCP Server
Certified
The DuckDuckGo Search MCP Server provides web search and content scraping services for LLMs such as Claude.
Python
72.5K
4.3 points

Unity
Certified
UnityMCP is a Unity editor plugin that implements the Model Context Protocol (MCP), providing seamless integration between Unity and AI assistants, including real-time state monitoring, remote command execution, and logging.
C#
31.1K
5 points

Figma Context MCP
Framelink Figma MCP Server provides AI programming tools (such as Cursor) with access to Figma design data. By simplifying the Figma API response, it helps AI convert designs to code more accurately in one click.
TypeScript
64.4K
4.5 points

Gmail MCP Server
A Gmail automatic authentication MCP server designed for Claude Desktop, supporting Gmail management through natural language interaction, including complete functions such as sending emails, label management, and batch operations.
TypeScript
21.0K
4.5 points

Context7
Context7 MCP is a service that provides real-time, version-specific documentation and code examples for AI programming assistants. It is directly integrated into prompts through the Model Context Protocol to solve the problem of LLMs using outdated information.
TypeScript
96.8K
4.7 points
