Charly Memory Cache
An MCP service that reduces token consumption in language model interactions through a caching mechanism
Rating: 2.5 points
Downloads: 8.4K
What is the Memory Cache MCP Server?
The Memory Cache MCP Server is a tool that automatically stores and reuses data to reduce token consumption when interacting with language models. It works with any client that supports the MCP protocol and any language model that consumes tokens.
How to use the Memory Cache MCP Server?
After installation, simply add the server configuration to your MCP client settings. The server then caches commonly used data automatically, with no further action required.
Applicable Scenarios
It is particularly well suited to workloads that involve frequent file reading, repeated data analysis, or processing of static content.
Main Features
Intelligent Caching
Automatically caches repeatedly accessed data, such as file contents or computation results.
Memory Management
Caps the maximum number of cache entries and total memory usage to prevent resource exhaustion.
Statistical Reports
Monitors cache hit rate and efficiency in real time so performance can be optimized.
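The three features above can be illustrated with a minimal sketch: a bounded, TTL-aware in-memory cache that tracks hit-rate statistics. All names here are illustrative, not the server's actual API or implementation.

```typescript
// Minimal sketch of the caching ideas described above; not the server's real code.

interface Entry<V> {
  value: V;
  expiresAt: number; // epoch ms after which the entry is stale
}

class MemoryCache<V> {
  private entries = new Map<string, Entry<V>>();
  private hits = 0;
  private misses = 0;

  constructor(private maxEntries: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt < Date.now()) {
      this.misses++;
      if (entry) this.entries.delete(key); // evict stale entry lazily
      return undefined;
    }
    this.hits++;
    return entry.value;
  }

  set(key: string, value: V): void {
    // Memory management: once the size cap is reached, evict the
    // oldest entry so resource usage stays bounded.
    if (this.entries.size >= this.maxEntries && !this.entries.has(key)) {
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  // Statistical reports: expose hit/miss counts and the hit rate.
  stats() {
    const total = this.hits + this.misses;
    return {
      hits: this.hits,
      misses: this.misses,
      hitRate: total === 0 ? 0 : this.hits / total,
      size: this.entries.size,
    };
  }
}

// Usage: the second read of the same key is served from the cache.
const cache = new MemoryCache<string>(100, 60_000);
cache.set("file:/tmp/report.txt", "file contents");
cache.get("file:/tmp/report.txt"); // hit
cache.get("file:/other.txt");      // miss
console.log(cache.stats().hitRate); // 0.5
```

The lazy TTL check in `get` is one common design choice; a real server might instead sweep expired entries on a timer.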
Advantages
Reduces token consumption and improves interaction efficiency
Manages the cache automatically, with no manual intervention
Supports multiple MCP clients and language models
Limitations
The cache itself consumes some memory
Not ideal for rapidly changing data (the TTL must be tuned for such workloads)
How to Use
Clone the Repository
Clone the server code to your local machine with Git.
Install Dependencies
Install the required Node.js dependencies (for example, with npm install).
Build the Project
Compile the source into a runnable build.
Configure the Client
Add the server entry to your MCP client configuration.
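The final step typically means adding an entry like the following to the client's settings file (for Claude Desktop, this is claude_desktop_config.json). The server name and path below are placeholders, not the project's documented values:

```json
{
  "mcpServers": {
    "memory-cache": {
      "command": "node",
      "args": ["/path/to/memory-cache-mcp-server/build/index.js"]
    }
  }
}
```

After restarting the client, the server's caching tools should be available automatically.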
Usage Examples
File Reading Test
Read the same file multiple times to verify the caching effect.
Data Analysis Test
Perform multiple analyses on the same data.
Project Navigation Test
Explore the files and directories of the project.
Frequently Asked Questions
How to check if the cache is working properly?
Use the statistical reports: repeated reads of the same data should show up as cache hits.
Will the cache grow infinitely?
No. Both the number of entries and total memory usage are capped, and stale entries expire via TTL.
How to adjust the cache size?
Tune the entry limit, memory limit, and TTL in the server configuration.
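Many MCP servers accept such limits through an env block in the client configuration. A sketch of how that might look — the variable names below are hypothetical and may not match this server's actual settings:

```json
{
  "mcpServers": {
    "memory-cache": {
      "command": "node",
      "args": ["/path/to/memory-cache-mcp-server/build/index.js"],
      "env": {
        "MAX_ENTRIES": "1000",
        "MAX_MEMORY_MB": "100",
        "DEFAULT_TTL_SECONDS": "60"
      }
    }
  }
}
```

Consult the official documentation for the server's supported settings.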
Related Resources
Official Documentation
Detailed server usage guide.
GitHub Repository
Source code and issue tracking.
Tutorial Video
Quick start demonstration.

Related Servers

Notion Api MCP
Certified
A Python-based MCP Server that provides advanced to-do list management and content organization functions through the Notion API, enabling seamless integration between AI models and Notion.
Python
17.7K
4.5 points

Markdownify MCP
Markdownify is a multi-functional file conversion service that supports converting multiple formats such as PDFs, images, audio, and web page content into Markdown format.
TypeScript
27.7K
5 points

Gitlab MCP Server
Certified
The GitLab MCP server is a project based on the Model Context Protocol that provides a comprehensive toolset for interacting with GitLab accounts, including code review, merge request management, CI/CD configuration, and other functions.
TypeScript
17.7K
4.3 points

Duckduckgo MCP Server
Certified
The DuckDuckGo Search MCP Server provides web search and content scraping services for LLMs such as Claude.
Python
55.0K
4.3 points

Unity
Certified
UnityMCP is a Unity editor plugin that implements the Model Context Protocol (MCP), providing seamless integration between Unity and AI assistants, including real-time state monitoring, remote command execution, and logging.
C#
24.7K
5 points

Figma Context MCP
Framelink Figma MCP Server is a server that provides access to Figma design data for AI programming tools (such as Cursor). By simplifying the Figma API response, it helps AI more accurately achieve one-click conversion from design to code.
TypeScript
51.8K
4.5 points

Gmail MCP Server
A Gmail automatic authentication MCP server designed for Claude Desktop, supporting Gmail management through natural language interaction, including complete functions such as sending emails, label management, and batch operations.
TypeScript
17.5K
4.5 points

Context7
Context7 MCP is a service that provides real-time, version-specific documentation and code examples for AI programming assistants. It is directly integrated into prompts through the Model Context Protocol to solve the problem of LLMs using outdated information.
TypeScript
76.9K
4.7 points

