Memory Cache

An MCP service that reduces token consumption in language model interactions through efficient data caching.
2.5 points
10.6K

What is an in-memory cache server?

An in-memory cache server is a tool built on the MCP (Model Context Protocol) that reduces token consumption by storing reusable data. It caches data automatically as you interact with the language model, improving efficiency and saving costs.

How to use an in-memory cache server?

Install and run the in-memory cache server, then add it to your MCP client configuration. Caching is fully automatic; no manual intervention is required.

Applicable scenarios

The in-memory cache server is particularly suitable for application scenarios that require frequent access to the same data, such as file reading, data analysis, and project navigation.

Main features

Automatic caching
When you interact with the language model, the server automatically saves commonly used data and returns the cached content directly on subsequent requests.
Dynamic cleanup
Expired or infrequently used cache entries are removed automatically according to configurable rules, so memory use stays bounded.
Multi-client compatibility
Supports any client that follows the MCP protocol and can be easily integrated into the existing workflow.
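The automatic caching and dynamic cleanup described above follow a common TTL-cache pattern. The sketch below is illustrative only and does not reflect the server's actual internals; the class and parameter names (`SimpleTtlCache`, `defaultTtlMs`, `maxEntries`) are assumptions for the example.

```typescript
// A minimal TTL cache sketch: entries carry an expiry timestamp and are
// evicted lazily on read, with a simple FIFO eviction when the entry cap
// is reached. Hypothetical names; not the server's real API.
class SimpleTtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private defaultTtlMs: number, private maxEntries: number) {}

  set(key: string, value: V, ttlMs = this.defaultTtlMs): void {
    // When full, evict the oldest entry (Map preserves insertion order).
    if (this.store.size >= this.maxEntries && !this.store.has(key)) {
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy cleanup of expired entries
      return undefined;
    }
    return entry.value;
  }
}
```

A real implementation would also enforce a memory budget and run a periodic cleanup timer, which is what the MAX_MEMORY and CHECK_INTERVAL settings shown later suggest.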
Advantages
Significantly reduces token consumption and lowers costs
Improves response speed and user experience
Easy to use, with no configuration needed beyond the client setup
Limitations
Offers little benefit for one-off tasks that never reuse data
Cache size must be set appropriately to balance performance and memory usage

How to use

Install the in-memory cache server
Install it automatically via Smithery, or clone the repository and build it locally.
Configure the MCP client
Add the in-memory cache server to your MCP client settings and specify its path and parameters.
Start the server
Once configured, your MCP client launches the server automatically in the background.

Usage examples

File reading test
Reading a large file the first time consumes tokens as usual; a second read returns the data directly from the cache.
Data processing test
After a complex calculation over a dataset, subsequent requests reuse the cached result instead of recomputing it.
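Both tests rely on the same read-through caching idea: pay the full cost once, then serve repeats from memory. This sketch shows the pattern with a plain `Map`; `cachedRead` and the reader callback are hypothetical names, not part of the server's API.

```typescript
// Read-through caching sketch: the first call invokes the expensive
// reader, later calls for the same key return the stored result.
const cache = new Map<string, string>();

function cachedRead(
  path: string,
  readLargeFile: (p: string) => string,
): string {
  const hit = cache.get(path);
  if (hit !== undefined) return hit; // repeat request: served from cache
  const data = readLargeFile(path);  // first request: pays the full cost
  cache.set(path, data);
  return data;
}
```

With the MCP server in place, this bookkeeping happens transparently, which is why the second file read or data-processing request consumes far fewer tokens.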

Frequently Asked Questions

How do I check whether the cache is working?
Repeat a request (for example, read the same file twice) and compare token consumption; the second request should use noticeably fewer tokens. The STATS_INTERVAL setting in the configuration suggests the server also reports cache statistics periodically.
Will the cache grow indefinitely?
No. Expired and infrequently used entries are cleaned up automatically, and MAX_ENTRIES and MAX_MEMORY cap the cache size.
What data will be cached?
Data that is reused across requests, such as file contents and computed results.

Related resources

Official documentation
Learn more about the in-memory cache server.
GitHub repository
View the source code and contribution guidelines.
Video tutorial
Watch the demonstration video to learn how to get started quickly.

Installation

Copy the following configuration into your MCP client:
{
  "mcpServers": {
    "memory-cache": {
      "command": "node",
      "args": ["/path/to/ib-mcp-cache-server/build/index.js"]
    }
  }
}

To tune the cache, add environment variables to the same configuration:

{
  "mcpServers": {
    "memory-cache": {
      "command": "node",
      "args": ["/path/to/build/index.js"],
      "env": {
        "MAX_ENTRIES": "5000",
        "MAX_MEMORY": "209715200",
        "DEFAULT_TTL": "7200",
        "CHECK_INTERVAL": "120000",
        "STATS_INTERVAL": "60000"
      }
    }
  }
}

Standard JSON does not allow comments, so the units are listed here instead: MAX_MEMORY is in bytes (209715200 = 200 MB), DEFAULT_TTL is in seconds (7200 = 2 hours), and CHECK_INTERVAL and STATS_INTERVAL are in milliseconds (2 minutes and 1 minute, respectively).
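Since environment variables always arrive as strings, the server has to parse them into numbers with sensible fallbacks. This sketch shows one way to do that; the function name and the fallback values are assumptions for illustration, not the server's documented defaults.

```typescript
// Parse the cache tuning variables from an environment map, falling back
// to assumed defaults when a variable is unset or not a valid number.
function parseCacheConfig(env: Record<string, string | undefined>) {
  const num = (key: string, fallback: number): number => {
    const raw = env[key];
    const n = raw === undefined ? NaN : Number(raw);
    return Number.isFinite(n) ? n : fallback;
  };
  return {
    maxEntries: num("MAX_ENTRIES", 1000),
    maxMemoryBytes: num("MAX_MEMORY", 100 * 1024 * 1024), // bytes
    defaultTtlSeconds: num("DEFAULT_TTL", 3600),          // seconds
    checkIntervalMs: num("CHECK_INTERVAL", 60_000),       // milliseconds
    statsIntervalMs: num("STATS_INTERVAL", 60_000),       // milliseconds
  };
}
```

Note that 209715200 bytes is exactly 200 × 1024 × 1024, i.e. 200 MB.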

Alternatives

Airweave
Airweave is an open-source context retrieval layer for AI agents and RAG systems. It connects and synchronizes data from various applications, tools, and databases, and provides relevant, real-time, multi-source contextual information to AI agents through a unified search interface.
Python
10.6K
5 points
Vestige
Vestige is an AI memory engine based on cognitive science. By implementing 29 neuroscience modules such as prediction error gating, FSRS-6 spaced repetition, and memory dreaming, it provides long-term memory capabilities for AI. It includes a 3D visualization dashboard and 21 MCP tools, runs completely locally, and does not require the cloud.
Rust
5.9K
4.5 points
Moltbrain
MoltBrain is a long-term memory layer plugin designed for OpenClaw, MoltBook, and Claude Code, capable of automatically learning and recalling project context, providing intelligent search, observation recording, analysis statistics, and persistent storage functions.
TypeScript
7.0K
4.5 points
Bm.md
A feature-rich Markdown typesetting tool that supports multiple style themes and platform adaptation, providing real-time editing preview, image export, and API integration capabilities
TypeScript
10.7K
5 points
Security Detections MCP
Security Detections MCP is a server based on the Model Context Protocol that allows LLMs to query a unified security detection rule database covering Sigma, Splunk ESCU, Elastic, and KQL formats. The latest version 3.0 is upgraded to an autonomous detection engineering platform that can automatically extract TTPs from threat intelligence, analyze coverage gaps, generate SIEM-native format detection rules, run tests, and verify. The project includes over 71 tools, 11 pre-built workflow prompts, and a knowledge graph system, supporting multiple SIEM platforms.
TypeScript
5.8K
4 points
Paperbanana
Python
7.2K
5 points
Better Icons
An MCP server and CLI tool that provides search and retrieval of over 200,000 icons, supports more than 150 icon libraries, and helps AI assistants and developers quickly obtain and use icons.
TypeScript
8.5K
4.5 points
Assistant Ui
assistant-ui is an open-source TypeScript/React library for quickly building production-grade AI chat interfaces, providing composable UI components, streaming responses, accessibility features, and more, with support for multiple AI backends and models.
TypeScript
6.9K
5 points
Notion Api MCP
Certified
A Python-based MCP Server that provides advanced to-do list management and content organization functions through the Notion API, enabling seamless integration between AI models and Notion.
Python
22.4K
4.5 points
Markdownify MCP
Markdownify is a multi-functional file conversion service that supports converting multiple formats such as PDFs, images, audio, and web page content into Markdown format.
TypeScript
35.8K
5 points
Duckduckgo MCP Server
Certified
The DuckDuckGo Search MCP Server provides web search and content scraping services for LLMs such as Claude.
Python
74.5K
4.3 points
Gitlab MCP Server
Certified
The GitLab MCP server is a project based on the Model Context Protocol that provides a comprehensive toolset for interacting with GitLab accounts, including code review, merge request management, CI/CD configuration, and other functions.
TypeScript
25.6K
4.3 points
Figma Context MCP
Framelink Figma MCP Server is a server that provides access to Figma design data for AI programming tools (such as Cursor). By simplifying the Figma API response, it helps AI more accurately achieve one-click conversion from design to code.
TypeScript
66.9K
4.5 points
Unity
Certified
UnityMCP is a Unity editor plugin that implements the Model Context Protocol (MCP), providing seamless integration between Unity and AI assistants, including real-time state monitoring, remote command execution, and log functions.
C#
34.1K
5 points
Gmail MCP Server
A Gmail automatic authentication MCP server designed for Claude Desktop, supporting Gmail management through natural language interaction, including complete functions such as sending emails, label management, and batch operations.
TypeScript
21.8K
4.5 points
Context7
Context7 MCP is a service that provides real-time, version-specific documentation and code examples for AI programming assistants. It is directly integrated into prompts through the Model Context Protocol to solve the problem of LLMs using outdated information.
TypeScript
100.3K
4.7 points