🚀 MCP_Agent:RE Project Guide
MCP_Agent:RE is a Python project designed to fetch requirement and defect data from the TAPD platform and generate quality analysis reports, providing data support for AI clients.
📺 Conversation Effect Preview
📅 Project Collection Information
This project was collected by punkpeye (Frank Fiegel) on June 10, 2025, and can be found at TAPD Data Fetcher | Glama.
🌐 Project Repository Links
📚 Project Background
MCP_Agent:RE is a Python project that fetches requirement and defect data from the TAPD platform and generates quality analysis reports, aiming to provide data support for AI clients.
🛠️ Available MCP Servers
This project offers a rich set of MCP tools, supporting data fetching, processing, analysis, and intelligent summarization of TAPD data:
📊 Data Fetching Tools
get_tapd_data()
- Fetch requirement and defect data from the TAPD API, save it to a local file, and return the quantity statistics. Recommended.
- Suitable for initial data fetching or regular local data updates.
- Includes a complete integration of requirement and defect data.
get_tapd_stories()
- Fetch requirement data from TAPD projects, support pagination, and directly return JSON data without saving it locally. It is recommended to use this only when the data volume is small.
get_tapd_bugs()
- Fetch defect data from TAPD projects, support pagination, and directly return JSON data without saving it locally. It is recommended to use this only when the data volume is small.
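The pagination behavior described above can be sketched as follows. This is only an illustrative loop: the stubbed `fetch_page` stands in for the real TAPD API call (the actual tool uses aiohttp, and the page size of 30 assumed here may differ from TAPD's real limit):

```python
import asyncio

PAGE_SIZE = 30  # assumed page size; the real TAPD limit may differ

async def fetch_page(page: int) -> list:
    """Stub standing in for a real TAPD API request (e.g. via aiohttp).
    Returns fewer than PAGE_SIZE items on the last page."""
    fake_remote = [{"id": i} for i in range(1, 75)]  # 74 fake items
    start = (page - 1) * PAGE_SIZE
    return fake_remote[start:start + PAGE_SIZE]

async def fetch_all() -> list:
    """Keep requesting pages until a short page signals the end of the data."""
    items, page = [], 1
    while True:
        batch = await fetch_page(page)
        items.extend(batch)
        if len(batch) < PAGE_SIZE:
            break
        page += 1
    return items

items = asyncio.run(fetch_all())
print(len(items))  # 74
```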
🧹 Data Preprocessing Tools
preprocess_tapd_description(data_file_path, output_file_path, use_api, process_documents, process_images)
- Clean the HTML styles in the description field of TAPD data, extract text, links, and image content, and optimize the wording via the DeepSeek API (requires a configured DeepSeek API key). This significantly compresses the data length while retaining key information. Still in development...
preview_tapd_description_cleaning(data_file_path, item_count)
- Preview the cleaning effect on the description field, showing the compression ratio and extracted information without modifying the original data.
docx_summarizer.py
- Extract text, images, and table information from .docx documents and generate summaries. Still in development...
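As a rough illustration of the kind of HTML cleaning described above (not the project's actual implementation), Python's standard `html.parser` can strip styling while keeping text plus link and image URLs:

```python
from html.parser import HTMLParser

class DescriptionCleaner(HTMLParser):
    """Strip tags from a TAPD-style description field,
    collecting plain text, link hrefs, and image srcs."""
    def __init__(self):
        super().__init__()
        self.parts, self.links, self.images = [], [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append(text)

raw_html = ('<div style="color:red"><p>Login fails.</p>'
            '<a href="http://x/doc">spec</a><img src="http://x/1.png"></div>')
c = DescriptionCleaner()
c.feed(raw_html)
print(" ".join(c.parts), c.links, c.images)
```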
🔍 Vectorization and Search Tools
vectorize_data(data_file_path, chunk_size)
- A vectorization tool that supports the vectorization of custom data sources, converting data into vector format for subsequent semantic search and analysis.
get_vector_info()
- Get the simplified status and statistical information of the vector database.
search_data(query, top_k)
- An intelligent search based on semantic similarity, supporting natural language queries and returning the most relevant results to the query.
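The top-k ranking behind search_data can be sketched with a toy bag-of-words cosine similarity. The real tool uses SentenceTransformer embeddings and a vector index, so treat this only as the shape of the workflow, not its implementation:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': word counts. The real tool uses dense vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search_data(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top_k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

docs = ["login page crashes on submit",
        "add export button to report page",
        "crash when user submits login form"]
print(search_data("login crash", docs, top_k=2))
```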
📈 Data Generation and Analysis Tools
generate_fake_tapd_data(n_story_A, n_story_B, n_bug_A, n_bug_B, output_path)
- Generate simulated TAPD data for testing and demonstration. (If no output path is specified, local data may be overwritten; if you need real data from the API afterwards, call the data fetching tool again.)
generate_tapd_overview(since, until, max_total_tokens, model, endpoint, use_local_data)
- Use an LLM to generate a brief project overview report and summary for understanding the project's general situation. (Requires the DeepSeek API key to be configured in the environment.)
analyze_word_frequency(min_frequency, use_extended_fields, data_file_path)
- Analyze the word frequency distribution of TAPD data, generate keyword cloud statistics, and provide precise keyword suggestions for the search function.
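A minimal sketch of the word-frequency idea using only the standard library; the field name (`name`) and the tokenization rule here are illustrative assumptions, not the tool's actual behavior:

```python
import re
from collections import Counter

def word_frequency(items: list[dict], min_frequency: int = 2) -> dict:
    """Count word occurrences across TAPD item titles,
    keeping only words that appear at least min_frequency times."""
    words = []
    for item in items:
        words += re.findall(r"[a-zA-Z]+", item.get("name", "").lower())
    return {w: n for w, n in Counter(words).items() if n >= min_frequency}

items = [{"name": "Login timeout error"},
         {"name": "Login button misaligned"},
         {"name": "Export report timeout"}]
print(word_frequency(items))  # {'login': 2, 'timeout': 2}
```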
🚀 Example Tools
example_tool(param1, param2)
- An example tool that demonstrates the registration method of MCP tools.
These tools support the complete workflow from data fetching to intelligent analysis, providing powerful support for AI-driven test management.
🔄 Available WorkFlow Scripts
🧪 Test Case Evaluation
mcp_tools\test_case_rules_customer.py
- A script for configuring test case evaluation rules, used to configure the evaluation criteria and priorities of test cases.
mcp_tools\test_case_require_list_knowledge_base.py
- A script for generating the test case requirement knowledge base, which can extract requirement information from TAPD data and generate a knowledge base, or manually modify the requirement information.
mcp_tools\test_case_evaluator.py
- A script for the AI evaluator of test cases, used to evaluate the quality of test cases based on the configured rules and generate an evaluation report to a local file.
🛠️ Unified Interface Script
- Located in mcp_tools\common_utils.py.
- Provides a unified tool interface, simplifying the registration and invocation of MCP tools.
- The included tools are as follows:
🛠️ MCPToolsConfig Class
__init__()
- Initialize the configuration manager and automatically create the directory structure required by the project (local_data, models, vector_data).
_get_project_root()
- Get the absolute path of the project root directory.
get_data_file_path(relative_path)
- Get the absolute path of the data file, supporting the automatic conversion of relative paths.
get_vector_db_path(name)
- Get the path of the vector database file, with the default being "data_vector".
get_model_cache_path()
- Get the path of the model cache directory.
🧠 ModelManager Class
__init__(config)
- Initialize the model manager, depending on an instance of MCPToolsConfig.
get_project_model_path(model_name)
- Check if the specified model exists locally and return the model path or None.
get_model(model_name)
- Get an instance of the SentenceTransformer model, giving priority to using the local model and supporting automatic download and caching.
clear_cache()
- Clear the global model cache and release memory resources.
📝 TextProcessor Class
extract_text_from_item(item, item_type)
- Extract key text information from TAPD data items (requirements/defects), supporting different field extraction strategies for different types.
📂 FileManager Class
__init__(config)
- Initialize the file manager, depending on an instance of MCPToolsConfig.
load_tapd_data(file_path)
- Load TAPD JSON data files, supporting both absolute and relative paths.
load_json_data(file_path)
- Load JSON data files, supporting error handling and returning an empty dictionary if the file does not exist.
save_json_data(data, file_path)
- Save data in JSON format and automatically create the directory structure.
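The directory-creating save and the missing-file fallback described for FileManager can be sketched like this (the function bodies are assumptions based on the descriptions above, not the project's actual code):

```python
import json
from pathlib import Path

def save_json_data(data: dict, file_path: str) -> None:
    """Save data as JSON, creating parent directories as needed."""
    path = Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, ensure_ascii=False, indent=2),
                    encoding="utf-8")

def load_json_data(file_path: str) -> dict:
    """Return an empty dict when the file does not exist, as described."""
    path = Path(file_path)
    if not path.exists():
        return {}
    return json.loads(path.read_text(encoding="utf-8"))

save_json_data({"stories": 3}, "local_data/demo/stats.json")
print(load_json_data("local_data/demo/stats.json"))  # {'stories': 3}
```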
🚀 APIManager Class 【Updated on July 22, 2025】
__init__()
- Initialize the API manager, supporting the configuration of both DeepSeek and SiliconFlow APIs.
get_headers(endpoint)
- Intelligently construct API request headers, automatically selecting the corresponding API key based on the endpoint.
call_llm(prompt, session, model, endpoint, max_tokens)
- A multi-API compatible LLM invocation interface.
- Supports the DeepSeek API (default): the deepseek-chat and deepseek-reasoner models.
- Supports the SiliconFlow API: moonshotai/Kimi-K2-Instruct and other models.
- Automatically detects the API type and adapts the request format and error handling accordingly.
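The endpoint-based key selection described for get_headers can be sketched as follows. The endpoint URLs and the substring detection are assumptions for illustration; the DS_KEY / SF_KEY variable names follow the configuration section later in this guide:

```python
import os

class APIManager:
    """Minimal sketch of endpoint-aware header construction.
    Not the project's actual class; detection logic is assumed."""
    DEEPSEEK = "https://api.deepseek.com/v1/chat/completions"
    SILICONFLOW = "https://api.siliconflow.cn/v1/chat/completions"

    def get_headers(self, endpoint: str) -> dict:
        # Pick the API key that matches the endpoint's provider.
        if "siliconflow" in endpoint:
            key = os.environ.get("SF_KEY", "")
        else:
            key = os.environ.get("DS_KEY", "")
        return {"Authorization": f"Bearer {key}",
                "Content-Type": "application/json"}

os.environ["SF_KEY"] = "sf-demo"  # demo value, not a real key
mgr = APIManager()
print(mgr.get_headers(APIManager.SILICONFLOW)["Authorization"])  # Bearer sf-demo
```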
🌐 Global Instance Management Functions
get_config()
- Get the global MCPToolsConfig instance (singleton pattern).
get_model_manager()
- Get the global ModelManager instance (singleton pattern).
get_file_manager()
- Get the global FileManager instance (singleton pattern).
get_api_manager()
- Get the global APIManager instance (singleton pattern).
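The singleton accessors above share one pattern, which can be illustrated with a small sketch (the real MCPToolsConfig does much more; this shows only the instance-reuse behavior):

```python
_config_instance = None  # module-level cache for the shared instance

class MCPToolsConfig:
    """Stand-in for the real config class."""
    def __init__(self):
        self.project_root = "."

def get_config() -> MCPToolsConfig:
    """Lazily create and then always reuse one instance (singleton pattern)."""
    global _config_instance
    if _config_instance is None:
        _config_instance = MCPToolsConfig()
    return _config_instance

print(get_config() is get_config())  # True
```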
📁 Project Structure
MCPAgentRE\
├─config\ # Configuration file directory
├─knowledge_documents\ # Knowledge documents (Files in this directory are ignored by default when committing to Git. If you want to commit them, manually remove the ignore rule in .gitignore)
├─documents_data\ # Document data directory (Temporary, will be replaced with local_data eventually)
│ ├─docx_data\ # Directory for storing .docx documents
│ ├─excel_data\ # Directory for storing Excel spreadsheets
│ └─pictures_data\ # Directory for storing images
├─local_data\ # Local data directory for storing data fetched from TAPD, databases, etc. (Ignored when committing to Git)
│ ├─msg_from_fetcher.json # Requirement and defect data fetched from TAPD
│ ├─fake_tapd.json # Simulated TAPD data generated by the fake data generator
│ ├─preprocessed_data.json # Preprocessed TAPD data
│ └─vector_data\ # Vector database file directory
│ ├─data_vector.index # Vector database index file
│ ├─data_vector.metadata.pkl # Vector database metadata file
│ └─data_vector.config.json # Vector database configuration file
├─mcp_tools\ # MCP tool directory
│ ├─data_vectorizer.py # Vectorization tool supporting the vectorization of custom data sources
│ ├─context_optimizer.py # Context optimizer supporting intelligent summary generation
│ ├─docx_summarizer.py # Document summarizer for extracting content from .docx documents
│ ├─fake_tapd_gen.py # TAPD fake data generator for testing and demonstration
│ ├─word_frequency_analyzer.py # Word frequency analysis tool for generating keyword cloud statistics
│ ├─data_preprocessor.py # Data preprocessing tool for cleaning and optimizing TAPD data
│ ├─common_utils.py # Unified common tool module
│ └─example_tool.py # Example tool
├─models\ # Model directory
├─test\ # Test directory
│ ├─test_comprehensive.py # Comprehensive vectorization function test
│ ├─test_vectorization.py # Basic vectorization function test
│ ├─test_data_vectorizer.py # Test the full functionality of the data_vectorizer tool
│ ├─test_word_frequency.py # Word frequency analysis tool test
│ └─vector_quick_start.py # Quick start script for vectorization functionality
├─.gitignore # Filter rules followed when committing to Git
├─.python-version # Record the Python version (3.10)
├─提示词-TAPD平台MCP分析助手.md # Prompt document: TAPD platform MCP analysis assistant (Chinese)
├─TAPD平台MCP服务器开发指南.md # TAPD platform MCP server development guide (Chinese)
├─api.txt # Contains API key information and needs to be created manually (Ignored when committing to Git)
├─main.py # Project entry file with no actual functionality
├─pyproject.toml # Modern Python dependency management file
├─README.md # Project description document, which is this document
├─tapd_data_fetcher.py # Contains the logic for fetching requirement and defect data from the TAPD API
├─tapd_mcp_server.py # MCP server startup script for providing all MCP tools
└─uv.lock # Lock file used by the UV package manager
🚚 Migration Steps
The following are the detailed steps to migrate the project to another Windows computer (Mac and Linux have not been tested yet):
1. 🛠️ Environment Preparation
Install Python 3.10
- Download the Python 3.10.x installation package from the Python official website (it is recommended to use 3.10.11 to match the original environment).
- Check the "Add Python to PATH" option during installation (this is crucial; otherwise you must configure the environment variables manually).
- Verify the installation by running python --version in the terminal. It should output Python 3.10.11.
Install the uv Tool
- Run pip install uv in the terminal (make sure pip was installed along with Python).
- Verify the installation by running uv --version. It should display the version information.
2. 📂 Project File Migration
- Copy the Project Directory:
- Copy the original project directory D:\MiniProject\MCPAgentRE to the target computer (a path without Chinese characters or spaces is recommended, such as D:\MCPAgentRE).
3. 📦 Dependency Installation
- Install Project Dependencies:
- Enter the project directory in the terminal: cd D:\MCPAgentRE (adjust to the actual path).
- Run the dependency installation command: uv sync
- This command installs all dependencies (including the MCP SDK, aiohttp, etc.) according to pyproject.toml.
4. ⚙️ Configuration Adjustment
TAPD API Configuration
LLM API Configuration (Optional)
The system now supports two LLM API providers. You can choose to configure them according to your needs:
DeepSeek API Configuration
If you need the intelligent summary function (generate_tapd_overview) or the description optimization function (preprocess_tapd_description), configure the DeepSeek API key:
- Set Environment Variables (Windows PowerShell):
$env:DS_KEY = "your-deepseek-api-key-here"
[Environment]::SetEnvironmentVariable("DS_KEY", "your-deepseek-api-key-here", "User")
SiliconFlow API Configuration 【🆕 Added on July 22, 2025】
SiliconFlow provides a variety of high-quality models, including Kimi and Tongyi Qianwen:
- Get the API Key: Visit the SiliconFlow Open Platform to register and obtain the API key.
- Set Environment Variables (Windows PowerShell):
$env:SF_KEY = "your-siliconflow-api-key-here"
[Environment]::SetEnvironmentVariable("SF_KEY", "your-siliconflow-api-key-here", "User")
- Verify the Configuration:
echo $env:DS_KEY
echo $env:SF_KEY
- Precautions:
- You need to restart the editor and the MCP client after setting the environment variables.
- If you do not configure the API key, the intelligent summary tool will return an error prompt, but it will not affect the use of other functions.
- For detailed configuration instructions, see knowledge_documents/DeepSeek API 环境变量配置指南.md.
5. 🚀 Test Run
Enter the Project Folder in the Terminal
- Run cd D:\MCPAgentRE in the terminal (adjust to the actual path).
Test Mode
- Verify Data Fetching:
- To verify that tapd_data_fetcher.py can fetch data normally, run: uv run tapd_data_fetcher.py
- Expected output:
Successfully loaded configuration: User=********, Workspace=********
===== Starting to fetch requirement data =====
Requirement data fetching completed, a total of X items were fetched.
===== Starting to fetch defect data =====
Defect data fetching completed, a total of Y items were fetched.
The data has been successfully saved to the msg_from_fetcher.json file.
- Verify MCP Tool Registration:
- To verify that all MCP tools in tapd_mcp_server.py are registered correctly, run: uv run check_mcp_tools.py
- The output result is as follows:
Successfully loaded configuration: User=********, Workspace=********
✅ MCP server started successfully!
📊 Number of registered tools: 14
🛠️ List of registered tools:
1. example_tool - Example tool function (used to demonstrate the MCP tool registration method)
Function description:
...
2. get_tapd_data - Fetch requirement and defect data from the TAPD API and save it to a local file
Function description:
...
- Quickly Verify Vectorization Function (Recommended):
uv run test\vector_quick_start.py
- This script will automatically run data fetching, vectorization, and search functions to verify whether the overall process is normal.
- You need to connect to a VPN to download the model when using it for the first time.
- Expected output: Show the successful vectorization and search demonstration results.
- Context Optimizer and Fake Data Generation Test:
uv run mcp_tools\fake_tapd_gen.py
uv run mcp_tools\context_optimizer.py -f local_data\msg_from_fetcher.json --offline --debug
uv run mcp_tools\context_optimizer.py -f local_data\msg_from_fetcher.json --debug
- Word Frequency Analysis Tool Test:
uv run mcp_tools\word_frequency_analyzer.py
- This script analyzes the data in local_data/msg_from_fetcher.json and generates keyword cloud statistics.
- Document Summarization Test (Still in development):
uv run mcp_tools\docx_summarizer.py
- This script will extract text, images, and table information from the specified .docx document and generate a summary.
- Expected output: The generated summary JSON file and the extracted image and table files.
- Test Case Evaluator:
uv run test\demo_custom_rules.py
uv run mcp_tools\test_case_require_list_knowledge_base.py
uv run mcp_tools\test_case_evaluator.py
- The test case evaluator will evaluate the quality of test cases based on the configured rules and generate an evaluation report.
- The default rule configuration files config/test_case_rules.json and config/require_list_config.json are generated automatically on the first run.
- For detailed instructions, see knowledge_documents\AI测试用例评估器操作手册.md.
- API Compatibility Test 【🆕 Added on July 22, 2025】:
uv run test\test_api_compatibility.py
- This script will test the connectivity and response of the DeepSeek and SiliconFlow APIs.
- Expected output: Show the call results and response content of each API.
- Used to verify whether the multi-API configuration is correct.
Normal Mode
Start the MCP Server
- Ensure there are no print statements (or that they are commented out) in the main function of tapd_mcp_server.py, to avoid emitting debug output during startup.
- Run the MCP server (this operation will be automatically executed by the AI client according to the configuration file and does not require manual operation):
uv run tapd_mcp_server.py
Run WorkFlow Scripts
- Scoring Rule Configuration
uv run mcp_tools/test_case_rules_customer.py
uv run mcp_tools/test_case_rules_customer.py --config
uv run mcp_tools/test_case_rules_customer.py --reset
uv run mcp_tools/test_case_rules_customer.py --help
- Run the Requirement List Knowledge Base
uv run mcp_tools/test_case_require_list_knowledge_base.py
- Run the AI Evaluator
uv run mcp_tools/test_case_evaluator.py
6. 🐞 Common Problem Troubleshooting
- Missing Dependencies: If you encounter a ModuleNotFoundError, check that you have run uv sync, or try uv add <missing module name>.
- API Connection Failure: Confirm that API_USER, API_PASSWORD, and WORKSPACE_ID are correct, and that the TAPD account has read permission for the corresponding project.
- Python Version Mismatch: Ensure the Python version on the target computer is 3.10.x (verify with python --version).
🔗 How to Connect the Project to an AI Client
📋 Prerequisites
- The project has been migrated and verified on the local computer.
- The MCP server has been installed and is running.
- The AI client has been installed and is running on the local computer (using Claude Desktop as an example).
🔌 Connection Steps
- Open Claude Desktop:
- Start the Claude Desktop client.
- Configure the MCP Server:
- Use the shortcut Ctrl + , to open the settings page (or click the menu icon in the top-left corner - File - Settings).
- Select the Developer tab.
- Click the Edit Config button; a file explorer will pop up.
- Edit the highlighted claude_desktop_config.json file and add the following content (if the file already contains other settings, mind the nesting):
{
  "mcpServers": {
    "tapd_mcp_server": {
      "command": "uv",
      "args": [
        "--directory",
        "D:\\MiniProject\\MCPAgentRE",
        "run",
        "tapd_mcp_server.py"
      ]
    }
  }
}
- Note:
- The `command` field specifies the command to run the MCP server (usually `uv`).
- The `args` field specifies the parameters to run the MCP server, including the project directory (`--directory`) and the script file to run (`run tapd_mcp_server.py`).
- Ensure that `--directory` points to the directory where the MCP server is located, i.e., `D:\MiniProject\MCPAgentRE` (please modify it according to the actual directory).
- Save and close the file.
🧪 Test the Connection
⚠️ Precautions
🚀 Extend MCP Server Functionality
To keep the project directory structure clear, it is recommended to place MCP tool functions in the mcp_tools folder. Here is an example of adding a new tool function.
1. 📄 Create a Tool Function File
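For example, a hypothetical mcp_tools/new_tool.py might look like this. The file, function, and parameter names are placeholders chosen to match the registration example in step 2; keep the core logic separate from MCP registration so it can be tested without starting the server:

```python
# mcp_tools/new_tool.py  (hypothetical file and function names)
import asyncio

async def new_function(param1: str, param2: int) -> dict:
    """Core tool logic, independent of the MCP server.
    Returns a plain dict so the MCP wrapper can pass it through."""
    return {
        "status": "success",
        "message": f"received {param1!r}",
        "value": param2 * 2,
    }

# Quick local check without the MCP server:
print(asyncio.run(new_function("demo", 3)))
```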
2. 📝 Register the Tool to the Server
- Add the following in tapd_mcp_server.py:
- Import statement: from mcp_tools.new_tool import new_function
- Use the @mcp.tool() decorator to register the function:
@mcp.tool()
async def new_tool(param1: str, param2: int) -> dict:
    """
    Detailed description of the tool function
    Parameters:
        param1 (str): Detailed description of the parameter
        param2 (int): Detailed description of the parameter
    Returns:
        dict: Detailed description of the returned data structure
    """
    return await new_function(param1, param2)
3. 📚 Best Practices for Documentation
- Add clear documentation for the AI client:
- Function-level documentation: Use detailed Chinese descriptions, including parameter types and return value structures.
- Parameter description: Clearly state the data type and expected use of each parameter.
- Return description: Describe each field of the returned dictionary in detail.
- Example: Provide a call example and expected output.
📚 Related Documents or URLs