What is the Gemini Cloud Assist MCP Server?
It is an intelligent connector that links AI assistants (such as the Gemini CLI and Claude) to the Google Cloud platform. Through natural-language conversation, you can query the status of your cloud resources, troubleshoot problems, and analyze performance without memorizing complex command-line or console workflows.
How do I use Gemini Cloud Assist?
Simply configure the MCP server in a supported AI client, then ask questions about your cloud environment the way you would chat with an assistant. The system automatically turns your questions into cloud-platform queries and returns the results in an easy-to-understand form.
Who is it for?
It is aimed at developers, operations engineers, cloud administrators, and anyone else who interacts with Google Cloud frequently. It is especially well suited to everyday operations tasks such as quick troubleshooting, resource status checks, and performance analysis.
🚀 Gemini Cloud Assist MCP Server
The Gemini Cloud Assist MCP server connects Model Context Protocol (MCP) clients (such as the Gemini CLI) to the Gemini Cloud Assist API. With this server, you can use natural language directly from your local command line to understand, manage, and troubleshoot your Google Cloud environment.
⚠️ Important note
The Google Cloud Platform Terms of Service (available at https://cloud.google.com/terms/) and the Data Processing and Security Terms (available at https://cloud.google.com/terms/data-processing-terms) do not apply to any component of the Gemini Cloud Assist MCP server software.
To learn more about Gemini Cloud Assist, see the Gemini Cloud Assist overview in the Google Cloud documentation.
✨ Key features
- Create and run investigations: create and run Cloud Assist investigations to identify the root cause of complex issues.
- Drill down and iterate on investigations: retrieve more detail about investigation results and add observations to refine the analysis.
- Find and analyze Google Cloud resources: quickly check the status and performance of your Cloud resources and applications.
🚀 Quick start
Before you begin, make sure you have the following:
- Node.js (v20 or later).
- Git.
- The Google Cloud SDK, installed and configured.
- A Google Cloud project.
- A user account with the following IAM roles (a granting sketch follows this list):
  - roles/serviceusage.serviceUsageAdmin: required to enable the Cloud Assist API.
  - roles/geminicloudassist.user: required to make requests to the Cloud Assist API.
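If your account is missing either role, a project owner can grant them with gcloud. A minimal sketch, where PROJECT_ID and USER_EMAIL are placeholders for your own values:
# Grant the role needed to enable the Cloud Assist API
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/serviceusage.serviceUsageAdmin"
# Grant the role needed to call the Cloud Assist API
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/geminicloudassist.user"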
Step 1: Authenticate to Google Cloud
The Gemini Cloud Assist MCP server uses local Application Default Credentials (ADC) to authenticate to Google Cloud securely. To set up ADC, run the following gcloud commands:
# Authenticate the gcloud CLI with your user account
gcloud auth login
# Set up Application Default Credentials for the server
# This allows the MCP server to call Google Cloud APIs securely on your behalf
gcloud auth application-default login
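Optionally, you can confirm that ADC works and that gcloud points at the right project; if a token prints, the MCP server will be able to authenticate (YOUR_PROJECT_ID is a placeholder):
# Point gcloud at the project you want to investigate
gcloud config set project YOUR_PROJECT_ID
# Verify ADC by printing an access token
gcloud auth application-default print-access-token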
Step 2: Configure your MCP client
Below is the standard configuration snippet you will use. It tells the client to download and run the latest version of the MCP server on demand via npx. Paste the MCP configuration into the MCP client of your choice. We recommend the Gemini CLI for the best experience.
MCP configuration
"mcpServers" : {
"GeminiCloudAssist": {
"command": "npx",
"args": ["-y", "@google-cloud/gemini-cloud-assist-mcp@latest"],
"timeout": 600000
}
}
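The timeout is in milliseconds (600000 ms = 10 minutes), which leaves headroom for long-running investigations. As an optional sanity check, you can launch the server once yourself with the same command the client will use; it communicates over stdio, so it simply waits for a client until you press Ctrl+C:
# Download and start the MCP server once, outside any client (Ctrl+C to stop)
npx -y @google-cloud/gemini-cloud-assist-mcp@latest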
📦 Installation guide
Gemini CLI
Option 1 (recommended): Extension install
Install the MCP server as a Gemini CLI extension:
gemini extensions install https://github.com/GoogleCloudPlatform/gemini-cloud-assist-mcp
Verify the installation by running:
gemini extensions list
Option 2: Global install
Add the MCP configuration to **~/.gemini/settings.json**. This makes the MCP tools available in every Gemini CLI session.
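For example, if ~/.gemini/settings.json does not yet exist, the complete file is just the MCP configuration snippet wrapped in a top-level object. A minimal sketch (if the file already exists, merge the mcpServers entry instead of overwriting):
# Create a fresh settings file containing only this MCP server
mkdir -p ~/.gemini
cat > ~/.gemini/settings.json <<'EOF'
{
  "mcpServers": {
    "GeminiCloudAssist": {
      "command": "npx",
      "args": ["-y", "@google-cloud/gemini-cloud-assist-mcp@latest"],
      "timeout": 600000
    }
  }
}
EOF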
Option 3: Project-level install
Add the MCP configuration to your **/path/to/project/.gemini/settings.json** file. This makes the MCP tools available only in Gemini CLI sessions started inside that project folder.
Cursor
Add the MCP configuration to Cursor's MCP settings file (typically ~/.cursor/mcp.json for global use, or .cursor/mcp.json inside a project).
Claude Desktop
- 啟動Claude Desktop應用程序。
- 導航到“設置”>“開發者”。
- 點擊“編輯配置”按鈕以打開claude_desktop_config.json文件。
- 將MCP配置添加到claude_desktop_config.json文件中。
- 保存文件並重啟Claude Desktop。
Claude Code
claude mcp add GeminiCloudAssist npx -y @google-cloud/gemini-cloud-assist-mcp@latest
For more information, see the detailed instructions in the Anthropic documentation.
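You can then confirm the server was registered with Claude Code's built-in listing command:
# Confirm the server appears among the configured MCP servers
claude mcp list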
Cline
Update cline_mcp_settings.json with the MCP configuration. For more information, see the detailed instructions on the Cline website.
🔧 MCP tools
- create_investigation: create a new Gemini Cloud Assist investigation.
- run_investigation: run an existing Gemini Cloud Assist investigation you have created.
- add_observation: add a new observation to an existing investigation to refine its analysis.
- fetch_investigation: fetch investigation details or list the investigations in a project.
- search_and_analyze_gcp_resources: find specific Cloud resources using natural-language queries.
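These tools are invoked by the model on your behalf; you only supply natural-language prompts. A sketch of one-shot invocations, assuming Gemini CLI's non-interactive -p flag (prompts taken from the examples below):
# Kick off an investigation from a single prompt
gemini -p "Investigate the pending pods issue in my test-cluster-1 cluster"
# Query resources in natural language
gemini -p "Give me an overview of my GKE node pools. For each one, show me its cluster and MIGs"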
💻 Usage examples
This section shows examples of tasks you can perform with the Cloud Assist MCP server through the Gemini CLI, including sample prompts and responses.
Basic usage
Create and run an investigation
Using 1 MCP server (Ctrl+T to view descriptions)
╭───────────────────────────────────────────────────────────────────────────────────╮
│ > Investigate the pending pods issue in my test-cluster-1 cluster │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ I can help with that. I'll start an investigation into the pending pods issue in
your test-cluster-1 cluster. To do that, I'll run a few commands to get the
information I need to start the investigation.
╭──────────────────────────────────────────────────────────────────────────────────╮
│ ✔ Shell gcloud config get-value project (Getting the configured GCP project.) │
... │
... │
╰──────────────────────────────────────────────────────────────────────────────────╯
✦ I'll now create an investigation into the pending pods issue in your test-cluster-1 GKE cluster.
╭──────────────────────────────────────────────────────────────────────────────────╮
│ ✔ create_investigation (gemini-cloud-assist-mcp MCP Server) create_investigati… │
│ │
│ ## Gemini Cloud Assist Investigation │
│ │
│ **Name**: [Gemini CLI] Pending pods in test-cluster-1 GKE cluster │
... │
... │
╰──────────────────────────────────────────────────────────────────────────────────╯
✦ I've created the investigation. Now I'll run it to analyze the issue.
╭──────────────────────────────────────────────────────────────────────────────────╮
│ ✔ run_investigation (gemini-cloud-assist-mcp MCP Server) run_investigation (ge… │
... │
... │
╰──────────────────────────────────────────────────────────────────────────────────╯
✦ The investigation is complete. Here's what I found:
The main problem is that your GKE cluster `test-cluster-1` is out of resources. It
has only one node, and that node is full. This is causing new pods to get stuck in
a "pending" state because there's no room for them.
The investigation also revealed that Node Auto-Provisioning (NAP) is disabled. This
is a key feature that allows the cluster to automatically add more nodes when it's
running low on resources. Because NAP is disabled, the cluster can't scale up to
meet the demand.
I have two recommendations to fix this:
1. Enable Node Auto-Provisioning for the test-cluster-1 cluster. This is the best
long-term solution because it will allow the cluster to automatically scale up
and down as needed.
2. Manually add more nodes to the cluster. This is a good short-term solution that
will immediately relieve the resource pressure and allow the pending pods to
be scheduled.
I recommend that you start by enabling Node Auto-Provisioning. I can do this for
you now, or you can do it yourself by following the instructions in the official
documentation (https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-pro
visioning#enable).
What would you like to do?
Advanced usage
Drill down and iterate on an investigation
✦ The investigation is complete. Here's a summary of the findings:
Primary Cause:
* Ephemeral Storage Exhaustion: The main reason for the pod crashes is
that the GKE nodes are running out of ephemeral storage. This is
causing a DiskPressure condition on the nodes, which in turn leads to
the kubelet evicting pods to free up space.
╭───────────────────────────────────────────────────────────────────────────────────╮
│ > Do you know which pod is consuming all the storage space? │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ That's a great question. The investigation points to the emptyDir volume
in the gke-001-deployment pods. I can check the storage usage
inside one of the running pods to confirm this.
First, I'll list the pods to find one that is currently running.
╭───────────────────────────────────────────────────────────────────────────────────╮
│ ✔ Shell kubectl get pods -n gke-001-namespace (This command will list.. │
... │
... │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ That's unexpected. The /data directory in that pod is only using 4.0K,
which is negligible. This contradicts the hypothesis that the emptyDir
volume is the primary consumer of ephemeral storage.
This means something else on the node is consuming the storage. It could
be logs, or another process running on the node that isn't immediately
obvious.
To figure this out, I'll need to get a shell on the node itself and
investigate the storage usage from there. I'll start by getting an
interactive shell on the node
gke-001-cluster-gke-001-node-pool-001.
╭───────────────────────────────────────────────────────────────────────────────────╮
│ ✔ Shell gcloud compute ssh gke-001-cluster-gke-001-node-pool-001… │
... │
... │
│ ERROR: (gcloud.compute.ssh) [/usr/local/bin/ssh] exited │
│ with return code [255]. │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ It seems I can't directly SSH into the node. This is likely due to
firewall rules or the node not having an external IP address.
However, I can still investigate the node's storage. I'll add an
observation to the investigation, noting that the emptyDir volume is not
the cause of the storage issue, and then re-run the investigation. This
will allow Gemini to analyze the node's logs and other signals to
pinpoint the true source of the storage consumption.
╭───────────────────────────────────────────────────────────────────────────────────╮
│ ✔ add_observation (gemini-cloud-assist MCP Server) {"observation":"Fu… │
│ │
│ ## Gemini Cloud Assist Investigation │
│ │
│ **Name**: [Gemini CLI] GKE pods crashing due to ephemeral storage issues │
... │
... │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ I've added the new information to the investigation. Now I'll re-run the
analysis. This will take the new observation into account and should help
us identify the real source of the storage consumption.
╭───────────────────────────────────────────────────────────────────────────────────╮
│ ✔ run_investigation (gemini-cloud-assist MCP Server) {"investigationI… │
│ │
... │
... │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ The new investigation results are in.
Key Findings:
* `emptyDir` Ruled Out: The investigation confirms our finding that the
emptyDir volume is not the cause of the storage pressure.
* Container Logs are the Likely Culprit: The analysis strongly suggests
that the ephemeral storage is being consumed by container logs. The
kubelet is failing to reopen log files due to "no space left on
device," which is a clear indicator of this issue.
* Another Deployment is Implicated: The investigation also points to
another deployment, gke-002-deployment, as a potential source of the
problem. The logs mention that this deployment is "configured to fill
up disk."
Search and analyze Cloud resources
╭───────────────────────────────────────────────────────────────────────────────────╮
│ > Give me an overview of my GKE node pools. For each one, show me its │
│ cluster and MIGs │
╰───────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────╮
│ ✔ search_and_analyze_gcp_resources (gemini-cloud-assist-mcp MCP Server) s… │
│ │
│ Based on the output of the `container.googleapis.com` API │
│ calls, here is an overview of your GKE node pools, including │
│ their associated clusters and Managed Instance Groups (MIGs): │
│ │
│ * **Cluster: `cluster-1` (Zone: `us-central1-a`)** │
│ * **Node Pool:** `default-pool` │
│ * **Machine Type:** `custom-1-3840` │
│ * **Initial Node Count:** 3 │
│ * **Autoscaling:** Enabled (min: 1, max: 5) │
│ * **MIGs:** `gke-cluster-1-default-pool-001-grp` │
│ │
│ * **Cluster: `anthos-cluster-1` (Zone: `us-central1-c`)** │
│ * **Node Pool:** `default-pool` │
│ * **Machine Type:** `e2-standard-2` │
│ * **Initial Node Count:** 3 │
│ * **Autoscaling:** Enabled (min: 2, max: 3) │
│ * **MIGs:** `gke-anthos-cluster-default-pool-001-grp` │
│ │
│ * **Cluster: `cluster-2` (Zone: `us-central1-c`)** │
│ * **Node Pool:** `default-pool-1` │
│ * **Machine Type:** `e2-micro` │
│ * **Initial Node Count:** 3 │
│ * **Autoscaling:** Enabled (max: 5) │
│ * **MIGs:** `gke-cluster-2-default-pool-001-grp` │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ OK. Here's an overview of your GKE node pools, their clusters, and MIGs. Is
there anything else I can help you with?
╭───────────────────────────────────────────────────────────────────────────────────╮
│ > Tell me details about each MIG │
╰───────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────╮
│ ✔ search_and_analyze_gcp_resources (gemini-cloud-assist-mcp MCP Server) s… │
│ │
│ I found the following details for the requested Managed │
│ Instance Groups (MIGs): │
│ │
│ * **gke-cluster-1-default-pool-001-grp**: │
│ * **Region/Zone**: us-central1-a │
│ * **Creation Timestamp**: 2024-03-25T13:57:25.588-07:00 │
│ * **Current Actions**: 2 instances with no current │
│ action │
│ * **Target Size**: 2 │
│ * **Status**: Stable │
│ * **Instance Template**: │
│ `gke-cluster-1-default-pool-001` │
│ * **Update Policy**: Opportunistic, minimal action │
│ REPLACE, max surge 1, max unavailable 1 │
│ │
│ * **gke-anthos-cluster-default-pool-001-grp**: │
│ * **Region/Zone**: us-central1-c │
│ * **Creation Timestamp**: 2024-04-03T09:09:32.058-07:00 │
│ * **Current Actions**: 2 instances with no current │
│ action │
│ * **Target Size**: 2 │
│ * **Status**: Stable │
│ * **Instance Template**: │
│ `gke-anthos-cluster-default-pool-001` │
│ * **Update Policy**: Opportunistic, minimal action │
│ REPLACE, max surge 1, max unavailable 1 │
│ * **Named Ports**: http (port 3337) │
│ │
│ * **gke-cluster-2-default-pool-001-grp**: │
│ * **Region/Zone**: us-central1-c │
│ * **Creation Timestamp**: 2024-11-13T21:46:01.072-08:00 │
│ * **Current Actions**: 5 instances with no current │
│ action │
│ * **Target Size**: 5 │
│ * **Status**: Stable │
│ * **Instance Template**: │
│ `gke-cluster-2-default-pool-001` │
│ * **Update Policy**: Opportunistic, minimal action │
│ REPLACE, max surge 1, max unavailable 1 │
│ │
│ These details were retrieved by calling the Compute Engine │
│ API's `instanceGroupManagers.get` and │
│ `instanceGroupManagers.aggregatedList` methods. │
╰───────────────────────────────────────────────────────────────────────────────────╯
✦ Here are the details for each MIG. Let me know if you need anything else.
🤝 Contributing
- If you run into a bug, please file an issue on our GitHub Issues page.
- Before sending a pull request, please review our contribution guidelines.
📄 License
This project is licensed under Apache 2.0 and is provided "as is", without warranties or representations of any kind for any purpose. See the LICENSE file for details.