Hi everyone! 
I’ve been working on a lightweight Kubernetes MCP (Model Context Protocol) server and would love to get your feedback.
The project is called k8s-mcp-server — it’s a small Go program that acts as a programmable interface to your cluster over stdin/stdout using JSON-RPC.
It can:
- Discover API resources dynamically
- List Kubernetes resources with filters
- Retrieve detailed resource data
- Describe resources, similar to `kubectl describe`
- Fetch pod logs easily (newly added!)
The goal is to create an extremely simple but powerful component for anyone building Kubernetes tools, dashboards, or automation systems, without needing the full kubectl CLI or heavy SDKs.
Why I built it:
I found myself needing a lightweight, protocol-driven interface to clusters for custom agents and internal tools — but wanted to avoid complex APIs or heavy client libraries.
A simple, structured message format (JSON-RPC) seemed much cleaner for automation.
- What features would you want next?
- Would a streaming logs mode (for real-time pod logs) be useful?
- Is it interesting to extend this towards multi-cluster management too?
- If you’ve built your own internal tooling, would something like this fit into your stack?
Link to the GitHub repo:
https://github.com/reza-gholizade/k8s-mcp-server
I would really love to hear your thoughts, feedback, and ideas!
Feel free to comment or even raise issues/PRs if you’re interested.
Thanks a lot for reading! 
-Reza
#kubernetes
#mcp
#control-plane
#devtools
#go
I came across your project here; congratulations on the initiative! It’s a very interesting idea, and it opens up multiple possibilities for exposing Kubernetes to AI tools.
I was thinking about some potential deployment models beyond running it as a Docker container or directly via shell on the host. For example, have you considered running it as a Kubernetes Pod or Service that could aggregate and expose cluster-level context information? This could enable interesting multi-cluster or multi-environment management use cases.
In such a scenario, each MCP instance running as a Pod could expose not only external context data but also metadata about the cluster where it is deployed (such as node status, namespaces, and workloads). That might be useful for centralized governance or telemetry aggregation across multiple clusters, while still being fully aligned with Kubernetes-native patterns.
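To sketch the in-cluster idea, a Deployment along these lines might work; note that the image tag, namespace, and service account here are all placeholders I made up, not the project's actual packaging, and the service account would need read-only RBAC bindings in a real setup.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-mcp-server
  namespace: mcp-system        # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-mcp-server
  template:
    metadata:
      labels:
        app: k8s-mcp-server
    spec:
      # Placeholder service account; bind it to a ClusterRole with
      # get/list/watch on the resources the server should expose.
      serviceAccountName: k8s-mcp-server
      containers:
        - name: server
          image: example.com/k8s-mcp-server:latest  # placeholder image
          # With no kubeconfig mounted, client-go falls back to
          # in-cluster config (service account token + API server env).
```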
Regarding the current state of the project, I ran into some minor difficulties due to the lack of detailed logs during startup. For instance (my fault, actually), I was passing incorrect config parameters (I was in a rush and didn’t carefully read your docs, haha). When I started the service, it reported “server started successfully”, but nothing was actually working. I ended up adding some temporary logs along the initialization steps and eventually found the issue.
At the moment, I’m experimenting with running MCP inside the Kubernetes cluster itself as a Deployment (which may sound odd, but it’s for a POC, haha). I also adjusted the build to generate an arm64 image.
So far, I haven’t been fully successful in getting it to work yet, but I’m also looking at your N8N integration example — I believe this kind of architecture can be really useful in day-to-day pipelines.
Thanks again for sharing your work — really inspiring and fun to experiment with!