Sorry for any formatting issues - I am on mobile
July 20
Correct - is that not what you are asking for? You can read their latest statuses, what images they are running, etc.
So any pointers here to docs that I can look at would be greatly appreciated.
Have you looked at the many client-go examples? Assuming you want to code in Go, that is.
I think there might be a slight difference in how we define terms here. To me, a sidecar provides standalone, useful functionality so that you don’t have to build that functionality into each and every client the sidecar interacts with. In that sense, my application is a very straightforward sidecar. I do the following:
- query the pod that I’m in for all sibling containers which I attach to.
Cool, this statement clarified a lot for me. It IS a sidecar, and it is interacting exclusively with “peer” container(s) in the same pod. Thanks.
- provide functionality for all of these sibling containers by exec-ing code on them and gleaning results.
So, no matter what solution we cook up, you need to make some API call which runs a command in the target container, and which streams input into that executing command and output/errors back from it. Right? Then you have to parse the output and do something with it.
If I need to program a Go client/server in order to communicate between my sidecar and its clients, then my provisioning requirements multiply in proportion to the number of clients my sidecar provides services for. If I have 100 types of clients, I have to provide them with 100 Go servers which handle requests, and I need to provision my Go program locally in my sidecar, i.e. I need to modify 100 Dockerfiles. Plus I need to either find one or write one, when I really want something standard.
Some confusion. I meant that your sidecar can use the Kubernetes client library in Go (or some other language, but I know less about those client libs) to talk to the Kubernetes API server to run the exec into your peer container.
I know you don’t like the idea of going up and out to the API server just to come back down to the same pod, but I think that’s the best option that works today. I don’t know of a shortcut.
Having to do all this ad hoc work - instead of Kubernetes providing a default solution akin to what it provides with kubectl -
If you don’t like writing your own kube API client, you can literally run kubectl. All you need to do is write an RBAC Role and RoleBinding which grants the ServiceAccount access to pods/exec.
I will admit I am not an expert in all the features of the RBAC system, so I am not sure if there is a way to express “only to my same pod” or “only to pods with this same SA”. I suspect not, which means you would be granting access to all pods in the same namespace, as a first cut.
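As a first cut, the Role and RoleBinding might look something like this (the namespace and ServiceAccount names here are made up for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: peer-exec
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]          # exec is a "create" on the pods/exec subresource
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]             # needed to look up the pod/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: peer-exec
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: peer-exec
subjects:
- kind: ServiceAccount
  name: my-sidecar-sa       # the SA your sidecar pod runs as
  namespace: my-namespace
```

Note this grants exec into any pod in the namespace, not just the sidecar’s own pod, which is the limitation I mentioned above.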
While, yes, the containers in a pod really are designed to work together, it’s not assumed that code running in one container can or should be able to access peer containers via the kube API.
Yes, and that is in my mind a major lost opportunity. It’s easy enough: all you would have to do is have an internal directive on the pod that says:
which handles all the plumbing for you, uses a standard method for connecting between the containers (possibly kubectl exec), avoids the need to touch an API at all, makes it so you don’t need RBAC, etc. You could still do what you are saying is necessary right now, but you wouldn’t need to. In fact, I don’t see why you would even WANT to, given how much easier it would be to provide a single directive at the appropriate point here.
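To make the proposal concrete, here is a purely hypothetical sketch of what such a directive might look like. To be clear: no such field exists in the Pod API today, and the field name is invented:

```yaml
# HYPOTHETICAL -- no such field exists in the Pod spec today.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: sidecar
    image: my-sidecar:latest
    # Imagined directive: let this container exec into its peer
    # containers directly, without the API server or RBAC round trip.
    allowPeerExec: true
  - name: main-app
    image: my-app:latest
```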
KEPs are welcome. It’s not a terrible idea, but the details are more complicated than you are assuming, I think. I have not seen many instances of people who need to exec into a peer container, but it’s not terribly implausible.
More often I see either exec probes, which kubelet already knows how to do, or I see sidecars that poke at peers over localhost network.
E.g. the main app binds to localhost:port and exposes native stats, and the sidecar converts those to the preferred export format (e.g. Prometheus).
Another path is to expose a UNIX socket on an emptyDir volume and have a sidecar use that.
I really want to hear what the devs’ opinions are on this. AFAICT it seems an obvious productivity win. The one thing I could see as an argument is that it skimps on security, but:
a. you wouldn't need to use it
b. if an attacker has gotten a shell to a given pod that doesn't have an ssh connection to the outside world, they most likely have the kube config and your day is already ruined.
c. by providing your own connection method for communication you are introducing a security vulnerability of your own if it is based on a listening port.
A is the saving grace for the idea.
B is not true - RCEs happen all the time.
Frankly, it seems that communication between containers in a pod is a fairly basic requirement, and not having a straightforward solution out of the box, forcing end users to implement their own for even basic applications, is puzzling. Don’t get me wrong, I love Kubernetes, but having this would make a lot of people’s lives easier IMO.
As I mentioned, we have lots of mechanisms for communication within a pod: network, volumes, signals, IPC, etc. Just not exec.
You might argue it is conspicuous in its absence, but I think it’s an order of magnitude more complex and an order of magnitude (or 2 or 3) less commonly needed.
I encourage you to start a KEP - work out the details of how it would actually work as securely as possible.