I am trying to orchestrate a stateful application with Postgres as the backend, using k8s on AWS. I was able to get k8s working nicely on AWS and deploy test projects within a week.
I have Postgres running in a Docker container and am trying to configure an EBS volume for multiple containers on different nodes (something like above). The Postgres container basically exposes port 5432 and does data manipulation on the db.
But one of the limitations of AWS I came across is that an EBS volume can only be mounted on a single EC2 instance. Does that mean I have to keep a single EC2 instance (node) with multiple pods, or a single pod with multiple containers on it (shown below)? Which design is better from the k8s point of view? I will be creating a Service on top of the Postgres containers/pods, which can be accessed by the API deployment done with k8s.
I went through the k8s volumes documentation, and it mentions that every time a pod is restarted, the volume is mounted back on the pod. What does that mean? Will there be downtime while the volume is mounted back?
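For reference, this is roughly what I have in mind right now (a minimal sketch, not my real manifests; names are placeholders and it assumes a default EBS-backed `gp2` StorageClass):

```yaml
# Single-replica StatefulSet so only one pod mounts the EBS-backed volume at a time,
# plus a Service that the API deployment can talk to on 5432.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1            # EBS is ReadWriteOnce, so keep a single writer
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11        # would be the customized binary in practice
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]   # EBS volumes attach to one node only
        storageClassName: gp2            # assumes the default EBS StorageClass
        resources:
          requests:
            storage: 20Gi
```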
Is there any better way or approach to do this?
Any guidance or links are highly appreciated. Thank you for your time and patience.
I realize it’s a bit off-topic, but if you’re on AWS, what’s the advantage of running Postgres on k8s rather than RDS and letting AWS deal with this complexity?
I’ve got a couple of workloads where I wanted shared storage, and I’ve been using EFS (NFS) for it - but obviously that’s not fast enough for a db. Thinking about the NFS space, I have to worry about locking conditions and such on the storage - how do you think Postgres would handle it if you have two active db containers on the same storage? It might make more sense to think about this as a single pod.
And all that’s to say - yes - you can only mount your EBS volume to a single node. Scripting the recovery of that EBS volume if you lose the worker node is a non-trivial problem to think through.
I can’t use RDS. I have a customized Postgres binary with some additional functions. I am experimenting with EFS for multi-AZ availability, so if the container goes down and is spawned in another AZ, it still has access to the PV.
Do you have any reference for the EBS recovery steps? Also, how do we constrain EKS to spawn the container back in the same AZ as the EBS volume? Or am I missing something?
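The EFS experiment looks roughly like this (a sketch only; the file system ID is a placeholder and it assumes the AWS EFS CSI driver is installed in the cluster):

```yaml
# Statically provisioned EFS volume that any node in any AZ can mount.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-efs
spec:
  capacity:
    storage: 20Gi               # ignored by EFS, but required by the API
  accessModes:
    - ReadWriteMany             # EFS can be mounted from any AZ/node
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # placeholder EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-efs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # bind to the statically created PV above
  resources:
    requests:
      storage: 20Gi
```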
You can use the nodeSelector field in the YAML to make the pod run on nodes with a specific label.
I don’t know about EKS, but when you run with kops on AWS, each node by default has a label with the AZ it is running in. Then it is very easy to make sure the pod will be scheduled in the AZ that the EBS volume is in. I used that for several years and it works fine. Something along these lines (the exact zone label name depends on your Kubernetes version, and the volume ID here is a placeholder):
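```yaml
# Pin the pod to the AZ where the EBS volume lives (us-east-1a as an example).
# Older clusters label nodes with failure-domain.beta.kubernetes.io/zone,
# newer ones with topology.kubernetes.io/zone.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: us-east-1a
  containers:
    - name: postgres
      image: postgres:11
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      awsElasticBlockStore:
        volumeID: vol-0123456789abcdef0   # placeholder EBS volume ID
        fsType: ext4
```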
The downside, though, is that if the AZ is down, that pod won’t be scheduled. But if you have a replica in another AZ or something, you mitigate that risk. Or maybe it’s okay to have downtime when an AZ is down.
Regarding EBS recovery, what do you mean? AWS gives a 99.99% uptime SLA, and in several years I only had to run fsck once for recovery (I did it manually; I was not using Kubernetes on that EBS disk). Does this answer your question?
Also, please report back on EFS as a backend. I’m interested to see how it works.
Right now I have 3 subnets in 3 AZs, so I want to make sure it is in one of the private subnets.
The EBS recovery sounds great. I think if EFS doesn’t give reasonable performance, I will have to fall back on EBS.
Could you share any articles covering the high-level process flow for doing the recovery? I just need to understand what I will be dealing with. I am new to DevOps and k8s, so I am still learning the ropes.
Thanks Rata for your time.
Also, I have posted another newbie question, if you get some time.
Not sure about any particular guide, but you can use it like any other drive (plus snapshots, if that is useful for something). Sorry I don’t have a specific guide to link to, but hopefully there are several out there.
@pratikpawar: is this query solved for you? I am also looking for a solution to attach a volume such as EBS or EFS to my Docker container, since my pod can run on any EC2 instance.