Relative beginner overwhelmed with the entire process. Looking for help or advice on several things

About a year ago I had just gotten into web development, spinning up a small VPS and hosting a simple LAMP stack for a small website.
Now that I have bigger aspirations and ideas, I wanted to get closer to enterprise techniques. Discovered Docker, loved it, thought it was the most amazing thing since sliced bread. Figured it was everything I needed and began developing my idea. It had several parts, including a MySQL image, a Redis image, an API image, static file server image, and an HAProxy image. Being able to spin the whole thing up with docker-compose and run it/test it locally was awesome.
Then I had the beautiful curse of discovering Kubernetes after only ever hearing offhanded mentions of it. After diving in headfirst, I was hooked on the idea. Swapped from DO to GCP because of GKE.

Now I’m at the point where I’m realizing I dove into the deep end with way too much, way too fast, and I have no idea what I’m doing. I can see all the puzzle pieces, but I can’t put them together. I’m not looking for direct code-related help or debugging; I feel like I can break those sorts of problems down perfectly fine on my own. I’m looking for general, overall conceptual ‘help’, I guess is the best way to describe it. I’m overwhelmed, to the point where Googling my problems away isn’t possible because I don’t even know what I’m supposed to be focusing on or doing right now.

I’ll start by describing my situation, what I’d like to be able to do, and methods I’ve found that I think might be related or helpful (but are most likely wrong or incomplete, so please don’t get mad if I use a word or definition wrong). I’m going to mention a lot of loosely related things, so feel free to answer any one part; any amount of help is appreciated.

When using Docker, I’d usually spin up my containers, make changes to my code in VSCode, and visit the related webpages on localhost to see the changes reflected immediately. This worked because docker-compose let me bind-mount volumes of code into my containers, so anything saved in my editor was immediately visible inside them.
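For reference, this is the bind-mount pattern I mean — a minimal docker-compose sketch (service and path names are made up):

```yaml
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    volumes:
      # Bind-mount the local source tree over the code baked into the image,
      # so edits saved in the editor show up in the running container immediately.
      - ./api/src:/app/src
```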
As far as I can tell, this is much more complicated with Kubernetes, since it works primarily off of pre-built images, with the code baked into the image at build time via the Dockerfile. This isn’t a problem for me in concept: I know how to put the code in the image, build the image, tag it, push it to my registry, and update my Kubernetes Deployments to use the new image. It takes a crazy amount of time though, and being a very new developer, I find myself making dozens of mistakes that require many rebuilds and redeploys to get working right.

On this topic, so far I’ve heard of a couple methods:

  • Push changes to git repo in Google cloud, use Cloud Build to build the image and push it to my image repo, then use some method to detect the image update and automatically deploy it to my GKE cluster(?). Seems slow, but knowing Google who knows. Also doesn’t work for local testing.
  • Run a cluster locally, then just manually build each image and run kubectl apply each update. This is what I currently do, but it’s long and tedious and I feel like there’s a better way.
  • I remember reading something about testing pods(?) locally by proxying(?) them to a cloud cluster, so that they can use resources you have running in the cloud, while you tinker with the pod that’s local. Don’t know if that’s relevant or even meant for what I’m going for.
  • I guess I could attach a shell to a pod and mess with the code in there, then transfer the code out of the pod once it works? Seems dangerous and dumb because of the ephemeral nature of pods.
  • Skaffold: Haven’t tried it yet, but it seems to be the answer I’m looking for. Is it fast enough for my use case of fixing minor bugs from my incompetence?
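For context, Skaffold’s `skaffold dev` command watches your source, rebuilds changed images, and redeploys them; its file-sync feature can even copy changed files straight into running pods, skipping full rebuilds for interpreted code. A minimal config sketch (image, context, and path names are all made up, and your manifests layout will differ):

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-registry/project1-api   # hypothetical image name
      context: api
      sync:
        manual:
          # Copy changed source files directly into the pod instead of rebuilding
          - src: "src/**/*.py"
            dest: /app
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
```

Running `skaffold dev` then gives roughly the save-and-refresh loop docker-compose volumes provide.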

I’d like to maintain a Production/Next/Test environment in the easiest, most automated and maintainable way possible.
How would I even begin to set this up?
Multiple clusters? A single cluster with multiple namespaces?
Do I need separate docker images?
Separate Skaffold configurations?
Can I have Prod/Next on GCP and ‘test’ be local? Can I tie them to git branches?
Should I have each Docker image in a different git repo or is separate folders fine?
How can I make changes to parts of my code depending on which branch/environment it’s running in? Say, accessing an external database ‘Project1-next’ on a MySQL server instead of ‘Project1-prod’?
Do I store those DB credentials in a ConfigMap, a Secret, or hardcoded?
Can I access ConfigMap/Secret data from my PHP/Python/.NET/Go/whatever pod?
I think this is the part I’m most overwhelmed with, and my questions can be summed up with ‘what is the methodology of setting up multiple environments in Kubernetes?’.
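On that last question: however the data is wired up on the Kubernetes side, inside the pod it usually just looks like environment variables (or files). So the application code stays the same in every environment; for example, in Python (the variable and database names here are made up):

```python
import os
from typing import Optional

# Hypothetical variable names: in Kubernetes these would be injected from a
# Secret or ConfigMap via the pod spec's `env:` or `envFrom:` fields.
def build_dsn(host: Optional[str] = None, password: Optional[str] = None) -> str:
    """Assemble a MySQL-style connection string from injected settings."""
    host = host or os.environ.get("DB_HOST", "localhost")
    password = password or os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set - is the Secret wired up?")
    return f"mysql://app:{password}@{host}:3306/project1"
```

Pointing ‘next’ at `Project1-next` and prod at `Project1-prod` is then just a matter of which values each environment injects.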

Needless to say, any help is appreciated. And if this isn’t the best place to get help, a pointer in the right direction is appreciated too. I’ve tried to pay for Kubernetes help on Fiverr twice now with no replies, and had to be refunded. I feel confident enough to figure out most issues; what I need most is direction and overarching concepts to work off of.
Thank you

I’ll preface this by saying everyone’s work pattern is different, but the Docker workflow and Kubernetes aren’t mutually exclusive. I use docker and docker-compose for a lot of my local testing and hacking on things. That isn’t to say I don’t have templates for my deployments / services etc., but that workflow is still the easiest when working locally.

Tools like Skaffold and Tilt do make a more hybrid approach easier.

For this, it’s a question of how much risk is acceptable. Outside of your own application you may also want to take Kubernetes versions into account and test upgrades/downgrades - this sort of thing lends itself more to multiple clusters, with at least prod running in its own.

Do you mean separate ones for dev/test/prod? If so, I’d suggest just tagging them with the version of your application, or with the git commit hash. That way you always know what version of the code is running, and the same image can be promoted from dev to test to prod.
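As a sketch of that tagging flow (registry and project names are made up; in a real repo you’d take the tag from `git rev-parse --short HEAD` rather than a placeholder):

```shell
# Placeholder commit hash; in a real repo: COMMIT=$(git rev-parse --short HEAD)
COMMIT="abc1234"
IMAGE="gcr.io/my-project/project1-api:${COMMIT}"
echo "$IMAGE"

# Then build, push, and point the Deployment at the immutable tag:
#   docker build -t "$IMAGE" ./api
#   docker push "$IMAGE"
#   kubectl set image deployment/api api="$IMAGE"
```

Because the tag is tied to a commit, “promoting” an image to the next environment is just deploying the exact same tag there.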

  • Sort of your call on skaffold (to my knowledge).
  • You definitely can have test be local. If you want to try local Kubernetes testing instead of going through docker-compose, check out kind.
  • If your images are all very tightly coupled, it’d probably make sense to keep them together and version them together. If they can vary, it may be worth keeping them separate so you can bump versions independently.
  • A Secret is preferred; it’d be passed into the container the same way no matter the environment, and the DB server itself could be an ExternalName Service.
  • You can access Secrets/ConfigMaps the same way in a pod regardless of language: as environment variables or as mounted files.
  • There really isn’t just one good way to go about it. Part of what makes Kubernetes so powerful is its flexibility. This can certainly lead to some level of decision paralysis though. Much of it is just trying to get an idea about what options exist and then trying to align them with what works best for you/your company.
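To make the Secret-per-environment idea concrete, here’s one common shape (all names are made up): a Secret in each environment’s namespace holds that environment’s DB settings, and the pod template pulls them in as env vars, so the same image runs everywhere.

```yaml
# Hypothetical Secret for the 'next' environment; prod has a parallel one
# with the same keys but different values.
apiVersion: v1
kind: Secret
metadata:
  name: project1-db
  namespace: project1-next
type: Opaque
stringData:
  DB_HOST: project1-next.mysql.example.com
  DB_PASSWORD: change-me
---
# The Deployment injects the Secret's keys as environment variables,
# so the container image itself is identical across environments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: project1-next
spec:
  replicas: 1
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: my-registry/project1-api:abc1234
          envFrom:
            - secretRef:
                name: project1-db
```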

I’m not sure how much those answers will help, but hopefully it sheds a little light on it 🙂