Best Practice for Static Resource Deployment

I have a general application design question regarding the static resources of my application.

I have a web application that serves a server-rendered React app, so it serves HTML, JS, CSS, and images.

New application versions are deployed as a rolling release, so for a short time pods of the old and the new version are live simultaneously until the new pods have fully started. Since we don’t use sticky sessions, requests are routed randomly to old and new pods during this window.

Now the issue is that the old and new application versions have different sets of HTML, JS, CSS, and image resources. An initial browser request might get the HTML from a new pod, while the subsequent requests for the JS, CSS, and image files it references might go to old pods, causing 404s and a broken page in the user’s browser.

Since we use client-side routing, the page can even break if the user loaded it before the deployment and navigates client-side afterwards.

Our current solution

Our current solution is to serve all static resources from an Nginx server, to which we upload the static resources before deploying the new application version. Our assets have a content hash in their filenames, so the URLs are unambiguous across application versions, and we keep the assets of previous versions on the Nginx server for some time, so resources for multiple application versions remain available. Our load balancer routes requests for static resources directly to the Nginx server.
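For concreteness, a minimal sketch of what such an asset server config could look like — paths and the `/assets/` prefix are hypothetical, not our actual setup:

```nginx
server {
    listen 80;

    # Hash-named assets (e.g. app.3f2a1b.js) never change content,
    # so they can be cached aggressively by browsers.
    # The root dir holds assets of multiple application versions.
    location /assets/ {
        root /var/www;
        add_header Cache-Control "public, max-age=31536000, immutable";
        try_files $uri =404;
    }
}
```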

This solution works great, but having to upload the resources in a separate step before the Helm deployment of the new application version is a bit annoying.

Note that we could also use a CDN instead of Nginx, but that isn’t necessary for our mostly local user base, and it wouldn’t change the principle.

A discarded solution

One discarded solution was to use Nginx as a caching reverse proxy in front of the application pods. This had two downsides:

1. We would need to implement pod stickiness so that Nginx can fetch resources on cache misses while a deployment is in progress and pods of multiple versions are active at the same time.
2. If we deploy a new application version before the Nginx reverse proxy has cached all possible resources, and such a resource is then requested by a client that did some client-side routing, the result is a 404, again breaking the page in the user’s browser.
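To illustrate why cache misses are the problem, here is a rough sketch of that discarded setup (upstream and path names are made up):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=assets:10m inactive=7d;

server {
    listen 80;

    location /assets/ {
        proxy_cache assets;
        proxy_cache_valid 200 7d;
        # On a cache miss the request falls through to the app pods;
        # during a rolling deploy it may hit a pod of the wrong version.
        proxy_pass http://app-pods;
    }
}
```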

A desired solution

It would be nice if we could just deploy our application with Helm and have the cache work automatically. Maybe an Nginx pod that is part of the application’s Helm chart and gets its cache warmed automatically during deployment? I’m not sure whether something like this is possible.
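Conceptually, I imagine something like the following — just a sketch, all names (image, PVC, volume paths) are hypothetical: a Helm pre-upgrade hook Job that copies the new version’s assets into storage shared with the Nginx pod before the rolling update starts.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-asset-upload
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: upload
          # Image that contains the freshly built assets
          image: my-app:{{ .Values.image.tag }}
          command: ["sh", "-c", "cp -r /app/assets/. /cache/assets/"]
          volumeMounts:
            - name: cache
              mountPath: /cache
      volumes:
        - name: cache
          persistentVolumeClaim:
            # PVC that the Nginx pod also mounts and serves from
            claimName: static-assets-cache
```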

I also thought about initContainers, doing the upload to the cache server there, but that would run for every pod, which is impractical when an application has many pods or uses dynamic upscaling.
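The initContainer variant would look roughly like this (again a sketch with made-up names); the drawback is visible right in the structure — it runs once per pod, so with N replicas the upload happens N times:

```yaml
initContainers:
  - name: upload-assets
    image: my-app:{{ .Values.image.tag }}
    # Runs before the app container in *every* pod of the Deployment
    command: ["sh", "-c", "cp -r /app/assets/. /cache/assets/"]
    volumeMounts:
      - name: cache
        mountPath: /cache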

I’m open to wildly different suggestions too :slight_smile: