Do I need a separate StatefulSet for each rack/zone when using topologySpreadConstraints? Please consider two cases: a single datacenter and multiple datacenters.

Currently I have Cassandra deployed in Kubernetes in single and multiple data-center setups, but with only a single rack (no multi-rack).

Now I am planning to deploy Cassandra across multiple racks in single/multiple DC(s), using topologySpreadConstraints. I will define two constraints, one for zone and another for node, and will add node labels accordingly. Here is the link which I am referring to for the above implementation.
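
For example (a minimal sketch: the node names and label values are placeholders; the label keys zone-pu and node-pu are the ones used as topologyKeys in the manifest below), the node labels could be applied with kubectl like this:

# Label each worker node with its rack/zone and a unique per-node value,
# matching the topologyKey values used in the StatefulSet manifest below.
kubectl label node node-1 zone-pu=rack1 node-pu=node-1
kubectl label node node-2 zone-pu=rack2 node-pu=node-2
kubectl label node node-3 zone-pu=rack3 node-pu=node-3

# Verify the labels
kubectl get nodes --show-labels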

The idea behind this is to achieve high availability (HA): if one rack goes down, the service should stay available, and the pods from that rack should not be rescheduled onto the other racks. When the rack is restored, the pods should come back up on it.

But I am not sure how many StatefulSets (sts) I should use:

  1. Should I use one sts per DC, i.e. one sts if I have one DC and N sts if I have N DCs?
  2. Or should I always use N sts if I have N racks in each DC, i.e. one sts per rack?

Sample code: consider that I have 3 nodes and 3 racks, and I am trying to deploy 2 pods on each rack/node. I have added the zone & node labels on all nodes.

apiVersion: apps/v1 # StatefulSet is in the apps/v1 API group, not v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
      foo: bar
  serviceName: "nginx"
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
        foo: bar # also matched by the topologySpreadConstraints labelSelector
    spec:
      # topologySpreadConstraints is a pod-spec field, so it belongs under
      # .spec.template.spec rather than directly under the StatefulSet spec
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: node-pu # custom per-node label added to every node
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      - maxSkew: 1
        topologyKey: zone-pu # custom rack/zone label added to every node
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      ... # removed other config
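
Once the manifest is applied, the resulting spread can be checked by looking at which node each pod landed on, for example (a minimal sketch; web-statefulset.yaml is a placeholder file name and foo=bar is the pod label from the manifest above):

# Apply the StatefulSet and see which node/rack each replica was scheduled on
kubectl apply -f web-statefulset.yaml
kubectl get pods -l foo=bar -o wide

# Show the zone/node labels that the scheduler spread the pods against
kubectl get nodes -L zone-pu -L node-pu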

Added Stack Overflow post reference: