
Configure topology spreading of DxEnterpriseSqlAg pods

By default, Kubernetes does not place any constraints on how pods are spread across different nodes or data centers. This can cause problems when deploying SQL Server Availability Groups using DxOperator.

There are two main problems that can arise when pods are not properly spread:

  1. Inefficient Resource Utilization: If multiple pods of a DxEnterpriseSqlAg are scheduled on the same worker node, the availability group replicates databases onto the same physical hardware, wasting compute and storage resources.

  2. Limited Disaster Recovery Capabilities: If all pods of a DxEnterpriseSqlAg are located in a single data center, a disaster or failure at that data center takes down every replica at once, preventing recovery and continued operations.

The topologySpreadConstraints property

To address these issues, Kubernetes offers a setting called topologySpreadConstraints, which can be configured in the DxEnterpriseSqlAg. This setting allows you to control how pods are scheduled across the cluster based on properties of worker nodes. By using topologySpreadConstraints, you can prevent more than one pod from being scheduled on the same node or in the same data center.

The worker node topology is specific to each Kubernetes cluster. Most public clouds offer a way to create clusters with worker nodes spread across different data centers for availability and disaster recovery purposes, and this is ideal for deploying DxEnterpriseSqlAg.

Spread pods across data centers

The label topology.kubernetes.io/zone is a standard label applied to worker nodes, identifying the data center in which the worker node is physically located.
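To check which zone each worker node belongs to, one option is to list the nodes with the zone label shown as an extra column:

kubectl get nodes -L topology.kubernetes.io/zone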

Below is an example DxEnterpriseSqlAg with topologySpreadConstraints spreading the pods across multiple data centers. It uses the node label topology.kubernetes.io/zone as the key, preventing more than one pod from being scheduled in the same zone. It also includes the pod selector dh2i.com/entity: dxesqlag, which matches the label applied to the pods of this DxEnterpriseSqlAg (named dxesqlag).

DxEnterpriseSqlAg.yaml
apiVersion: dh2i.com/v1
kind: DxEnterpriseSqlAg
metadata:
  name: dxesqlag
spec:
  synchronousReplicas: 3
  ...
  template:
    spec:
      ...
      topologySpreadConstraints:
        - topologyKey: topology.kubernetes.io/zone
          maxSkew: 1
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              dh2i.com/entity: dxesqlag
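Once the pods are running, one way to confirm the spread is to list them together with the nodes they were scheduled on, then cross-reference the node names with the zone labels shown earlier:

kubectl get pods -l dh2i.com/entity=dxesqlag -o wide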

Spread pods across worker nodes

Below is another example DxEnterpriseSqlAg with topologySpreadConstraints, which prevents more than one pod from being scheduled on the same worker node. It uses the node label kubernetes.io/hostname as the key instead of the zone label.

DxEnterpriseSqlAg.yaml
apiVersion: dh2i.com/v1
kind: DxEnterpriseSqlAg
metadata:
  name: dxesqlag
spec:
  synchronousReplicas: 3
  ...
  template:
    spec:
      ...
      topologySpreadConstraints:
        - topologyKey: kubernetes.io/hostname
          maxSkew: 1
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              dh2i.com/entity: dxesqlag
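Because whenUnsatisfiable is set to DoNotSchedule, a pod that cannot satisfy the constraint remains in the Pending state, for example when synchronousReplicas exceeds the number of worker nodes. The scheduler's reason can be inspected in the pod's events (the pod name dxesqlag-1 below is illustrative):

kubectl describe pod dxesqlag-1
# The Events section typically includes a message such as:
#   ... node(s) didn't match pod topology spread constraints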

Change topologySpreadConstraints of an existing DxEnterpriseSqlAg

It is possible to change the topologySpreadConstraints of a DxEnterpriseSqlAg after creation. However, the change will not cause any deletion or rescheduling of existing pods. Pods that violate the new constraints can be deleted manually; their replacements will be scheduled according to the new constraints, as sketched below.
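A minimal sketch of this workflow, assuming the manifest file used above (the pod name dxesqlag-1 is illustrative):

# Apply the updated topologySpreadConstraints; existing pods are not rescheduled
kubectl apply -f DxEnterpriseSqlAg.yaml

# Delete a pod that violates the new constraints; its replacement
# is scheduled according to the updated constraints
kubectl delete pod dxesqlag-1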
