Other than tolerations, we can also override the default PodAntiAffinity with a custom one to split the pods across nodes, and add a NodeAffinity configuration to force our pods to run only on our two nodes. It is the same concept, but using NodeAffinity instead of Tolerations.
To make this work, we first have to apply the same label (not a taint) to both of our nodes, so that we can then use NodeAffinity to restrict our pods to those nodes:
kubectl label nodes ip-10-0-0-5.eu-central-1.compute.internal app=artifactory
kubectl label nodes ip-10-0-1-168.eu-central-1.compute.internal app=artifactory
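You can confirm the labels took effect with a standard selector query; both nodes should appear in the output:

kubectl get nodes -l app=artifactory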
With both nodes now labeled app=artifactory, we can use NodeAffinity to schedule the pods only on nodes carrying that label, and PodAntiAffinity to keep them on separate nodes:
artifactory:
  replicaCount: 2
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
            - artifactory
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - artifactory
        topologyKey: "kubernetes.io/hostname"
Note that we need to provide our own custom PodAntiAffinity: setting the artifactory.affinity key completely overrides the default affinity that comes with the chart. You can also use preferredDuringSchedulingIgnoredDuringExecution instead of requiredDuringSchedulingIgnoredDuringExecution, which allows both pods to land on the surviving node if one node goes down (with the required rule, the displaced pod would stay Pending). Furthermore, you can use any topologyKey you would like, as long as the corresponding label exists on both nodes.
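For reference, a preferred anti-affinity rule is expressed as a weighted term. This is a minimal sketch of how the podAntiAffinity section would change; the weight of 100 is just an example value (any value from 1 to 100 is valid):

artifactory:
  affinity:
    # nodeAffinity stays the same as above
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100   # relative preference, 1-100; example value
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - artifactory
          topologyKey: "kubernetes.io/hostname"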
The same affinity settings will have to be added to the bundled Nginx and PostgreSQL, as with the Tolerations example.
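The exact value keys depend on your chart version, so check the chart's values.yaml. As a sketch, assuming the bundled Nginx exposes an nginx.affinity key, the node affinity block would be repeated under it (and likewise under the PostgreSQL subchart's affinity key):

nginx:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
            - artifactory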
Finally, this is an example with 2 nodes and 2 pods, but you can use it as a baseline for a higher number of nodes and pods.
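Once the release is up, the placement is easy to verify with a wide listing (the selector matches the pod label used above):

kubectl get pods -o wide -l app=artifactory

Each replica should show a different node in the NODE column.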