[DENA-845] set affinity for elasticsearch, remove it for Kibana #62

Merged: 1 commit into main from DENA-845, Aug 5, 2024

Conversation

MarcinGinszt (Collaborator)

MarcinGinszt requested a review from a team as a code owner on August 1, 2024, 13:38
linear bot commented Aug 1, 2024

github-actions bot commented Aug 1, 2024

Post kustomize build diff:

diff --git a/root-manifests/elasticsearch/example/manifests.yaml b/built-manifests/elasticsearch/example/manifests.yaml
index cbf539b..de90630 100644
--- a/root-manifests/elasticsearch/example/manifests.yaml
+++ b/built-manifests/elasticsearch/example/manifests.yaml
@@ -248,15 +248,7 @@ spec:
         helm.sh/chart: kibana-10.7.0
     spec:
       affinity:
-        podAntiAffinity:
-          preferredDuringSchedulingIgnoredDuringExecution:
-          - podAffinityTerm:
-              labelSelector:
-                matchLabels:
-                  app.kubernetes.io/instance: elasticsearch
-                  app.kubernetes.io/name: kibana
-              topologyKey: kubernetes.io/hostname
-            weight: 1
+        podAntiAffinity: null
       containers:
       - env:
         - name: BITNAMI_DEBUG
@@ -473,7 +465,16 @@ spec:
         helm.sh/chart: elasticsearch-19.16.1
     spec:
       affinity:
-        podAntiAffinity: null
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/component: master
+                  app.kubernetes.io/instance: elasticsearch
+                  app.kubernetes.io/name: elasticsearch
+              topologyKey: kubernetes.io/hostname
+            weight: 1
       containers:
       - args:
         - |
diff --git a/root-manifests/elasticsearch/manifests/manifests.yaml b/built-manifests/elasticsearch/manifests/manifests.yaml
index 8b8f439..bb734f9 100644
--- a/root-manifests/elasticsearch/manifests/manifests.yaml
+++ b/built-manifests/elasticsearch/manifests/manifests.yaml
@@ -213,15 +213,7 @@ spec:
         helm.sh/chart: kibana-10.7.0
     spec:
       affinity:
-        podAntiAffinity:
-          preferredDuringSchedulingIgnoredDuringExecution:
-          - podAffinityTerm:
-              labelSelector:
-                matchLabels:
-                  app.kubernetes.io/instance: elasticsearch
-                  app.kubernetes.io/name: kibana
-              topologyKey: kubernetes.io/hostname
-            weight: 1
+        podAntiAffinity: null
       containers:
       - env:
         - name: BITNAMI_DEBUG
@@ -429,7 +421,16 @@ spec:
         helm.sh/chart: elasticsearch-19.16.1
     spec:
       affinity:
-        podAntiAffinity: null
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - podAffinityTerm:
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/component: master
+                  app.kubernetes.io/instance: elasticsearch
+                  app.kubernetes.io/name: elasticsearch
+              topologyKey: kubernetes.io/hostname
+            weight: 1
       containers:
       - env:
         - name: MY_POD_NAME

=============================================
k8s objects: 0 to add, 0 to destroy
4 changed hunks
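
As a side note (not part of this PR): once the change is applied, one way to confirm that the soft anti-affinity is actually spreading the elasticsearch master pods is to list them with their node assignments. A minimal check, assuming the release labels shown in the diff and a placeholder namespace:

# Show which node each elasticsearch master pod landed on (NODE column).
# <namespace> is a placeholder for wherever this release is installed.
kubectl get pods -n <namespace> \
  -l app.kubernetes.io/name=elasticsearch,app.kubernetes.io/component=master \
  -o wide

With preferredDuringSchedulingIgnoredDuringExecution the pods should usually land on different nodes, but identical node names here would not be an error: the scheduler is allowed to co-locate them when it has no better option.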

podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
Contributor commented:

From what I understand, this could still result in pods being scheduled onto the same node (link: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#nodeaffinity-v1-core, emphasis added by me):

"The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions"

Is this ok?

MarcinGinszt (Collaborator, Author) replied:

Sorry, I didn't notice your comment before merging.
Yes, we prefer the soft affinity requirement to the hard one.
Soft affinity tries to place the pods in the safest configuration (on different nodes), but doesn't fail if that's not possible, and I think we don't want it to fail.
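
For contrast, the hard form of the same rule would use requiredDuringSchedulingIgnoredDuringExecution, which leaves a pod Pending when no conforming node exists instead of falling back. A minimal sketch of what that would look like for the elasticsearch master pods, shown only to illustrate the trade-off (it is not what this PR ships):

affinity:
  podAntiAffinity:
    # Hard rule: a master pod may only be scheduled onto a node (per
    # kubernetes.io/hostname) that does not already run another master pod.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/component: master
          app.kubernetes.io/instance: elasticsearch
          app.kubernetes.io/name: elasticsearch
      topologyKey: kubernetes.io/hostname

Note that required terms are plain pod-affinity terms without a weight, unlike the weighted preferred term merged here.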

MarcinGinszt merged commit 344da11 into main on Aug 5, 2024
1 check passed
MarcinGinszt deleted the DENA-845 branch on August 5, 2024, 06:36