[DENA-845] set affinity for elasticsearch, remove it for Kibana #62
Conversation
diff --git a/root-manifests/elasticsearch/example/manifests.yaml b/built-manifests/elasticsearch/example/manifests.yaml
index cbf539b..de90630 100644
--- a/root-manifests/elasticsearch/example/manifests.yaml
+++ b/built-manifests/elasticsearch/example/manifests.yaml
@@ -248,15 +248,7 @@ spec:
helm.sh/chart: kibana-10.7.0
spec:
affinity:
- podAntiAffinity:
- preferredDuringSchedulingIgnoredDuringExecution:
- - podAffinityTerm:
- labelSelector:
- matchLabels:
- app.kubernetes.io/instance: elasticsearch
- app.kubernetes.io/name: kibana
- topologyKey: kubernetes.io/hostname
- weight: 1
+ podAntiAffinity: null
containers:
- env:
- name: BITNAMI_DEBUG
@@ -473,7 +465,16 @@ spec:
helm.sh/chart: elasticsearch-19.16.1
spec:
affinity:
- podAntiAffinity: null
+ podAntiAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - podAffinityTerm:
+ labelSelector:
+ matchLabels:
+ app.kubernetes.io/component: master
+ app.kubernetes.io/instance: elasticsearch
+ app.kubernetes.io/name: elasticsearch
+ topologyKey: kubernetes.io/hostname
+ weight: 1
containers:
- args:
- |
diff --git a/root-manifests/elasticsearch/manifests/manifests.yaml b/built-manifests/elasticsearch/manifests/manifests.yaml
index 8b8f439..bb734f9 100644
--- a/root-manifests/elasticsearch/manifests/manifests.yaml
+++ b/built-manifests/elasticsearch/manifests/manifests.yaml
@@ -213,15 +213,7 @@ spec:
helm.sh/chart: kibana-10.7.0
spec:
affinity:
- podAntiAffinity:
- preferredDuringSchedulingIgnoredDuringExecution:
- - podAffinityTerm:
- labelSelector:
- matchLabels:
- app.kubernetes.io/instance: elasticsearch
- app.kubernetes.io/name: kibana
- topologyKey: kubernetes.io/hostname
- weight: 1
+ podAntiAffinity: null
containers:
- env:
- name: BITNAMI_DEBUG
@@ -429,7 +421,16 @@ spec:
helm.sh/chart: elasticsearch-19.16.1
spec:
affinity:
- podAntiAffinity: null
+ podAntiAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - podAffinityTerm:
+ labelSelector:
+ matchLabels:
+ app.kubernetes.io/component: master
+ app.kubernetes.io/instance: elasticsearch
+ app.kubernetes.io/name: elasticsearch
+ topologyKey: kubernetes.io/hostname
+ weight: 1
containers:
- env:
- name: MY_POD_NAME
=============================================
k8s objects: 0 to add, 0 to destroy
4 changed hunks
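A change like this normally comes from the charts' affinity preset values rather than a hand-written affinity block. The sketch below is one plausible values change that could render the diff above, assuming the Bitnami elasticsearch chart (with Kibana as a subchart) exposes the usual podAntiAffinityPreset knobs; the values actually edited in this PR are not shown on this page.

# Hypothetical values.yaml change; the real one is not visible in this diff.
# Bitnami charts typically accept "", "soft", or "hard" for podAntiAffinityPreset.
master:
  podAntiAffinityPreset: "soft"  # renders the preferred (soft) rule for elasticsearch
kibana:
  podAntiAffinityPreset: ""      # drops Kibana's anti-affinity rule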
Review thread on the rendered lines:
podAntiAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
From what I understand this could still result in pods being scheduled onto the same node (link: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#nodeaffinity-v1-core, emphasis added by me):

"The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions."

Is this ok?
Sorry, I didn't notice your comment before merging.
Yes, we prefer the soft affinity requirement to the hard one.
The soft rule tries to place the pods in the safest configuration (on different nodes) but doesn't fail scheduling if that isn't possible, and I think we don't want it to fail.
See https://utilitywarehouse.slack.com/archives/C03KMFV8DFU/p1721978514841589
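For comparison, the hard variant discussed above would look roughly like the sketch below (hypothetical, not part of this PR). With requiredDuringSchedulingIgnoredDuringExecution the scheduler leaves a pod Pending when no conforming node exists, instead of falling back to co-location as the preferred rule does.

# Hypothetical hard anti-affinity rule, NOT what this PR applies.
# A pod that cannot be placed on a node free of matching peers stays
# Pending instead of being scheduled alongside them.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: master
            app.kubernetes.io/instance: elasticsearch
            app.kubernetes.io/name: elasticsearch
        topologyKey: kubernetes.io/hostname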