haproxy.cfg is not reloaded when a pod IP address changes and the standalone-backend annotation is true #702
Comments
I was able to get into a state where haproxy.cfg does not correspond to the actual cluster state.
I tried sending a HUP signal to haproxy, without success.
@dosmanak, thanks for the complementary input. I'll look into that.
We investigated the code, and it seems the controller decides that reloading HAProxy is not necessary. Perhaps isNeededReload checks only the HAProxy backend connected to the Kubernetes Service, but not the other HAProxy backend created due to the standalone-backend annotation.
Thanks for helping us with the debugging; that is indeed another part affected by this issue. It has to do with updating server lists from endpoint events, which considers only the regular backend but not the ones derived from the standalone-backend annotation. I'll create a fix for that.
Hello. Do you have an estimate of which version the fix will be available in?
Hi @dosmanak, it should be available in the next version. That should be around mid-June.
That is disappointing. I will rewrite our multiple Ingresses into multiple Services, and I hope that will resolve the issue for us.
I have rewritten it into multiple Services and just a single Ingress with many host rules. It seems to work properly without the standalone-backend annotation, so there is no hurry for the fix.
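The workaround described above can be sketched as a single Ingress with a host rule per hostname, each pointing at its own Service. All names and hosts here are hypothetical; the key point is that no standalone-backend annotation is involved, so the controller tracks ordinary backends and picks up endpoint changes:

```yaml
# Hypothetical sketch of the workaround: one Ingress, several host rules,
# each backed by its own Service (no standalone-backend annotation).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress            # hypothetical name
spec:
  ingressClassName: haproxy
  rules:
    - host: a.example.com      # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a    # hypothetical Service
                port:
                  number: 80
    - host: b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-b
                port:
                  number: 80
```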
We experience buggy behavior when `haproxy.org/standalone-backend: "true"` is set. It is the most probable cause of the error.
When a pod behind the Service is deleted and its ReplicaSet or StatefulSet starts a new one, HAProxy is not reloaded to put the new pod IP into the backend server list.
We also use an Ingress with several path prefixes, but we use that on a different project without issue.
I am not able to reproduce it reliably on the staging environment at the moment.
We decided on the standalone backend so that the backend-config snippet for each Ingress is unique, even though they share the same backend Service. That is necessary to be able to use custom errorfiles for each Ingress.
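A minimal sketch of this setup, assuming the controller's `haproxy.org/backend-config-snippet` annotation is what carries the errorfile directive (names, hosts, and file paths are hypothetical, and a second Ingress would differ only in its name, host, and errorfile path):

```yaml
# Hypothetical sketch: two Ingresses share one Service; standalone-backend
# gives each Ingress its own HAProxy backend so the per-Ingress config
# snippets (custom errorfiles) do not clash.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-a                               # hypothetical name
  annotations:
    haproxy.org/standalone-backend: "true"
    haproxy.org/backend-config-snippet: |
      errorfile 503 /etc/haproxy/errors/site-a-503.http   # hypothetical path
spec:
  ingressClassName: haproxy
  rules:
    - host: a.example.com                    # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shared-app             # same Service for both Ingresses
                port:
                  number: 80
```

This is the configuration under which the reported bug appears: the standalone backend created for each such Ingress reportedly does not get its server list refreshed when pod IPs change.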