transport: Suppress warning when the connection is closing #8654
Conversation
Don't emit a warning when the connection is closing. A warning implies that something is not as it should be, but it's expected that the connection attempt is interrupted in this case.

RELEASE NOTES: N/A

Signed-off-by: Tom Wieczorek <[email protected]>
Codecov Report

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #8654      +/-   ##
==========================================
- Coverage   81.21%   80.83%   -0.38%
==========================================
  Files         416      416
  Lines       41002    41096      +94
==========================================
- Hits        33298    33220      -78
- Misses       6226     6331     +105
- Partials     1478     1545      +67
```
@twz123: Could you please describe the problem you are seeing in a little bit more detail, and say how your change helps fix the problem? Thanks.
When connecting to etcd using their client, I'm seeing warnings during normal operations like this or this. This broke something on our side (k0sproject/k0sctl#953), as we weren't prepared for the additional output. While that was an oversight and easy to fix in k0sctl, we were wondering why we're seeing the warning in the first place.

It seems to happen when/after the gRPC client connection gets established. I don't have a clue about gRPC internals, but it seems that a subchannel gets cancelled while it's still creating its transport. This looks like normal operation to me. A warning implies that something is not as it should be, so I figured the warning would be inappropriate in this situation?

Below is an example output of my test app, which connects to etcd, executes a command, and disconnects again:
@twz123 Thank you for sharing logs. From the output I do see that the load balancing policy on the gRPC channel is changing.

We would like to keep throwing this warning log, since a subchannel not being able to connect to the backend is an important event that we care about. This is in fact specifically mentioned in one of our docs that we use as a guideline for log statements: https://github.com/grpc/grpc-go/blob/master/Documentation/log_levels.md

I'd like to close this and keep the existing behavior, since it looks like this can be, and has been, easily fixed on your side. Please feel free to reopen if you think otherwise.
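As a client-side workaround (separate from the change proposed in this PR), grpc-go's default logger reads its severity threshold from the environment, so applications that consider these warnings noise can raise the threshold. Note this hides every gRPC warning, not just this one:

```shell
# Raise grpc-go's default logger threshold so only errors are printed.
# GRPC_GO_LOG_SEVERITY_LEVEL accepts info, warning, or error.
export GRPC_GO_LOG_SEVERITY_LEVEL=error
export GRPC_GO_LOG_VERBOSITY_LEVEL=0
```

This only affects the built-in logger; applications that install their own logger via grpclog filter on their own terms.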
Thanks for the explanation @easwars!
I agree that this warning is important and useful. I didn't remove it; I just silenced it in one specific case where I believe it is inappropriate: when the connection has been closed by the user. This is a normal operation, and subchannels are expected to close; they simply didn't finish connecting and were interrupted along the way. The code used to issue the warning unconditionally; now it only issues the warning if the connection is not closing. The PR diff might make it look like the warning has been removed completely, but it was actually just moved.
The hard error has been fixed in k0sctl, yes. However, the warning remains, not only in k0sctl, but every time something interacts with etcd on localhost. It's confusing for users and developers alike. Users see it every time they query the etcd member status via k0s, and the kube-apiserver logs are full of these warnings, too. We've already received bug reports claiming that etcd clusters were broken because users misinterpreted the logs.
PS: I don't have the permissions to reopen this.