Possible improvement for Watcher.Dispose() #1591

Open
mikhail-barg opened this issue Oct 2, 2024 · 2 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@mikhail-barg

Hi!

I think the current implementation of Watcher.Dispose() is a bit lacking:

protected virtual void Dispose(bool disposing)
{
    if (!disposedValue)
    {
        if (disposing)
        {
            _cts?.Cancel();
            _cts?.Dispose();
        }
        disposedValue = true;
    }
}

Specifically, between the _cts?.Cancel() and _cts?.Dispose() calls we should wait for the _watcherLoop task to finish. Otherwise it might attempt to access the Token of an already disposed _cts, which is generally undesirable.

Also, the possible exception would go unobserved, which in turn may be a problem for applications with strict policies on unobserved task exceptions.
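
For illustration, here is a minimal standalone repro (a plain console program, not the Watcher code itself) of what a still-running loop could hit: reading CancellationTokenSource.Token after Dispose() throws ObjectDisposedException.

using System;
using System.Threading;

class CtsAfterDisposeRepro
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        cts.Cancel();
        cts.Dispose();

        try
        {
            // Reading Token after Dispose() throws ObjectDisposedException;
            // a watcher loop that is still running at this point could hit the same thing.
            CancellationToken token = cts.Token;
        }
        catch (ObjectDisposedException)
        {
            Console.WriteLine("ObjectDisposedException: the CTS was already disposed");
        }
    }
}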

What do you think? Would you consider a PR like this:

protected virtual void Dispose(bool disposing)
{
    if (!disposedValue)
    {
        if (disposing)
        {
            _cts.Cancel();
            try 
            {
                _watcherLoop.GetAwaiter().GetResult();
            }
            catch 
            {
                //consume possible exception
            }
            _cts.Dispose();
        }

        disposedValue = true;
    }
}
@tg123
Member

tg123 commented Oct 3, 2024

I don't think waiting inside Dispose is a good idea.
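
One way to avoid blocking inside Dispose() while still observing the loop's outcome is async disposal. The sketch below is only an illustration of that idea, with assumed names (WatcherLike, _cts, _watcherLoop) that mirror the fields discussed above; it is not the actual Watcher implementation or a proposal from this thread.

using System;
using System.Threading;
using System.Threading.Tasks;

public sealed class WatcherLike : IAsyncDisposable
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private readonly Task _watcherLoop;

    public WatcherLike()
    {
        // Stand-in for the real watch loop.
        _watcherLoop = Task.Run(async () =>
        {
            while (true)
            {
                await Task.Delay(100, _cts.Token).ConfigureAwait(false);
            }
        });
    }

    public async ValueTask DisposeAsync()
    {
        _cts.Cancel();
        try
        {
            // Await the loop instead of blocking on it, so its exception is
            // observed and the CTS is disposed only after the loop has stopped.
            await _watcherLoop.ConfigureAwait(false);
        }
        catch (OperationCanceledException)
        {
            // expected when cancellation stops the loop
        }
        _cts.Dispose();
    }
}

Callers would then write "await using var watcher = new WatcherLike();" instead of relying on the synchronous Dispose().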

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 1, 2025