(WIP) feature: Drain nodes prior to termination when using RollingUpgrade strategy #259
base: master
Conversation
Signed-off-by: Eytan Avisror <[email protected]>
Codecov Report

@@            Coverage Diff            @@
##           master     #259      +/-  ##
=========================================
+ Coverage   51.08%   85.37%   +34.28%
=========================================
  Files          33       18       -15
  Lines        4504     2345     -2159
=========================================
- Hits         2301     2002      -299
+ Misses       2062      237     -1825
+ Partials      141      106       -35

Continue to review full report at Codecov.
Adds to #197
This feature adds the capability to drain nodes in parallel, using kubectl as a library (not shelling out), prior to terminating instances.
This makes the rollingUpgrade strategy much more usable.
The implementation adds a sync.Map of namespacedName : sync.WaitGroup at the controller level to keep track of draining nodes. The drain is initiated, and the controller then re-queues until the operation completes or errors out.
Also refactored to move the rollingUpgrade logic from kubeprovider into the eks package.
TBD: