Enhance CMIS DP decommissioning to be on a per-logical-port basis #663
base: master
Conversation
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
Force-pushed from 90fc06a to f7377f2
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
Pull Request Overview
This PR enhances CMIS decommissioning to operate on a per-logical-port basis instead of per-physical-port, and fixes a port event caching issue that could drop DEL events.
- Implements per-logical-port decommissioning by identifying minimal host lanes requiring decommission for each logical port
- Fixes PortChangeObserver event caching to prevent DEL events from being dropped in back-to-back DEL/SET sequences
- Adds validation that checks the Active Control Set for correct application code updates (see the sketch below)
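As a rough illustration of the Active Control Set check in the last bullet above: for every host lane owned by the logical port, the application code reported in the module's Active Control Set is compared against the code the port is expected to carry. The helper name and the dict-shaped input below are illustrative assumptions, not the actual xcvrd API.

```python
# Illustrative only: `active_appsel_per_lane` stands in for whatever the CMIS
# driver returns when the Active Control Set is read; this is not the real
# xcvrd / sonic-platform API.
def active_appsel_matches(active_appsel_per_lane, host_lane_mask, expected_appl_code):
    """Return True if every host lane selected by `host_lane_mask` reports
    `expected_appl_code` in the module's Active Control Set.

    active_appsel_per_lane: dict of 0-based host lane index -> AppSel code
    host_lane_mask:         bitmask of host lanes owned by the logical port
    expected_appl_code:     application code the logical port should carry
    """
    return all(
        active_appsel_per_lane.get(lane) == expected_appl_code
        for lane in range(8)                  # CMIS exposes up to 8 host lanes
        if host_lane_mask & (1 << lane)
    )
```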
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| sonic-xcvrd/xcvrd/xcvrd_utilities/port_event_helper.py | Updates event caching to use a deque with maxlen=2 and preserves DEL events that are followed by SET events (see the sketch after this table) |
| sonic-xcvrd/xcvrd/xcvrd.py | Refactors decommissioning logic to work per-logical-port, adds new methods for host lane mask calculation and active application validation |
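The port_event_helper.py row describes a deque(maxlen=2) cache that keeps a DEL event when a SET event arrives right behind it; with a single-slot cache the SET would overwrite the DEL and the consumer would never tear down the old data path. A minimal sketch of that idea, assuming an illustrative class name, 'DEL'/'SET' event strings, and a drain() helper (none of which are the actual PortChangeObserver code):

```python
from collections import deque

# Minimal sketch of the caching idea only; PortEventCache, the 'DEL'/'SET'
# strings, and drain() are illustrative names, not the real PortChangeObserver.
class PortEventCache:
    def __init__(self):
        self._events = {}  # logical port name -> deque of recent event types

    def cache(self, port, event_type):
        """Remember the latest events for `port`. With maxlen=2, a DEL that is
        immediately followed by a SET keeps both entries instead of the DEL
        being overwritten."""
        events = self._events.setdefault(port, deque(maxlen=2))
        if not events or events[-1] != event_type:
            events.append(event_type)

    def drain(self, port):
        """Yield the cached events in arrival order (DEL before SET)."""
        events = self._events.pop(port, deque())
        while events:
            yield events.popleft()
```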
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Hi @prgeor @mihirpat1, could you please review? Thanks.
Note: Unit test (UT) code will be updated/added later.
Description
For each logical port, identify the configured data path(s) in the Active Control Set that share host lanes with this logical port and carry conflicting appl codes. The lanes belonging to those conflicting data path(s) are then marked as decomm_pending; this is the minimal set of host lanes that must be decommissioned for the given logical port to apply its new appl code without triggering a configuration error. If other logical ports are already decommissioning lanes that overlap with this logical port, it is put on hold until those logical ports have finished decommissioning.
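A minimal sketch of the lane-selection step described above, assuming each configured data path can be summarized as a (host lane mask, appl code) pair read from the Active Control Set; the function name and tuple shape are illustrative, not the actual xcvrd helpers.

```python
def get_decomm_pending_lane_mask(target_lane_mask, target_appl_code, active_datapaths):
    """Return the minimal host lane mask to mark decomm_pending for one logical port.

    target_lane_mask  - host lanes the logical port wants to own
    target_appl_code  - appl code the logical port wants to apply
    active_datapaths  - iterable of (lane_mask, appl_code) pairs describing the
                        data paths currently configured in the Active Control Set
    """
    decomm_mask = 0
    for dp_lane_mask, dp_appl_code in active_datapaths:
        if (dp_lane_mask & target_lane_mask) and dp_appl_code != target_appl_code:
            # The whole conflicting data path is torn down, not just the
            # overlapping lanes, because a data path is decommissioned as a unit.
            decomm_mask |= dp_lane_mask
    return decomm_mask


# Example: a logical port requesting lanes 0-3 (mask 0x0F) with appl code 2,
# while the Active Control Set holds an 8-lane data path (mask 0xFF) on appl
# code 1, must see all 8 lanes decommissioned first:
# get_decomm_pending_lane_mask(0x0F, 2, [(0xFF, 1)]) == 0xFF
```

One way to implement the hold condition mentioned above is to AND this mask against the decomm_pending masks of the other logical ports and wait while the result is non-zero.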
Motivation and Context
How Has This Been Tested?
Additional Information (Optional)