Refactor/channel based status updates #4836
Conversation
Pull request overview
This PR refactors the UI client's status update mechanism from a polling-only approach to a hybrid channel-based system that responds to both periodic ticks and on-demand triggers. The change enables immediate, non-blocking status updates after connect/disconnect operations while maintaining the existing 2-second polling interval.
Key Changes:
- Introduced statusUpdateLoop() that listens to both a ticker and a buffered channel for status update triggers
- Replaced direct updateStatus() calls with triggerStatusUpdate() for non-blocking on-demand updates
- Extracted connected status logic into setConnectedStatus() helper for consistency
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| client/ui/event_handler.go | Replaced synchronous updateStatus() calls with non-blocking triggerStatusUpdate() in connect/disconnect handlers |
| client/ui/client_ui.go | Added channel-based status update loop, extracted setConnectedStatus() helper, and integrated trigger mechanism into event system |
Walkthrough
Refactors client UI status handling: adds an updateStatusChan signaling channel, a statusUpdateLoop() driven by both a ticker and the channel, and a non-blocking triggerStatusUpdate() helper.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant EM as Event Manager
    participant EH as Event Handler
    participant SC as Service Client
    participant CH as updateStatusChan
    participant SL as statusUpdateLoop
    participant UI as UI
    rect rgba(150,200,255,0.08)
        Note over EM,SL: Channel-driven status refresh (ticker + event triggers)
    end
    EM->>EH: SYSTEM / Connect / Disconnect event
    EH->>SC: triggerStatusUpdate()
    SC->>CH: non-blocking send (signal)
    alt signal accepted
        CH-->>SL: receives trigger
        SL->>SC: fetch config & status
        SL->>SC: setConnectedStatus() / setConnectingStatus()
        SL->>UI: update icon/tooltip/state
    else dropped (buffer full)
        Note right of SC: trigger dropped (non-blocking)
    end
    Note over SL: statusUpdateLoop also fires periodically via ticker
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
client/ui/client_ui.go (1)
989-1010: Consider documenting the initialization sleep and reviewing the feature-check frequency.

The hybrid ticker + channel approach is well-implemented. Two observations:

1. The 100ms sleep after getSrvConfig() at line 991 appears to be a timing workaround. Consider adding a comment explaining why this delay is necessary for initialization.
2. checkAndUpdateFeatures() is called on every status update (every 2 seconds or on-demand). If feature flags change infrequently, this might be more often than necessary. Consider whether features need to be checked on every update or only periodically (e.g., every 30 seconds).

Example for the sleep:

```diff
 func (s *serviceClient) statusUpdateLoop() {
 	s.getSrvConfig()
+	// Brief delay to ensure initial config is processed before first status update
 	time.Sleep(100 * time.Millisecond)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- client/ui/client_ui.go (6 hunks)
- client/ui/event_handler.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
client/ui/client_ui.go (2)
client/ui/event/event.go (1)
NewManager (34-39)

client/proto/daemon.pb.go (4)
SystemEvent (3658-3669), SystemEvent (3682-3682), SystemEvent (3697-3699), SystemEvent_SYSTEM (150-150)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: Client / Unit (386)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: Client / Unit (amd64)
- GitHub Check: iOS / Build
- GitHub Check: Android / Build
- GitHub Check: release
- GitHub Check: Client / Unit
- GitHub Check: release_ui_darwin
- GitHub Check: Windows
- GitHub Check: Darwin
- GitHub Check: JS / Lint
- GitHub Check: Linux
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
🔇 Additional comments (6)
client/ui/event_handler.go (1)
93-93: LGTM! Clean refactor to non-blocking status updates.

The switch from updateStatus() to triggerStatusUpdate() aligns with the new channel-based coordination. Since triggerStatusUpdate() is non-blocking and signals the status update loop asynchronously, it's well-suited for these post-operation callbacks in goroutines.

Also applies to: 117-117
client/ui/client_ui.go (5)
322-322: Well-designed signaling channel.

The buffered channel with capacity 1 is appropriate: it allows one pending status update signal without blocking the sender, while preventing accumulation of redundant signals (subsequent sends will be dropped if the buffer is full, which is correct since one pending update is sufficient).

Also applies to: 358-358
961-976: LGTM! Good extraction that improves consistency.

The new setConnectedStatus() method properly consolidates the connected state logic and matches the existing pattern with setDisconnectedStatus() and setConnectingStatus(). All state updates (icon, tooltip, menu items, exit nodes) are correctly handled.
1012-1017: LGTM! Correctly implements a non-blocking status update trigger.

The select-with-default pattern ensures that triggerStatusUpdate() never blocks. If a status update is already pending (channel buffer full), the new signal is safely dropped, which is the correct behavior since one pending update is sufficient.
897-897: LGTM! Clean use of the extracted helper.

Replacing the inline connected status logic with a call to setConnectedStatus() improves maintainability and consistency.
1111-1111: LGTM! Proper integration of the new status update mechanism.

The changes correctly:
- Start the new hybrid statusUpdateLoop() instead of the previous inline polling loop (line 1111)
- Trigger immediate status updates on SYSTEM events for responsive UI feedback (line 1118), complementing the existing updateExitNodes() call

This ensures the UI reflects state changes promptly after system events while maintaining periodic updates.

Also applies to: 1115-1120
Force-pushed: 2713671 to dc50af4
Actionable comments posted: 0
🧹 Nitpick comments (1)
client/ui/client_ui.go (1)
989-1010: Consider removing the initial sleep.

The statusUpdateLoop() implementation is sound, with proper ticker management and context cancellation. However, the 100ms sleep at line 991 suggests a potential initialization race condition.

If the sleep is working around a startup timing issue, consider either:
- Using explicit synchronization (e.g., WaitGroup, sync primitives)
- Documenting why the sleep is necessary

Otherwise, if the sleep is no longer needed, remove it:

```diff
 func (s *serviceClient) statusUpdateLoop() {
 	s.getSrvConfig()
-	time.Sleep(100 * time.Millisecond)
 	ticker := time.NewTicker(2 * time.Second)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- client/ui/client_ui.go (6 hunks)
- client/ui/event_handler.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
- GitHub Check: Android / Build
- GitHub Check: iOS / Build
- GitHub Check: Darwin
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: Build Cache
- GitHub Check: JS / Lint
- GitHub Check: release_ui_darwin
- GitHub Check: release
- GitHub Check: release_ui
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
🔇 Additional comments (6)
client/ui/event_handler.go (1)
93-93: LGTM! Clean refactor to channel-based triggering.

The replacement of updateStatus() with triggerStatusUpdate() at both connect and disconnect completion points aligns well with the new asynchronous status update pattern. The non-blocking nature ensures event handlers don't wait for the status refresh.

Also applies to: 117-117
client/ui/client_ui.go (5)
321-322: LGTM! Field additions support the channel-based pattern.

The updateStatusChan enables asynchronous status update signaling, and connectCancel properly manages the connection lifecycle. Both fields are appropriately placed in the serviceClient struct.
358-358: LGTM! Appropriate buffer size for debouncing.

Buffer size of 1 is correct for this pattern: it allows coalescing multiple rapid triggers while preventing unbounded queueing. The periodic ticker (2s) provides a fallback for any dropped signals.
897-897: LGTM! Clean extraction maintains all state transitions.

The setConnectedStatus() helper properly encapsulates all connected-state UI updates and follows the established pattern of similar helper functions (setDisconnectedStatus, setConnectingStatus). All state changes are preserved from the original inline implementation.

Also applies to: 961-976
1012-1017: LGTM! Proper non-blocking channel send pattern.

The triggerStatusUpdate() implementation correctly uses select with default to prevent blocking when the channel buffer is full. Dropped signals are acceptable here since the 2-second ticker provides eventual consistency.
1111-1111: LGTM! Clean integration of the status update loop.

The replacement of the inline goroutine with statusUpdateLoop() at line 1111 and the addition of triggerStatusUpdate() for SYSTEM events at line 1118 properly integrate the new channel-based pattern. The event-driven trigger ensures the UI reflects system changes promptly.

Also applies to: 1118-1118
- Extract setConnectedStatus() helper function for consistent status handling
- Implement statusUpdateLoop() with channel-based triggering for status updates
- Replace direct updateStatus() calls with triggerStatusUpdate() in event handlers
- Centralize status update logic and enable non-blocking event-driven updates
Force-pushed: dc50af4 to 5ba351e
Actionable comments posted: 0
🧹 Nitpick comments (1)
client/ui/client_ui.go (1)
989-1010: LGTM! Well-structured status update loop with proper resource management.

The loop correctly combines periodic updates (ticker) with on-demand updates (channel), handles context cancellation, and properly cleans up resources.

Optional refinements:

1. The 2-second ticker interval could be extracted as a named constant for easier maintenance:

```diff
+const (
+	statusUpdateInterval = 2 * time.Second
+)
+
 func (s *serviceClient) statusUpdateLoop() {
 	s.getSrvConfig()
 	time.Sleep(100 * time.Millisecond)
-	ticker := time.NewTicker(2 * time.Second)
+	ticker := time.NewTicker(statusUpdateInterval)
```

2. Consider adding a comment explaining the purpose of the 100ms sleep (if it's intentional settling time after getSrvConfig()).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- client/ui/client_ui.go (6 hunks)
- client/ui/event_handler.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (21)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Client / Unit (386)
- GitHub Check: Signal / Unit (amd64)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Client / Unit (amd64)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: iOS / Build
- GitHub Check: Linux
- GitHub Check: Android / Build
- GitHub Check: release_ui_darwin
- GitHub Check: release_ui
- GitHub Check: release
- GitHub Check: Client / Unit
- GitHub Check: JS / Lint
- GitHub Check: Client / Unit
- GitHub Check: Darwin
- GitHub Check: Client / Unit
- GitHub Check: Windows
🔇 Additional comments (5)
client/ui/event_handler.go (1)
93-93: LGTM! Clean refactoring to channel-based status updates.

The change from direct updateStatus() calls to triggerStatusUpdate() properly decouples the event handlers from the status update logic. The non-blocking trigger approach ensures these goroutines won't be delayed by status update operations.

Also applies to: 117-117
client/ui/client_ui.go (4)
322-322: LGTM! Proper channel initialization for status update signaling.

The buffered channel with size 1 is well-suited for coalescing multiple rapid status update requests. Combined with the non-blocking send in triggerStatusUpdate(), this prevents both blocking and excessive queuing of update requests.

Also applies to: 358-358
961-976: LGTM! Good consolidation of connected state logic.

The setConnectedStatus() helper provides consistency with the existing setDisconnectedStatus() and setConnectingStatus() helpers. This consolidation improves maintainability and reduces code duplication.
1012-1017: LGTM! Excellent non-blocking trigger implementation.

The select with default case ensures that callers never block, while the buffered channel naturally coalesces multiple rapid status update requests. This is a textbook implementation of a non-blocking signal pattern.
897-897: LGTM! Proper integration of the refactored status update mechanism.

The initialization order is correct: statusUpdateLoop() starts before the event handler, ensuring the channel consumer is ready before producers begin sending. The use of setConnectedStatus() at line 897 completes the consolidation of connected state logic.

Also applies to: 1111-1111
Describe your changes
Refactors the UI client's status update mechanism from a polling-only approach to a hybrid channel-based system. Introduces a statusUpdateLoop() that responds to both periodic ticks (every 2 seconds) and on-demand triggers via a buffered channel. Event handlers now call triggerStatusUpdate() instead of directly invoking updateStatus(), enabling immediate non-blocking status updates after connect/disconnect operations. Also extracts the connected status logic into a dedicated setConnectedStatus() helper function for consistency with existing setDisconnectedStatus() and setConnectingStatus() patterns.
Issue ticket number and link
Stack
Checklist
Documentation
Select exactly one:
Docs PR URL (required if "docs added" is checked)
Paste the PR link from https://github.com/netbirdio/docs here:
https://github.com/netbirdio/docs/pull/__