fix: keep sending ISDLOCK invs to non-MN peers with watchquorums #7293
PastaPastaPasta merged 1 commit into dashpay:develop
Conversation
PR dashpay#6994 made masternodes skip ISDLOCK inv announcements to any peer with m_wants_recsigs set, on the premise that such peers can reconstruct the ISLOCK from the recsig. That holds for MN peers, but not for quorum observers running with -watchquorums: those nodes also opt in to recsigs via QSENDRECSIGS, yet they have no signer worker running, so they cannot reconstruct an ISDLOCK from a recsig and still need the inv. In the functional test, nodes[0] runs with -watchquorums and had progressively sent QSENDRECSIGS to all four MN peers; by the third call every MN saw nodes[0].m_wants_recsigs=true and skipped the inv to it. This commit makes the ISDLOCK inv skipped only when the peer is a verified masternode, and moves the policy from PushInv (called for every inv type) to the three sites that actually relay MSG_ISDLOCK and have a CNode in scope.
✅ No Merge Conflicts Detected: This PR currently has no conflicts with other open PRs.
✅ Review complete (commit f6bb02d)
Walkthrough: The changes refactor inventory announcement filtering logic in net processing by introducing a new helper function.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
thepastaclaw left a comment
Code Review
Verified the targeted fix in net_processing.cpp: a new helper, PeerReconstructsISLockFromRecsig, narrows the ISDLOCK inv suppression to verified masternodes that opted into recsigs, restoring relay to non-MN peers running with -watchquorums. No correctness regressions found. Two suggestions and one nitpick remain: a small heuristic gap, duplicated skip-and-log blocks across the two RelayInvFiltered overloads, and the absence of a dedicated regression test for the watchquorums + recsigs scenario.
Reviewed commit: f6bb02d
🟡 2 suggestion(s) | 💬 1 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/net_processing.cpp`:
- [SUGGESTION] lines 1201-1203: Add a direct regression test for ISDLOCK relay to watchquorums peers
The fix relies entirely on PeerReconstructsISLockFromRecsig to keep ISDLOCK invs flowing to non-masternode peers that opted into recsigs (e.g. nodes started with -watchquorums). The PR changes only src/net_processing.cpp and adds no test that pins this behavior down. The original regression in PR #6994 was masked because interface_dash_zmq.py covers the failure mode only indirectly (and was flaky enough to miss it ~50% of the time). A dedicated functional or unit assertion that a non-MN peer that has sent QSENDRECSIGS still receives MSG_ISDLOCK after all connected MNs have set m_wants_recsigs=true would prevent the next refactor from collapsing the three sites back into a PushInv-level filter and reintroducing the same bug.
- [SUGGESTION] lines 1196-1204: Helper assumes every verified-MN peer can reconstruct an ISDLOCK
PeerReconstructsISLockFromRecsig treats `m_wants_recsigs && verified-MN` as sufficient to assume the peer will reconstruct an ISDLOCK locally from the recsig. That holds for masternodes that are members of the relevant InstantSend quorum running the signer worker, but it is not strictly enforced: a verified MN that is not in the active quorum (or is outside the recovery window) will still receive recsigs but lacks the share data needed to assemble the ISDLOCK, and under this code we will not send it the inv either. The behavior is still strictly safer than the pre-fix state and other MN peers will fill the gap, but the comment overstates the invariant. Tightening the comment to call this out as a best-effort optimization (or confirming the IS team is comfortable with the edge case) would prevent future readers from over-trusting the predicate.
```cpp
// Returns true when peer is a verified masternode that has opted in to receive recsigs.
// Such peers participate in the signing flow that populates creatingInstantSendLocks, so
// they can reconstruct an ISDLOCK locally from the recsig and don't need the ISDLOCK inv.
// Non-MN peers (e.g. nodes running with -watchquorums) also opt in to recsigs via
// QSENDRECSIGS but still need ISDLOCK invs because they don't run the signing flow.
static bool PeerReconstructsISLockFromRecsig(const CNode& pnode, const Peer& peer)
{
    return peer.m_wants_recsigs && !pnode.GetVerifiedProRegTxHash().IsNull();
}
```
🟡 Suggestion: Helper assumes every verified-MN peer can reconstruct an ISDLOCK
PeerReconstructsISLockFromRecsig treats m_wants_recsigs && verified-MN as sufficient to assume the peer will reconstruct an ISDLOCK locally from the recsig. That holds for masternodes that are members of the relevant InstantSend quorum running the signer worker, but it is not strictly enforced: a verified MN that is not in the active quorum (or is outside the recovery window) will still receive recsigs but lacks the share data needed to assemble the ISDLOCK, and under this code we will not send it the inv either. The behavior is still strictly safer than the pre-fix state and other MN peers will fill the gap, but the comment overstates the invariant. Tightening the comment to call this out as a best-effort optimization (or confirming the IS team is comfortable with the edge case) would prevent future readers from over-trusting the predicate.
source: ['claude']
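One possible way to tighten the comment along these lines (a sketch only; the wording below is illustrative, and the predicate body is unchanged from the PR):

```cpp
// Best-effort optimization, not a strict invariant: a verified masternode that asked for
// recsigs is typically an active quorum member running the signer worker and can rebuild
// the ISDLOCK from the recsig, so we skip the inv. A verified MN outside the active quorum
// may not be able to, but other peers will still announce the lock to it. Non-MN peers
// (e.g. -watchquorums observers) never run the signing flow and must always get the inv.
static bool PeerReconstructsISLockFromRecsig(const CNode& pnode, const Peer& peer)
{
    return peer.m_wants_recsigs && !pnode.GetVerifiedProRegTxHash().IsNull();
}
```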
```diff
@@ -2566,6 +2574,11 @@ void PeerManagerImpl::RelayInvFiltered(const CInv& inv, const uint256& relatedTx
             return;
         }
     } // LOCK(tx_relay->m_bloom_filter_mutex)
+    if (inv.type == MSG_ISDLOCK && PeerReconstructsISLockFromRecsig(*pnode, *peer)) {
+        LogPrint(BCLog::NET, "%s -- skipping ISDLOCK inv (peer wants recsigs): %s peer=%d\n",
+                 __func__, inv.ToString(), peer->m_id);
+        return;
+    }
```
💬 Nitpick: Duplicated ISDLOCK skip-and-log block in both RelayInvFiltered overloads
The if (inv.type == MSG_ISDLOCK && PeerReconstructsISLockFromRecsig(*pnode, *peer)) { LogPrint(...); return; } block is copy-pasted into both RelayInvFiltered overloads (lines 2547-2551 and 2577-2581) and is the structural twin of the guard at line 6364. Extracting a small helper such as ShouldSkipISDLockInv(pnode, peer, inv) would avoid the three sites drifting apart in future edits (a sketch of one possible shape follows below). Pure readability/maintainability nit, not a defect.
source: ['claude']
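If this nitpick is acted on, one possible shape for such a helper (a sketch only: ShouldSkipISDLockInv and the calling_func parameter are hypothetical names, while PeerReconstructsISLockFromRecsig, LogPrint, and the log format come from the diff above):

```cpp
// Hypothetical consolidation of the duplicated skip-and-log block. Each of the three
// ISDLOCK relay sites would call: if (ShouldSkipISDLockInv(*pnode, *peer, inv, __func__)) return;
static bool ShouldSkipISDLockInv(const CNode& pnode, const Peer& peer, const CInv& inv,
                                 const char* calling_func)
{
    if (inv.type != MSG_ISDLOCK || !PeerReconstructsISLockFromRecsig(pnode, peer)) return false;
    LogPrint(BCLog::NET, "%s -- skipping ISDLOCK inv (peer wants recsigs): %s peer=%d\n",
             calling_func, inv.ToString(), peer.m_id);
    return true;
}
```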
Issue being fixed or feature implemented
PR #6994 made masternodes skip ISDLOCK inv announcements to any peer with m_wants_recsigs set, on the premise that such peers can reconstruct the ISLOCK from the recsig.
It works for MN peers but it does not work for quorum observers running with -watchquorums: those nodes also opt in to recsigs via QSENDRECSIGS but they don't have a signer worker running, so they cannot reconstruct an ISDLOCK from a recsig and they still need the inv.
In the functional test interface_dash_zmq.py, nodes[0] runs with -watchquorums and had progressively sent QSENDRECSIGS to all four MN peers; by the third call every MN saw nodes[0].m_wants_recsigs=true and skipped the inv to it.
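For illustration, a minimal self-contained sketch (not the actual Dash code; the struct and function names here are invented) of why the old peer-wide check starved a -watchquorums observer while the per-masternode check does not:

```cpp
#include <iostream>

// Toy model of the two relevant peer properties (assumed names, not Dash types).
struct ToyPeer {
    bool wants_recsigs;        // peer sent QSENDRECSIGS
    bool verified_masternode;  // peer has a non-null verified ProRegTx hash
};

// Old policy (after dashpay#6994): skip the ISDLOCK inv for ANY peer that asked for recsigs.
static bool old_skips_isdlock_inv(const ToyPeer& p) { return p.wants_recsigs; }

// New policy (this PR): skip only when the recsig-wanting peer is also a verified masternode.
static bool new_skips_isdlock_inv(const ToyPeer& p) { return p.wants_recsigs && p.verified_masternode; }

int main()
{
    const ToyPeer watcher{/*wants_recsigs=*/true, /*verified_masternode=*/false}; // -watchquorums node
    std::cout << std::boolalpha
              << "old policy skips watcher: " << old_skips_isdlock_inv(watcher) << '\n'   // true  -> bug
              << "new policy skips watcher: " << new_skips_isdlock_inv(watcher) << '\n';  // false -> inv still sent
}
```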
What was done?
The ISDLOCK inv is now skipped only when the peer is a verified masternode. The policy was moved from PushInv (called for every inv type) to the three sites that actually relay MSG_ISDLOCK and have a CNode in scope.
How Has This Been Tested?
Ran the functional test interface_dash_zmq.py multiple times.
This PR drops the failure rate from ~50% to 0.
Breaking Changes
N/A
Checklist: