
Conversation

valentinewallace (Contributor) commented Sep 10, 2025

In #4045, as part of supporting sending payments as an often-offline sender to an often-offline recipient, we began including blinded paths in the sender's LSP's RAA message for use as reply paths to the sender's held_htlc_available messages.

Because we hold the per-peer lock corresponding to the Channel while creating this RAA, we can't use our typical approach of calling ChannelManager::get_peers_for_blinded_path to create these blinded paths. That method takes each peer's lock in turn to check for usable channels and onion message feature support, and holding multiple peer state locks at the same time is not permitted due to the potential for deadlocks (see the debug_sync module).

To avoid taking other peer state locks while holding a particular Channel's peer state lock, here we cache the set of peers in the OffersMessageFlow, which is the struct that ultimately creates the blinded paths for the RAA.

For consistency, we also stop passing in peers to every other OffersMessageFlow method.
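The caching approach described above can be sketched as follows. This is a minimal illustration, not LDK's actual types: node ids are plain u64s here, and the real OffersMessageFlow API differs. The key property is that reading the cache only takes the flow's own lock, never another peer's per-peer lock.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Hypothetical simplification of a message-flow struct that caches the set
// of connected peers so blinded-path construction never needs to acquire
// per-peer state locks.
struct OffersMessageFlow {
    // Cached peer node ids, refreshed on peer connect/disconnect events.
    peer_cache: Mutex<HashSet<u64>>,
}

impl OffersMessageFlow {
    fn new() -> Self {
        OffersMessageFlow { peer_cache: Mutex::new(HashSet::new()) }
    }

    // Called from peer connect handlers, outside any per-peer lock.
    fn peer_connected(&self, node_id: u64) {
        self.peer_cache.lock().unwrap().insert(node_id);
    }

    // Called from peer disconnect handlers, outside any per-peer lock.
    fn peer_disconnected(&self, node_id: u64) {
        self.peer_cache.lock().unwrap().remove(&node_id);
    }

    // Safe to call while a channel's peer-state lock is held: it only
    // touches the flow's own cache, never another peer's lock, so the
    // lock-ordering rules enforced by debug_sync are never violated.
    fn peers_for_blinded_path(&self) -> Vec<u64> {
        self.peer_cache.lock().unwrap().iter().copied().collect()
    }
}
```

The cache trades a little staleness (a peer may have just disconnected) for freedom from cross-peer lock acquisition during RAA creation.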

Based on #4045, #4046

ldk-reviews-bot commented Sep 10, 2025

I've assigned @tankyleo as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

codecov bot commented Sep 10, 2025

Codecov Report

❌ Patch coverage is 91.30435% with 12 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.53%. Comparing base (c2e3ae0) to head (0721c9e).

Files with missing lines          | Patch %  | Lines
lightning/src/offers/flow.rs      | 83.67%   | 4 Missing and 4 partials ⚠️
lightning/src/ln/channelmanager.rs | 80.95%  | 1 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4066      +/-   ##
==========================================
- Coverage   88.70%   88.53%   -0.18%     
==========================================
  Files         176      175       -1     
  Lines      132859   132779      -80     
==========================================
- Hits       117854   117555     -299     
- Misses      12309    12621     +312     
+ Partials     2696     2603      -93     
Flag    | Coverage Δ
fuzzing | ?
tests   | 88.53% <91.30%> (-0.02%) ⬇️



@valentinewallace valentinewallace force-pushed the 2025-09-multihop-raa-paths branch from 010b2b9 to 4cc7ff7 Compare September 10, 2025 20:05
@valentinewallace valentinewallace mentioned this pull request Sep 12, 2025
45 tasks
joostjager (Contributor)

I'd hold off on this because it doesn't seem a strict requirement for initial deployment.

TheBlueMatt (Collaborator)

Needs rebase now 🎉

@valentinewallace valentinewallace force-pushed the 2025-09-multihop-raa-paths branch from 4cc7ff7 to 0721c9e Compare September 22, 2025 20:51
@valentinewallace valentinewallace marked this pull request as ready for review September 22, 2025 20:52
@valentinewallace valentinewallace force-pushed the 2025-09-multihop-raa-paths branch from 0721c9e to 27f4930 Compare September 22, 2025 21:20
In upcoming commits, we'll move to creating multi-hop blinded paths during the
process of creating a revoke_and_ack message within the Channel struct.  These
paths will be included in said RAA to be used as reply paths for often-offline
senders' held_htlc_available messages.

Because we hold the per-peer lock corresponding to the Channel while creating
this RAA, we can't use our typical approach of calling
ChannelManager::get_peers_for_blinded_path to create these blinded paths.
That method takes each peer's lock in turn to check for usable channels and
onion message feature support, and holding multiple peer state locks at the
same time is not permitted due to the potential for deadlocks (see the
debug_sync module).

To avoid taking other peer state locks while holding a particular Channel's
peer state lock, here we cache the set of peers in the OffersMessageFlow, which
is the struct that ultimately creates the blinded paths for the RAA.
In the previous commit, we started caching the set of peers in the
OffersMessageFlow that is used when creating blinded paths. Here we start
using that cache and stop passing the peers into every method.
We've been including blinded paths in our RAA messages for async payments
purposes, but had to use 1-hop blinded paths because we can't acquire
additional peer locks while holding a specific channel's peer lock during the
RAA creation process.

In the past few commits we started caching the set of peers in the
OffersMessageFlow, so we can now generate multi-hop blinded paths during the
RAA creation process without needing to acquire additional peer locks.
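The end state these commit messages describe — multi-hop paths when cached peers are available, a 1-hop fallback otherwise — can be illustrated with a toy rule. The u64 node ids, the first-peer pick, and the BlindedReplyPath struct are all illustrative assumptions; real blinded-path construction in LDK is considerably more involved.

```rust
// Toy model of a reply path for a held_htlc_available message.
#[derive(Debug, PartialEq)]
struct BlindedReplyPath {
    // The node an onion message sender first contacts to reach us.
    intro_node: u64,
    // Number of hops in the blinded portion of the path.
    hops: usize,
}

// With cached peers we can pick a peer as the introduction node (multi-hop);
// with an empty cache we fall back to a 1-hop path where we ourselves are
// the introduction node.
fn reply_path(our_node_id: u64, cached_peers: &[u64]) -> BlindedReplyPath {
    match cached_peers.first() {
        // Multi-hop: the peer forwards the message on to us.
        Some(&peer) => BlindedReplyPath { intro_node: peer, hops: 2 },
        // 1-hop fallback: no peer available to act as introduction node.
        None => BlindedReplyPath { intro_node: our_node_id, hops: 1 },
    }
}
```

The multi-hop form is what the cache unlocks: without it, the 1-hop fallback reveals our node as the introduction point.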
@valentinewallace valentinewallace force-pushed the 2025-09-multihop-raa-paths branch from 27f4930 to bddbd9c Compare September 22, 2025 21:27
-	invoice_request.clone(), payment_id, nonce,
-	self.get_peers_for_blinded_path()
-)?;
+self.flow.enqueue_invoice_request(invoice_request.clone(), payment_id, nonce,)?;

Note that a trailing comma before the closing parenthesis is valid Rust syntax, so this line compiles as written. Removing the comma after nonce is a style cleanup for a single-line call rather than a fix:

self.flow.enqueue_invoice_request(invoice_request.clone(), payment_id, nonce)?;
Suggested change
self.flow.enqueue_invoice_request(invoice_request.clone(), payment_id, nonce,)?;
self.flow.enqueue_invoice_request(invoice_request.clone(), payment_id, nonce)?;
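For reference, Rust's grammar accepts trailing commas in call expressions, which a quick standalone sketch confirms:

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn call_with_trailing_comma() -> i32 {
    // The trailing comma after the last argument compiles fine; rustfmt
    // even inserts one in multi-line calls. Dropping it on a single-line
    // call is purely a style preference.
    add(
        1,
        2,
    )
}
```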

Spotted by Diamond

