graph/db: async graph cache population #10065
base: elle-reset-callbacks
Conversation
Summary of Changes
Hello @ellemouton, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances application startup performance by implementing asynchronous loading for the channel graph cache. This change ensures that the system remains available for queries by temporarily serving them from the database while the in-memory cache is being built. A new configuration flag provides flexibility for users to revert to synchronous loading if desired.
Highlights
- Performance Improvement: The channel graph cache now populates asynchronously on startup, allowing the application to become responsive faster. During the asynchronous loading phase, all read queries automatically fall back to the database, ensuring continuous availability.
- Configuration Option: A new configuration option, --db.sync-graph-cache-load, has been introduced. This allows users to opt out of the new asynchronous loading behaviour and force the graph cache to load synchronously, blocking startup until completion.
- Robust Query Handling: Graph query methods have been updated to check whether the in-memory cache is fully loaded. If not, they transparently query the underlying database, preventing incomplete or stale data from being served from the cache (sketched below).
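To make the fallback concrete, here is a minimal sketch of what such a read path might look like. All names here (ChannelGraph, cacheLoaded, FetchNode, fetchNodeFromDB, nodeInfo) are illustrative stand-ins, not lnd's actual API:

```go
package graphdb

import (
	"errors"
	"sync/atomic"
)

// nodeInfo is a stand-in for whatever record the real cache stores.
type nodeInfo struct {
	alias string
}

// ChannelGraph is a toy model: cacheReady flips to true once the
// background population goroutine finishes building the cache.
type ChannelGraph struct {
	cacheReady atomic.Bool
	cache      map[[33]byte]*nodeInfo
}

// cacheLoaded reports whether the in-memory cache is safe to serve reads.
func (g *ChannelGraph) cacheLoaded() bool {
	return g.cacheReady.Load()
}

// FetchNode serves from the cache when it is fully loaded and
// transparently falls back to the database otherwise.
func (g *ChannelGraph) FetchNode(pub [33]byte) (*nodeInfo, error) {
	if g.cacheLoaded() {
		if n, ok := g.cache[pub]; ok {
			return n, nil
		}
		return nil, errors.New("node not found")
	}

	// Cache still populating: read straight from the database.
	return g.fetchNodeFromDB(pub)
}

// fetchNodeFromDB is a placeholder for the real database lookup.
func (g *ChannelGraph) fetchNodeFromDB(pub [33]byte) (*nodeInfo, error) {
	return nil, errors.New("not implemented")
}
```

Because the atomic flag is only set after population completes, readers that observe cacheLoaded() == true also observe a fully built cache.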
Code Review
The code changes introduce asynchronous graph cache population on startup, improving performance. The implementation includes a fallback to database queries during cache loading and a configuration option for opting out. The code appears well-structured and includes a new test case.
Force-pushed from 3d83c73 to 9ac1c58
/gemini review
Code Review
This pull request introduces asynchronous graph cache population, which is a great performance enhancement for node startup. The implementation is clean, with appropriate configuration options to control the new behavior. The tests are thorough, including a new test for the async loading and updates to existing tests to ensure they remain deterministic. The code adheres to the LND Style Guide.
Force-pushed from a814e5a to 944c68c
🚀
```go
	numNodes  int
	chanIndex = make(map[uint64]struct{}, numChannels)
)
// We query the graph for all nodes and channels, and
```
nit: missing new line above?
Instead of letting tests set the graphCache to nil to simulate it not being set, use the WithUseGraphCache helper.
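In test code, that change might look roughly like the following. The WithUseGraphCache name comes from this PR, but its exact signature, and the makeTestGraph/testGraphCfg plumbing, are assumptions for illustration:

```go
package graphdb

import "testing"

// testGraphCfg and ChanGraphOption are hypothetical stand-ins for the
// real option plumbing.
type testGraphCfg struct {
	useGraphCache bool
}

type ChanGraphOption func(*testGraphCfg)

// WithUseGraphCache toggles the in-memory cache explicitly rather than
// having tests mutate the graphCache field directly.
func WithUseGraphCache(use bool) ChanGraphOption {
	return func(c *testGraphCfg) {
		c.useGraphCache = use
	}
}

// makeTestGraph builds a test graph according to the given options.
func makeTestGraph(t *testing.T, opts ...ChanGraphOption) *testGraphCfg {
	t.Helper()

	cfg := &testGraphCfg{}
	for _, opt := range opts {
		opt(cfg)
	}
	// ... construct the actual test graph from cfg ...
	return cfg
}

func TestGraphCacheTraversal(t *testing.T) {
	// Explicitly enable the cache instead of relying on defaults.
	graph := makeTestGraph(t, WithUseGraphCache(true))
	_ = graph
}
```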
Force-pushed from 944c68c to 99833e9
Clean up TestGraphCacheTraversal so that we explicitly enable the graphCache. This removes the need to make explicit calls to the cache. Also remove a duplicate check from assertNodeNotInCache.
Use this to block reading from the cache unless cacheLoaded returns true. This will start being useful once cache population is done asynchronously.
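One possible shape for such a gating primitive is a channel that is closed exactly once when population completes; the names below (cacheGate, markLoaded, cacheLoaded) are assumptions, not the commit's actual code:

```go
package graphdb

// cacheGate gates reads on cache population: cacheLoaded only returns
// true once the population goroutine has called markLoaded.
type cacheGate struct {
	done chan struct{}
}

func newCacheGate() *cacheGate {
	return &cacheGate{done: make(chan struct{})}
}

// markLoaded is called exactly once, by the population goroutine, when
// the cache is fully built.
func (c *cacheGate) markLoaded() {
	close(c.done)
}

// cacheLoaded is a cheap, non-blocking check used on every read path.
func (c *cacheGate) cacheLoaded() bool {
	select {
	case <-c.done:
		return true
	default:
		return false
	}
}
```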
Refactor so that we don't have two layers of indentation later on when we want to spin populateCache off into a goroutine.
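After that refactor, spinning the population off might look like the sketch below; the config field, WaitGroup usage, and method names are illustrative assumptions:

```go
package graphdb

import (
	"log"
	"sync"
)

// config is an illustrative stand-in for the graph's options.
type config struct {
	// syncLoad forces cache population to finish before Start returns.
	syncLoad bool
}

type ChannelGraph struct {
	cfg config
	wg  sync.WaitGroup
}

// populateCache would scan all nodes and channels out of the database;
// as a single flat method it can be called inline or from a goroutine
// without extra nesting.
func (g *ChannelGraph) populateCache() error {
	return nil
}

// Start either blocks on populateCache or spins it off into a
// goroutine, depending on the opt-out flag.
func (g *ChannelGraph) Start() error {
	if g.cfg.syncLoad {
		return g.populateCache()
	}

	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		if err := g.populateCache(); err != nil {
			log.Printf("unable to populate graph cache: %v", err)
		}
	}()

	return nil
}
```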
We also add a functional option to opt out of this behaviour so that any tests that rely on the graph cache continue to work reliably.
Add a new option to opt out of the new asynchronous graph cache loading feature.
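Both opt-outs fit the functional-option pattern already used above for WithUseGraphCache. A hedged sketch, with WithSyncCacheLoad as a hypothetical name for the new option:

```go
package graphdb

// chanGraphOptions gathers the optional toggles; field names are
// illustrative only.
type chanGraphOptions struct {
	useGraphCache bool
	syncCacheLoad bool
}

// ChanGraphOption is a functional option applied at construction time.
type ChanGraphOption func(*chanGraphOptions)

// WithUseGraphCache toggles the in-memory graph cache on or off.
func WithUseGraphCache(use bool) ChanGraphOption {
	return func(o *chanGraphOptions) {
		o.useGraphCache = use
	}
}

// WithSyncCacheLoad (hypothetical name) opts out of asynchronous cache
// loading, blocking startup until population completes.
func WithSyncCacheLoad(sync bool) ChanGraphOption {
	return func(o *chanGraphOptions) {
		o.syncCacheLoad = sync
	}
}
```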
Force-pushed from 99833e9 to c5372c4
If there are writes to the underlying graph during cache population, are these queued to be written into the cache, or will these entries just experience a cache miss when a read happens later?
Let the channel graph cache be populated asynchronously on startup. While the cache is being populated, the graph is still available for queries, but all read queries will be served from the database until the cache is fully populated. This new behaviour can be opted out of via the new --db.sync-graph-cache-load option.

Fixes #6187
Replaces #8919
Depends on #10068
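For reference, opting back into the old blocking behaviour would presumably look like the following invocation; the flag name comes from this PR, but its exact form (bare boolean flag) is an assumption:

```shell
# Force synchronous graph cache loading: block startup until the
# in-memory cache is fully populated.
lnd --db.sync-graph-cache-load
```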