Optimize computations via caching #143
Conversation
Pull Request Overview
This PR optimizes the CRBA, RNEA, and ABA computations by introducing caching mechanisms and precomputing static tree data. The key optimization is the addition of a _prepare_tree_cache() method that pre-computes tree structure information at initialization time, avoiding repeated Python lookups during algorithm execution.
- Introduces `_prepare_tree_cache()` to precompute tree indices, parent relationships, joint indices, motion subspaces, and spatial inertias (see the sketch after this list)
- Refactors CRBA, RNEA, and ABA algorithms to use cached data structures instead of repeatedly traversing the model tree
- Changes the default dtype from `torch.float64` to `torch.float32` in the PyTorch implementations
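As a rough illustration of the caching idea (a standalone sketch with assumed names, not the PR's actual `_prepare_tree_cache()` code), the one-time precompute step resolves the tree into flat index arrays:

```python
from dataclasses import dataclass
from typing import Optional

# Standalone sketch of the caching pattern (assumed names, not this PR's
# code): resolve the kinematic tree into flat index arrays once, so the
# per-call dynamics loops index into lists instead of chasing references.

@dataclass
class TreeNode:  # stand-in for adam's Node type
    name: str
    parent: Optional[str]  # parent node name, None for the root

def prepare_tree_cache(nodes: list[TreeNode]):
    order = [n.name for n in nodes]                   # fixed topological order
    index = {name: i for i, name in enumerate(order)}
    parent_idx = [index[n.parent] if n.parent else -1 for n in nodes]
    return order, parent_idx

# Example: a three-link chain base -> link1 -> link2
tree = [TreeNode("base", None), TreeNode("link1", "base"), TreeNode("link2", "link1")]
order, parent_idx = prepare_tree_cache(tree)
print(order)       # ['base', 'link1', 'link2']
print(parent_idx)  # [-1, 0, 1]
```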
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/adam/core/rbd_algorithms.py | Adds caching infrastructure and refactors the CRBA/RNEA/ABA algorithms to use pre-computed tree data; adds the Joint import |
| src/adam/pytorch/computations.py | Changes default dtype parameter from torch.float64 to torch.float32 |
| src/adam/pytorch/computation_batch.py | Changes default dtype parameter from torch.float64 to torch.float32 |
| tests/test_pytorch.py | Explicitly sets dtype=torch.float64 in test setup |
| tests/test_pytorch_batch.py | Explicitly sets dtype=torch.float64 in test setup |
src/adam/pytorch/computations.py (Outdated)

```diff
         torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
     ),
-    dtypes: torch.dtype = torch.float64,
+    dtypes: torch.dtype = torch.float32,
```
Copilot AI commented on Nov 18, 2025
The default dtype has been changed from torch.float64 to torch.float32, but the tests explicitly set dtype=torch.float64. This inconsistency suggests that either the tests need to be updated to test with the new default (float32), or the default should remain float64. Changing precision defaults can affect numerical accuracy in robotics computations, especially for inverse dynamics and mass matrix calculations. If the change is intentional for performance reasons, it should be documented why float32 is sufficient for these computations.
Suggested change:

```diff
-    dtypes: torch.dtype = torch.float32,
+    dtypes: torch.dtype = torch.float64,
```
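On the precision point, a standalone snippet (not from this PR) shows why the default matters: long reductions, like the sums inside RNEA/CRBA loops, accumulate noticeably more rounding error in float32 than in float64.

```python
import torch

# Not from the PR: illustrates float32 vs. float64 accumulation error.
n = 1_000_000
x64 = torch.full((n,), 0.1, dtype=torch.float64)
x32 = x64.to(torch.float32)

exact = 0.1 * n
print("float64 sum error:", abs(x64.sum().item() - exact))
print("float32 sum error:", abs(x32.sum().item() - exact))  # typically orders of magnitude larger
```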
I'd like to have increased precision for the tests.
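A generic sketch of the resulting test pattern (assumed, not adam's actual test code): compute in both precisions and compare against a float64 reference with dtype-appropriate tolerances.

```python
import torch

# Assumed pattern, not adam's tests: solve a small SPD system (shaped like a
# mass-matrix solve) in each dtype and compare with a loose float32 tolerance.
def solve(dtype):
    g = torch.Generator().manual_seed(0)
    R = torch.rand(6, 6, generator=g).to(dtype)
    M = R @ R.T + 6 * torch.eye(6, dtype=dtype)   # well-conditioned SPD matrix
    return torch.linalg.solve(M, torch.ones(6, dtype=dtype))

ref = solve(torch.float64)
out = solve(torch.float32)
assert torch.allclose(out.to(torch.float64), ref, atol=1e-5)
```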
src/adam/pytorch/computation_batch.py (Outdated)

```diff
         torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
     ),
-    dtype: torch.dtype = torch.float64,
+    dtype: torch.dtype = torch.float32,
```
Copilot AI commented on Nov 18, 2025
The default dtype has been changed from torch.float64 to torch.float32 here as well; the same concern as in the comment above applies.

Suggested change:

```diff
-    dtype: torch.dtype = torch.float32,
+    dtype: torch.dtype = torch.float64,
```
I'd like to have increased precision for the tests.
src/adam/core/rbd_algorithms.py

```diff
 from adam.core.constants import Representations
 from adam.core.spatial_math import ArrayLike, SpatialMath
-from adam.model import Model, Node
+from adam.model import Model, Node, Joint
```
Copilot AI commented on Nov 18, 2025
Import of 'Node' is not used.
Suggested change:

```diff
-from adam.model import Model, Node, Joint
+from adam.model import Model, Joint
```
Thanks @traversaro! CI's happy. Merging!
This PR aims to optimize the CRBA, RNEA, and ABA computations by caching and precomputing quantities.
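As a rough picture of how the cached arrays pay off (an illustrative sketch with assumed names, not this PR's diff), a leaves-to-root pass such as CRBA's composite-inertia accumulation reduces to a plain reversed loop over integer indices:

```python
import numpy as np

# Illustrative only: consumes precomputed parent indices and per-link 6x6
# spatial inertias, with no tree traversal or name lookups per call.
def accumulate_composite_inertias(parent_idx, link_inertias):
    Ic = [I.copy() for I in link_inertias]       # start from each link's inertia
    for i in reversed(range(len(parent_idx))):   # children visited before parents
        p = parent_idx[i]
        if p >= 0:                               # -1 marks the root
            Ic[p] += Ic[i]                       # fold child into parent
            # (a full CRBA would also transform Ic[i] into the parent frame)
    return Ic

# Tiny example: three-body chain with identity inertias, parents [-1, 0, 1]
inertias = [np.eye(6) for _ in range(3)]
print(accumulate_composite_inertias([-1, 0, 1], inertias)[0][0, 0])  # 3.0
```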
Some benchmarking: [benchmark results attached in the original PR as images, labeled "Now" and "Previous"]
📚 Documentation preview 📚: https://adam-docs--143.org.readthedocs.build/en/143/