Conversation

@Giulero (Collaborator) commented Nov 18, 2025

This PR optimizes the CRBA, RNEA, and ABA computations by caching and precomputing quantities that are fixed for a given model.

Some benchmark numbers:

Now

URDF: stickbot.urdf | NDoF: 34 | batch: 4096 | dtype: torch.float32 | device: cuda | rep: mixed
RNEA: 112.229 ms/iter
ABA: 149.751 ms/iter
CRBA: 94.922 ms/iter

Previous

URDF: stickbot.urdf | NDoF: 34 | batch: 4096 | dtype: torch.float32 | device: cuda | rep: mixed
RNEA: 145.100 ms/iter
ABA: 190.172 ms/iter
CRBA: 143.114 ms/iter
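For context, per-iteration numbers like the ones above are usually gathered with a warm-up phase followed by an averaged timed loop. A minimal CPU-only sketch (hypothetical `benchmark` helper and stand-in workload; the actual benchmark would call the adam algorithms and bracket the timed region with `torch.cuda.synchronize()` on CUDA):

```python
import time

def benchmark(fn, warmup=3, reps=10):
    """Return average wall-clock milliseconds per call of fn()."""
    for _ in range(warmup):  # warm-up: caches, allocator, JIT, etc.
        fn()
    start = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - start) / reps * 1e3  # ms/iter

# Stand-in workload; a real run would time e.g. the batched RNEA call.
ms_per_iter = benchmark(lambda: sum(i * i for i in range(10_000)))
```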

📚 Documentation preview 📚: https://adam-docs--143.org.readthedocs.build/en/143/

Copilot AI left a comment

Pull Request Overview

This PR optimizes the CRBA, RNEA, and ABA computations by introducing caching mechanisms and precomputing static tree data. The key optimization is the addition of a _prepare_tree_cache() method that pre-computes tree structure information at initialization time, avoiding repeated Python lookups during algorithm execution.

  • Introduces _prepare_tree_cache() to precompute tree indices, parent relationships, joint indices, motion subspaces, and spatial inertias
  • Refactors CRBA, RNEA, and ABA algorithms to use cached data structures instead of repeatedly traversing the model tree
  • Changes default dtype from torch.float64 to torch.float32 in PyTorch implementations
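The caching pattern described above can be sketched as follows (hypothetical names and simplified data; the real `_prepare_tree_cache()` in `src/adam/core/rbd_algorithms.py` operates on adam's `Model`/`Joint` types and also caches motion subspaces and spatial inertias):

```python
class RBDAlgorithms:
    """Minimal stand-in showing the tree-cache pattern."""

    def __init__(self, parents, joint_indices):
        # Precompute static tree data once, at construction time, so the
        # per-call sweeps below do no repeated Python-level tree lookups.
        self._cache = self._prepare_tree_cache(parents, joint_indices)

    def _prepare_tree_cache(self, parents, joint_indices):
        # Topologically ordered node ids, the parent of each node, and the
        # column index of each node's joint in q -- all fixed for a model.
        return {
            "order": list(range(len(parents))),
            "parent": list(parents),
            "joint_idx": list(joint_indices),
        }

    def forward_pass(self, q):
        # E.g. the outward sweep of RNEA: iterate the cached arrays instead
        # of walking the model tree and resolving names on every call.
        cache = self._cache
        visited = []
        for i in cache["order"]:
            j = cache["joint_idx"][i]
            qi = q[j] if j >= 0 else None  # root node has no joint column
            visited.append((i, cache["parent"][i], qi))
        return visited
```

For a 3-link chain, `parents = [-1, 0, 1]` and `joint_indices = [-1, 0, 1]` would describe a root with no joint column followed by two actuated links.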

Reviewed Changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

File | Description
src/adam/core/rbd_algorithms.py | Adds caching infrastructure, refactors the CRBA/RNEA/ABA algorithms to use precomputed tree data, and adds the Joint import
src/adam/pytorch/computations.py | Changes the default dtype parameter from torch.float64 to torch.float32
src/adam/pytorch/computation_batch.py | Changes the default dtype parameter from torch.float64 to torch.float32
tests/test_pytorch.py | Explicitly sets dtype=torch.float64 in the test setup
tests/test_pytorch_batch.py | Explicitly sets dtype=torch.float64 in the test setup


  torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
  ),
- dtypes: torch.dtype = torch.float64,
+ dtypes: torch.dtype = torch.float32,
Copilot AI Nov 18, 2025

The default dtype has been changed from torch.float64 to torch.float32, but the tests explicitly set dtype=torch.float64. This inconsistency suggests that either the tests need to be updated to test with the new default (float32), or the default should remain float64. Changing precision defaults can affect numerical accuracy in robotics computations, especially for inverse dynamics and mass matrix calculations. If the change is intentional for performance reasons, it should be documented why float32 is sufficient for these computations.

Suggested change
- dtypes: torch.dtype = torch.float32,
+ dtypes: torch.dtype = torch.float64,

@Giulero (Collaborator, Author)

I'd like to have increased precision for the tests.
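The precision concern is easy to illustrate: float32 carries roughly 7 decimal digits (machine epsilon about 1.19e-7) versus roughly 16 for float64, which matters when tests compare dynamics quantities against tight tolerances. A torch-free sketch, using struct to emulate float32 rounding (illustrative only, not part of this PR):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (float64) to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Machine epsilon: the smallest eps such that 1.0 + eps != 1.0
# in each precision.
eps32 = 1.0
while to_f32(1.0 + to_f32(eps32 / 2)) != 1.0:
    eps32 = to_f32(eps32 / 2)

eps64 = 1.0
while 1.0 + eps64 / 2 != 1.0:
    eps64 /= 2

print(eps32)  # 2**-23 ≈ 1.1920929e-07
print(eps64)  # 2**-52 ≈ 2.220446e-16
```

So keeping the tests at float64 while defaulting the library to float32 trades test strictness for runtime performance.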

  torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
  ),
- dtype: torch.dtype = torch.float64,
+ dtype: torch.dtype = torch.float32,
Copilot AI Nov 18, 2025

The default dtype has been changed from torch.float64 to torch.float32, but the tests explicitly set dtype=torch.float64. This inconsistency suggests that either the tests need to be updated to test with the new default (float32), or the default should remain float64. Changing precision defaults can affect numerical accuracy in robotics computations, especially for inverse dynamics and mass matrix calculations. If the change is intentional for performance reasons, it should be documented why float32 is sufficient for these computations.

Suggested change
- dtype: torch.dtype = torch.float32,
+ dtype: torch.dtype = torch.float64,

@Giulero (Collaborator, Author)

I'd like to have increased precision for the tests.

  from adam.core.constants import Representations
  from adam.core.spatial_math import ArrayLike, SpatialMath
- from adam.model import Model, Node
+ from adam.model import Model, Node, Joint
Copilot AI Nov 18, 2025

Import of 'Node' is not used.

Suggested change
- from adam.model import Model, Node, Joint
+ from adam.model import Model, Joint

@Giulero Giulero requested a review from traversaro November 18, 2025 09:52
@Giulero (Collaborator, Author) commented Nov 21, 2025

Thanks @traversaro!

CI's happy. Merging!

@Giulero Giulero merged commit facaafa into main Nov 21, 2025
13 checks passed
@Giulero Giulero deleted the optimize-algo branch November 21, 2025 09:52