From 6dba00a3cfed95b4a900a951b5401bdbe7da2787 Mon Sep 17 00:00:00 2001
From: symforce-org

A thin wrapper around Eigen::LDLT for use in nonlinear solver classes like sym::LevenbergMarquardtSolver.

Public Types
- Decompose A into A = P^T * L * D * L^T * P and store internally.
- Returns x for A x = b, where x and b are dense.
- Solves in place for x in A x = b, where x and b are dense.
- Defined to satisfy the interface. No analysis is needed, so this is a no-op.

Linearizer
- Construct a Linearizer from factors and optional keys.
- Returns whether Relinearize() has already been called once. This matters because many calculations that need to happen only on the first linearization are cached for subsequent use. Also, if Relinearize() has already been called, the matrices in the linearization are expected to already be allocated to the right size.
- The keys to optimize, in the order they appear in the state vector (i.e., in rhs).
- A map from all optimized keys in the problem to an index entry for the corresponding optimized values (where the offset is into the problem state vector, i.e., the rhs of a linearization). Only read if IsInitialized() returns true (i.e., after the first call to Relinearize).
- Update the linearization at a new evaluation point. This is more efficient than reconstructing this object repeatedly. On the first call, it will allocate memory and perform the analysis needed for efficient repeated relinearization; the matrices in the linearization will be allocated and sized correctly. On subsequent calls, the matrices of the linearization are expected to already be allocated and sized correctly. TODO(aaron): This should be const except that it can initialize the object.

A thin wrapper around Eigen's Sparse Solver interface for use in nonlinear solver classes like sym::LevenbergMarquardtSolver. Can be specialized with anything satisfying the SparseSolver concept. For example, can be used like:
- Factorize A and store internally.
- Returns x for A x = b, where x and b are dense.
- Solves in place for x in A x = b, where x and b are dense. Eigen solvers cannot actually solve in place, so this solves, then copies back into the argument.
- Returns the lower triangular matrix L such that P^T * L * D * L^T * P = A, where A is the last matrix to have been factorized with this->Factorize, D is a diagonal matrix with positive diagonal entries, and P is a permutation matrix.
- Returns the diagonal matrix D such that P^T * L * D * L^T * P = A, where A is the last matrix to have been factorized with this->Factorize, L is lower triangular with unit diagonal, and P is a permutation matrix.
- Defined to satisfy the interface. No analysis is needed, so this is a no-op.

A residual term for optimization. Created from a function and a set of Keys that act as inputs. Given a Values as an evaluation point, generates a linear approximation to the residual function. Contains templated helper constructors to simplify creation.

Public Types
- Create directly from a (dense/sparse) hessian functor. This is the lowest-level constructor.
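The factorization A = P^T * L * D * L^T * P described above can be illustrated with a small self-contained sketch. This is plain NumPy with no pivoting (so P = I), and the helper names are hypothetical, not part of the Eigen or sym:: APIs:

```python
import numpy as np

def ldlt(A):
    """Unpivoted LDL^T factorization of a symmetric positive-definite A.

    Returns (L, d) with L unit lower triangular and d the diagonal of D,
    so that L @ np.diag(d) @ L.T reproduces A. A sketch only: the
    Eigen-backed solvers also apply a permutation P for stability.
    """
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - L[j, :j] ** 2 @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] * L[j, :j] @ d[:j]) / d[j]
    return L, d

def ldlt_solve(L, d, b):
    # Solve A x = b via the three triangular/diagonal systems:
    # L y = b, D z = y, L^T x = z.
    y = np.linalg.solve(L, b)
    z = y / d
    return np.linalg.solve(L.T, z)

A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([1.0, 1.0])
L, d = ldlt(A)
x = ldlt_solve(L, d, b)
```

The same decomposition is what the nonlinear solvers reuse across iterations: the expensive symbolic analysis happens once, and only the numeric factorization is repeated.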
- Create from a function that computes the (dense/sparse) jacobian. The hessian will be computed using the Gauss-Newton approximation:
- Evaluate the factor at the given linearization point and output just the numerical values of the residual.
- Evaluate the factor at the given linearization point and output just the numerical values of the residual and jacobian. The dense overload can only be called if IsSparse() is false, and the sparse overload only if IsSparse() is true; otherwise, it will throw.
- Evaluate the factor at the given linearization point and output a LinearizedDenseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand side. The dense overloads can only be called if IsSparse() is false, and the sparse overload only if IsSparse() is true; otherwise, it will throw.

Each of these takes: maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as

- Get the optimized keys for this factor.
- Get all keys required to evaluate this factor.
- Create from a function that computes the jacobian; the hessian will be computed using the Gauss-Newton approximation. This version handles a variety of functors that take in individual input arguments rather than a Values object. If generating this factor from a single Python function, you probably want to use the Factor::Hessian constructor instead (it'll likely result in faster linearization).
- Create from a functor that computes the full linearization, but takes in individual input arguments rather than a Values object. This should be used in cases where computing J^T J using a matrix multiplication is slower than evaluating J^T J symbolically; for instance, if the jacobian is sparse or J^T J has structure so that CSE is very effective.

Subclass of Optimizer for using Graduated Non-Convexity (GNC). Assumes the convexity of the cost function is controlled by a hyperparameter mu. When mu == 0 the cost function should be convex, and as mu goes to 1 the cost function should smoothly transition to a robust cost.

Public Types
- Constructor that copies in factors and keys.

ImuFactor: Is a functor so as to store the preintegrated IMU measurements between two times. The residual computed is the local-coordinate difference between the final state (pose and velocity) and the state you would expect given the initial state and the IMU measurements. More generally, the gravity argument should yield the true acceleration when added to the accelerometer measurement (after the measurement has been converted to the world frame). Based heavily on the on-manifold ImuFactor found in GTSAM and on the paper:

Public Types
- Construct an ImuFactor from a preintegrator.
- Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
- Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function. Convenience method to avoid manually specifying which arguments are optimized.
- Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
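The Gauss-Newton approximation referenced by the jacobian constructors above can be sketched numerically: the hessian of the least-squares cost 0.5 * ||r(x)||^2 is approximated as J^T J, with gradient J^T r. This is a conceptual sketch, not the sym::Factor API, and the helper names are hypothetical:

```python
import numpy as np

def gauss_newton_linearize(residual, jacobian, x):
    """Linearize a least-squares cost 0.5 * ||r(x)||^2 at x.

    The hessian is approximated as J^T J (second-order terms dropped),
    and the right-hand side is the gradient J^T r.
    """
    r = residual(x)
    J = jacobian(x)
    hessian = J.T @ J  # Gauss-Newton approximation
    rhs = J.T @ r      # gradient of 0.5 * r^T r
    return r, J, hessian, rhs

# Toy residual: fit a scalar x to measurements m, r_i(x) = x - m_i.
m = np.array([1.0, 2.0, 3.0])
residual = lambda x: x - m
jacobian = lambda x: np.ones((3, 1))
r, J, H, rhs = gauss_newton_linearize(residual, jacobian, np.array([0.0]))
# A full Gauss-Newton step solves H dx = -rhs, landing on the mean of m.
dx = np.linalg.solve(H, -rhs)
```

When J^T J has exploitable structure, evaluating it symbolically (as the full-linearization constructor allows) can beat the explicit matrix product shown here.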
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.

Class to on-manifold preintegrate IMU measurements for usage in a SymForce optimization problem. For usage, see the doc-string of ImuFactor in imu_factor.h.

Public Types
- Initialize with given accel_bias and gyro_bias.
- Integrate a new measurement. Postcondition:

A factor for using on-manifold IMU preintegration in a SymForce optimization problem, with the ability to optimize the gravity vector direction. For full documentation, see ImuFactor.

Public Types
- Construct an ImuFactor from a preintegrator.
- Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
- Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function. Convenience method to avoid manually specifying which arguments are optimized.
- Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements. Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.

A factor for using on-manifold IMU preintegration in a SymForce optimization problem, with the ability to optimize the gravity vector. For full documentation, see ImuFactor. Its public types and functions mirror those of the gravity-direction factor above.

Contains a letter plus an integral subscript and superscript. Can construct with a letter, a letter + sub, or a letter + sub + super, but not letter + super. TODO(hayk): Consider an abstraction where Key contains a type enum.

Public Types
- Return true if a is LESS than b, in dictionary order of the tuple (letter, sub, super).

Fast Levenberg-Marquardt solver for nonlinear least squares problems specified by a linearization function. Supports on-manifold optimization and sparse solving, and attempts to minimize allocations after the first iteration. This assumes the problem structure is the same for the lifetime of the object - if the problem structure changes, create a new LevenbergMarquardtSolver. TODO(aaron): Analyze what repeated allocations we do have, and get rid of them if possible. Not thread safe! Create one per thread. Example usage:

Class for evaluating multiple Factors at the linearization point given by a Values. Stores the original Factors as well as the LinearizedFactors, and provides tools for aggregating keys and building a large jacobian / hessian for optimization. For efficiency, prefer calling Relinearize() instead of re-constructing this object!
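A single damped step of the Levenberg-Marquardt scheme described above can be sketched as follows. This is a conceptual sketch of the update rule only, not the sym::LevenbergMarquardtSolver API; the real solver also adapts the damping, accepts or rejects steps based on cost reduction, and retracts on-manifold rather than adding:

```python
import numpy as np

def lm_step(residual, jacobian, x, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam * I) dx = -J^T r."""
    r = residual(x)
    J = jacobian(x)
    H = J.T @ J + lam * np.eye(x.size)  # damped Gauss-Newton hessian
    dx = np.linalg.solve(H, -J.T @ r)
    return x + dx  # Euclidean update; on-manifold solvers retract instead

# Toy problem: r(x) = x - m for measurements m; the minimum is the mean.
m = np.array([1.0, 2.0, 3.0])
residual = lambda x: x - m
jacobian = lambda x: np.ones((3, 1))
x = np.array([0.0])
for _ in range(20):
    x = lm_step(residual, jacobian, x, lam=1e-3)
```

Because the problem structure (sparsity of J, sizes of H and dx) is fixed across iterations, a real implementation can factorize with a cached analysis and avoid reallocating, which is exactly what the allocation-minimizing design above is after.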
Public Types
- Construct a Linearizer from factors and optional keys.
- Update linearization at a new evaluation point. This is more efficient than reconstructing this object repeatedly. On the first call, it will allocate memory and perform the analysis needed for efficient repeated relinearization. TODO(aaron): This should be const except that it can initialize the object.

Class for optimizing a nonlinear least-squares problem specified as a list of Factors. For efficient use, create once and call Optimize() multiple times with different initial guesses, as long as the factors remain constant and the structure of the Values is identical. Not thread safe! Create one per thread. Example usage:

Public Functions
- Base constructor.
- Constructor that copies in factors and keys, with arguments for the nonlinear solver.
- Optimize the given values in-place. Additional overloads take the stats as an argument and store into there; this allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that's important.
  num_iterations – If < 0 (the default), uses the number of iterations specified by the params at construction.
  populate_best_linearization – If true, the linearization at the best values will be filled out in the stats.
  stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize()).
- Linearize the problem around the given values.
- Get covariances for each optimized key at the given linearization. Will reuse entries in covariances_by_key, allocating new entries so that the result contains exactly the set of keys optimized by this Optimizer. May not be called before either Optimize() or Linearize() has been called.
- Get covariances for the given subset of keys at the given linearization. This version is potentially much more efficient than computing the covariances for all keys in the problem. Currently requires that
- Get the full problem covariance at the given linearization. Unlike ComputeCovariance and ComputeAllCovariances, this includes the off-diagonal blocks, i.e. the cross-covariances between different keys. The ordering of entries here is the same as the ordering of the keys in the linearization, which can be accessed via
  May not be called before either Optimize() or Linearize() has been called.
  covariance – A matrix that will be filled out with the full problem covariance.
- Get the optimized keys.
- Get the factors.
- Get the Linearizer object.

Sparse Cholesky solver functions:
- Default constructor.
- Construct with a representative sparse matrix.
- Compute an efficient permutation matrix (ordering) for A and store internally.
- Compute the symbolic sparsity pattern for A and store internally.
- Decompose A into A = L * D * L^T and store internally. A must have the same sparsity as the matrix used for construction. Returns true if the factorization was successful, and false otherwise. NOTE(brad): Currently always returns true.
- Returns x for A x = b, where x and b are dense.
- Solves in place for x in A x = b, where x and b are dense.

Values
- Expose the correct LCM type (values_t or valuesf_t).
- Construct from a list of other Values objects.
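The covariance computations described above amount to inverting (blocks of) the Gauss-Newton hessian at the linearization point. A minimal dense sketch, assuming a unit-weight least-squares problem with scalar keys (not the sym::Optimizer API, which works with sparse matrices and per-key blocks):

```python
import numpy as np

# For a least-squares problem with jacobian J at the solution, the
# Gauss-Newton hessian is J^T J and the full problem covariance is
# its inverse, including cross-covariances in the off-diagonal blocks.
J = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
    [1.0, 1.0],
])
H = J.T @ J                    # hessian approximation
covariance = np.linalg.inv(H)  # full problem covariance
# Per-key covariances are the diagonal blocks of the full covariance;
# for scalar keys these are just the diagonal entries.
per_key = np.diag(covariance)
```

Computing covariances for a subset of keys can avoid forming this full inverse, which is why the subset overload above can be much cheaper.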
The order of Keys is preserved by the order of the Values in the initializer list. NOTE(alvin): the others Values should not contain overlapping Keys.

- Construct from serialized form.
- Retrieve a value by key.
- Add or update a value by key. Returns true if added, false if updated. Overloads for non-Eigen and Eigen types.
- Add a value by key; throws runtime_error if the key already exists. This is useful when setting up initial values for a problem.
- Update or add keys to this Values based on other Values of different structure. NOTE(alvin): this is less efficient than the Update methods below if index objects are created and cached, since it performs a map lookup for each key of the index.
- Get all keys. NOTE(hayk): If we changed key storage to a sorted vector this could automatically be maintained and it would be more embedded friendly, but At(key) would become O(N) for linear search.
- Expose the map type to allow iteration.
- Cast to another Scalar type (returns a copy).
- Remove the given key. Only removes the index entry, does not change the data array. Returns true if removed, false if already not present. Call Cleanup() to re-pack the data array.
- Repack the data array to get rid of empty space from removed keys. If regularly removing keys, it's up to the user to call this appropriately to avoid storage growth. Returns the number of Scalar elements cleaned up from the data array. It will INVALIDATE all indices, offset increments, and pointers; re-create an index with CreateIndex().
- Create an index from the given ordered subset of keys. This object can then be used for repeated efficient operations on that subset of keys. If you want an index of all the keys, call
  An index will be INVALIDATED if the following happens: 1) Remove() is called with a contained key, or RemoveAll() is called; 2) Cleanup() is called to re-pack the data array.
  NOTE(hayk): We could also add a simple UpdateIndex(&index) method, since the offset is the only thing that needs to get updated after repacking.
- Retrieve an index entry by key. This performs a map lookup. An index entry will be INVALIDATED if the following happens: 1) Remove() is called with the indexed key, or RemoveAll() is called; 2) Cleanup() is called to re-pack the data array.
- Retrieve an index entry by key, or the empty optional if the key is not present. This performs a map lookup, with the same invalidation rules as above.
- Retrieve a value by index entry. This avoids a map lookup compared to At(key).
- Update a value by index entry with no map lookup (compared to Set(key)). This does NOT add new values and assumes the key exists already. Overloads for non-Eigen and Eigen types.
- Efficiently update the keys given by this index from other into this. This purely copies slices of the data arrays; the index MUST be valid for both objects!
- Efficiently update the keys from a different structured Values, given by
- Perform a retraction from an update vector.
- Express this Values in the local coordinates of the others Values, i.e., this \ominus others.
  others – The other Values that the local coordinates are relative to.
  index – Ordered list of keys to include (MUST be valid for both this and the others Values).
  epsilon – Small constant to avoid singularities (do not use zero).
- Serialize to LCM.

Format an assertion failure. Format an assertion failure with a custom message, and optional additional things to format into the message.
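The retraction and local-coordinates operations above are inverses of each other; for plain vector-valued entries they reduce to addition and subtraction. A conceptual sketch assuming a purely Euclidean state (not the sym::Values API; on-manifold types such as rotations replace +/- with exp/log maps):

```python
import numpy as np

# Conceptually, local_coordinates(retract(x, dx), x) recovers dx:
# retraction applies a tangent-space update, and local coordinates
# ("this ominus others") measure the update between two states.
def retract(x, dx):
    return x + dx

def local_coordinates(a, b):
    # a ominus b: the update vector that takes b to a
    return a - b

x = np.array([1.0, 2.0, 3.0])
dx = np.array([0.1, -0.2, 0.3])
y = retract(x, dx)
recovered = local_coordinates(y, x)
```

The epsilon parameter in the real API guards the log-map singularities of manifold types; it has no effect in this Euclidean sketch.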
Evaluates the linearizations of the factors at values into linearization, caching all values needed for relinearization along the way. Specifically: Calculates state_index_

Write the factor graph in .dot format to the given stream. The factor graph is represented as

Source code for symforce
import os
import sys
import typing as T
-import warnings
+from dataclasses import dataclass
from types import ModuleType
# -------------------------------------------------------------------------------------------------
@@ -875,11 +875,11 @@
Source code for symforce
return symengine
-_symbolic_api: T.Optional[str] = None
+_symbolic_api: T.Optional[T.Literal["sympy", "symengine"]] = None
_have_imported_symbolic = False
-def _set_symbolic_api(sympy_module: str) -> None:
+def _set_symbolic_api(sympy_module: T.Literal["sympy", "symengine"]) -> None:
# Set this as the default symbolic API
global _symbolic_api # pylint: disable=global-statement
_symbolic_api = sympy_module
@@ -949,16 +949,15 @@
Source code for symforce
logger.debug("No SYMFORCE_SYMBOLIC_API set, found and using symengine.")
set_symbolic_api("symengine")
except ImportError:
- logger.debug("No SYMFORCE_SYMBOLIC_API set, no symengine found, using sympy.")
- set_symbolic_api("sympy")
+ logger.debug("No SYMFORCE_SYMBOLIC_API set, no symengine found. Will use sympy.")
+ pass
-
Source code for symforce
pass
Source code for symforce
_epsilon = new_epsilon
+
Source code for symforce
Args:
name: The name of the symbol for the new default epsilon to use
"""
- if get_symbolic_api() == "sympy":
- import sympy
- elif get_symbolic_api() == "symengine":
- sympy = _find_symengine()
- else:
- raise InvalidSymbolicApiError(get_symbolic_api())
-
- _set_epsilon(sympy.Symbol(name))
Source code for symforce.internal.symbolic
from symforce import logger
from symforce import typing as T
+if symforce._symbolic_api is None: # pylint: disable=protected-access
+ import textwrap
+
+ logger.warning(
+ textwrap.dedent(
+ """
+ No SYMFORCE_SYMBOLIC_API set, no symengine found, using sympy.
+
+ For best performance during symbolic manipulation and code generation, use the symengine
+ symbolic API, which should come installed with SymForce. If you've installed SymForce
+ without intentionally disabling SymEngine, and you are seeing this warning, this is a
+ bug - please submit an issue: https://github.com/symforce-org/symforce/issues.
+
+ To silence this warning, call `symforce.set_symbolic_api("sympy")` or set
+ `SYMFORCE_SYMBOLIC_API=sympy` in your environment before importing `symforce.symbolic`.
+ """
+ )
+ )
+
# See `symforce/__init__.py` for more information, this is used to check whether things that this
# module depends on are modified after importing
symforce._have_imported_symbolic = True # pylint: disable=protected-access
@@ -982,42 +1001,45 @@
# --------------------------------------------------------------------------------
-# Default epsilon
+# Create scopes
# --------------------------------------------------------------------------------
-from symforce import numeric_epsilon
+
raise symforce.InvalidSymbolicApiError(sympy.__package__)
+# --------------------------------------------------------------------------------
+# Default epsilon
+# --------------------------------------------------------------------------------
+
+
+from symforce import numeric_epsilon
+
+
+
)
else:
raise symforce.InvalidSymbolicApiError(sympy.__package__)
-
-# --------------------------------------------------------------------------------
-# Create scopes
-# --------------------------------------------------------------------------------
-
-
-
Source code for symforce.opt.factor
output_dir: Where the generated linearization function will be output.
namespace: Namespace of the generated linearization function.
sparse_linearization: Whether the generated linearization function should use sparse
- matricies for the jacobian and hessian approximation
+ matrices for the jacobian and hessian approximation
Returns:
Dict containing locations where the code was generated (e.g. "output_dir" and
diff --git a/_modules/symforce/opt/optimizer.html b/_modules/symforce/opt/optimizer.html
index 228f61b8..bb9ae9a9 100644
--- a/_modules/symforce/opt/optimizer.html
+++ b/_modules/symforce/opt/optimizer.html
@@ -963,6 +963,8 @@
Source code for symforce.opt.optimizer
set(optimized_keys)
), f"Duplicates in optimized keys: {optimized_keys}"
+ optimized_keys_set = set(self.optimized_keys)
+
numeric_factors = []
for factor in factors:
if isinstance(factor, Factor):
@@ -970,9 +972,11 @@
raise ValueError(
"You must specify `optimized_keys` when passing symbolic factors."
)
- # We compute the linearization in the same order as `optimized_keys`
- # so that e.g. columns of the generated jacobians are in the same order
- factor_opt_keys = [opt_key for opt_key in optimized_keys if opt_key in factor.keys]
+ # We compute the linearization in the same order as `factor.keys` so that using the
+ # same factor with different problem keys will produce the same generated function
+ factor_opt_keys = [
+ opt_key for opt_key in factor.keys if opt_key in optimized_keys_set
+ ]
if not factor_opt_keys:
raise ValueError(
f"Factor {factor.name} has no arguments (keys: {factor.keys}) in "
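The change above orders each factor's optimized keys by `factor.keys` rather than by the problem's `optimized_keys`, so reusing one factor with different problem key sets yields the same generated function. The filtering can be sketched standalone (hypothetical helper name):

```python
def factor_optimized_keys(factor_keys, optimized_keys):
    """Keep the factor's own argument order, dropping keys that are not optimized."""
    optimized_set = set(optimized_keys)
    return [key for key in factor_keys if key in optimized_set]
```

Note the result order follows `factor_keys`, not `optimized_keys` — that is exactly the behavioral change in this hunk.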
diff --git a/api-cpp/class/classsym_1_1DenseCholeskySolver.html b/api-cpp/class/classsym_1_1DenseCholeskySolver.html
index 5f664c80..65d6ad5f 100644
--- a/api-cpp/class/classsym_1_1DenseCholeskySolver.html
+++ b/api-cpp/class/classsym_1_1DenseCholeskySolver.html
@@ -746,17 +746,17 @@
Class sym::DenseCholeskySolver
class DenseCholeskySolver#
Public Functions
Class sym::DenseCholeskySolver
inline RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const#
Class sym::DenseCholeskySolver
inline void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const#
diff --git a/api-cpp/class/classsym_1_1DenseLinearizer.html b/api-cpp/class/classsym_1_1DenseLinearizer.html
index 35b4b6d4..1c5e32cc 100644
--- a/api-cpp/class/classsym_1_1DenseLinearizer.html
+++ b/api-cpp/class/classsym_1_1DenseLinearizer.html
@@ -750,12 +750,12 @@ Class sym::DenseCholeskySolver
inline void AnalyzeSparsityPattern(const Eigen::EigenBase<Derived>&)#
Class sym::DenseLinearizerPublic Types
Class sym::DenseLinearizerPublic Functions
+
@@ -780,26 +780,26 @@
Class sym::DenseLinearizer
Class sym::EigenSparseSolver
class EigenSparseSolver#
using LinearSolver =
    sym::EigenSparseSolver<double, Eigen::CholmodDecomposition<Eigen::SparseMatrix<double>>>;
@@ -761,12 +761,12 @@
Class sym::EigenSparseSolverPublic Types
Class sym::EigenSparseSolverPublic Functions
Class sym::EigenSparseSolver
inline RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const#
Class sym::EigenSparseSolver
inline void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const#
@@ -813,7 +813,7 @@
Class sym::EigenSparseSolver
diff --git a/api-cpp/class/classsym_1_1Factor.html b/api-cpp/class/classsym_1_1Factor.html
index 8eb554d9..b873edd4 100644
--- a/api-cpp/class/classsym_1_1Factor.html
+++ b/api-cpp/class/classsym_1_1Factor.html
@@ -747,52 +747,52 @@ Class sym::EigenSparseSolver
Class sym::Factor
template<typename ScalarType>
class Factor#
using JacobianFunc = std::function<void(const Values<Scalar>&, const std::vector<index_entry_t>&, VectorX<Scalar>*, JacobianMatrixType*)>#
using HessianFunc = std::function<void(const Values<Scalar>&, const std::vector<index_entry_t>&, VectorX<Scalar>*, MatrixType*, MatrixType*, VectorX<Scalar>*)>#
Class sym::Factor
H = J.T * J
rhs = J.T * b
Class sym::Factor
values.CreateIndex(factor.AllKeys()).entries.
@@ -924,13 +924,13 @@
values.CreateIndex(factor.AllKeys()).entries.
Class sym::Factor
@@ -939,14 +939,14 @@
Class sym::FactorPublic Static Functions
static Factor Jacobian(Functor &&func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#
H = J.T * J
rhs = J.T * b
func should be outputs for the residual and jacobian; arguments before that should be inputs to func. func can be generated easily by creating a Codegen object from a Python function which returns the residual, then calling with_linearization with linearization_mode=STACKED_JACOBIAN. See symforce_factor_test.cc for many examples.
Class sym::Factor
static Factor Hessian(Functor &&func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#
func should be outputs for the residual, jacobian, hessian, and rhs; arguments before that should be inputs to func. func can be generated easily by creating a Codegen object from a Python function which returns the residual, then calling with_linearization with linearization_mode=FULL_LINEARIZATION (the default). See symforce_factor_test.cc for many examples.
Class sym::GncOptimizer#
class GncOptimizer : public BaseOptimizerType#
+
Class sym::GncOptimizerPublic Functions
@@ -776,12 +776,12 @@
inline GncOptimizer(const optimizer_params_t &optimizer_params, const optimizer_gnc_params_t &gnc_params, const Key &gnc_mu_key, OptimizerArgs&&... args)#
Class sym::GncOptimizer
Class sym::ImuFactor
The gravity argument should be [0, 0, -g] (where g is 9.8, assuming your world frame is gravity aligned so that the -z direction points towards the Earth) unless your IMU reads 0 acceleration while stationary, in which case it should be [0, 0, 0].
Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza,
“IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori
Estimation”, Robotics: Science and Systems (RSS), 2015.
Class sym::ImuFactorPublic Types
Class sym::ImuFactorPublic Functions
+
+
+
-
diff --git a/api-cpp/class/classsym_1_1ImuPreintegrator.html b/api-cpp/class/classsym_1_1ImuPreintegrator.html
index fa8af453..00ae405b 100644
--- a/api-cpp/class/classsym_1_1ImuPreintegrator.html
+++ b/api-cpp/class/classsym_1_1ImuPreintegrator.html
@@ -747,22 +747,22 @@
Class sym::ImuPreintegrator
template<typename Scalar>
class ImuPreintegrator#
Class sym::ImuPreintegratorPublic Functions
-
Class sym::ImuPreintegrator
Class sym::ImuWithGravityDirectionFactor
template<typename Scalar>
class ImuWithGravityDirectionFactor#
Class sym::ImuWithGravityDirectionFactorPublic Functions
+
+
+
-
diff --git a/api-cpp/class/classsym_1_1ImuWithGravityFactor.html b/api-cpp/class/classsym_1_1ImuWithGravityFactor.html
index a99c5f61..39872984 100644
--- a/api-cpp/class/classsym_1_1ImuWithGravityFactor.html
+++ b/api-cpp/class/classsym_1_1ImuWithGravityFactor.html
@@ -747,32 +747,32 @@
Class sym::ImuWithGravityFactor
template<typename Scalar>
class ImuWithGravityFactor#
Class sym::ImuWithGravityFactorPublic Functions
+
+
+
-
diff --git a/api-cpp/class/classsym_1_1Interpolator.html b/api-cpp/class/classsym_1_1Interpolator.html
index d5ccaba3..cb3310e9 100644
--- a/api-cpp/class/classsym_1_1Interpolator.html
+++ b/api-cpp/class/classsym_1_1Interpolator.html
@@ -751,7 +751,7 @@
Class sym::InterpolatorPublic Types
Class sym::InterpolatorPublic Functions
Class sym::Key
@@ -758,12 +758,12 @@
Class sym::Key
-
@@ -786,32 +786,32 @@
Class sym::Key
@@ -821,12 +821,12 @@
Class sym::Key
Class sym::KeyPublic Static Functions
@@ -843,17 +843,17 @@
Class sym::KeyPublic Static Attributes
Class sym::KeyPublic Functions
Class sym::LevenbergMarquardtSolver#
class LevenbergMarquardtSolver#
constexpr const int M = 9;
@@ -841,27 +841,27 @@
Class sym::LevenbergMarquardtSolverPublic Types
@@ -871,7 +871,7 @@
Class sym::LevenbergMarquardtSolver
Class sym::LevenbergMarquardtSolverPublic Functions
@@ -894,12 +894,12 @@
Class sym::LevenbergMarquardtSolver
@@ -913,23 +913,23 @@
Class sym::LevenbergMarquardtSolver
-
Class sym::Linearizer
class Linearizer#
Class sym::LinearizerPublic Functions
+
@@ -790,7 +790,7 @@
Class sym::Linearizer
Class sym::Optimizer#
class Optimizer#
+// Create a Values
sym::Key key0{'R', 0};
@@ -798,67 +798,67 @@
Class sym::OptimizerPublic Types
-
-
Optimizer(const optimizer_params_t ¶ms, std::vector<Factor<Scalar>> factors, const Scalar epsilon, const std::string &name, std::vector<Key> keys, bool debug_stats, bool check_derivatives, bool include_jacobians, NonlinearSolverArgs&&... args)#
Optimizer(const optimizer_params_t ¶ms, std::vector<Factor<Scalar>> factors, Scalar epsilon, const std::string &name, std::vector<Key> keys, bool debug_stats, bool check_derivatives, bool include_jacobians, NonlinearSolverArgs&&... args)#
@@ -868,7 +868,7 @@
Class sym::Optimizer
@@ -906,19 +906,19 @@
@@ -898,7 +898,7 @@
Class sym::Optimizer
@@ -926,48 +926,48 @@
Class sym::Optimizer
covariances_by_key
must not contain any keys that are not optimized by this Optimizer.
keys
corresponds to a set of keys at the start of the list of keys for the full problem, and in the same order. It uses the Schur complement trick, so will be most efficient if the hessian is of the following form, with C block diagonal:
A = ( B    E )
    ( E^T  C )
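The Schur complement trick referenced here eliminates the block-diagonal C first: form S = B - E C⁻¹ Eᵀ, solve S x_B = b_B - E C⁻¹ b_C, then back-substitute for x_C. A scalar-block sketch of the arithmetic (illustrative only — this is not the sym::SparseSchurSolver API):

```python
def schur_solve_2x2(B, E, C, b_B, b_C):
    """Solve [[B, E], [E, C]] [x_B, x_C]^T = [b_B, b_C]^T using the Schur complement of C."""
    C_inv = 1.0 / C
    S = B - E * C_inv * E           # Schur complement of C
    x_B = (b_B - E * C_inv * b_C) / S
    x_C = (b_C - E * x_B) * C_inv   # back-substitution
    return x_B, x_C
```

With matrix blocks, inverting a block-diagonal C is cheap (invert each small block independently), which is why the hessian structure above makes this efficient.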
@@ -980,16 +980,16 @@
Class sym::Optimizer
optimizer.Linearizer().StateIndex()
.
Class sym::Optimizer
+
diff --git a/api-cpp/class/classsym_1_1SparseCholeskySolver.html b/api-cpp/class/classsym_1_1SparseCholeskySolver.html
index 1b2d4e0a..638cff87 100644
--- a/api-cpp/class/classsym_1_1SparseCholeskySolver.html
+++ b/api-cpp/class/classsym_1_1SparseCholeskySolver.html
@@ -763,42 +763,42 @@
Class sym::SparseCholeskySolver
Class sym::SparseCholeskySolverPublic Functions
Class sym::SparseCholeskySolver
RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const#
void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const#
Class sym::SparseSchurSolverPublic Types
Class sym::SparseSchurSolverPublic Functions
@@ -821,23 +821,23 @@
Class sym::SparseSchurSolver
-
Class sym::ValuesPublic Types
@@ -777,54 +777,54 @@
Class sym::Values
+
T At(const Key &key) const#
std::enable_if_t<!kIsEigenType<T>, bool> Set(const Key &key, const T &value)#
std::enable_if_t<kIsEigenType<Derived>, bool> Set(const Key &key, const Derived &value)#
void SetNew(const Key &key, T &&value)#
@@ -842,8 +842,8 @@
+index
MUST be valid for other.
Class sym::Values
-
Class sym::Values
Values<NewScalar> Cast() const#
@@ -891,71 +891,71 @@
Class sym::Values
size_t Cleanup()#
values.CreateIndex(values.Keys()).
T At(const index_entry_t &entry) const#
std::enable_if_t<!kIsEigenType<T>> Set(const index_entry_t &key, const T &value)#
std::enable_if_t<kIsEigenType<Derived>> Set(const index_entry_t &key, const Derived &value)#
index_this
and index_other. This purely copies slices of the data arrays.
index_this
MUST be valid for this object; index_other
MUST be valid for other object.
-
Class sym::Values
-
-
Class sym::Values
File assert.h
Functions
diff --git a/api-cpp/file/dense__cholesky__solver_8h.html b/api-cpp/file/dense__cholesky__solver_8h.html
index 3e841af1..da218e89 100644
--- a/api-cpp/file/dense__cholesky__solver_8h.html
+++ b/api-cpp/file/dense__cholesky__solver_8h.html
@@ -751,17 +751,17 @@
inline std::string FormatFailure(const char *error, const char *func, const char *file, int line, const char *fmt, T&&... args)#
File dense_cholesky_solver.h
template<typename Scalar>
class DenseCholeskySolver
Public Functions
File dense_cholesky_solver.h
inline RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const
File dense_cholesky_solver.h
inline void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const
@@ -855,7 +855,7 @@ File dense_cholesky_solver.h
inline void AnalyzeSparsityPattern(const Eigen::EigenBase<Derived>&)
File dense_cholesky_solver.hPrivate Members
File dense_linearizer.hPublic Types
File dense_linearizer.hPublic Functions
+
@@ -783,26 +783,26 @@
File dense_linearizer.h
File dense_linearizer.hPrivate Functions
File dense_linearizer.hPrivate Members
File dump_graph.hFunctions
void DumpGraph(const std::string &name, const std::vector<Key> &keys, const std::vector<Factor<Scalar>> &factors, std::ostream &out)#
keys and factors. For example, these can be obtained from an Optimizer as optimizer.Keys() and optimizer.Factors().
diff --git a/api-cpp/file/eigen__sparse__solver_8h.html b/api-cpp/file/eigen__sparse__solver_8h.html
index d5c68c08..2d010b3c 100644
--- a/api-cpp/file/eigen__sparse__solver_8h.html
+++ b/api-cpp/file/eigen__sparse__solver_8h.html
@@ -751,7 +751,7 @@
File eigen_sparse_solver.h
template<typename Scalar, typename EigenSolver>
class EigenSparseSolver
using LinearSolver =
sym::EigenSparseSolver<double, Eigen::CholmodDecomposition<Eigen::SparseMatrix<double>>>;
@@ -766,12 +766,12 @@
File eigen_sparse_solver.hPublic Types
File eigen_sparse_solver.hPublic Functions
File eigen_sparse_solver.h
inline RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const
File eigen_sparse_solver.h
inline void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const
@@ -818,7 +818,7 @@
File eigen_sparse_solver.h
File eigen_sparse_solver.h
@@ -866,7 +866,7 @@ File eigen_sparse_solver.h
File eigen_sparse_solver.hPrivate Members
File factor.hTypedefs
File factor.hFunctions
Compute the combined set of keys to optimize from the given factors. Order using the given comparison function.
A residual term for optimization.
-Created from a function and a set of Keys that act as inputs. Given a Values as an evaluation point, generates a linear approximation to the residual function. Contains templated helper constructors to simplify creation.
+Created from a function and a set of Keys that act as inputs. Given a Values as an evaluation point, generates a linear approximation to the residual function. Contains templated helper constructors to simplify creation.
Public Types
Create directly from a (dense/sparse) hessian functor. This is the lowest-level constructor.
Create from a function that computes the (dense/sparse) jacobian. The hessian will be computed using the Gauss Newton approximation:
H = J.T * J
rhs = J.T * b
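The Gauss-Newton approximation above builds the hessian approximation and right-hand side directly from the jacobian and residual. A small pure-Python sketch with explicit loops (real implementations use Eigen or numpy; this just shows the arithmetic):

```python
def gauss_newton_approx(J, b):
    """Return (H, rhs) with H = J^T * J and rhs = J^T * b, for J given as a list of rows."""
    rows, cols = len(J), len(J[0])
    H = [[sum(J[r][i] * J[r][j] for r in range(rows)) for j in range(cols)]
         for i in range(cols)]
    rhs = [sum(J[r][i] * b[r] for r in range(rows)) for i in range(cols)]
    return H, rhs
```

Because H is formed as J^T J it is symmetric positive semidefinite by construction, which is what lets solvers like sym::SparseCholeskySolver factorize it.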
Evaluate the factor at the given linearization point and output just the numerical values of the residual.
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.
Evaluate the factor at the given linearization point and output just the numerical values of the residual and jacobian.
-This overload can only be called if IsSparse() is false; otherwise, it will throw
+This overload can only be called if IsSparse() is false; otherwise, it will throw
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.
Evaluate the factor at the given linearization point and output just the numerical values of the residual and jacobian.
-This overload can only be called if IsSparse() is true; otherwise, it will throw
+This overload can only be called if IsSparse() is true; otherwise, it will throw
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.
Evaluate the factor at the given linearization point and output a LinearizedDenseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand-side.
-This overload can only be called if IsSparse() is false; otherwise, it will throw
+This overload can only be called if IsSparse() is false; otherwise, it will throw
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.
Evaluate the factor at the given linearization point and output a LinearizedDenseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand-side.
-This overload can only be called if IsSparse() is false; otherwise, it will throw
+This overload can only be called if IsSparse() is false; otherwise, it will throw
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.
Evaluate the factor at the given linearization point and output a LinearizedDenseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand-side.
This overload can only be called if IsSparse() is true; otherwise, it will throw
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.
Get the optimized keys for this factor
Get all keys required to evaluate this factor
Create from a function that computes the jacobian. The hessian will be computed using the Gauss Newton approximation:
H = J.T * J
rhs = J.T * b
This version handles a variety of functors that take in individual input arguments rather than a Values object - the last two arguments to func
should be outputs for the residual and jacobian; arguments before that should be inputs to func.
If generating this factor from a single Python function, you probably want to use the Factor::Hessian constructor instead (it’ll likely result in faster linearization). If you really want to generate a factor and use this constructor, func
can be generated easily by creating a Codegen object from a Python function which returns the residual, then calling with_linearization with linearization_mode=STACKED_JACOBIAN
See symforce_factor_test.cc
for many examples.
Create from a functor that computes the full linearization, but takes in individual input arguments rather than a Values object. The last four arguments to func
should be outputs for the residual, jacobian, hessian, and rhs; arguments before that should be inputs to func.
This should be used in cases where computing J^T J using a matrix multiplication is slower than evaluating J^T J symbolically; for instance, if the jacobian is sparse or J^T J has structure so that CSE is very effective.
func
can be generated easily by creating a Codegen object from a Python function which returns the residual, then calling with_linearization with linearization_mode=FULL_LINEARIZATION (the default)
See symforce_factor_test.cc
for many examples.
Subclass of Optimizer for using Graduated Non-Convexity (GNC)
+Subclass of Optimizer for using Graduated Non-Convexity (GNC)
Assumes the convexity of the cost function is controlled by a hyperparameter mu. When mu == 0 the cost function should be convex and as mu goes to 1 the cost function should smoothly transition to a robust cost.
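A cost whose convexity is controlled by such a hyperparameter mu can be sketched as a blend between a convex quadratic and a bounded robust loss. This particular parameterization (quadratic blended into Geman-McClure) is purely illustrative, not the noise model SymForce's GncOptimizer actually uses:

```python
def gnc_cost(residual: float, mu: float) -> float:
    """Blend a convex quadratic (mu=0) into a bounded Geman-McClure loss (mu=1)."""
    quadratic = 0.5 * residual ** 2
    geman_mcclure = residual ** 2 / (2.0 * (1.0 + residual ** 2))
    return (1.0 - mu) * quadratic + mu * geman_mcclure
```

At mu=0 the cost is convex everywhere, so the optimizer converges easily; as mu grows toward 1 the tails flatten, so outliers with large residuals stop dominating the solution.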
Public Types
Constructor that copies in factors and keys
[0, 0, 0]
.
More generally, the gravity argument should yield the true acceleration when added to the accelerometer measurement (after the measurement has been converted to the world frame).
Is a functor so as to store the preintegrated IMU measurements between two times. The residual computed is the local-coordinate difference between the final state (pose and velocity) and the state you would expect given the initial state and the IMU measurements.
-Based heavily on the on manifold ImuFactor found in GTSAM and on the paper:
Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza,
+Based heavily on the on manifold ImuFactor found in GTSAM and on the paper:
Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza,
“IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori
Estimation”, Robotics: Science and Systems (RSS), 2015.
@@ -884,27 +884,27 @@ File imu_factor.hPublic Types
-
-using Preintegrator = sym::ImuPreintegrator<Scalar>
+using Preintegrator = sym::ImuPreintegrator<Scalar>
-
-using Measurement = sym::PreintegratedImuMeasurements<Scalar>
+using Measurement = sym::PreintegratedImuMeasurements<Scalar>
@@ -912,26 +912,26 @@ File imu_factor.hPublic Functions
-
-explicit ImuFactor(const Preintegrator &preintegrator)
-Construct an ImuFactor from a preintegrator.
+explicit ImuFactor(const Preintegrator &preintegrator)
+Construct an ImuFactor from a preintegrator.
-
-ImuFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)
-Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
+ImuFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)
+Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
-
-sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const
-Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
+sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const
+Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
Convenience method to avoid manually specifying which arguments are optimized.
--
-void operator()(const Pose3 &pose_i, const Vector3 &vel_i, const Pose3 &pose_j, const Vector3 &vel_j, const Vector3 &accel_bias_i, const Vector3 &gyro_bias_i, const Vector3 &gravity, const Scalar epsilon, Eigen::Matrix<Scalar, 9, 1> *const residual = nullptr, Eigen::Matrix<Scalar, 9, 24> *const jacobian = nullptr, Eigen::Matrix<Scalar, 24, 24> *const hessian = nullptr, Eigen::Matrix<Scalar, 24, 1> *const rhs = nullptr) const
+-
+void operator()(const Pose3 &pose_i, const Vector3 &vel_i, const Pose3 &pose_j, const Vector3 &vel_j, const Vector3 &accel_bias_i, const Vector3 &gyro_bias_i, const Vector3 &gravity, Scalar epsilon, Eigen::Matrix<Scalar, 9, 1> *residual = nullptr, Eigen::Matrix<Scalar, 9, 24> *jacobian = nullptr, Eigen::Matrix<Scalar, 24, 24> *hessian = nullptr, Eigen::Matrix<Scalar, 24, 1> *rhs = nullptr) const
Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
A factor for using on-manifold IMU preintegration in a SymForce optimization problem, with the ability to optimize the gravity vector direction.
-For full documentation, see ImuFactor.
+For full documentation, see ImuFactor.
Public Types
Construct an ImuFactor from a preintegrator.
+explicit ImuWithGravityDirectionFactor(const Preintegrator &preintegrator)
+Construct an ImuFactor from a preintegrator.
Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
+ImuWithGravityDirectionFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)
+Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
+sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const
+Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
Convenience method to avoid manually specifying which arguments are optimized.
Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
A factor for using on-manifold IMU preintegration in a SymForce optimization problem, with the ability to optimize the gravity vector.
-For full documentation, see ImuFactor.
+For full documentation, see ImuFactor.
Public Types
Construct an ImuFactor from a preintegrator.
+explicit ImuWithGravityFactor(const Preintegrator &preintegrator)
+Construct an ImuFactor from a preintegrator.
ImuWithGravityFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)
Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.
sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const
Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
Convenience method to avoid manually specifying which arguments are optimized.
Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
Class to on-manifold preintegrate IMU measurements for usage in a SymForce optimization problem.
For usage, see the doc-string of ImuFactor in imu_factor.h
Public Types
Initialize with given accel_bias and gyro_bias
Integrate a new measurement.
Postcondition:
Public Functions
std::ostream &operator<<(std::ostream &os, const sym::Key &key)
Print implementation for Key.
Examples:
Key('C', 13) -> "C_13"
Key('f') -> "f"
Key('f', 32, 2) -> "f_32_2"
File key.h
class Key
#include <key.h>
Contains a letter plus an integral subscript and superscript. Can construct with a letter, a letter + sub, or a letter + sub + super, but not letter + super.
TODO(hayk): Consider an abstraction where Key contains a type enum.
Public Types
using subscript_t = std::int64_t
inline constexpr Key(const letter_t letter, const subscript_t sub = kInvalidSub, const superscript_t super = kInvalidSuper)
inline constexpr letter_t Letter() const noexcept
inline constexpr subscript_t Sub() const noexcept
inline constexpr superscript_t Super() const noexcept
inline constexpr Key WithLetter(const letter_t letter) const
inline constexpr Key WithSub(const subscript_t sub) const
inline constexpr Key WithSuper(const superscript_t super) const
inline constexpr bool operator==(const Key &other) const noexcept
Return true if a is LESS than b, in dictionary order of the tuple (letter, sub, super).
Fast Levenberg-Marquardt solver for nonlinear least squares problems specified by a linearization function. Supports on-manifold optimization and sparse solving, and attempts to minimize allocations after the first iteration.
This assumes the problem structure is the same for the lifetime of the object; if the problem structure changes, create a new LevenbergMarquardtSolver.
TODO(aaron): Analyze what repeated allocations we do have, and get rid of them if possible
Not thread safe! Create one per thread.
Example usage:
constexpr const int M = 9;
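For context on the method itself (standard Levenberg-Marquardt, not a statement about this class's internals), each iteration solves a damped normal-equations system for the update step:

```latex
\left(J^\top J + \lambda D\right)\,\delta = -J^\top r
```

Here r is the residual, J its jacobian, and D a diagonal damping matrix (commonly the identity or diag of J^T J). Steps that reduce the error are accepted and λ is decreased; rejected steps increase λ, interpolating between Gauss-Newton and gradient descent.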
File levenberg_marquardt_solver.h
Public Types
using Scalar = ScalarType
using LinearSolver = LinearSolverType
using MatrixType = typename LinearSolverType::MatrixType
using StateType = internal::LevenbergMarquardtState<MatrixType>
using LinearizationType = Linearization<MatrixType>
using LinearizeFunc = std::function<void(const Values<Scalar>&, LinearizationType&)>
Private Functions
Returns the sparse matrix structure of matrix.
Return a default initialized sparse structure because arg is dense.
Returns coefficients of matrix. Overloads exist for both dense and sparse matrices to make writing generic code easier.
This version returns the non-zero values of an Eigen::SparseMatrix
Note: it returns a map, so be careful about mutating or disposing of matrix before you are finished with the output.
Returns coefficients of matrix. Overloads exist for both dense and sparse matrices to make writing generic code easier.
Returns a const-ref to the argument.
Class for storing a problem linearization evaluated at a Values (i.e. a residual, jacobian, hessian, and rhs)
MatrixType is expected to be an Eigen MatrixX or SparseMatrix.
Public Types
template<typename Scalar>
Free function as an alternate way to call the Linearizer
Class for evaluating multiple Factors at the linearization point given by a Values.
Stores the original Factors as well as the LinearizedFactors, and provides tools for aggregating keys and building a large jacobian / hessian for optimization.
For efficiency, prefer calling Relinearize() instead of re-constructing this object!
Public Types
Linearizer(const std::string &name, const std::vector<Factor<Scalar>> &factors, const std::vector<Key> &key_order = {}, bool include_jacobians = false)
Construct a Linearizer from factors and optional keys
Update linearization at a new evaluation point
This is more efficient than reconstructing this object repeatedly. On the first call, it will allocate memory and perform analysis needed for efficient repeated relinearization.
TODO(aaron): This should be const except that it can initialize the object
Allocate all factor storage and compute sparsity pattern. This does a lot of index computation on the first linearization, such that repeated linearization can be fast.
Update the sparse combined problem linearization from a single factor.
Update the combined residual and rhs, along with triplet lists for the sparse matrices, from a single factor
void EnsureLinearizationHasCorrectSize(SparseLinearization<Scalar> &linearization) const
Check if a Linearization has the correct sizes, and if not, initialize it
Index into iterations of the best iteration (containing the optimal Values)
The linearization at best_index (at optimized_values), filled out if populate_best_linearization=true
The sparsity pattern of the problem jacobian
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
The permutation used by the linear solver
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
The sparsity pattern of the cholesky factor
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
Simple wrapper to make it one function call.
Sensible default parameters for Optimizer
Class for optimizing a nonlinear least-squares problem specified as a list of Factors. For efficient use, create once and call Optimize() multiple times with different initial guesses, as long as the factors remain constant and the structure of the Values is identical.
Not thread safe! Create one per thread.
Example usage:
// Create a Values
sym::Key key0{'R', 0};
File optimizer.h
Public Types
using Scalar = ScalarType
using NonlinearSolver = NonlinearSolverType
using FailureReason = typename NonlinearSolver::FailureReason
using MatrixType = typename NonlinearSolver::MatrixType
using Stats = OptimizationStats<MatrixType>
using LinearizerType = internal::LinearizerSelector_t<MatrixType>
Public Functions
Base constructor
Constructor that copies in factors and keys, with arguments for the nonlinear solver
Optimize the given values in-place
Optimize the given values in-place
This overload takes the stats as an argument, and stores into there. This allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that’s important.
num_iterations – If < 0 (the default), uses the number of iterations specified by the params at construction
populate_best_linearization – If true, the linearization at the best values will be filled out in the stats
stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize())
Optimize the given values in-place
This overload takes the stats as an argument, and stores into there. This allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that’s important.
num_iterations – If < 0 (the default), uses the number of iterations specified by the params at construction
stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize())
Optimize the given values in-place
This overload takes the stats as an argument, and stores into there. This allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that’s important.
stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize())
Linearize the problem around the given values
Get covariances for each optimized key at the given linearization
Will reuse entries in covariances_by_key, allocating new entries so that the result contains exactly the set of keys optimized by this Optimizer. covariances_by_key must not contain any keys that are not optimized by this Optimizer.
May not be called before either Optimize() or Linearize() has been called.
Get covariances for the given subset of keys at the given linearization
This version is potentially much more efficient than computing the covariances for all keys in the problem.
Currently requires that keys correspond to a set of keys at the start of the list of keys for the full problem, in the same order. It uses the Schur complement trick, so will be most efficient if the hessian is of the following form, with C block diagonal:
A = ( B    E )
    ( E^T  C )
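For context on the Schur complement trick mentioned above: with the hessian partitioned into blocks B, E, and C as described (C block diagonal), the marginal covariance of the leading block can be computed without inverting the full matrix. This is the standard Schur complement identity, shown here as background rather than a description of the exact code path:

```latex
A = \begin{pmatrix} B & E \\ E^\top & C \end{pmatrix},
\qquad
\Sigma_B = \left(B - E\,C^{-1}E^\top\right)^{-1}
```

Inverting C is cheap precisely because it is block diagonal, which is why the method is most efficient for hessians of this form.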
void ComputeCovariances(const Linearization<MatrixType> &linearization, const std::vector<Key> &keys, std::unordered_map<Key, MatrixX<Scalar>> *covariances_by_key)
void ComputeFullCovariance(const Linearization<MatrixType> &linearization, MatrixX<Scalar> &covariance)
Get the full problem covariance at the given linearization
Unlike ComputeCovariance and ComputeAllCovariances, this includes the off-diagonal blocks, i.e. the cross-covariances between different keys.
The ordering of entries here is the same as the ordering of the keys in the linearization, which can be accessed via optimizer.Linearizer().StateIndex().
May not be called before either Optimize() or Linearize() has been called.
- Parameters:
covariance – A matrix that will be filled out with the full problem covariance.
const std::vector<Key> &Keys() const
Get the optimized keys
const std::vector<Factor<Scalar>> &Factors() const
Get the factors
const LinearizerType &Linearizer() const
Get the Linearizer object
LinearizerType &Linearizer()
Protected Functions
void IterateToConvergence(Values<Scalar> &values, int num_iterations, bool populate_best_linearization, Stats &stats)
Call nonlinear_solver_.Iterate() on the given values (updating in place) until out of iterations or converged
NonlinearSolver::LinearizeFunc BuildLinearizeFunc(bool check_derivatives)
Build the linearize_func functor for the underlying nonlinear solver
void Initialize(const Values<Scalar> &values)
Do initialization that depends on having a values
std::vector<Factor<Scalar>> factors_
Store a copy of the nonlinear factors. The Linearization object in the state keeps a pointer to this memory.
The name of this optimizer to be used for printing debug information.
Underlying nonlinear solver class.
Functor for interfacing with the optimizer.
Initialize instance struct with accel_bias and gyro_bias and all other values zeroed out (scalars, vectors, and matrices) or set to the identity (DR).
Default constructor
Construct with a representative sparse matrix
Compute an efficient permutation matrix (ordering) for A and store internally.
Compute symbolic sparsity pattern for A and store internally.
Decompose A into A = L * D * L^T and store internally. A must have the same sparsity as the matrix used for construction. Returns true if factorization was successful, and false otherwise. NOTE(brad): Currently always returns true.
Returns x for A x = b, where x and b are dense.
Solves in place for x in A x = b, where x and b are dense.
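Given a factorization A = L D L^T as described above, solving A x = b reduces to three inexpensive steps. This is the standard LDLT solve sequence, shown for context (the fill-reducing permutation from the precomputed ordering is omitted for clarity):

```latex
L\,y = b, \qquad D\,z = y, \qquad L^\top x = z
```

L is unit lower triangular, so the first and last steps are forward and back substitution; the middle step is a diagonal scaling.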
Public Types
Compute the numerical derivative of a function using a central difference approximation
TODO(aaron): Add a higher-order approximation to the derivative either as an option or as the default
Prints entries with their keys, data slices, and values, like:
Valuesd entries=4 array=8 storage_dim=7 tangent_dim=6
R_1 [0:4] --> <Rot3d [0.563679, 0.0939464, 0, 0.820634]>
f_1 [5:6] --> 4.2
File values.h
Public Types
using MapType = std::unordered_map<Key, index_entry_t>
using LcmType = typename ValuesLcmTypeHelper<Scalar>::Type
Expose the correct LCM type (values_t or valuesf_t)
explicit Values(std::initializer_list<Values<Scalar>> others)
Construct from a list of other Values objects. The order of Keys is preserved by the order of the Values in the initializer list.
NOTE(alvin): others Values should not contain overlapping Keys
explicit Values(const LcmType &msg)
Construct from serialized form.
template<typename T>
T At(const Key &key) const
Retrieve a value by key.
template<typename T>
std::enable_if_t<!kIsEigenType<T>, bool> Set(const Key &key, const T &value)
Add or update a value by key. Returns true if added, false if updated.
Overload for non-Eigen types
template<typename Derived>
std::enable_if_t<kIsEigenType<Derived>, bool> Set(const Key &key, const Derived &value)
Add or update a value by key. Returns true if added, false if updated.
Overload for Eigen types
template<typename T>
void SetNew(const Key &key, T &&value)
Add a value by key, throws runtime_error if the key already exists.
This is useful when setting up initial values for a problem.
void UpdateOrSet(const index_t &index, const Values<Scalar> &other)
Update or add keys to this Values based on other Values of different structure.
index MUST be valid for other.
NOTE(alvin): it is less efficient than the Update methods below if index objects are created and cached. This method performs map lookup for each key of the index
std::vector<Key> Keys(bool sort_by_offset = true) const
Get all keys.
NOTE(hayk): If we changed key storage to a sorted vector this could automatically be maintained and it would be more embedded friendly, but At(key) would become O(N) for linear search.
const MapType &Items() const
Expose map type to allow iteration.
template<typename NewScalar>
Values<NewScalar> Cast() const
Cast to another Scalar type (returns a copy)
bool Remove(const Key &key)
Remove the given key.
Only removes the index entry, does not change the data array. Returns true if removed, false if already not present.
Call Cleanup() to re-pack the data array.
size_t Cleanup()
Repack the data array to get rid of empty space from removed keys.
If regularly removing keys, it’s up to the user to call this appropriately to avoid storage growth. Returns the number of Scalar elements cleaned up from the data array.
It will INVALIDATE all indices, offset increments, and pointers. Re-create an index with CreateIndex().
index_t CreateIndex(const std::vector<Key> &keys) const
Create an index from the given ordered subset of keys. This object can then be used for repeated efficient operations on that subset of keys.
If you want an index of all the keys, call values.CreateIndex(values.Keys()).
An index will be INVALIDATED if the following happens:
1) Remove() is called with a contained key, or RemoveAll() is called; 2) Cleanup() is called to re-pack the data array
NOTE(hayk): We could also add a simple UpdateIndex(&index) method, since the offset is the only thing that needs to get updated after repacking.
index_entry_t IndexEntryAt(const Key &key) const
Retrieve an index entry by key. This performs a map lookup.
An index entry will be INVALIDATED if the following happens:
1) Remove() is called with the indexed key, or RemoveAll() is called; 2) Cleanup() is called to re-pack the data array
optional<index_entry_t> MaybeIndexEntryAt(const Key &key) const
Retrieve an index entry by key, or the empty optional if the key is not present. This performs a map lookup.
An index entry will be INVALIDATED if the following happens:
1) Remove() is called with the indexed key, or RemoveAll() is called; 2) Cleanup() is called to re-pack the data array
template<typename T>
T At(const index_entry_t &entry) const
Retrieve a value by index entry. This avoids a map lookup compared to At(key).
template<typename T>
std::enable_if_t<!kIsEigenType<T>> Set(const index_entry_t &key, const T &value)
Update a value by index entry with no map lookup (compared to Set(key)). This does NOT add new values and assumes the key exists already.
Overload for non-Eigen types
template<typename Derived>
std::enable_if_t<kIsEigenType<Derived>> Set(const index_entry_t &key, const Derived &value)
Update a value by index entry with no map lookup (compared to Set(key)). This does NOT add new values and assumes the key exists already.
Overload for Eigen types
void Update(const index_t &index, const Values<Scalar> &other)
Efficiently update the keys given by this index from other into this. This purely copies slices of the data arrays, the index MUST be valid for both objects!
void Update(const index_t &index_this, const index_t &index_other, const Values<Scalar> &other)
Efficiently update the keys from a differently structured Values, given by index_this and index_other. This purely copies slices of the data arrays.
index_this MUST be valid for this object; index_other MUST be valid for the other object.
void Retract(const index_t &index, const Scalar *delta, Scalar epsilon)
Perform a retraction from an update vector.
- Parameters:
VectorX<Scalar> LocalCoordinates(const Values<Scalar> &others, const index_t &index, Scalar epsilon)
Express this Values in the local coordinate of others Values, i.e., this \ominus others
- Parameters:
others – The other Values that the local coordinate is relative to
index – Ordered list of keys to include (MUST be valid for both this and others Values)
epsilon – Small constant to avoid singularities (do not use zero)
void FillLcmType(LcmType &msg, bool sort_keys = false) const
Serialize to LCM.
Format an assertion failure.
Format an assertion failure with a custom message, and optional additional things to format into the message.
Write the factor graph in .dot format to the given stream
The factor graph is represented as keys and factors. For example, these can be obtained from an Optimizer as optimizer.Keys() and optimizer.Factors().
Compute the combined set of keys to optimize from the given factors. Order using the given comparison function.
std::ostream &operator<<(std::ostream &os, const sym::Key &key)
Print implementation for Key.
Examples:
Key('C', 13) -> "C_13"
Key('f') -> "f"
Key('f', 32, 2) -> "f_32_2"
Namespace sym
template<typename Scalar>
sparse_matrix_structure_t GetSparseStructure(const Eigen::SparseMatrix<Scalar> &matrix)
Returns the sparse matrix structure of matrix.
template<typename Derived>
sparse_matrix_structure_t GetSparseStructure(const Eigen::EigenBase<Derived>&)
Return a default initialized sparse structure because arg is dense.
template<typename Scalar>
Eigen::Map<const VectorX<Scalar>> JacobianValues(const Eigen::SparseMatrix<Scalar> &matrix)
Returns coefficients of matrix. Overloads exist for both dense and sparse matrices to make writing generic code easier.
This version returns the non-zero values of an Eigen::SparseMatrix
Note: it returns a map, so be careful about mutating or disposing of matrix before you are finished with the output.
template<typename Scalar>
const MatrixX<Scalar> &JacobianValues(const MatrixX<Scalar> &matrix)
Returns coefficients of matrix. Overloads exist for both dense and sparse matrices to make writing generic code easier.
Returns a const-ref to the argument.
template<typename Scalar>
SparseLinearization<Scalar> Linearize(const std::vector<Factor<Scalar>> &factors, const Values<Scalar> &values, const std::vector<Key> &keys_to_optimize = {}, const std::string &linearizer_name = "Linearizer")
Free function as an alternate way to call the Linearizer
template<typename Scalar, typename NonlinearSolverType = LevenbergMarquardtSolver<Scalar>>
Optimizer<Scalar, NonlinearSolverType>::Stats Optimize(const optimizer_params_t &params, const std::vector<Factor<Scalar>> &factors, Values<Scalar> &values, const Scalar epsilon = kDefaultEpsilon<Scalar>)
Simple wrapper to make it one function call.
template<typename Scalar, typename NonlinearSolverType = LevenbergMarquardtSolver<Scalar>>
Optimizer<Scalar, NonlinearSolverType>::Stats Optimize(const optimizer_params_t &params, const std::vector<Factor<Scalar>> &factors, Values<Scalar> *values, const Scalar epsilon = kDefaultEpsilon<Scalar>)
optimizer_params_t DefaultOptimizerParams()
Sensible default parameters for Optimizer
template<typename F, typename X>
auto NumericalDerivative(const F f, const X &x, const typename sym::StorageOps<X>::Scalar epsilon = kDefaultEpsilon<typename StorageOps<X>::Scalar>, const typename sym::StorageOps<X>::Scalar delta = 1e-2f)
Compute the numerical derivative of a function using a central difference approximation
TODO(aaron): Add a higher-order approximation to the derivative either as an option or as the default
template<typename T>
std::enable_if_t<kIsEigenType<T>, bool> IsApprox(const T &a, const T &b, const typename StorageOps<T>::Scalar epsilon)

template<typename T>
std::enable_if_t<!kIsEigenType<T>, bool> IsApprox(const T &a, const T &b, const typename StorageOps<T>::Scalar epsilon)

template<typename Scalar>
std::ostream &operator<<(std::ostream &os, const Values<Scalar> &v)
Prints entries with their keys, data slices, and values, like:
Valuesd entries=4 array=8 storage_dim=7 tangent_dim=6
R_1 [0:4] --> <Rot3d [0.563679, 0.0939464, 0, 0.820634]>
f_1 [5:6] --> 4.2
template<typename Scalar>
class DenseCholeskySolver#
#include <dense_cholesky_solver.h>
A thin wrapper around Eigen::LDLT for use in nonlinear solver classes like sym::LevenbergMarquardtSolver.
Public Types
Public Functions

inline bool Factorize(const MatrixType &A)#
Decompose A into A = P^T * L * D * L^T * P and store internally.
- Pre:

template<typename Rhs>
inline RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const#
- Returns:
x for A x = b, where x and b are dense

template<typename Rhs>
inline void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const#
Solves in place for x in A x = b, where x and b are dense
- Pre:

template<typename Derived>
inline void AnalyzeSparsityPattern(const Eigen::EigenBase<Derived>&)#
Defined to satisfy interface. No analysis is needed so is a no-op.
Public Types

using LinearizedDenseFactor = typename Factor<Scalar>::LinearizedDenseFactor#

using LinearizationType = DenseLinearization<Scalar>#
Public Functions

DenseLinearizer(const std::string &name, const std::vector<Factor<Scalar>> &factors, const std::vector<Key> &key_order = {}, bool include_jacobians = false)#
Construct a Linearizer from factors and optional keys
- Parameters:

bool IsInitialized() const#
Returns whether Relinearize() has already been called once.
This matters because many quantities are computed only during the first linearization and then cached for subsequent use. Also, if Relinearize() has already been called, the matrices in the linearization are expected to already be allocated to the right size.

const std::vector<Key> &Keys() const#
The keys to optimize, in the order they appear in the state vector (i.e., in rhs).

const std::unordered_map<key_t, index_entry_t> &StateIndex() const#
A map from all optimized keys in the problem to an index entry for the corresponding optimized values (where the offset is into the problem state vector, i.e., the rhs of a linearization).
Only read if IsInitialized() returns true (i.e., after the first call to Relinearize).

void Relinearize(const Values<Scalar> &values, DenseLinearization<Scalar> &linearization)#
Update linearization at a new evaluation point. This is more efficient than reconstructing this object repeatedly. On the first call, it will allocate memory and perform analysis needed for efficient repeated relinearization.
On the first call to Relinearize, the matrices in linearization will be allocated and sized correctly. On subsequent calls, the matrices of linearization are expected to already be allocated and sized correctly.
TODO(aaron): This should be const except that it can initialize the object
template<typename Scalar, typename EigenSolver>
class EigenSparseSolver#
#include <eigen_sparse_solver.h>
A thin wrapper around Eigen’s Sparse Solver interface for use in nonlinear solver classes like sym::LevenbergMarquardtSolver.
Can be specialized with anything satisfying the SparseSolver concept.
For example, can be used like:
using LinearSolver =
sym::EigenSparseSolver<double, Eigen::CholmodDecomposition<Eigen::SparseMatrix<double>>>;
Public Types
Public Functions

inline bool Factorize(const MatrixType &A)#
Factorize A and store internally.
- Parameters:

template<typename Rhs>
inline RhsType Solve(const Eigen::MatrixBase<Rhs> &b) const#
- Returns:
x for A x = b, where x and b are dense

template<typename Rhs>
inline void SolveInPlace(Eigen::MatrixBase<Rhs> &b) const#
Solves in place for x in A x = b, where x and b are dense
Eigen solvers cannot actually solve in place, so this solves, then copies back into the argument.

inline MatrixType L() const#
- Returns:
the lower triangular matrix L such that P^T * L * D * L^T * P = A, where A is the last matrix to have been factorized with this->Factorize and D is a diagonal matrix with positive diagonal entries, and P is a permutation matrix.

inline MatrixType D() const#
- Returns:
the diagonal matrix D such that P^T * L * D * L^T * P = A, where A is the last matrix to have been factorized with this->Factorize, L is lower triangular with unit diagonal, and P is a permutation matrix

inline void AnalyzeSparsityPattern(const MatrixType &A)#
Defined to satisfy interface. No analysis is needed so is a no-op.
#include <factor.h>
A residual term for optimization.
Created from a function and a set of Keys that act as inputs. Given a Values as an evaluation point, generates a linear approximation to the residual function. Contains templated helper constructors to simplify creation.
Public Types

using Scalar = ScalarType#

using LinearizedDenseFactor = typename LinearizedDenseFactorTypeHelper<Scalar>::Type#

using LinearizedSparseFactor = typename LinearizedSparseFactorTypeHelper<Scalar>::Type#

template<typename JacobianMatrixType>
using JacobianFunc = std::function<void(const Values<Scalar>&, const std::vector<index_entry_t>&, VectorX<Scalar>*, JacobianMatrixType*)>#

using DenseJacobianFunc = JacobianFunc<MatrixX<Scalar>>#

using SparseJacobianFunc = JacobianFunc<Eigen::SparseMatrix<Scalar>>#

template<typename MatrixType>
using HessianFunc = std::function<void(const Values<Scalar>&, const std::vector<index_entry_t>&, VectorX<Scalar>*, MatrixType*, MatrixType*, VectorX<Scalar>*)>#

using DenseHessianFunc = HessianFunc<MatrixX<Scalar>>#

using SparseHessianFunc = HessianFunc<Eigen::SparseMatrix<Scalar>>#

Factor(DenseHessianFunc hessian_func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#
Create directly from a (dense/sparse) hessian functor. This is the lowest-level constructor.
- Parameters:

Factor(SparseHessianFunc hessian_func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#

Factor(const DenseJacobianFunc &jacobian_func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#
Create from a function that computes the (dense/sparse) jacobian. The hessian will be computed using the Gauss Newton approximation:
H = J.T * J
rhs = J.T * b

Factor(const SparseJacobianFunc &jacobian_func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#

void Linearize(const Values<Scalar> &values, VectorX<Scalar> *residual, const std::vector<index_entry_t> *maybe_index_entry_cache = nullptr) const#
Evaluate the factor at the given linearization point and output just the numerical values of the residual.
- Parameters:
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.

void Linearize(const Values<Scalar> &values, VectorX<Scalar> *residual, MatrixX<Scalar> *jacobian, const std::vector<index_entry_t> *maybe_index_entry_cache = nullptr) const#
Evaluate the factor at the given linearization point and output just the numerical values of the residual and jacobian.
This overload can only be called if IsSparse() is false; otherwise, it will throw
- Parameters:
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.

void Linearize(const Values<Scalar> &values, VectorX<Scalar> *residual, Eigen::SparseMatrix<Scalar> *jacobian, const std::vector<index_entry_t> *maybe_index_entry_cache = nullptr) const#
Evaluate the factor at the given linearization point and output just the numerical values of the residual and jacobian.
This overload can only be called if IsSparse() is true; otherwise, it will throw
- Parameters:
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.

void Linearize(const Values<Scalar> &values, LinearizedDenseFactor &linearized_factor, const std::vector<index_entry_t> *maybe_index_entry_cache = nullptr) const#
Evaluate the factor at the given linearization point and output a LinearizedDenseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand-side.
This overload can only be called if IsSparse() is false; otherwise, it will throw
- Parameters:
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.

LinearizedDenseFactor Linearize(const Values<Scalar> &values, const std::vector<index_entry_t> *maybe_index_entry_cache = nullptr) const#
Evaluate the factor at the given linearization point and output a LinearizedDenseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand-side.
This overload can only be called if IsSparse() is false; otherwise, it will throw
- Parameters:
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.

void Linearize(const Values<Scalar> &values, LinearizedSparseFactor &linearized_factor, const std::vector<index_entry_t> *maybe_index_entry_cache = nullptr) const#
Evaluate the factor at the given linearization point and output a LinearizedSparseFactor that contains the numerical values of the residual, jacobian, hessian, and right-hand-side.
This overload can only be called if IsSparse() is true; otherwise, it will throw
- Parameters:
maybe_index_entry_cache – Optional. If provided, should be the index entries for each of the inputs to the factor in the given Values. For repeated linearization, caching this prevents repeated hash lookups. Can be computed as values.CreateIndex(factor.AllKeys()).entries.

const std::vector<Key> &OptimizedKeys() const#
Get the optimized keys for this factor

const std::vector<Key> &AllKeys() const#
Get all keys required to evaluate this factor
Public Static Functions

template<typename Functor>
static Factor Jacobian(Functor &&func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#
Create from a function that computes the jacobian. The hessian will be computed using the Gauss Newton approximation:
H = J.T * J
rhs = J.T * b
This version handles a variety of functors that take in individual input arguments rather than a Values object - the last two arguments to func should be outputs for the residual and jacobian; arguments before that should be inputs to func.
If generating this factor from a single Python function, you probably want to use the Factor::Hessian constructor instead (it’ll likely result in faster linearization). If you really want to generate a factor and use this constructor, func can be generated easily by creating a Codegen object from a Python function which returns the residual, then calling with_linearization with linearization_mode=STACKED_JACOBIAN
See symforce_factor_test.cc for many examples.
- Parameters:

template<typename Functor>
static Factor Hessian(Functor &&func, const std::vector<Key> &keys_to_func, const std::vector<Key> &keys_to_optimize = {})#
Create from a functor that computes the full linearization, but takes in individual input arguments rather than a Values object. The last four arguments to func should be outputs for the residual, jacobian, hessian, and rhs; arguments before that should be inputs to func.
This should be used in cases where computing J^T J using a matrix multiplication is slower than evaluating J^T J symbolically; for instance, if the jacobian is sparse or J^T J has structure so that CSE is very effective.
func can be generated easily by creating a Codegen object from a Python function which returns the residual, then calling with_linearization with linearization_mode=FULL_LINEARIZATION (the default)
See symforce_factor_test.cc for many examples.

template<typename T>
struct function_traits<const T> : public sym::function_traits<T>

using base_return_type = typename std::decay_t<return_type>
Public Static Attributes

template<std::size_t N>
struct arg
Public Types

using base_return_type = typename std::decay_t<return_type>
Public Types
Subclass of Optimizer for using Graduated Non-Convexity (GNC)
Assumes the convexity of the cost function is controlled by a hyperparameter mu. When mu == 0 the cost function should be convex and as mu goes to 1 the cost function should smoothly transition to a robust cost.
Public Types
Constructor that copies in factors and keys
[0, 0, 0]
.
More generally, the gravity argument should yield the true acceleration when added to the accelerometer measurement (after the measurement has been converted to the world frame).
Is a functor so as to store the preintegrated IMU measurements between two times. The residual computed is the local-coordinate difference between the final state (pose and velocity) and the state you would expect given the initial state and the IMU measurements.
Based heavily on the on-manifold ImuFactor found in GTSAM and on the paper:
Christian Forster, Luca Carlone, Frank Dellaert, and Davide Scaramuzza,
“IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori
Estimation”, Robotics: Science and Systems (RSS), 2015.
Public Types

using Preintegrator = sym::ImuPreintegrator<Scalar>#

using Measurement = sym::PreintegratedImuMeasurements<Scalar>#
Public Functions

explicit ImuFactor(const Preintegrator &preintegrator)#
Construct an ImuFactor from a preintegrator.

ImuFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)#
Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.

sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const#
Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
Convenience method to avoid manually specifying which arguments are optimized.

void operator()(const Pose3 &pose_i, const Vector3 &vel_i, const Pose3 &pose_j, const Vector3 &vel_j, const Vector3 &accel_bias_i, const Vector3 &gyro_bias_i, const Vector3 &gravity, Scalar epsilon, Eigen::Matrix<Scalar, 9, 1> *residual = nullptr, Eigen::Matrix<Scalar, 9, 24> *jacobian = nullptr, Eigen::Matrix<Scalar, 24, 24> *hessian = nullptr, Eigen::Matrix<Scalar, 24, 1> *rhs = nullptr) const#
Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
#include <imu_preintegrator.h>
Class to on-manifold preintegrate IMU measurements for usage in a SymForce optimization problem.
For usage, see the doc-string of ImuFactor in imu_factor.h
Public Types
Initialize with given accel_bias and gyro_bias
Integrate a new measurement.
Postcondition:
A factor for using on-manifold IMU preintegration in a SymForce optimization problem, with the ability to optimize the gravity vector direction.
For full documentation, see ImuFactor.
Public Types
explicit ImuWithGravityDirectionFactor(const Preintegrator &preintegrator)#
Construct an ImuFactor from a preintegrator.

ImuWithGravityDirectionFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)#
Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.

sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const#
Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
Convenience method to avoid manually specifying which arguments are optimized.
Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
A factor for using on-manifold IMU preintegration in a SymForce optimization problem, with the ability to optimize the gravity vector.
For full documentation, see ImuFactor.
Public Types
explicit ImuWithGravityFactor(const Preintegrator &preintegrator)#
Construct an ImuFactor from a preintegrator.

ImuWithGravityFactor(const Measurement &measurement, const SqrtInformation &sqrt_information)#
Construct an ImuFactor from a (preintegrated) measurement and its corresponding sqrt information.

sym::Factor<Scalar> Factor(const std::vector<Key> &keys_to_func) const#
Construct a Factor object that can be passed to an Optimizer object given the keys to be passed to the function.
Convenience method to avoid manually specifying which arguments are optimized.
Calculate a between factor on the product manifold of the pose and velocity where the prior is calculated from the preintegrated IMU measurements.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
Contains a letter plus an integral subscript and superscript. Can construct with a letter, a letter + sub, or a letter + sub + super, but not letter + super.
TODO(hayk): Consider an abstraction where Key contains a type enum.
Public Types
Return true if a is LESS than b, in dictionary order of the tuple (letter, sub, super).
Fast Levenberg-Marquardt solver for nonlinear least squares problems specified by a linearization function. Supports on-manifold optimization and sparse solving, and attempts to minimize allocations after the first iteration.
This assumes the problem structure is the same for the lifetime of the object - if the problem structure changes, create a new LevenbergMarquardtSolver.
TODO(aaron): Analyze what repeated allocations we do have, and get rid of them if possible
Not thread safe! Create one per thread.
Example usage:
constexpr const int M = 9;
Public Types

using Scalar = ScalarType#

using LinearSolver = LinearSolverType#

using MatrixType = typename LinearSolverType::MatrixType#

using StateType = internal::LevenbergMarquardtState<MatrixType>#

using LinearizationType = Linearization<MatrixType>#
Class for storing a problem linearization evaluated at a Values (i.e. a residual, jacobian, hessian, and rhs)
MatrixType is expected to be an Eigen MatrixX or SparseMatrix.
Public Types
Class for evaluating multiple Factors at the linearization point given by a Values.
Stores the original Factors as well as the LinearizedFactors, and provides tools for aggregating keys and building a large jacobian / hessian for optimization.
For efficiency, prefer calling Relinearize() instead of re-constructing this object!
Public Types
Linearizer(const std::string &name, const std::vector<Factor<Scalar>> &factors, const std::vector<Key> &key_order = {}, bool include_jacobians = false)#
Construct a Linearizer from factors and optional keys
Update linearization at a new evaluation point
This is more efficient than reconstructing this object repeatedly. On the first call, it will allocate memory and perform analysis needed for efficient repeated relinearization.
TODO(aaron): This should be const except that it can initialize the object
Index into iterations of the best iteration (containing the optimal Values)
The linearization at best_index (at optimized_values), filled out if populate_best_linearization=true
The sparsity pattern of the problem jacobian
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
The permutation used by the linear solver
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
The sparsity pattern of the cholesky factor
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
Class for optimizing a nonlinear least-squares problem specified as a list of Factors. For efficient use, create once and call Optimize() multiple times with different initial guesses, as long as the factors remain constant and the structure of the Values is identical.
Not thread safe! Create one per thread.
Example usage:
// Create a Values
sym::Key key0{'R', 0};
Public Types
using Scalar = ScalarType
using NonlinearSolver = NonlinearSolverType
using FailureReason = typename NonlinearSolver::FailureReason
using MatrixType = typename NonlinearSolver::MatrixType
using Stats = OptimizationStats<MatrixType>
using LinearizerType = internal::LinearizerSelector_t<MatrixType>
Public Functions
Base constructor
Constructor that copies in factors and keys, with arguments for the nonlinear solver
Optimize the given values in-place
This overload takes the stats as an argument, and stores into there. This allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that’s important.
num_iterations – If < 0 (the default), uses the number of iterations specified by the params at construction
populate_best_linearization – If true, the linearization at the best values will be filled out in the stats
stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize())
Optimize the given values in-place
This overload takes the stats as an argument, and stores into there. This allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that’s important.
num_iterations – If < 0 (the default), uses the number of iterations specified by the params at construction
stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize())
Optimize the given values in-place
This overload takes the stats as an argument, and stores into there. This allows users to avoid reallocating memory for any of the entries in the stats, for use cases where that’s important.
stats – An OptimizationStats to fill out with the result - if filling out dynamically allocated fields here, will not reallocate if memory is already allocated in the required shape (e.g. for repeated calls to Optimize())
Linearize the problem around the given values
Get covariances for each optimized key at the given linearization
Will reuse entries in covariances_by_key, allocating new entries so that the result contains exactly the set of keys optimized by this Optimizer. covariances_by_key must not contain any keys that are not optimized by this Optimizer.
May not be called before either Optimize() or Linearize() has been called.
Get covariances for the given subset of keys at the given linearization
This version is potentially much more efficient than computing the covariances for all keys in the problem.
Currently requires that keys correspond to a set of keys at the start of the list of keys for the full problem, and in the same order. It uses the Schur complement trick, so it will be most efficient if the hessian is of the following form, with C block diagonal:
A = ( B    E )
    ( E^T  C )
void ComputeCovariances(const Linearization<MatrixType> &linearization, const std::vector<Key> &keys, std::unordered_map<Key, MatrixX<Scalar>> *covariances_by_key)
void ComputeFullCovariance(const Linearization<MatrixType> &linearization, MatrixX<Scalar> &covariance)
Get the full problem covariance at the given linearization
Unlike ComputeCovariance and ComputeAllCovariances, this includes the off-diagonal blocks, i.e. the cross-covariances between different keys.
The ordering of entries here is the same as the ordering of the keys in the linearization, which can be accessed via optimizer.Linearizer().StateIndex().
May not be called before either Optimize() or Linearize() has been called.
- Parameters:
covariance – A matrix that will be filled out with the full problem covariance.
const std::vector<Key> &Keys() const
Get the optimized keys
const std::vector<Factor<Scalar>> &Factors() const
Get the factors
const LinearizerType &Linearizer() const
Get the Linearizer object
LinearizerType &Linearizer()
Initialize instance struct with accel_bias and gyro_bias and all other values zeroed out (scalars, vectors, and matrices) or set to the identity (DR).
Default constructor
Construct with a representative sparse matrix
Compute an efficient permutation matrix (ordering) for A and store internally.
Compute symbolic sparsity pattern for A and store internally.
Decompose A into A = L * D * L^T and store internally. A must have the same sparsity as the matrix used for construction. Returns true if factorization was successful, and false otherwise. NOTE(brad): Currently always returns true.
Returns x for A x = b, where x and b are dense.
Solves in place for x in A x = b, where x and b are dense.
Expose the correct LCM type (values_t or valuesf_t)
explicit Values(std::initializer_list<Values<Scalar>> others)
Construct from a list of other Values objects. The order of Keys is preserved by the order of the Values in the initializer list.
NOTE(alvin): the other Values should not contain overlapping Keys
Construct from serialized form.
Return whether the key exists.
Retrieve a value by key.
Add or update a value by key. Returns true if added, false if updated.
Overload for non-Eigen types
Add or update a value by key. Returns true if added, false if updated.
Overload for Eigen types
Add a value by key, throws runtime_error if the key already exists.
This is useful when setting up initial values for a problem.
void UpdateOrSet(const index_t &index, const Values<Scalar> &other)
Update or add keys to this Values based on another Values of different structure.
index MUST be valid for other.
NOTE(alvin): This is less efficient than the Update methods below if index objects are created and cached, since this method performs a map lookup for each key of the index.
Get all keys.
NOTE(hayk): If we changed key storage to a sorted vector this could automatically be maintained and it would be more embedded friendly, but At(key) would become O(N) for linear search.
@@ -3720,28 +3720,28 @@Expose map type to allow iteration.
Raw data buffer.
Cast to another Scalar type (returns a copy)
Remove the given key.
Only removes the index entry, does not change the data array. Returns true if removed, false if already not present.
Call Cleanup() to re-pack the data array.
Repack the data array to get rid of empty space from removed keys.
If regularly removing keys, it’s up to the user to call this appropriately to avoid storage growth. Returns the number of Scalar elements cleaned up from the data array.
It will INVALIDATE all indices, offset increments, and pointers. Re-create an index with CreateIndex().
Create an index from the given ordered subset of keys. This object can then be used for repeated efficient operations on that subset of keys.
If you want an index of all the keys, call values.CreateIndex(values.Keys()).
An index will be INVALIDATED if the following happens:
1) Remove() is called with a contained key, or RemoveAll() is called
2) Cleanup() is called to re-pack the data array
NOTE(hayk): We could also add a simple UpdateIndex(&index) method, since the offset is the only thing that needs to get updated after repacking.
Retrieve an index entry by key. This performs a map lookup.
An index entry will be INVALIDATED if the following happens:
1) Remove() is called with the indexed key, or RemoveAll() is called
2) Cleanup() is called to re-pack the data array
Retrieve an index entry by key, or the empty optional if the key is not present. This performs a map lookup.
An index entry will be INVALIDATED if the following happens:
1) Remove() is called with the indexed key, or RemoveAll() is called
2) Cleanup() is called to re-pack the data array
Retrieve a value by index entry. This avoids a map lookup compared to At(key).
Update a value by index entry with no map lookup (compared to Set(key)). This does NOT add new values and assumes the key exists already.
Overload for non-Eigen types
Update a value by index entry with no map lookup (compared to Set(key)). This does NOT add new values and assumes the key exists already.
Overload for Eigen types
Efficiently update the keys given by this index from other into this. This purely copies slices of the data arrays, the index MUST be valid for both objects!
Efficiently update the keys from a different structured Values, given by index_this and index_other. This purely copies slices of the data arrays.
index_this MUST be valid for this object; index_other MUST be valid for the other object.
Perform a retraction from an update vector.
Express this Values in the local coordinates of the others Values, i.e., this \ominus others
others – The other Values that the local coordinate is relative to
index – Ordered list of keys to include (MUST be valid for both this and others Values)
epsilon – Small constant to avoid singularities (do not use zero)
Serialize to LCM.
Public Functions
Class for storing a problem linearization evaluated at a Values (i.e. a residual, jacobian, hessian, and rhs)
MatrixType is expected to be an Eigen MatrixX or SparseMatrix.
Public Types
Index into iterations of the best iteration (containing the optimal Values)
The linearization at best_index (at optimized_values), filled out if populate_best_linearization=true
The sparsity pattern of the problem jacobian
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
The permutation used by the linear solver
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
The sparsity pattern of the cholesky factor
Only filled if using sparse linear solver and Optimizer created with debug_stats = true. If not filled, row_indices field of sparse_matrix_structure_t and linear_solver_ordering will have size() = 0.
Initialize instance struct with accel_bias and gyro_bias and all other values zeroed out (scalars, vectors, and matrices) or set to the identity (DR).
Public Types
Camera with a given pose, camera calibration, and an optionally specified image size.
If the image size is specified, we use it to check whether pixels (either given or computed by projection of 3D points into the image frame) are in the image frame and thus valid/invalid.
Reprojects the landmark into the target camera and returns the delta from the correspondence to the reprojection.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
Args:
  source_pose: The pose of the source camera
  source_calibration: The source camera calibration
  target_pose: The pose of the target camera
  target_calibration: The target camera calibration
  source_inverse_range: The inverse range of the landmark in the source camera
  source_pixel: The location of the landmark in the source camera
  target_pixel: The location of the correspondence in the target camera
  epsilon: Small positive value
  camera_model_class: The subclass of CameraCal to use as the camera model
Residual that penalizes the difference between between(a, b) and a_T_b.
In vector space terms that would be: (b - a) - a_T_b
In Lie group terms: local_coordinates(a_T_b, between(a, b)) = to_tangent(compose(inverse(a_T_b), compose(inverse(a), b)))
Residual that penalizes the difference between between(a, b) and a_T_b.
In vector space terms that would be: (b - a) - a_T_b
In Lie group terms: local_coordinates(a_T_b, between(a, b)) = to_tangent(compose(inverse(a_T_b), compose(inverse(a), b)))
Residual that penalizes the difference between between(a, b) and a_t_b.
In vector space terms that would be: (b - a) - a_t_b
In Lie group terms: local_coordinates(a_t_b, between(a, b)) = to_tangent(compose(inverse(a_t_b), compose(inverse(a), b)))
Residual that penalizes the difference between between(a, b) and a_R_b.
In vector space terms that would be: (b - a) - a_R_b
In Lie group terms: local_coordinates(a_R_b, between(a, b)) = to_tangent(compose(inverse(a_R_b), compose(inverse(a), b)))
Residual that penalizes the difference between translation_norm and (a.t - b.t).norm().
Args:
  sqrt_info: Square root information matrix to whiten residual. In this one-dimensional case this is just 1/sigma.
  jacobian: (1x12) jacobian of res wrt args a (6), b (6)
  hessian: (12x12) Gauss-Newton hessian for args a (6), b (6)
  rhs: (12x1) Gauss-Newton rhs for args a (6), b (6)
Residual that penalizes the difference between between(a, b) and a_T_b.
In vector space terms that would be: (b - a) - a_T_b
In Lie group terms: local_coordinates(a_T_b, between(a, b)) = to_tangent(compose(inverse(a_T_b), compose(inverse(a), b)))
Residual that penalizes the difference between between(a, b) and a_T_b.
In vector space terms that would be: (b - a) - a_T_b
In Lie group terms: local_coordinates(a_T_b, between(a, b)) = to_tangent(compose(inverse(a_T_b), compose(inverse(a), b)))
Reprojects the landmark into the target camera and returns the delta from the correspondence to the reprojection.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
Args:
  source_pose: The pose of the source camera
  source_calibration: The source camera calibration
  target_pose: The pose of the target camera
  target_calibration: The target camera calibration
  source_inverse_range: The inverse range of the landmark in the source camera
  source_pixel: The location of the landmark in the source camera
  target_pixel: The location of the correspondence in the target camera
  epsilon: Small positive value
  camera_model_class: The subclass of CameraCal to use as the camera model
kDefaultEpsilon is a small positive number for avoiding numerical singularities.
It is based on numerical epsilon (the difference between 1.0 and the next floating point number), scaled up for safety.
An example of where kDefaultEpsilon might be used is in the evaluation of sin(x - 1) / (x - 1), which, at x = 1, evaluates to 0/0, i.e., NaN. This is despite the fact that the limit as x -> 1 is well defined (itself equaling 1).
Reprojects the landmark into the target camera and returns the delta from the correspondence to the reprojection.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
Args:
  source_pose: The pose of the source camera
  source_calibration: The source camera calibration
  target_pose: The pose of the target camera
  target_calibration: The target camera calibration
  source_inverse_range: The inverse range of the landmark in the source camera
  source_pixel: The location of the landmark in the source camera
  target_pixel: The location of the correspondence in the target camera
  epsilon: Small positive value
  camera_model_class: The subclass of CameraCal to use as the camera model
An internal helper function to update a set of preintegrated IMU measurements with a new pair of gyroscope and accelerometer measurements. Returns the updated preintegrated measurements.
NOTE: If you are interested in this function, you should likely be using the IntegrateMeasurement method of the PreintegratedImuMeasurements class located in symforce/slam/imu_preintegration/preintegrated_imu_measurements.h.
When integrating the first measurement, DR should be the identity, and Dv, Dp, the covariance, and the derivatives w.r.t. bias should all be 0.
Alternative to ImuManifoldPreintegrationUpdate that uses auto-derivatives. Exists only to help verify correctness of ImuManifoldPreintegrationUpdate. Should have the same output. Since this function is more expensive, there is no reason to use it instead.
An internal helper function to update a set of preintegrated IMU measurements with a new pair of gyroscope and accelerometer measurements. Returns the updated preintegrated measurements.
NOTE: If you are interested in this function, you should likely be using the IntegrateMeasurement method of the PreintegratedImuMeasurements class located in symforce/slam/imu_preintegration/preintegrated_imu_measurements.h.
An internal helper function to linearize the difference between the orientation, velocity, and position at one time step and those of another time step. The expected difference is calculated from the preintegrated IMU measurements between those time steps.
NOTE: If you are looking for a residual for an IMU factor, do not use this. Instead use the one found in symforce/slam/imu_preintegration/imu_factor.h.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
This function was autogenerated from a symbolic function. Do not modify by hand.
Symbolic function: internal_imu_unit_gravity_residual
Args:
  pose_i: Pose3
  vel_i: Matrix31
  pose_j: Pose3
  vel_j: Matrix31
  accel_bias_i: Matrix31
  gyro_bias_i: Matrix31
  DR: Rot3
  Dv: Matrix31
  Dp: Matrix31
  sqrt_info: Matrix99
  DR_D_gyro_bias: Matrix33
  Dv_D_accel_bias: Matrix33
  Dv_D_gyro_bias: Matrix33
  Dp_D_accel_bias: Matrix33
  Dp_D_gyro_bias: Matrix33
  accel_bias_hat: Matrix31
  gyro_bias_hat: Matrix31
  gravity_direction: Unit3
  gravity_norm: Scalar
  dt: Scalar
  epsilon: Scalar
An internal helper function to linearize the difference between the orientation, velocity, and position at one time step and those of another time step. The expected difference is calculated from the preintegrated IMU measurements between those time steps.
NOTE: If you are looking for a residual for an IMU factor, do not use this. Instead use the one found in symforce/slam/imu_preintegration/imu_factor.h.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
Return the 2dof residual of reprojecting the landmark into the target camera and comparing against the correspondence in the target camera.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
The norm of the residual is whitened using the BarronNoiseModel (symforce.opt.noise_models.BarronNoiseModel). Whitening each component of the reprojection error separately would result in rejecting individual components as outliers. Instead, we minimize the whitened norm of the full reprojection error for each point. See ScalarNoiseModel.whiten_norm (symforce.opt.noise_models.ScalarNoiseModel.whiten_norm) for more information on this, and BarronNoiseModel for more information on the noise model.
Return the 2dof residual of reprojecting the landmark into the target camera and comparing against the correspondence in the target camera.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
The norm of the residual is whitened using the BarronNoiseModel (symforce.opt.noise_models.BarronNoiseModel). Whitening each component of the reprojection error separately would result in rejecting individual components as outliers. Instead, we minimize the whitened norm of the full reprojection error for each point. See ScalarNoiseModel.whiten_norm (symforce.opt.noise_models.ScalarNoiseModel.whiten_norm) for more information on this, and BarronNoiseModel for more information on the noise model.
Return the 2dof residual of reprojecting the landmark into the target camera and comparing against the correspondence in the target camera.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
The norm of the residual is whitened using the BarronNoiseModel (symforce.opt.noise_models.BarronNoiseModel). Whitening each component of the reprojection error separately would result in rejecting individual components as outliers. Instead, we minimize the whitened norm of the full reprojection error for each point. See ScalarNoiseModel.whiten_norm (symforce.opt.noise_models.ScalarNoiseModel.whiten_norm) for more information on this, and BarronNoiseModel for more information on the noise model.
Return the 2dof residual of reprojecting the landmark into the target camera and comparing against the correspondence in the target camera.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned, only the residual in the target camera is returned).
The norm of the residual is whitened using the :class:BarronNoiseModel <symforce.opt.noise_models.BarronNoiseModel>
. Whitening each component of the reprojection error separately would result in rejecting individual components as outliers. Instead, we minimize the whitened norm of the full reprojection error for each point. See :meth:ScalarNoiseModel.whiten_norm <symforce.opt.noise_models.ScalarNoiseModel.whiten_norm>
for more information on this, and :class:BarronNoiseModel <symforce.opt.noise_models.BarronNoiseModel>
for more information on the noise model.
Return the 2dof residual of reprojecting the landmark ray into the target spherical camera and comparing it against the correspondence.
The landmark is specified as a camera point in the source camera with an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned; only the residual in the target camera is returned).
The norm of the residual is whitened using the :class:`BarronNoiseModel <symforce.opt.noise_models.BarronNoiseModel>`. Whitening each component of the reprojection error separately would result in rejecting individual components as outliers. Instead, we minimize the whitened norm of the full reprojection error for each point. See :meth:`ScalarNoiseModel.whiten_norm <symforce.opt.noise_models.ScalarNoiseModel.whiten_norm>` for more information on this, and :class:`BarronNoiseModel <symforce.opt.noise_models.BarronNoiseModel>` for more information on the noise model.
Reprojects the landmark into the target camera and returns the delta from the correspondence to the reprojection.
The landmark is specified as a pixel in the source camera and an inverse range; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned; only the residual in the target camera is returned).
Args:
    source_pose: The pose of the source camera
    source_calibration: The source camera calibration
    target_pose: The pose of the target camera
    target_calibration: The target camera calibration
    source_inverse_range: The inverse range of the landmark in the source camera
    source_pixel: The location of the landmark in the source camera
    target_pixel: The location of the correspondence in the target camera
    epsilon: Small positive value
    camera_model_class: The subclass of CameraCal to use as the camera model
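As a rough sketch of what this computation does (assuming a simple pinhole intrinsic matrix `K` and `world_T_camera` poses given as rotation/translation pairs; the real generated code supports arbitrary `CameraCal` subclasses and also produces jacobians):

```python
import numpy as np

def inverse_range_reprojection_delta(R_src, t_src, R_tgt, t_tgt, K,
                                     source_inverse_range, source_pixel,
                                     target_pixel, epsilon=1e-9):
    # Back-project the source pixel to a unit ray in the source camera frame.
    inv_K = np.linalg.inv(K)
    ray = inv_K @ np.array([source_pixel[0], source_pixel[1], 1.0])
    ray /= np.linalg.norm(ray)
    # The landmark sits at range 1 / inverse_range along that ray.
    p_source = ray / (source_inverse_range + epsilon)
    # Source camera -> world -> target camera (poses are world_T_camera).
    p_world = R_src @ p_source + t_src
    p_target = R_tgt.T @ (p_world - t_tgt)
    # Pinhole projection into the target image.
    uvw = K @ p_target
    reprojection = uvw[:2] / uvw[2]
    return reprojection - np.asarray(target_pixel, dtype=float)

# Example: both cameras at identity orientation, target shifted 1m along +x.
# A landmark 2m straight ahead of the source reprojects to u = 0, v = 50
# in the target image, so the delta to that correspondence is zero.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
I3, zero = np.eye(3), np.zeros(3)
delta = inverse_range_reprojection_delta(
    I3, zero, I3, np.array([1.0, 0.0, 0.0]), K, 0.5, (50.0, 50.0), (0.0, 50.0))
```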
Reprojects the landmark ray into the target camera and returns the delta between the correspondence and the reprojection.
The landmark is specified as a 3D point or ray (will be normalized) in the source camera; this means the landmark is fixed in the source camera and always has residual 0 there (this 0 residual is not returned; only the residual in the target camera is returned).
Args:
    source_pose: The pose of the source camera
    target_pose: The pose of the target camera
    target_calibration: The target camera calibration
    source_inverse_range: The inverse range of the landmark in the source camera
    p_camera_source: The location of the landmark in the source camera coordinate frame; will be normalized
    target_pixel: The location of the correspondence in the target camera
    epsilon: Small positive value
Camera with a given pose, camera calibration, and an optionally specified image size.
Residual that penalizes the difference between a value and a prior (desired / measured value).
In vector space terms that would be: prior - value
In Lie group terms: to_tangent(compose(inverse(value), prior))
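For intuition, here is what to_tangent(compose(inverse(value), prior)) reduces to for the 1-dof rotation group, where compose is angle addition and to_tangent yields the wrapped angle (a hand-written sketch for illustration, not the generated code):

```python
import numpy as np

def rot2_prior_residual(value_angle, prior_angle):
    # compose(inverse(value), prior) for planar rotations is angle subtraction.
    diff = prior_angle - value_angle
    # to_tangent wraps the result into (-pi, pi], so the residual always
    # measures the shortest rotation from value to prior.
    return np.arctan2(np.sin(diff), np.cos(diff))

# Angles 3.0 rad and -3.0 rad are nearly identical rotations; the naive
# vector-space difference would be -6.0, but the wrapped residual is small.
r = rot2_prior_residual(3.0, -3.0)
```

In a plain vector space the same expression collapses to prior - value, matching the description above.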
Returns the shape of an Eigen type as a pair of ints (rows, cols).
Residual that penalizes the difference between between(a, b) and a_T_b.
In vector space terms that would be: (b - a) - a_T_b
In Lie group terms: local_coordinates(a_T_b, between(a, b)) = to_tangent(compose(inverse(a_T_b), compose(inverse(a), b)))
Residual that penalizes the difference between between(a, b) and a_t_b.
In vector space terms that would be: (b - a) - a_t_b
In Lie group terms: local_coordinates(a_t_b, between(a, b)) = to_tangent(compose(inverse(a_t_b), compose(inverse(a), b)))
Residual that penalizes the difference between between(a, b) and a_R_b.
In vector space terms that would be: (b - a) - a_R_b
In Lie group terms: local_coordinates(a_R_b, between(a, b)) = to_tangent(compose(inverse(a_R_b), compose(inverse(a), b)))
Residual that penalizes the difference between translation_norm and (a.t - b.t).norm().
Args:
    sqrt_info: Square root information matrix to whiten residual. In this one-dimensional case this is just 1/sigma.
    jacobian: (1x12) jacobian of res wrt args a (6), b (6)
    hessian: (12x12) Gauss-Newton hessian for args a (6), b (6)
    rhs: (12x1) Gauss-Newton rhs for args a (6), b (6)
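A minimal sketch of the residual itself (the sign convention and function name are assumptions, and the jacobian/hessian/rhs outputs that the generated function also fills in are omitted):

```python
import numpy as np

def translation_norm_residual(t_a, t_b, translation_norm, sigma):
    # Penalize the mismatch between a measured distance and the distance
    # between the two pose translations, whitened by the 1x1 sqrt_info
    # (which in this one-dimensional case is just 1/sigma).
    return (translation_norm - np.linalg.norm(t_a - t_b)) / sigma

# Poses 5m apart with a measured distance of 5m give a zero residual.
res_zero = translation_norm_residual(
    np.zeros(3), np.array([3.0, 4.0, 0.0]), 5.0, 0.1)
# A 1m mismatch with sigma = 0.1 gives a whitened residual of 10.
res_off = translation_norm_residual(
    np.zeros(3), np.array([3.0, 4.0, 0.0]), 6.0, 0.1)
```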
An internal helper function to update a set of preintegrated IMU measurements with a new pair of gyroscope and accelerometer measurements. Returns the updated preintegrated measurements.
NOTE: If you are interested in this function, you should likely be using the IntegrateMeasurement method of the PreintegratedImuMeasurements class located in symforce/slam/imu_preintegration/preintegrated_imu_measurements.h.
When integrating the first measurement, DR should be the identity; Dv, Dp, the covariance, and the derivatives w.r.t. the bias should all be 0.
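The state portion of such an update follows the standard on-manifold preintegration recursion. The sketch below assumes the common convention DR <- DR * Exp((gyro - bias) * dt), and omits the covariance and bias-derivative propagation that the real function also performs:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(w):
    # Rodrigues formula for the SO(3) exponential map.
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegration_update(DR, Dv, Dp, accel, gyro, accel_bias, gyro_bias, dt):
    # Bias-corrected measurements.
    a = accel - accel_bias
    w = gyro - gyro_bias
    # Update order matters: Dp uses the pre-update Dv and DR, and Dv uses
    # the pre-update DR; the rotation delta is updated last.
    Dp = Dp + Dv * dt + 0.5 * (DR @ a) * dt * dt
    Dv = Dv + (DR @ a) * dt
    DR = DR @ so3_exp(w * dt)
    return DR, Dv, Dp

# Integrating 1 s of constant 1 m/s^2 acceleration (no rotation, zero
# biases) from the identity/zero initial state described above:
DR, Dv, Dp = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(10):
    DR, Dv, Dp = preintegration_update(
        DR, Dv, Dp, np.array([1.0, 0.0, 0.0]), np.zeros(3),
        np.zeros(3), np.zeros(3), 0.1)
```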
Alternative to ImuManifoldPreintegrationUpdate that uses auto-derivatives. It exists only to help verify the correctness of ImuManifoldPreintegrationUpdate and should have the same output. Since this function is more expensive, there is no reason to use it otherwise.
An internal helper function to linearize the difference between the orientation, velocity, and position at one time step and those of another time step. The expected difference is calculated from the preintegrated IMU measurements between those time steps.
NOTE: If you are looking for a residual for an IMU factor, do not use this. Instead use the one found in symforce/slam/imu_preintegration/imu_factor.h.
Time step i is the time of the first IMU measurement of the interval. Time step j is the time after the last IMU measurement of the interval.
This function was autogenerated from a symbolic function. Do not modify by hand.
Symbolic function: internal_imu_unit_gravity_residual
Args:
    pose_i: Pose3
    vel_i: Matrix31
    pose_j: Pose3
    vel_j: Matrix31
    accel_bias_i: Matrix31
    gyro_bias_i: Matrix31
    DR: Rot3
    Dv: Matrix31
    Dp: Matrix31
    sqrt_info: Matrix99
    DR_D_gyro_bias: Matrix33
    Dv_D_accel_bias: Matrix33
    Dv_D_gyro_bias: Matrix33
    Dp_D_accel_bias: Matrix33
    Dp_D_gyro_bias: Matrix33
    accel_bias_hat: Matrix31
    gyro_bias_hat: Matrix31
    gravity_direction: Unit3
    gravity_norm: Scalar
    dt: Scalar
    epsilon: Scalar