Conversation

@ABesharat
Contributor

This branch provides hardcoded biorthogonal and NNS coefficients

…nal_coefficients' into azam/feature/hardcoded_biorthogonal_coefficients
…eature/hardcoded_biorthogonal_coefficients

# Conflicts:
#	SeQuant/domain/mbpt/biorthogonalization.cpp
@ABesharat force-pushed the azam/feature/hardcoded_biorthogonal_coefficients branch from ae2bb57 to f8f3115 on January 18, 2026 at 23:31

Copilot AI left a comment


Pull request overview

This PR introduces hardcoded (exact) biorthogonalization coefficients and NNS projection weights to avoid numerical precision loss, and updates evaluation/tests to use the new NNS projection path.

Changes:

  • Add hardcoded biorthogonal coefficient generation (ranks 1–5) with memoization fallback to computed coefficients for higher ranks.
  • Introduce NNS projection weights (hardcoded up to rank 5, computed beyond) and TA/BTAS projection helpers.
  • Update spintracing normalization behavior and adjust evaluation unit tests accordingly.

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 9 comments.

Summary per file:

  • SeQuant/domain/mbpt/biorthogonalization.hpp — Declares hardcoded coefficient APIs; adds NNS projector weights and TA/BTAS projection helpers.
  • SeQuant/domain/mbpt/biorthogonalization.cpp — Implements hardcoded coefficient matrices, memoized coefficient selection, and computed NNS weights.
  • SeQuant/domain/mbpt/spin.cpp — Simplifies S normalization factor handling in closed_shell_CC_spintrace_v2.
  • SeQuant/core/eval/result.hpp — Removes Result-level NNS projection implementation (migrated out).
  • SeQuant/core/eval/eval.hpp — Removes EvalMode::BiorthogonalNNSProject and evaluate_biorthogonal_nns_project.
  • tests/unit/test_eval_ta.cpp — Updates TA evaluation to call the new NNS projection helper and adds a rank-4 NNS test.
  • tests/unit/test_eval_btas.cpp — Updates BTAS evaluation to call the new NNS projection helper and adjusts permutation averaging.
  • tests/unit/test_spin.cpp — Updates comments/section naming to reflect the new spintracing behavior.
  • CMakeLists.txt — Adds compile definitions for evaluation-related feature macros.


REQUIRE(norm(man2) == Catch::Approx(norm(eval2)));
TArrayD zero2;
zero2("0,1,2,3,4,5") = man2("0,1,2,3,4,5") - eval2("0,1,2,3,4,5");
REQUIRE(norm(zero1) == Catch::Approx(0).margin(

Copilot AI Jan 23, 2026


This assertion is checking norm(zero1) again after computing zero2. It should validate zero2 (the rank-3 residual case), otherwise the test can pass even if eval2 is wrong.

Suggested change
REQUIRE(norm(zero1) == Catch::Approx(0).margin(
REQUIRE(norm(zero2) == Catch::Approx(0).margin(

Comment on lines +101 to +103
/// \return Vector of NNS projector weights representing the last row
///
/// \throw std::runtime_error if n_particles is not in the range [1,5]

Copilot AI Jan 23, 2026


The comment for hardcoded_nns_projector claims it throws std::runtime_error when n_particles is out of range, but the function actually returns std::nullopt. Update the documentation (or change behavior) so callers know what to expect.

Suggested change
/// \return Vector of NNS projector weights representing the last row
///
/// \throw std::runtime_error if n_particles is not in the range [1,5]
/// \return Optional vector of NNS projector weights representing the last row;
/// returns std::nullopt if n_particles is not in the supported range.

Comment on lines +450 to +471
auto memoize = []<typename T>(container::map<std::pair<std::size_t, double>,
                                             std::optional<T>>& cache,
                              std::mutex& mutex, std::condition_variable& cv,
                              std::pair<std::size_t, double> key,
                              auto compute_fn) -> const T& {
  {
    std::unique_lock<std::mutex> lock(mutex);
    auto [it, inserted] = cache.try_emplace(key, std::nullopt);
    if (!inserted) {
      cv.wait(lock, [&] { return it->second.has_value(); });
      return it->second.value();
    }
  }

  T result = compute_fn();

  {
    std::lock_guard<std::mutex> lock(mutex);
    cache[key] = std::move(result);
    cv.notify_all();
    return cache[key].value();
  }

Copilot AI Jan 23, 2026


The local memoize helper can deadlock if compute_fn throws: the inserted cache entry stays std::nullopt and any concurrent callers will wait forever on the condition_variable. Add exception handling to ensure the cache is updated/erased and cv.notify_all() is called on failure.

endif()
if (TARGET tiledarray)
target_link_libraries(SeQuant PUBLIC tiledarray)
target_compile_definitions(SeQuant PUBLIC SEQUANT_HAS_TILEDARRAY=1 SEQUANT_HAS_BTAS=1)

Copilot AI Jan 23, 2026


This sets SEQUANT_HAS_BTAS=1 whenever TARGET tiledarray exists. If BTAS availability is independent of TiledArray, this can incorrectly enable BTAS-dependent code paths/headers. Consider defining SEQUANT_HAS_BTAS only when BTAS is actually found/available (or drop it if BTAS is always required).

Suggested change
target_compile_definitions(SeQuant PUBLIC SEQUANT_HAS_TILEDARRAY=1 SEQUANT_HAS_BTAS=1)
target_compile_definitions(SeQuant PUBLIC SEQUANT_HAS_TILEDARRAY=1)

/// The NNS projector weights are obtained from the normalized pseudoinverse
/// of M: first compute M_pinv (the pseudoinverse), then normalize it by the
/// factor ((n_particles)!/rank(M)).
//// Finally, NNS projector = normalized_M_pinv · M.

Copilot AI Jan 23, 2026


There’s an extra slash in the doc comment (//// Finally, ...). Consider changing it to a normal /// line so it renders correctly in generated docs.

Suggested change
//// Finally, NNS projector = normalized_M_pinv · M.
/// Finally, NNS projector = normalized_M_pinv · M.

#include <SeQuant/core/eval/eval.hpp>
#endif

#include <condition_variable>

Copilot AI Jan 23, 2026


This header now uses std::optional, std::vector, std::mutex, and std::floating_point, but it only includes <condition_variable>. Add the required standard headers (at least <optional>, <vector>, <mutex>, <concepts>, and <cstddef>) so the header is self-contained and doesn’t rely on transitive includes.

Suggested change
#include <condition_variable>
#include <condition_variable>
#include <optional>
#include <vector>
#include <mutex>
#include <concepts>
#include <cstddef>

/// performs symbolic biorthogonal transform of CC-like equation using
///(for rank-3 and higher
/// Wang-Knizia biorthogonalization (https://arxiv.org/abs/1805.00565) is used
/// uses hardcoded coefficients for ranks 1-6, computed coefficients for higher

Copilot AI Jan 23, 2026


Docstring says biorthogonal_transform uses hardcoded coefficients for ranks 1-6, but the implementation hardcodes only up to rank 5 (max_rank_hardcoded_biorth_coeffs = 5, and hardcoded_biorth_coeffs_first_row throws for >5). Please update the comment to match the actual supported range.

Suggested change
/// uses hardcoded coefficients for ranks 1-6, computed coefficients for higher
/// uses hardcoded coefficients for ranks 1-5, computed coefficients for higher

Comment on lines +186 to +199
static std::mutex cache_mutex;
static std::condition_variable cache_cv;
static container::map<CacheKey, CacheValue> cache;

CacheKey key{n_particles, threshold};

{
  std::unique_lock<std::mutex> lock(cache_mutex);
  auto [it, inserted] = cache.try_emplace(key, std::nullopt);
  if (!inserted) {
    cache_cv.wait(lock, [&] { return it->second.has_value(); });
    return it->second.value();
  }
}

Copilot AI Jan 23, 2026


nns_projection_weights memoization can deadlock: if computing coefficients throws, the cache entry remains std::nullopt and any waiting threads will block forever. Consider storing an exception state (e.g., std::exception_ptr) or erasing the key and notifying in a catch block so waiters don’t hang.

Comment on lines +1183 to +1184
const auto nf = ex<Constant>(
rational{1, factorial(ext_idxs.size())}); // normalization factor for S

Copilot AI Jan 23, 2026


This change removes the previous (fact_n/(fact_n-1)) rescaling for ext rank > 2 and always uses 1/factorial(ext_idxs.size()). There is still code elsewhere applying the old rescaling (e.g., tests/integration/srcc.cpp:200-210). Please align those call sites/tests or centralize the normalization so spintrace-v2 and other spin-free generation paths stay consistent.

Suggested change
const auto nf = ex<Constant>(
rational{1, factorial(ext_idxs.size())}); // normalization factor for S
const auto n_ext = ext_idxs.size();
const auto nf = ex<Constant>(
rational{1, factorial(n_ext > 2 ? n_ext - 1 : n_ext)}); // normalization factor for S
