Conversation
RI_WEIGHT: 25,
RI_WEIGHT_DENOMINATOR: 100,
RI_THRESHOLD: 20_000_000 * 1_000_000, // in USDC
RI_MIN_COVER_AMOUNT: 100_000 * 1_000_000, // in USDC
RI_COVER_AMOUNT_PERCENTAGE: 80,
RI_COVER_AMOUNT_DENOMINATOR: 100,
RI_EPOCH_DURATION: 70, // days
do we want some of these as env var config?
src/store/riSubnetworks.json
Outdated
@@ -0,0 +1,65 @@
{
  "0x51ad1265c8702c9e96ea61fe4088c2e22ed4418e000000000000000000000000": {
should be 01:
0x51ad1265c8702c9e96ea61fe4088c2e22ed4418e000000000000000000000001
This will change when we have everything setup
src/store/riSubnetworks.json
Outdated
| "maxLimit": 50000000, | ||
| "limit": 50000000 | ||
| }, | ||
| "0x51ad1265c8702c9e96ea61fe4088c2e22ed4418e000000000000000000000001": { |
should be 02:
0x51ad1265c8702c9e96ea61fe4088c2e22ed4418e000000000000000000000002
This will change when we have everything setup
return Zero;
}

const vaults = selectVaultProducts(store, product.productId);
in the synchronizer it's `.id`, not `.productId`:
const product = await chainApi.fetchProduct(productId);
store.dispatch({ type: SET_PRODUCT, payload: { ...product, id: productId } });
… to Delegator ABI
…it delegator events
📝 Walkthrough

This PR adds Reinsurance Insurance (RI) vault integration to the cover router, enabling insurance coverage allocation to RI vaults alongside traditional pools. It includes AWS KMS-based quote signing, RI contract ABIs, RI capacity calculations, quote engine updates, event synchronization, and state management for RI data.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant QuoteRoute as Quote Route
    participant QuoteEngine
    participant ChainAPI
    participant CapacityEngine
    participant SignQuote
    Client->>QuoteRoute: GET /v2/quote?useRiQuote=true
    QuoteRoute->>QuoteEngine: quoteEngine(store, productId, amount, period, coverAsset, editedCoverId, paymentAsset, useRiVaults=true)
    QuoteEngine->>CapacityEngine: calculateProductCapacity(store, productId, {...})
    CapacityEngine->>ChainAPI: fetch RI vault data via selectors
    CapacityEngine-->>QuoteEngine: poolCapacity + riCapacity
    QuoteEngine->>QuoteEngine: riAmountInNXM = calculate RI split based on threshold/percentage
    QuoteEngine->>ChainAPI: fetchRiAssetRate(assetId)
    ChainAPI-->>QuoteEngine: assetRate
    QuoteEngine->>QuoteEngine: allocateAcrossRiVaults(riAmountInNXM)
    QuoteEngine-->>QuoteRoute: {poolQuote, riQuote}
    QuoteRoute->>SignQuote: signRiQuote(riQuote)
    SignQuote-->>QuoteRoute: EIP-712 signature
    QuoteRoute-->>Client: {poolQuote, riRequest: {data, signature}}
```

```mermaid
sequenceDiagram
    participant BlockListener
    participant EventsAPI
    participant Synchronizer
    participant ChainAPI
    participant Store
    BlockListener->>EventsAPI: onBlock(blockNumber, blockTimestamp)
    EventsAPI->>Synchronizer: updatesOnBlockMined(blockNumber, blockTimestamp)
    Synchronizer->>ChainAPI: updateAssetRates()
    Synchronizer->>ChainAPI: updateEpoch(blockTimestamp)
    ChainAPI-->>Synchronizer: epochExpires
    Synchronizer->>ChainAPI: fetchVaultNextEpochStart(vaultId)
    ChainAPI-->>Synchronizer: nextEpochStart
    Synchronizer->>Synchronizer: updateRiVaultCapacity(vaultIds)
    ChainAPI->>ChainAPI: fetchVaultStake(vaultId, ...) / fetchVaultWithdrawals(vaultId)
    ChainAPI-->>Synchronizer: activeStake, withdrawalAmount
    Synchronizer->>Store: dispatch SET_RI_EPOCH_EXPIRIES
    Synchronizer->>Store: dispatch SET_VAULT_STAKE
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 17
♻️ Duplicate comments (3)
src/lib/quoteEngine.js (1)
164-164: ⚠️ Potential issue | 🔴 Critical — Use the stored product id key when loading RI vaults for refunds.

`selectProduct()` returns objects written by `src/lib/synchronizer.js` lines 30-31, which carry `id`, not `productId`. With `product.productId` undefined here, edited RI quotes never find their existing vault allocations, so the refund path stays at zero.

Suggested fix

- const vaults = selectVaultProducts(store, product.productId);
+ const vaults = selectVaultProducts(store, product.id);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/lib/quoteEngine.js` at line 164, the code calls `selectVaultProducts(store, product.productId)`, but `selectProduct()` yields objects with `id` (not `productId`), so `product.productId` is undefined and RI vault lookups fail; change the call to use the stored id (i.e., `selectVaultProducts(store, product.id)`) so edited RI quotes correctly find existing vault allocations (refer to `selectVaultProducts` and `selectProduct` usages to locate the code).

src/lib/capacityEngine.js (1)
144-146: ⚠️ Potential issue | 🔴 Critical — The stale `allocation.active` guard drops every RI allocation from capacity math.

Neither `src/lib/chainApi.js` lines 169-174 nor `src/lib/synchronizer.js` lines 150-152 attaches an `active` flag, so this branch never subtracts existing RI allocations. `/capacity` will over-report available RI capacity and drift from `quoteEngine`, which already counts these allocations.

Suggested fix

  const allocatedAmount = (vault.allocations || []).reduce((acc, allocation) => {
    // cover edit allocation
-   if (allocation.expiryTimestamp > now && allocation.coverId !== editedCover?.coverId && allocation.active) {
+   const expiryTimestamp = BigNumber.isBigNumber(allocation.expiryTimestamp)
+     ? allocation.expiryTimestamp
+     : BigNumber.from(allocation.expiryTimestamp || 0);
+   if (expiryTimestamp.gt(now) && allocation.coverId !== editedCover?.coverId) {
      acc = acc.add(allocation.amount);
    }
    return acc;
  }, BigNumber.from(0));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/lib/capacityEngine.js` around lines 144-146, the reduce that computes `allocatedAmount` incorrectly filters allocations by `allocation.active`, which is not set anywhere (so all RI allocations are ignored); update the predicate inside the reduce (which iterates `vault.allocations` and checks `allocation.expiryTimestamp` and `editedCover?.coverId`) to treat allocations without an explicit active flag as active (e.g., consider an allocation active if `allocation.active === undefined || allocation.active === true`) or remove the active check entirely so existing RI allocations are included in the subtraction.

src/lib/synchronizer.js (1)
173-175: ⚠️ Potential issue | 🟠 Major — Use the `.lte()` method for BigNumber comparison with epoch timestamps.

`fetchVaultNextEpochStart()` returns ethers BigNumbers from contract calls. Using `value <= timestamp` relies on JavaScript's relational operator, which produces incorrect results with BigNumber objects. This breaks epoch rollover detection.

Suggested fix

- const expiredEpochs = Object.entries(epochExpires).filter(([key, value]) => value <= timestamp);
+ const expiredEpochs = Object.entries(epochExpires).filter(([, value]) => {
+   const expiry = BigNumber.isBigNumber(value) ? value : BigNumber.from(value);
+   return expiry.lte(timestamp);
+ });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/lib/synchronizer.js` around lines 173-175, the comparison in `updateEpoch` uses JavaScript's `<=` on BigNumber objects (`epochExpires` values), which is incorrect; change the comparison to use `BigNumber.lte` by converting the timestamp to a BigNumber (e.g., `ethers.BigNumber.from(timestamp)`) and replace `value <= timestamp` with `value.lte(timestampBN)`; update references in `updateEpoch` and any related logic that consumes `epochExpires` (and cases where `fetchVaultNextEpochStart` returns BigNumber) so all epoch comparisons use BigNumber `.lte`.
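The filter pattern above can be sketched in isolation. `FakeBigNumber` below is an illustrative stand-in for ethers v5 `BigNumber` (only the pieces needed here), so the example runs without the library; the real fix would call the library's `.lte()` the same way:

```javascript
// Illustrative stand-in for ethers v5 BigNumber: wraps a BigInt and
// exposes just the comparison helper the filter needs.
class FakeBigNumber {
  constructor(value) { this.value = BigInt(value); }
  static from(value) { return new FakeBigNumber(value); }
  lte(other) { return this.value <= other.value; }
}

// Epochs whose expiry is at or before `timestamp` are expired. Using
// .lte() instead of `value <= timestamp` avoids JS's relational
// operators, which misbehave on BigNumber-like objects.
function expiredEpochs(epochExpires, timestamp) {
  const ts = FakeBigNumber.from(timestamp);
  return Object.entries(epochExpires).filter(([, value]) => value.lte(ts));
}

const expires = { 1: FakeBigNumber.from(100), 2: FakeBigNumber.from(300) };
console.log(expiredEpochs(expires, 200).map(([key]) => key)); // [ '1' ]
```

Note that `Object.entries` yields string keys, so epoch ids come back as strings here.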
🧹 Nitpick comments (4)
test/mocks/store.js (1)
661-665: Capture `now` once for `epochExpires`.

Each entry calls `Date.now()` independently, so these fixtures can differ by a second if the clock rolls over mid-object. That makes the mock nondeterministic for no benefit.

Suggested change

  epochExpires: {
-   1: BigNumber.from(Math.floor(Date.now() / 1000)).add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600),
-   2: BigNumber.from(Math.floor(Date.now() / 1000)).add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600),
-   3: BigNumber.from(Math.floor(Date.now() / 1000)).add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600),
+   1: now.add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600),
+   2: now.add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600),
+   3: now.add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600),
  },

Add a shared `const now = BigNumber.from(Math.floor(Date.now() / 1000));` once above `store`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `test/mocks/store.js` around lines 661-665, the `epochExpires` object currently calls `Date.now()` three times, leading to nondeterministic seconds; capture the timestamp once by declaring a shared `const now = BigNumber.from(Math.floor(Date.now() / 1000))` just above the store and replace the three calls in `epochExpires` with `now.add(...)` so each entry uses `now.add(RI_EPOCH_DURATION * 24 * 3600 + 365 * 24 * 3600)` for deterministic fixtures.

test/unit/synchronizer.js (1)
122-123: Replace the fixed 100ms sleep with a deterministic flush.

This adds timing-based flakiness and a 100ms tax to each case. With the current immediate mocks, waiting for the next turn of the event loop is enough.

Suggested change

- // Allow async handler to complete
- await new Promise(resolve => setTimeout(resolve, 100));
+ await new Promise(resolve => setImmediate(resolve));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `test/unit/synchronizer.js` around lines 122-123, replace the fixed 100ms sleep (the line using `await new Promise(resolve => setTimeout(resolve, 100))`) with a deterministic event-loop flush such as `await new Promise(resolve => setImmediate(resolve))` or `await new Promise(resolve => setTimeout(resolve, 0))` so the test yields to the next tick without adding a fixed delay, removing both the timing flakiness and the 100ms tax.

src/lib/riContracts/index.js (1)
11-11: Drop the startup vault dump.

`console.log(vaults)` looks like leftover debugging and will print the full RI vault map on every boot. That adds noise to prod logs and tests without changing behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/lib/riContracts/index.js` at line 11, remove the leftover debug print by deleting the `console.log(vaults)` statement so the RI vault map is not dumped at startup (or replace it with a debug-level logger guarded by an environment flag if needed).

src/store/riSubnetworks.json (1)
10-29: Normalize or rename the `products` map keys.

Some entries are stored under keys like `"2"` and `"3"` while their `productId` values are 5, 8, 21, etc. That makes direct lookup by product id unsafe and forces every caller to remember that this key means something else. Either key this object by the real `productId` or rename this level to make the slot semantics explicit.

Also applies to: 47-56

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/store/riSubnetworks.json` around lines 10-29, the top-level map currently labeled as "products" uses arbitrary string keys ("2", "3", etc.) that do not match each entry's `productId` (e.g., `productId: 5, 8, 11`), which makes lookups error-prone; fix by either re-keying the object by the real `productId` values (use each entry's `productId` as the key) or renaming the map (e.g., "slots" or "positions") to explicitly convey that the numeric string is a slot index, and update any consumers that iterate or index the "products" map accordingly (also apply the same change to the similar block around lines 47-56).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config.js`:
- Around line 36-55: The config schema is missing the providerUrl key referenced
by config.get('providerUrl') in signQuote.js; add a new config entry named
providerUrl to the exported config object (matching the style of
awsRegion/awsKmsKeyId entries) with a doc string like "The provider base URL", a
default of '' and an env name such as 'PROVIDER_URL' so strict config mode no
longer errors when signQuote.js calls config.get('providerUrl').
In `@src/lib/chainApi.js`:
- Around line 147-148: The code uses JavaScript `+` on an ethers BigNumber
(`currentEpoch + 1`) which is invalid; update the call that fetches withdrawals
on the vault (riContracts[`vault_${vaultId}`].withdrawals(...)) to use BigNumber
arithmetic, e.g., call `currentEpoch.add(1)` (or
`currentEpoch.add(BigNumber.from(1))`) instead of `currentEpoch + 1` so the
epoch argument is a correct BigNumber.
- Around line 185-189: The fetchRiAssetRate function is returning the
protocolAssetCorrelationId function object instead of its value; change the
access of riContracts[`asset_${assetId}`].protocolAssetCorrelationId to invoke
and await the getter (i.e., await
riContracts[`asset_${assetId}`].protocolAssetCorrelationId()) so the actual
correlation ID value is returned alongside the awaited getRate() call.
- Around line 160-162: The decode call is passing the encoded bytes wrapped in
an array and returning a single-element result; change the call in the place
where you call defaultAbiCoder.decode([...], [data]) (near where fetchCover is
used) to pass data directly as the second argument
(defaultAbiCoder.decode([...], data)) and then unwrap the returned array via
destructuring to get the actual value; apply the same fix in decodeRiData in
helpers.js (use constants.RI_DATA_FORMATS[...] as the types, pass data directly,
and destructure the single-result value).
In `@src/lib/constants.js`:
- Around line 48-49: The hardcoded FETCH_COVER_DATA_FROM_ID constant causes the
synchronizer (updateAll in src/lib/synchronizer.js) to skip still-active covers
after restarts; replace this date-sensitive constant with a persisted cursor or
a chain-derived lower bound: implement a stored cursor (e.g., in DB or file)
that updateAll reads and advances as it scans, falling back to a safe minimal ID
or a lower bound computed from chain state if no cursor exists, and
remove/ignore the fixed FETCH_COVER_DATA_FROM_ID usage so the synchronizer
initializes its scan range from the persisted cursor or chain-derived bound
instead.
In `@src/lib/helpers.js`:
- Around line 306-308: The decodeRiData helper wrongly passes the encoded
payload as an array; update decodeRiData to pass the raw BytesLike value (not
[data]) into defaultAbiCoder.decode using the ABI type from RI_DATA_FORMATS (and
if necessary normalize the input to a BytesLike via ethers.utils.arrayify or a
string check) so call defaultAbiCoder.decode([RI_DATA_FORMATS[dataFormat]],
data) (or arrayify(data) when required) inside the decodeRiData function.
In `@src/lib/quoteEngine.js`:
- Around line 448-450: When RI covers the full quote, calculatePoolAllocations
is being called with zero and still produces a zero-amount allocation causing a
bogus entry in poolsWithPremium; change the flow so that when
amountToAllocate.sub(riAmountInNXM) is zero (use the BigNumber/BN equality check
used elsewhere) you do not call calculatePoolAllocations and instead set
allocations = [] (or short-circuit building poolsWithPremium) to skip any pool
allocation; check the variables/funcs selectProductPriorityPoolsFixedPrice,
sortPools, calculatePoolAllocations, amountToAllocate, riAmountInNXM, and
poolsWithPremium when applying this guard.
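The zero-amount guard described above can be sketched with plain `BigInt` standing in for ethers `BigNumber`, and a stub `allocatePools` in place of `calculatePoolAllocations`, so the shape is clear without the real engine:

```javascript
// Stub allocator: like calculatePoolAllocations in the description
// above, it happily emits an entry even when asked to allocate zero.
const allocatePools = amount => [{ poolId: 1, amount }];

function poolAllocations(amountToAllocate, riAmountInNXM) {
  const remaining = amountToAllocate - riAmountInNXM;
  // RI covers the full quote: short-circuit instead of producing a
  // bogus zero-amount pool allocation.
  if (remaining === 0n) {
    return [];
  }
  return allocatePools(remaining);
}

console.log(poolAllocations(100n, 100n)); // []
console.log(poolAllocations(100n, 60n)); // [ { poolId: 1, amount: 40n } ]
```

With ethers v5 the equality check would be `remaining.isZero()` rather than `=== 0n`.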
- Around line 224-245: The sort comparator for vaults currently uses
a.availableCapacityInNXM.sub(b.availableCapacityInNXM).toNumber(), which will
overflow for 18-decimal BigNumbers; replace that subtraction+toNumber() with
BigNumber comparisons on availableCapacityInNXM (e.g., if
a.availableCapacityInNXM.gt(b.availableCapacityInNXM) return 1; if
a.availableCapacityInNXM.lt(b.availableCapacityInNXM) return -1; else return 0)
so the sort uses gt/lt equality checks instead of converting to a JS Number;
update the comparator inside the vaults .sort(...) call accordingly.
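The overflow-safe comparator can be shown with `BigInt` capacities, where relational operators play the role of `.gt()`/`.lt()` on ethers `BigNumber`; note that 50e18 already exceeds `Number.MAX_SAFE_INTEGER`, which is why `sub(...).toNumber()` throws an overflow error for 18-decimal values:

```javascript
// Compare without converting to a JS number: return 1 / -1 / 0 based
// on relational checks, never on subtraction plus toNumber().
const byAvailableCapacity = (a, b) => {
  if (a.availableCapacityInNXM > b.availableCapacityInNXM) return 1;
  if (a.availableCapacityInNXM < b.availableCapacityInNXM) return -1;
  return 0;
};

const vaults = [
  { id: 2, availableCapacityInNXM: 50n * 10n ** 18n }, // 50 NXM, 18 decimals
  { id: 1, availableCapacityInNXM: 20n * 10n ** 18n }, // 20 NXM, 18 decimals
];
console.log(vaults.sort(byAvailableCapacity).map(v => v.id)); // [ 1, 2 ]
```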
In `@src/lib/signQuote.js`:
- Around line 35-45: The AWS KMS signer is constructed with the wrong argument
order and unnecessary guard checks: change the AwsKmsSigner instantiation to
pass (region, keyId, provider) instead of (keyId, region, provider) in
signQuote.js (the AwsKmsSigner call), and tighten the environment-guard to only
require awsKmsKeyId and awsRegion (remove awsAccessKeyId/awsSecretAccessKey from
the if condition) so the correct values are validated before returning the
signer.
In `@src/lib/synchronizer.js`:
- Around line 220-249: updateRiData currently builds vaultProducts but never
seeds the epochExpires/expiries map used by quoteEngine and capacityEngine, so
on cold start expiries[vaultId] is undefined and RI is dropped; modify
updateRiData (or the RI bootstrap path that calls it) to initialize
expiries/epochExpires entries for each vaultId/populated key (e.g., using
store.getState().epoch or chainApi.fetchCurrentEpoch to set a sensible expiry
like currentEpoch + defaultDuration) so that updateEpoch can advance them and
engines see a valid expiries[vaultId]; update references to
expiries/epochExpires consistently where updateEpoch, quoteEngine,
capacityEngine expect them (look for symbols updateRiData, updateEpoch,
expiries, epochExpires, riSubnetworks, vaultProducts).
In `@src/routes/quote.js`:
- Around line 121-127: The OpenAPI block for the /v2/quote endpoint is out of
date: add the new query params and response field by documenting paymentAsset
(integer/number), useRiQuote (boolean) as request/query parameters for /v2/quote
and add quote.riRequest in the response schema; update any examples and the
/v2/quote operationId to reflect these fields so /v2/api/docs shows
paymentAsset, useRiQuote and the riRequest property returned by
quoteEngine/quote. Ensure the parameter names exactly match paymentAsset and
useRiQuote and the response object includes riRequest under quote to align with
the runtime usage in src/routes/quote.js and quoteEngine.
- Around line 123-127: The query param useRiQuote is a string so passing
?useRiQuote=false or ?useRiQuote=0 remains truthy; convert it to a real boolean
before calling quoteEngine by reading req.query.useRiQuote, treating
undefined/null as default true and otherwise mapping common falsy strings
('false','0','no') to false (case-insensitive) and everything else to true, then
pass that boolean into quoteEngine (symbol: useRiQuote / quoteEngine).
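A sketch of the suggested coercion; the helper name is ours, and only `useRiQuote` comes from the route:

```javascript
// Express query params arrive as strings, so '?useRiQuote=false' is
// truthy. Map common falsy spellings to false; default to true when
// the param is absent, matching the behavior described above.
function parseBooleanQueryParam(value, defaultValue = true) {
  if (value === undefined || value === null) {
    return defaultValue;
  }
  return !['false', '0', 'no'].includes(String(value).trim().toLowerCase());
}

console.log(parseBooleanQueryParam(undefined)); // true
console.log(parseBooleanQueryParam('false'));   // false
console.log(parseBooleanQueryParam('0'));       // false
console.log(parseBooleanQueryParam('yes'));     // true
```

The route would then pass `parseBooleanQueryParam(req.query.useRiQuote)` into `quoteEngine` instead of the raw string.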
In `@src/store/reducer.js`:
- Around line 128-137: The reducer handling SET_VAULT_STAKE is incorrectly
writing the vault-level withdrawalAmount into every product row
(newVaultProducts[...] = { ... , withdrawalAmount }), which causes
double-counting when RI capacity sums per product; instead persist
withdrawalAmount at the vault scope or split it proportionally across products
using the same weights as productStakes. Update the SET_VAULT_STAKE branch to
(a) remove writing withdrawalAmount into each state.vaultProducts entry and
instead set state.vaults[vaultId].withdrawalAmount (or similar vault-level
field), or (b) if you must store per-product, compute perProductWithdrawal =
withdrawalAmount * (productStakes[productId] / totalProductStakes) and assign
that to each newVaultProducts[key]; adjust references that sum activeStake +
withdrawalAmount to read the vault-level field or the prorated
perProductWithdrawal accordingly.
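Option (b) above, prorating the vault-level `withdrawalAmount` by product stake, could be sketched as follows; `BigInt` stands in for ethers `BigNumber`, and the names are assumptions:

```javascript
// Split a vault-level withdrawal across products in proportion to each
// product's stake. Integer division truncates, so any rounding dust is
// simply left unallocated in this sketch.
function prorateWithdrawal(withdrawalAmount, productStakes) {
  const total = Object.values(productStakes).reduce((acc, s) => acc + s, 0n);
  const result = {};
  for (const [productId, stake] of Object.entries(productStakes)) {
    result[productId] = total === 0n ? 0n : (withdrawalAmount * stake) / total;
  }
  return result;
}

console.log(prorateWithdrawal(100n, { 5: 30n, 8: 70n })); // { '5': 30n, '8': 70n }
```

Summing the prorated values, instead of re-adding the full `withdrawalAmount` for every product, removes the double-counting.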
In `@src/store/selectors.js`:
- Around line 73-76: The code reads pool.coverAmountInNxm (lowercase "nxm") but
the synced cover objects use coverAmountInNXM (uppercase "NXM"), causing
undefined values and BigNumber.from(...) to throw; update the selector loop that
iterates cover.poolAllocations to use pool.coverAmountInNXM (and treat it as a
BigNumber when appropriate), and make the same correction where
pool.coverAmountInNxm is referenced in the quote engine (the block around the RI
thresholding logic) so both selectors and quoteEngine use pool.coverAmountInNXM
consistently.
In `@test/mocks/chainApi.js`:
- Around line 68-74: The mock fetchRiAssetRate should return the
raw/pre-conversion RI rate and the real per-asset correlation id rather than
returning the already-converted store value; update the fetchRiAssetRate
implementation (used by updateRiAssetNXMRates) to pull the unconverted rate
fixture (e.g., data.riAssetSourceRates or whatever fixture holds the
pre-conversion value) for assetId and return that as assetRate, and set
protocolAssetCorrelationId from the per-asset mapping (e.g.,
data.protocolAssetCorrelationIds[assetId]) so tests exercise the same conversion
+ mapping logic as production.
In `@test/unit/capacityEngine.js`:
- Around line 951-954: The epochExpires entries are combining the BigNumber
period with numeric math using + which causes BigNumber coercion errors; update
the expressions inside epochExpires (the entries built with now.add(...)) to
perform BigNumber arithmetic via .add()/.mul() on period and any numeric
constants (e.g., use period.add(BigNumber.from(RI_EPOCH_DURATION).mul(24 *
3600)).add(100) or similar) so that period, RI_EPOCH_DURATION and the added
offsets are combined with BigNumber methods; apply the same change where
epochExpires is constructed later (the other block referencing period).
In `@test/unit/quoteEngine.js`:
- Around line 188-191: The test mutates the shared fixture mockStore by setting
mockStore.covers[1].start and .period, which leaks state; instead, clone the
cover entry locally within the test (e.g., deep-copy mockStore.covers[1] into a
local variable) and modify the clone’s start and period for this scenario (or
reassign mockStore to a fresh copy at test setup/teardown) so other tests/refund
cases are unaffected; target the code that touches mockStore.covers[1] in
test/unit/quoteEngine.js and ensure the test uses the cloned cover object rather
than mutating the imported fixture in-place.
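The fixture-cloning fix can be as small as this (Node 17+ for `structuredClone`; the cover shape here is illustrative). Note that `structuredClone` drops class prototypes, so `BigNumber` fields would need re-wrapping, and a shallow spread may suffice when only top-level scalars change:

```javascript
// Shared fixture, as imported by many tests.
const mockStore = { covers: { 1: { start: 1000, period: 30, productId: 5 } } };

// Clone before mutating so other tests keep seeing the original values.
const cover = structuredClone(mockStore.covers[1]);
cover.start = 2000;
cover.period = 60;

console.log(cover.start, mockStore.covers[1].start); // 2000 1000
```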
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e4e3f24e-e6a1-4cd7-a618-f336a251364b
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (31)
- package.json
- src/config.js
- src/index.js
- src/lib/capacityEngine.js
- src/lib/chainApi.js
- src/lib/constants.js
- src/lib/eventsApi.js
- src/lib/helpers.js
- src/lib/quoteEngine.js
- src/lib/riContracts/Delegator.json
- src/lib/riContracts/Slasher.json
- src/lib/riContracts/Vault.json
- src/lib/riContracts/data.json
- src/lib/riContracts/index.js
- src/lib/signQuote.js
- src/lib/synchronizer.js
- src/routes/quote.js
- src/store/actions.js
- src/store/cache.js
- src/store/reducer.js
- src/store/riSubnetworks.json
- src/store/selectors.js
- test/mocks/chainApi.js
- test/mocks/store.js
- test/unit/cache.js
- test/unit/capacityEngine.js
- test/unit/eventApi.js
- test/unit/quoteEngine.js
- test/unit/responses.js
- test/unit/routes/quote.js
- test/unit/synchronizer.js
awsAccessKeyId: {
  doc: 'The AWS credentials key ID',
  default: '',
  env: 'AWS_ACCESS_KEY_ID',
},
awsSecretAccessKey: {
  doc: 'The AWS credentials secret access key',
  default: '',
  env: 'AWS_SECRET_ACCESS_KEY',
},
awsKmsKeyId: {
  doc: 'The UUID of the AWS KMS key or AWS KMS key alias',
  default: '',
  env: 'AWS_KMS_KEY_ID',
},
awsRegion: {
  doc: 'The AWS region where the KMS key is located',
  default: '',
  env: 'AWS_REGION',
},
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Config schema keys =="
rg -nP '^\s*[A-Za-z0-9_]+:\s*\{' src/config.js
echo
echo "== provider key lookups =="
rg -nP "config\.get\('provider'\)|config\.get\('providerUrl'\)" src

Repository: NexusMutual/cover-router
Length of output: 520
Add missing providerUrl config key to prevent RI signing failures.
src/lib/signQuote.js (line 33) reads config.get('providerUrl'), but the config schema at src/config.js only declares provider (line 20). In strict config mode, this causes a failure on the first RI signing request.
Fix
provider: {
doc: 'Providers URL',
env: 'PROVIDER_URL',
default: '',
},
+ providerUrl: {
+ doc: 'Providers URL (alias used by RI signer)',
+ env: 'PROVIDER_URL',
+ default: '',
+ },🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config.js` around lines 36 - 55, The config schema is missing the
providerUrl key referenced by config.get('providerUrl') in signQuote.js; add a
new config entry named providerUrl to the exported config object (matching the
style of awsRegion/awsKmsKeyId entries) with a doc string like "The provider
base URL", a default of '' and an env name such as 'PROVIDER_URL' so strict
config mode no longer errors when signQuote.js calls config.get('providerUrl').
const currentEpoch = await riContracts[`vault_${vaultId}`].currentEpoch();
const withdrawalAmount = await riContracts[`vault_${vaultId}`].withdrawals(currentEpoch + 1);
🧩 Analysis chain
🌐 Web query:
In ethers.js v5, can an ethers `BigNumber` be incremented with `currentEpoch + 1`, or must `currentEpoch.add(1)` be used?
💡 Result:
In ethers.js v5, an ethers.BigNumber cannot be incremented using the + operator like currentEpoch + 1. Instead, the method currentEpoch.add(1) must be used. The official ethers.js v5 documentation lists arithmetic operations as methods on the BigNumber class: bigNumber.add(otherValue), bigNumber.sub(otherValue), etc. There is no mention of operator overloading for +, which JavaScript classes do not support natively. This contrasts with v6, where BigNumber is replaced by native BigInt, which does support +. Migration docs explicitly show v5 using .add(value2) vs v6 using value1 + value2. Examples attempting native JS operations on BigNumber-related values (e.g., formatted units) show + concatenates as strings, confirming no arithmetic operator support.
Citations:
- 1: https://docs.ethers.org/v5/api/utils/bignumber/
- 2: https://docs.ethers.org/v6-beta/migrating/
- 3: https://docs.ethers.org/v5/api/utils/bignumber
- 4: https://ethereum.stackexchange.com/questions/150478/adding-bignumbers-with-ethers
Replace currentEpoch + 1 with currentEpoch.add(1) for BigNumber arithmetic.
In ethers.js v5, BigNumber does not support the + operator for arithmetic. Attempting currentEpoch + 1 will fail or coerce to an unexpected value, causing incorrect epoch queries or runtime failures when connected to the actual contract.
Suggested fix
- const withdrawalAmount = await riContracts[`vault_${vaultId}`].withdrawals(currentEpoch + 1);
+ const withdrawalAmount = await riContracts[`vault_${vaultId}`].withdrawals(currentEpoch.add(1));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/chainApi.js` around lines 147 - 148, The code uses JavaScript `+` on
an ethers BigNumber (`currentEpoch + 1`) which is invalid; update the call that
fetches withdrawals on the vault
(riContracts[`vault_${vaultId}`].withdrawals(...)) to use BigNumber arithmetic,
e.g., call `currentEpoch.add(1)` (or `currentEpoch.add(BigNumber.from(1))`)
instead of `currentEpoch + 1` so the epoch argument is a correct BigNumber.
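The coercion failure can be reproduced in plain Node without ethers; `FakeBigNumber` below is a stand-in for illustration, not the real `ethers.BigNumber` class:

```javascript
// Minimal stand-in for a BigNumber-style wrapper (illustration only,
// not the real ethers.BigNumber).
class FakeBigNumber {
  constructor(value) {
    this.value = BigInt(value);
  }
  add(other) {
    return new FakeBigNumber(this.value + BigInt(other));
  }
  toString() {
    return this.value.toString();
  }
}

const currentEpoch = new FakeBigNumber(7);

// `+` coerces the wrapper via toString(), so this concatenates strings:
const wrong = currentEpoch + 1;
// Method-based arithmetic stays numeric:
const right = currentEpoch.add(1);

console.log(wrong, right.toString()); // 71 8
```

The same trap applies anywhere a BigNumber meets a JS numeric operator, which is why the review flags every `+` on epoch values.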
const { start, period, productId, originalCoverId } = await fetchCover(coverId);
const coverAllocations = defaultAbiCoder.decode([constants.RI_DATA_FORMATS[dataFormat]], [data]);
🧩 Analysis chain
🌐 Web query:
In ethers.js v5, what does `defaultAbiCoder.decode(['tuple(uint256 amount,uint256 vaultId,uint256 subnetworkId,uint256 providerId)[]'], data)` return, and should `data` be passed directly or wrapped in `[data]`?
💡 Result:
In ethers.js v5, defaultAbiCoder.decode(['tuple(uint256 amount,uint256 vaultId,uint256 subnetworkId,uint256 providerId)[]'], data) returns a Result object, which is an array subclass containing a single element: an array of arrays. Each inner array represents one tuple (struct) with four elements corresponding to amount, vaultId, subnetworkId, and providerId (as BigNumber objects). If the ABI types had names, the Result would also provide named access (e.g., result[0][0].amount), but since the provided type string lacks names, access is positional only. The data should be passed directly as a hex string (e.g., '0x...'), not wrapped in [data]. The decode method expects the raw ABI-encoded bytes for the specified types; wrapping it in an array would treat it as an encoded array containing that data, which is incorrect for a single top-level array return value. Examples from docs show decode returns nested arrays for tuple and array types, e.g.: - decode(["uint[]", "string"], data) returns [ [1234n, 5678n], "Hello World" ] - decode(["uint", "tuple(uint256, string)"], data) returns [ 1234n, [ 5678n, "Hello World" ] ] For tuple(...)[], it analogously returns [ [ [amt0, vault0, sub0, prov0], [amt1, vault1, sub1, prov1], ... ] ]. Use result[0] to access the array of tuples.
Citations:
- 1: https://docs.ethers.org/v5/api/utils/abi/coder/
- 2: https://docs.ethers.org/v5/api/utils/abi/coder
- 3: https://docs.ethers.org/v5/api/utils/abi/interface/
- 4: https://registry.npmjs.org/abi-coder
🏁 Scripts executed against NexusMutual/cover-router inspected src/lib/chainApi.js, src/lib/helpers.js, src/lib/synchronizer.js, src/lib/constants.js, and the test suite for decodeRiData and RI_DATA_FORMATS usage.
Fix defaultAbiCoder.decode call: pass data directly and unwrap the Result.
The encoded bytes should be passed directly as `data`, not wrapped in `[data]`. Since decode with a single-element type array returns a Result with the value at index 0, unwrap it with destructuring. The same bug exists in decodeRiData() in helpers.js.
Suggested fixes
In chainApi.js (line 161):
- const coverAllocations = defaultAbiCoder.decode([constants.RI_DATA_FORMATS[dataFormat]], [data]);
+ const [coverAllocations] = defaultAbiCoder.decode([constants.RI_DATA_FORMATS[dataFormat]], data);

In helpers.js (line 307):
-const decodeRiData = (data, dataFormat) => {
- return defaultAbiCoder.decode([RI_DATA_FORMATS[dataFormat]], [data]);
-};
+const decodeRiData = (data, dataFormat) => {
+ const [allocations] = defaultAbiCoder.decode([RI_DATA_FORMATS[dataFormat]], data);
+ return allocations;
+};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/chainApi.js` around lines 160 - 162, The decode call is passing the
encoded bytes wrapped in an array and returning a single-element result; change
the call in the place where you call defaultAbiCoder.decode([...], [data]) (near
where fetchCover is used) to pass data directly as the second argument
(defaultAbiCoder.decode([...], data)) and then unwrap the returned array via
destructuring to get the actual value; apply the same fix in decodeRiData in
helpers.js (use constants.RI_DATA_FORMATS[...] as the types, pass data directly,
and destructure the single-result value).
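The decode-then-unwrap pattern can be sketched in plain Node; the `fakeDecode` stub below only mimics the Result-array shape of ethers' `defaultAbiCoder.decode` and is not the real coder:

```javascript
// Stub standing in for ethers v5 defaultAbiCoder.decode (illustration only).
// Like the real coder, it takes raw hex bytes and returns one decoded value
// per requested type, wrapped in a Result-style outer array.
function fakeDecode(types, data) {
  if (typeof data !== 'string' || !data.startsWith('0x')) {
    throw new Error('expected raw hex bytes, not a wrapped array');
  }
  // Canned decode result: one tuple(amount, vaultId, subnetworkId, providerId).
  return types.map(() => [[100n, 1n, 2n, 3n]]);
}

// Wrong: wrapping the bytes in an array is rejected.
let rejected = false;
try {
  fakeDecode(['tuple(uint256,uint256,uint256,uint256)[]'], ['0xabc']);
} catch {
  rejected = true;
}

// Right: pass the bytes directly, then destructure the single-element Result.
const [allocations] = fakeDecode(['tuple(uint256,uint256,uint256,uint256)[]'], '0xabc');

console.log(rejected, allocations.length); // true 1
```

The destructuring on the last call is the unwrap step the suggested fix adds to both chainApi.js and decodeRiData.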
const fetchRiAssetRate = async assetId => {
  return {
    assetRate: await riContracts[`asset_${assetId}`].getRate(),
    protocolAssetCorrelationId: riContracts[`asset_${assetId}`].protocolAssetCorrelationId,
  };
🧩 Analysis chain
🌐 Web query:
In ethers.js v5, do Solidity public variable getters need to be invoked as functions on a `Contract` instance (for example `contract.foo()` vs `contract.foo`)?
💡 Result:
In ethers.js v5, Solidity public variable getters must be invoked as functions on a Contract instance, using `contract.foo()` rather than `contract.foo`. Public state variables in Solidity automatically generate view getter functions in the ABI. Ethers.js treats these as regular view functions, requiring the function call syntax with parentheses and await. Examples: - Simple public uint: await contract.totalAllocPoint() (not contract.totalAllocPoint) - Public array: await contract.poolInfo(2) - Public mapping: await contract.orderByAssetId(address, id). Direct property access like contract.foo (without parentheses) is not supported and will not trigger the blockchain call. Using a Human-Readable ABI requires defining getters explicitly as functions, e.g., 'function totalAllocPoint() view returns (uint256)'. This behavior is consistent across ethers.js v5 documentation, GitHub discussions by maintainers, and Stack Exchange answers confirming the getter function invocation.
Citations:
- 1: How to read public variable from contract using ethers.js? ethers-io/ethers.js#1597
- 2: https://docs.ethers.io/v5/api/contract/contract/
- 3: https://docs.ethers.org/v5/single-page/
- 4: https://ethereum.stackexchange.com/questions/99565/how-to-read-public-variable-from-contract-using-ethers-js
- 5: https://docs.ethers.org/v5/api/utils/abi/formats/
🏁 Scripts executed against NexusMutual/cover-router inspected src/lib/chainApi.js around lines 180-195 and the other protocolAssetCorrelationId usages.
Invoke protocolAssetCorrelationId() as a function call.
In ethers.js v5, public Solidity state variables automatically generate view getter functions that require function call syntax with parentheses and await. Accessing riContracts[`asset_${assetId}`].protocolAssetCorrelationId without invoking it returns a function object instead of the correlation ID value, causing the RI asset-rate sync to fail downstream.
Suggested fix
return {
assetRate: await riContracts[`asset_${assetId}`].getRate(),
- protocolAssetCorrelationId: riContracts[`asset_${assetId}`].protocolAssetCorrelationId,
+ protocolAssetCorrelationId: await riContracts[`asset_${assetId}`].protocolAssetCorrelationId(),
  };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/chainApi.js` around lines 185 - 189, The fetchRiAssetRate function is
returning the protocolAssetCorrelationId function object instead of its value;
change the access of riContracts[`asset_${assetId}`].protocolAssetCorrelationId
to invoke and await the getter (i.e., await
riContracts[`asset_${assetId}`].protocolAssetCorrelationId()) so the actual
correlation ID value is returned alongside the awaited getRate() call.
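The underlying failure mode is plain JavaScript property access, independent of ethers; a minimal stub (the contract object and its return value are hypothetical) shows the difference:

```javascript
// Stub contract: in ethers v5, a public state variable becomes a
// zero-argument getter function on the Contract instance (stubbed here
// synchronously, for illustration).
const contractStub = {
  protocolAssetCorrelationId: () => 42,
};

// Without parentheses you get the function object, not the value:
const notCalled = contractStub.protocolAssetCorrelationId;
// With parentheses (and, against a real contract, an await) you get the value:
const called = contractStub.protocolAssetCorrelationId();

console.log(typeof notCalled, called); // function 42
```

Serializing `notCalled` into the store would silently drop the field or store garbage, which is why the sync fails only downstream.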
  // set to minimal cover id which is not expired
- FETCH_COVER_DATA_FROM_ID: 992,
+ FETCH_COVER_DATA_FROM_ID: 3710,
Don't bootstrap from a moving cover floor.
src/lib/synchronizer.js:88-94 rebuilds the cover scan range from this constant on every updateAll(). Raising it to 3710 means any still-active cover below 3710 can be silently skipped after a restart or cache rebuild. This needs a persisted cursor or a lower bound derived from chain state instead of a date-sensitive constant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/constants.js` around lines 48 - 49, The hardcoded
FETCH_COVER_DATA_FROM_ID constant causes the synchronizer (updateAll in
src/lib/synchronizer.js) to skip still-active covers after restarts; replace
this date-sensitive constant with a persisted cursor or a chain-derived lower
bound: implement a stored cursor (e.g., in DB or file) that updateAll reads and
advances as it scans, falling back to a safe minimal ID or a lower bound
computed from chain state if no cursor exists, and remove/ignore the fixed
FETCH_COVER_DATA_FROM_ID usage so the synchronizer initializes its scan range
from the persisted cursor or chain-derived bound instead.
if (type === SET_VAULT_STAKE) {
  const { vaultId, productIds, productStakes, withdrawalAmount } = payload;
  const newVaultProducts = {};
  for (const productId of productIds) {
    const key = `${productId}_${vaultId}`;
    newVaultProducts[key] = {
      ...state.vaultProducts[key],
      activeStake: productStakes[productId],
      withdrawalAmount,
    };
Don't copy one vault withdrawal into every product row.
withdrawalAmount is fetched once per vault, but this loop writes the same value onto every `${productId}_${vaultId}` entry. The RI capacity path then sums activeStake + withdrawalAmount per vault product, so a vault shared by multiple products will count the same withdrawal multiple times and overstate available capacity. Store this at vault scope or prorate it with the same weights used for productStakes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/reducer.js` around lines 128 - 137, The reducer handling
SET_VAULT_STAKE is incorrectly writing the vault-level withdrawalAmount into
every product row (newVaultProducts[...] = { ... , withdrawalAmount }), which
causes double-counting when RI capacity sums per product; instead persist
withdrawalAmount at the vault scope or split it proportionally across products
using the same weights as productStakes. Update the SET_VAULT_STAKE branch to
(a) remove writing withdrawalAmount into each state.vaultProducts entry and
instead set state.vaults[vaultId].withdrawalAmount (or similar vault-level
field), or (b) if you must store per-product, compute perProductWithdrawal =
withdrawalAmount * (productStakes[productId] / totalProductStakes) and assign
that to each newVaultProducts[key]; adjust references that sum activeStake +
withdrawalAmount to read the vault-level field or the prorated
perProductWithdrawal accordingly.
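Option (b) from the prompt above can be sketched as a pure function (names are illustrative; BigInt division mirrors BigNumber-style integer math):

```javascript
// Split a vault-level withdrawal across products using the same weights
// as productStakes (illustrative helper, not the router's reducer code).
function prorateWithdrawal(productStakes, withdrawalAmount) {
  const total = Object.values(productStakes).reduce((a, b) => a + b, 0n);
  const perProduct = {};
  for (const [productId, stake] of Object.entries(productStakes)) {
    // Floor division, as BigNumber.div would do; guard the zero-stake case.
    perProduct[productId] = total === 0n ? 0n : (withdrawalAmount * stake) / total;
  }
  return perProduct;
}

// A vault staked 750/250 across two products splits a 100 withdrawal 75/25:
const split = prorateWithdrawal({ 1: 750n, 2: 250n }, 100n);
console.log(split['1'], split['2']); // 75n 25n
```

Because the prorated shares sum to at most the vault-level amount, summing activeStake + perProductWithdrawal per product can no longer overstate capacity.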
for (const pool of cover.poolAllocations) {
  const coverAmount = BigNumber.isBigNumber(pool.coverAmountInNxm)
    ? pool.coverAmountInNxm
    : BigNumber.from(pool.coverAmountInNxm);
Read coverAmountInNXM from synced covers.
src/lib/chainApi.js (lines 95-96) stores pool allocations under coverAmountInNXM, and src/lib/synchronizer.js (lines 111-119) preserves that shape. pool.coverAmountInNxm is therefore undefined on live state, so BigNumber.from(...) throws the first time RI thresholding inspects an on-chain cover. The same typo is repeated in src/lib/quoteEngine.js (lines 359-363).
Suggested fix
- const coverAmount = BigNumber.isBigNumber(pool.coverAmountInNxm)
- ? pool.coverAmountInNxm
- : BigNumber.from(pool.coverAmountInNxm);
+ const rawCoverAmount = pool.coverAmountInNXM ?? pool.coverAmountInNxm ?? 0;
+ const coverAmount = BigNumber.isBigNumber(rawCoverAmount)
+ ? rawCoverAmount
+ : BigNumber.from(rawCoverAmount);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/store/selectors.js` around lines 73 - 76, The code reads
pool.coverAmountInNxm (lowercase "nxm") but the synced cover objects use
coverAmountInNXM (uppercase "NXM"), causing undefined values and
BigNumber.from(...) to throw; update the selector loop that iterates
cover.poolAllocations to use pool.coverAmountInNXM (and treat it as a BigNumber
when appropriate), and make the same correction where pool.coverAmountInNxm is
referenced in the quote engine (the block around the RI thresholding logic) so
both selectors and quoteEngine use pool.coverAmountInNXM consistently.
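The defensive read from the suggested fix can be isolated as a tiny helper (hypothetical name, shown with plain numbers for brevity):

```javascript
// Prefer the field name the synchronizer writes (coverAmountInNXM),
// fall back to the lowercase variant, default to 0 (illustrative helper).
function readCoverAmount(pool) {
  return pool.coverAmountInNXM ?? pool.coverAmountInNxm ?? 0;
}

console.log(
  readCoverAmount({ coverAmountInNXM: 5 }), // 5
  readCoverAmount({ coverAmountInNxm: 7 }), // 7
  readCoverAmount({}), // 0
);
```

Routing every read through one helper like this also makes the eventual removal of the legacy casing a one-line change.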
fetchRiAssetRate: async assetId => {
  const rate = data.riAssetRates[assetId];
  return {
    assetRate: BigNumber.from(rate),
    protocolAssetCorrelationId: 0,
  };
},
Mock the pre-conversion RI rate, not state.riAssetRates.
updateRiAssetNXMRates() derives riAssetRates from { assetRate, protocolAssetCorrelationId }, so returning data.riAssetRates[assetId] here feeds the already-converted store value back into that calculation. That makes RI rate-sync tests inaccurate, and the hard-coded protocolAssetCorrelationId: 0 means they never cover the per-asset mapping used in production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/mocks/chainApi.js` around lines 68 - 74, The mock fetchRiAssetRate
should return the raw/pre-conversion RI rate and the real per-asset correlation
id rather than returning the already-converted store value; update the
fetchRiAssetRate implementation (used by updateRiAssetNXMRates) to pull the
unconverted rate fixture (e.g., data.riAssetSourceRates or whatever fixture
holds the pre-conversion value) for assetId and return that as assetRate, and
set protocolAssetCorrelationId from the per-asset mapping (e.g.,
data.protocolAssetCorrelationIds[assetId]) so tests exercise the same conversion
+ mapping logic as production.
epochExpires: {
  1: now.add(RI_EPOCH_DURATION * 24 * 3600 + period + 100),
  2: now.add(RI_EPOCH_DURATION * 24 * 3600 + period + 100),
},
🧩 Analysis chain
🏁 Scripts executed against NexusMutual/cover-router inspected test/unit/capacityEngine.js, test/unit/utils.js, and src/lib/constants.js to confirm the types involved: now and period are BigNumbers (period comes from SECONDS_PER_DAY.mul(30)) while RI_EPOCH_DURATION is a plain number.
Use the .add() method to properly combine BigNumber values.
The period variable is a BigNumber (from SECONDS_PER_DAY.mul(30)), and using the + operator to combine it with other values will cause type coercion issues. In ethers v5, BigNumber arithmetic requires method-based operations.
Suggested fix
epochExpires: {
- 1: now.add(RI_EPOCH_DURATION * 24 * 3600 + period + 100),
- 2: now.add(RI_EPOCH_DURATION * 24 * 3600 + period + 100),
+ 1: now.add(RI_EPOCH_DURATION * 24 * 3600).add(period).add(100),
+ 2: now.add(RI_EPOCH_DURATION * 24 * 3600).add(period).add(100),
  },

Also applies to: 1021-1023
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/capacityEngine.js` around lines 951 - 954, The epochExpires entries
are combining the BigNumber period with numeric math using + which causes
BigNumber coercion errors; update the expressions inside epochExpires (the
entries built with now.add(...)) to perform BigNumber arithmetic via
.add()/.mul() on period and any numeric constants (e.g., use
period.add(BigNumber.from(RI_EPOCH_DURATION).mul(24 * 3600)).add(100) or
similar) so that period, RI_EPOCH_DURATION and the added offsets are combined
with BigNumber methods; apply the same change where epochExpires is constructed
later (the other block referencing period).
const startTime = now.sub(TRANCHE_DURATION + 14 * SECONDS_PER_DAY);
mockStore.covers[1].start = startTime.toNumber();
// Extend period to ensure cover is still active (original period was 30 days)
mockStore.covers[1].period = 120 * 24 * 3600; // 120 days to ensure it's still active
Avoid mutating the shared fixture in-place.
mockStore is imported once at file scope, so changing covers[1].period here leaks into the later refund cases and any other suite that reuses the same fixture. Clone/reset the cover per test so these RI scenarios stay isolated.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/quoteEngine.js` around lines 188 - 191, The test mutates the shared
fixture mockStore by setting mockStore.covers[1].start and .period, which leaks
state; instead, clone the cover entry locally within the test (e.g., deep-copy
mockStore.covers[1] into a local variable) and modify the clone’s start and
period for this scenario (or reassign mockStore to a fresh copy at test
setup/teardown) so other tests/refund cases are unaffected; target the code that
touches mockStore.covers[1] in test/unit/quoteEngine.js and ensure the test uses
the cloned cover object rather than mutating the imported fixture in-place.
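One way to keep the fixture pristine, assuming Node 17+ for structuredClone (the helper name and values below are illustrative):

```javascript
// Deep-copy the shared fixture per test so mutations don't leak
// (illustrative helper, not the actual test utilities).
const mockStore = { covers: { 1: { start: 1000, period: 30 * 24 * 3600 } } };

function withClonedCover(store, coverId, mutate) {
  const cover = structuredClone(store.covers[coverId]);
  mutate(cover);
  return cover; // the test uses the clone; the fixture stays untouched
}

const modified = withClonedCover(mockStore, 1, cover => {
  cover.period = 120 * 24 * 3600; // the RI scenario's extended period
});

console.log(modified.period, mockStore.covers[1].period); // 10368000 2592000
```

A beforeEach that reassigns the store from a frozen template achieves the same isolation if many tests need the mutation.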