
likenft-indexer: batch BookNFT eth_getLogs calls to reduce Alchemy CU spend #308

Closed
williamchong wants to merge 2 commits into likecoin:main from williamchong:main

Conversation

@williamchong
Member

Batch per-contract acquire-book-nft-events tasks into groups of 50 (configurable via TASK_ACQUIRE_BOOKNFT_BATCH_SIZE). The inner handler already supports multiple addresses, so this change is purely in the task wrapping and enqueuing layer.

  • Add TaskAcquireBookNFTBatchSize config field
  • Change lifecycle task payload to accept []string with backward compat
  • Batch lifecycles before enqueuing, one asynq task per batch
  • Enqueue inside WithEnqueueing callback to avoid orphaned lifecycles
  • Guard against negative fetch count when queue is full
  • Add per-address fallback when batched eth_getLogs exceeds response limits


Copilot AI left a comment


Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.



Comment on lines +36 to 39
```diff
 func NewTypeAcquireBookNFTEventsTaskPayloadWithLifecycle(contractAddresses []string) (*asynq.Task, error) {
 	payload, err := json.Marshal(AcquireBookNFTEventsTaskPayloadWithLifecyclePayload{
-		ContractAddress:   contractAddress,
+		ContractAddresses: contractAddresses,
 	})
```

Copilot AI Mar 3, 2026


Rolling deploy compatibility risk: NewTypeAcquireBookNFTEventsTaskPayloadWithLifecycle now only serializes ContractAddresses. Any older worker binary that still expects ContractAddress will treat new tasks as empty and likely fail. If mixed versions can run concurrently, consider a rollout guard (e.g., keep batch size = 1 until all workers are updated) and/or populate ContractAddress when len(contractAddresses)==1 so single-address tasks remain readable by older code.

Comment on lines +62 to +71
```go
// Each queued task now represents a batch of contracts.
// NOTE: TaskAcquireBookNFTMaxQueueLength is effectively a limit on contracts
// in flight (tasks * batchSize), not a limit on the raw number of queued tasks.
estimatedContractsInFlight := currentLength * batchSize
maxContractsInFlight := config.TaskAcquireBookNFTMaxQueueLength
numberOfItemsToBeFetched := maxContractsInFlight - estimatedContractsInFlight
if numberOfItemsToBeFetched <= 0 {
	mylogger.Info("queue is full, skipping", "estimatedContractsInFlight", estimatedContractsInFlight)
	return nil
}
```

Copilot AI Mar 3, 2026


TaskAcquireBookNFTMaxQueueLength is now used as a contracts-in-flight limit (tasks * batchSize), but the name/env var still reads like a task-queue-length limit. This can lead to surprising production behavior/configuration mistakes (e.g., default 500 becomes only 10 tasks at batchSize=50). Consider introducing a clearly named config for contract-level concurrency (or renaming the field and keeping the old env var as a deprecated alias).

Suggested change

```diff
-	// Each queued task now represents a batch of contracts.
-	// NOTE: TaskAcquireBookNFTMaxQueueLength is effectively a limit on contracts
-	// in flight (tasks * batchSize), not a limit on the raw number of queued tasks.
-	estimatedContractsInFlight := currentLength * batchSize
-	maxContractsInFlight := config.TaskAcquireBookNFTMaxQueueLength
-	numberOfItemsToBeFetched := maxContractsInFlight - estimatedContractsInFlight
-	if numberOfItemsToBeFetched <= 0 {
-		mylogger.Info("queue is full, skipping", "estimatedContractsInFlight", estimatedContractsInFlight)
-		return nil
-	}
+	// Each queued task represents a batch of contracts. The configuration
+	// TaskAcquireBookNFTMaxQueueLength is interpreted as a limit on the number
+	// of queued tasks (pending + active). We derive how many contracts we can
+	// fetch from that task-level limit and the configured batch size.
+	maxTasksInQueue := config.TaskAcquireBookNFTMaxQueueLength
+	numberOfTasksToEnqueue := maxTasksInQueue - currentLength
+	if numberOfTasksToEnqueue <= 0 {
+		mylogger.Info("queue is full, skipping", "currentLength", currentLength, "maxTasksInQueue", maxTasksInQueue)
+		return nil
+	}
+	numberOfItemsToBeFetched := numberOfTasksToEnqueue * batchSize
```

rickmak and others added 2 commits March 4, 2026 17:15
Command run outside ignition for reference

```
export VELIKE=0xE55C2b91E688BE70e5BbcEdE3792d723b4766e2B
export LIKE=0x1EE5DD1794C28F559f94d2cc642BaE62dC3be5cf
export OLD_REWARD=0x465629cedF312B77C48602D5AfF1Ecb4FEb1Bf62
export NEW_REWARD=0x5806B7fb388C2D3D894fE40Ba4598938b01496BA
export RPC=https://base-sepolia.g.alchemy.com/v2/<API>
```

```
cast send 0x5806B7fb388C2D3D894fE40Ba4598938b01496BA \
  "addReward(address,uint256,uint256,uint256)" \
  0x355F3b91529b030EB57F545177A3AC687f9d08E3 \
  500000000000 \
  1772668800 \
  1780614000 \
  --account likecoin-deployer.eth --rpc-url $RPC

cast send $VELIKE "setLockTime(uint256)" 0 \
  --account likecoin-deployer.eth --rpc-url $RPC

cast send $VELIKE "setRewardContract(address)" $OLD_REWARD \
  --account likecoin-deployer.eth \
  --rpc-url $RPC

cast send $NEW_REWARD "initTotalStaked()" \
  --account likecoin-deployer.eth \
  --rpc-url $RPC

cast send $VELIKE "setLegacyRewardContract(address,bool)" $OLD_REWARD true \
  --account likecoin-deployer.eth --rpc-url $RPC
```
likenft-indexer: batch BookNFT eth_getLogs calls to reduce Alchemy CU spend

Batch per-contract acquire-book-nft-events tasks into groups of 50
(configurable via TASK_ACQUIRE_BOOKNFT_BATCH_SIZE). The inner handler
already supports multiple addresses, so this change is purely in the
task wrapping and enqueuing layer.

- Add TaskAcquireBookNFTBatchSize config field
- Change lifecycle task payload to accept []string with backward compat
- Batch lifecycles before enqueuing, one asynq task per batch
- Enqueue inside WithEnqueueing callback to avoid orphaned lifecycles
- Guard against negative fetch count when queue is full
- Add per-address fallback when batched eth_getLogs exceeds response limits
@rickmak
Member

rickmak commented Mar 5, 2026

Merge via 312da07

@rickmak rickmak closed this Mar 5, 2026
