likenft-indexer: batch BookNFT eth_getLogs calls to reduce Alchemy CU…#308
williamchong wants to merge 2 commits into likecoin:main
Conversation
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 2 comments.
```diff
 func NewTypeAcquireBookNFTEventsTaskPayloadWithLifecycle(contractAddresses []string) (*asynq.Task, error) {
 	payload, err := json.Marshal(AcquireBookNFTEventsTaskPayloadWithLifecyclePayload{
-		ContractAddress:   contractAddress,
+		ContractAddresses: contractAddresses,
 	})
```
Rolling deploy compatibility risk: NewTypeAcquireBookNFTEventsTaskPayloadWithLifecycle now only serializes ContractAddresses. Any older worker binary that still expects ContractAddress will treat new tasks as empty and likely fail. If mixed versions can run concurrently, consider a rollout guard (e.g., keep batch size = 1 until all workers are updated) and/or populate ContractAddress when len(contractAddresses)==1 so single-address tasks remain readable by older code.
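The guard suggested above could look roughly like the sketch below: keep the legacy singular field and populate it whenever the batch contains exactly one address. The struct and constructor names follow the diff, but the JSON tags are assumptions, and the asynq task wrapping is omitted to keep the sketch self-contained.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Payload sketch: the deprecated singular field is retained so that
// pre-batching workers can still read single-address tasks during a
// rolling deploy. JSON tag names are assumptions.
type AcquireBookNFTEventsTaskPayloadWithLifecyclePayload struct {
	// Deprecated: read only by older worker binaries.
	ContractAddress   string   `json:"contract_address,omitempty"`
	ContractAddresses []string `json:"contract_addresses"`
}

func marshalLifecyclePayload(addrs []string) ([]byte, error) {
	p := AcquireBookNFTEventsTaskPayloadWithLifecyclePayload{ContractAddresses: addrs}
	if len(addrs) == 1 {
		// Single-address tasks stay readable by pre-batching workers.
		p.ContractAddress = addrs[0]
	}
	return json.Marshal(p)
}

func main() {
	single, _ := marshalLifecyclePayload([]string{"0xabc"})
	batch, _ := marshalLifecyclePayload([]string{"0xabc", "0xdef"})
	fmt.Println(string(single)) // {"contract_address":"0xabc","contract_addresses":["0xabc"]}
	fmt.Println(string(batch))  // {"contract_addresses":["0xabc","0xdef"]}
}
```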
```go
// Each queued task now represents a batch of contracts.
// NOTE: TaskAcquireBookNFTMaxQueueLength is effectively a limit on contracts
// in flight (tasks * batchSize), not a limit on the raw number of queued tasks.
estimatedContractsInFlight := currentLength * batchSize
maxContractsInFlight := config.TaskAcquireBookNFTMaxQueueLength
numberOfItemsToBeFetched := maxContractsInFlight - estimatedContractsInFlight
if numberOfItemsToBeFetched <= 0 {
	mylogger.Info("queue is full, skipping", "estimatedContractsInFlight", estimatedContractsInFlight)
	return nil
}
```
TaskAcquireBookNFTMaxQueueLength is now used as a contracts-in-flight limit (tasks * batchSize), but the name/env var still reads like a task-queue-length limit. This can lead to surprising production behavior/configuration mistakes (e.g., default 500 becomes only 10 tasks at batchSize=50). Consider introducing a clearly named config for contract-level concurrency (or renaming the field and keeping the old env var as a deprecated alias).
Suggested change:

```diff
-// Each queued task now represents a batch of contracts.
-// NOTE: TaskAcquireBookNFTMaxQueueLength is effectively a limit on contracts
-// in flight (tasks * batchSize), not a limit on the raw number of queued tasks.
-estimatedContractsInFlight := currentLength * batchSize
-maxContractsInFlight := config.TaskAcquireBookNFTMaxQueueLength
-numberOfItemsToBeFetched := maxContractsInFlight - estimatedContractsInFlight
-if numberOfItemsToBeFetched <= 0 {
-	mylogger.Info("queue is full, skipping", "estimatedContractsInFlight", estimatedContractsInFlight)
-	return nil
-}
+// Each queued task represents a batch of contracts. The configuration
+// TaskAcquireBookNFTMaxQueueLength is interpreted as a limit on the number
+// of queued tasks (pending + active). We derive how many contracts we can
+// fetch from that task-level limit and the configured batch size.
+maxTasksInQueue := config.TaskAcquireBookNFTMaxQueueLength
+numberOfTasksToEnqueue := maxTasksInQueue - currentLength
+if numberOfTasksToEnqueue <= 0 {
+	mylogger.Info("queue is full, skipping", "currentLength", currentLength, "maxTasksInQueue", maxTasksInQueue)
+	return nil
+}
+numberOfItemsToBeFetched := numberOfTasksToEnqueue * batchSize
```
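To make the gap between the two readings concrete, the sketch below computes both with the example values from the review comment (limit 500, batch size 50); the `currentLength` of 4 is an illustrative value, not from the PR.

```go
package main

import "fmt"

// contractsUnderEachReading compares the two interpretations of
// TaskAcquireBookNFTMaxQueueLength discussed in the review.
func contractsUnderEachReading(maxQueueLength, batchSize, currentLength int) (asContractLimit, asTaskLimit int) {
	// Reading 1: the limit caps contracts in flight (tasks * batchSize).
	asContractLimit = maxQueueLength - currentLength*batchSize
	// Reading 2: the limit caps queued tasks; contracts scale with batch size.
	asTaskLimit = (maxQueueLength - currentLength) * batchSize
	return
}

func main() {
	a, b := contractsUnderEachReading(500, 50, 4)
	fmt.Println(a, b) // 300 vs 24800 contracts fetchable: an 80x difference
}
```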
Command run outside ignition for reference

```shell
export VELIKE=0xE55C2b91E688BE70e5BbcEdE3792d723b4766e2B
export LIKE=0x1EE5DD1794C28F559f94d2cc642BaE62dC3be5cf
export OLD_REWARD=0x465629cedF312B77C48602D5AfF1Ecb4FEb1Bf62
export NEW_REWARD=0x5806B7fb388C2D3D894fE40Ba4598938b01496BA
export RPC=https://base-sepolia.g.alchemy.com/v2/<API>

cast send 0x5806B7fb388C2D3D894fE40Ba4598938b01496BA \
  "addReward(address,uint256,uint256,uint256)" \
  0x355F3b91529b030EB57F545177A3AC687f9d08E3 \
  500000000000 \
  1772668800 \
  1780614000 \
  --account likecoin-deployer.eth --rpc-url $RPC

cast send $VELIKE "setLockTime(uint256)" 0 \
  --account likecoin-deployer.eth --rpc-url $RPC

cast send $VELIKE "setRewardContract(address)" $OLD_REWARD \
  --account likecoin-deployer.eth --rpc-url $RPC

cast send $NEW_REWARD "initTotalStaked()" \
  --account likecoin-deployer.eth --rpc-url $RPC

cast send $VELIKE "setLegacyRewardContract(address,bool)" $OLD_REWARD true \
  --account likecoin-deployer.eth --rpc-url $RPC
```
… spend

Batch per-contract acquire-book-nft-events tasks into groups of 50 (configurable via TASK_ACQUIRE_BOOKNFT_BATCH_SIZE). The inner handler already supports multiple addresses, so this change is purely in the task wrapping and enqueuing layer.

- Add TaskAcquireBookNFTBatchSize config field
- Change lifecycle task payload to accept []string with backward compat
- Batch lifecycles before enqueuing, one asynq task per batch
- Enqueue inside WithEnqueueing callback to avoid orphaned lifecycles
- Guard against negative fetch count when queue is full
- Add per-address fallback when batched eth_getLogs exceeds response limits
Merge via 312da07