Conversation

@sebastianst sebastianst (Member) commented Aug 12, 2025

A DA footprint block limit is introduced to mitigate DA spam and prevent priority fee auctions. By tracking each block's DA footprint alongside its gas usage, this approach counts high estimated DA impact against the block gas limit (where a transaction's impact comes from its estimated compressed size, including calldata and other metadata that is part of its batch data) without altering individual transaction gas mechanics. Preliminary analysis shows minimal impact on most blocks on production networks like Base and OP Mainnet.

Closes ethereum-optimism/optimism#17009.

@sebastianst sebastianst force-pushed the seb/calldata-block-limit branch from dbe366c to 372ebdd on August 12, 2025 15:05
@sebastianst sebastianst changed the title Jovian: Calldata footprint block limit Jovian: DA footprint block limit Sep 3, 2025
@sebastianst sebastianst force-pushed the seb/calldata-block-limit branch 5 times, most recently from 7dc1ff4 to 3b6635e, on September 4, 2025 15:16
@geoknee geoknee (Contributor) previously requested changes Sep 4, 2025

From today's review session:

  • We would like to double check the data analysis to ensure it matches napkin math e.g. around throttling
  • We would like to consider if we can modify the design to allow for a higher target compared to the gas limit
  • We are aligned on the overall design as the best we have yet considered for addressing the core problem
  • We are also aligned on having the scalar be configurable


The following tables show, for each analyzed chain, the resulting statistics:
* *Scalar*: DA footprint gas scalar value.
* *Effective Limit*: The DA usage limit that the given gas scalar would imply (`block_gas_limit / da_footprint_gas_scalar`), in estimated compressed bytes.
@niran niran (Contributor) commented Sep 4, 2025

There's also an implied "Effective Target" for DA usage of block_gas_limit / da_footprint_gas_scalar / elasticity_multiplier. For chains with elasticity_multiplier == 2 and desired DA limits much lower than the L1's total capacity, this isn't much of an issue. But Base has elasticity_multiplier == 3, which results in a target that is 1/3 of the limit.

One approach that would address this would be to move from a configurable da_footprint_gas_scalar to a configurable da_usage_target and da_usage_limit. For each block, we'd first ensure that da_usage_estimate <= da_usage_limit for the block to be valid. Then, to calculate the change in base fee, we'd compute the excess_da_usage_estimate and multiply it by block_gas_limit / da_usage_limit to get an excess_da_footprint value that is comparable to excess_gas_used and can potentially be negative. We'd calculate da_footprint = min(block_gas_limit / elasticity_multiplier + excess_da_footprint, block_gas_limit), which can also potentially be negative. Finally, we'd set gas_used = max(sum(tx.gas_used), da_footprint).
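A minimal Python sketch of this target/limit variant, to make the steps concrete. The function name and parameter names beyond those in the comment are my own, and this describes the proposed alternative, not the merged design:

```python
# Hypothetical sketch of the da_usage_target / da_usage_limit variant.

def da_adjusted_gas_used(
    sum_tx_gas_used: int,
    da_usage_estimate: int,
    da_usage_target: int,       # proposed configurable value
    da_usage_limit: int,        # proposed configurable value
    block_gas_limit: int,
    elasticity_multiplier: int,
) -> int:
    # Block validity: estimated DA usage must not exceed the limit.
    assert da_usage_estimate <= da_usage_limit, "block invalid: DA limit exceeded"

    # Excess DA usage relative to the target (may be negative).
    excess_da_usage_estimate = da_usage_estimate - da_usage_target

    # Scale into gas units so it is comparable to excess_gas_used.
    excess_da_footprint = excess_da_usage_estimate * block_gas_limit // da_usage_limit

    # Footprint around the gas target, capped at the gas limit (may be negative).
    da_footprint = min(
        block_gas_limit // elasticity_multiplier + excess_da_footprint,
        block_gas_limit,
    )

    # The header's gas_used becomes the max of both resources.
    return max(sum_tx_gas_used, da_footprint)
```

With Base-like parameters (150M gas limit, elasticity 3, target 192,307, limit 576,923), a block at exactly the DA target yields a footprint equal to the gas target, and a block at the DA limit yields a footprint capped at the gas limit.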

@niran niran (Contributor) commented Sep 9, 2025

I think we can proceed with just a single scalar. Here's how I think we'd approach setting the scalar.

No chain can sustain more than 64 kB/s of compressed data because that's the target throughput for the entire blobspace, so we all have l1_target_throughput = 64000.

da_footprint_gas_scalar = block_gas_limit / (block_time * l1_target_throughput * estimation_ratio * elasticity_multiplier)

For Base, our estimated DA sizes in production seem to be about 50-100% higher than the actual sizes, giving us an estimation_ratio = 1.5. With an elasticity of 3, a block time of two seconds, and a block gas limit of 150M, we get:

da_footprint_gas_scalar = 150,000,000 / (2 * 64000 * 1.5 * 3) = 260.4

The base fee should only begin to rise when the DA footprint is above the block gas target. To double-check that, we calculate da_usage_target = block_gas_limit / elasticity_multiplier / da_footprint_gas_scalar = 150,000,000 / 3 / 260 = 192,307, which is a good point for Base to start pricing out DA usage. Unfortunately, this produces a da_usage_limit = block_gas_limit / da_footprint_gas_scalar = 150,000,000 / 260 = 576,923, which is significantly higher than the always_throttle value we already use. But I think that's okay! We would get base fees that increase when estimated DA usage is above 192 kB per block, and would rely on sequencer throttling to enforce the maximum in practice.

Assuming an estimation_ratio = 1.5 for all OP Stack chains, here are the values that price out DA throughput above 64 kB/s.

OP Mainnet = 40,000,000 / (2 * 64000 * 1.5 * 2) = 104.2
Ink = 30,000,000 / (1 * 64000 * 1.5 * 2) = 156.25
Unichain = 30,000,000 / (1 * 64000 * 1.5 * 2) = 156.25
Soneium = 40,000,000 / (2 * 64000 * 1.5 * 2) = 104.2
Mode = 30,000,000 / (2 * 64000 * 1.5 * 2) = 78.125
World Chain = 60,000,000 / (2 * 64000 * 1.5 * 2) = 156.25

Since most chains don't need to be concerned about a DA target that prices out usage, they can focus on using the scalar for the DA limit rather than the target. Multiplying each of those values by elasticity_multiplier * blob_target / blob_limit = 2 * 6 / 9 = 4/3 will produce scalars that prevent a single chain from ever exceeding the throughput of the blob limit on its own. Scalar values higher than that can be used to target a particular share of blob throughput.

Setting all chains to da_footprint_gas_scalar = 260 shouldn't cause problems for any chain (though World Chain would want to change this if they approach 60% of blob throughput). The original proposal of 800 is probably fine for chains that expect to use less than 10% of blob throughput.
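The per-chain scalars above can be reproduced with a small Python sketch. The chain parameters (gas limit, block time, elasticity multiplier) are the values assumed in this comment, not authoritative chain configuration:

```python
# Reproduce the scalar derivation: da_footprint_gas_scalar =
#   block_gas_limit / (block_time * l1_target_throughput * estimation_ratio * elasticity_multiplier)

L1_TARGET_THROUGHPUT = 64_000  # bytes/s, target throughput of the entire blobspace
ESTIMATION_RATIO = 1.5         # estimated DA size vs. actual compressed size (assumed)

def da_footprint_gas_scalar(block_gas_limit: int, block_time: int,
                            elasticity_multiplier: int) -> float:
    return block_gas_limit / (
        block_time * L1_TARGET_THROUGHPUT * ESTIMATION_RATIO * elasticity_multiplier
    )

chains = {
    # name: (block_gas_limit, block_time, elasticity_multiplier), as assumed above
    "Base":        (150_000_000, 2, 3),
    "OP Mainnet":   (40_000_000, 2, 2),
    "Ink":          (30_000_000, 1, 2),
    "Unichain":     (30_000_000, 1, 2),
    "Soneium":      (40_000_000, 2, 2),
    "Mode":         (30_000_000, 2, 2),
    "World Chain":  (60_000_000, 2, 2),
}

for name, (gas_limit, block_time, elasticity) in chains.items():
    print(name, round(da_footprint_gas_scalar(gas_limit, block_time, elasticity), 2))
```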

@niran niran (Contributor) commented Sep 10, 2025

It might be even simpler to have the configurable value be something like estimated_da_target, which would be the L1's target throughput per second, expressed as the estimated FastLZ equivalent. In other words, estimated_da_target = l1_target_throughput * estimation_ratio. Then this configured value would only need to change when blob capacity changes, and it would likely be the same for every OP Stack chain. The actual da_footprint_gas_scalar would be calculated on demand by block_gas_limit / (block_time * estimated_da_target * elasticity_multiplier). All of those values come from the chain specification or system config.

(This approach also works if we want estimated_da_limit to be what we configure instead of the target. But configuring either the target or the limit seems smoother to operate than configuring the scalar directly.)

Contributor

We talked about this on a call, and having the effective DA limit and target be proportional to the block gas limit is actually a good design for most OP Stack chains, because they'll use more data as they grow. Only a handful of OP Stack chains are concerned about exceeding the L1 DA capacity (e.g. Base, World Chain, OP Mainnet), and with the right documentation, those chains can take special care to adjust the da_footprint_gas_scalar whenever they adjust the block gas limit or elasticity multiplier.

Member Author

Thanks @niran for your thorough analysis of the scalar value! After fixing several flaws in our analysis, we have updated the numbers for all chains, and it seems that a scalar value of 400 would have minimal to no impact: back-testing with this scalar showed that the DA footprint exceeds the gas used for almost no blocks. This value is slightly higher than your proposed default of 260, but given that the observed impact would be minimal, I'd favor going with this higher value as the default to achieve stronger ecosystem-wide protection against DA spam. Individual chains may still choose to increase their DA usage targets and limits by lowering the scalar.

Only starting at a scalar of 600 did we observe meaningful impact for some chains, where the DA footprint exceeded the total gas used in a significant percentage of blocks. Interestingly, for the Base random sample, even with 600 the DA footprint exceeded the gas usage in only 5.4% of blocks.

Contributor

400 sounds like a sensible default to me! These are the shares of current DA capacity that each chain would be able to use before they begin to price out transactions:

| Chain | Share |
| --- | --- |
| Base | 65.1% |
| Ink | 39.1% |
| Unichain | 39.1% |
| World Chain | 39.1% |
| OP Mainnet | 26.1% |
| Soneium | 26.1% |
| Mode | 19.5% |

I suspect Base and World Chain exceed those shares regularly, so they'd want to set a scalar lower than 400. All other chains should have good outcomes with 400.
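The shares above follow from the parameters assumed earlier in this thread: with a scalar of 400, a chain's estimated DA target per second is block_gas_limit / (scalar × elasticity_multiplier × block_time), converted to actual bytes via the assumed estimation ratio of 1.5 and compared against the 64 kB/s blobspace target. A hedged sketch (the function name is mine):

```python
# Reproduce the share-of-DA-capacity figures for a scalar of 400,
# using the chain parameters assumed in the earlier comments.

SCALAR = 400
L1_TARGET_THROUGHPUT = 64_000  # bytes/s for the entire blobspace
ESTIMATION_RATIO = 1.5         # assumed estimated vs. actual size ratio

def capacity_share(block_gas_limit: int, block_time: int,
                   elasticity_multiplier: int) -> float:
    # Estimated DA bytes/s at which base fees start to rise (the DA target).
    est_target_per_s = block_gas_limit / (SCALAR * elasticity_multiplier * block_time)
    # Convert to actual compressed bytes/s and compare to L1 capacity.
    return est_target_per_s / ESTIMATION_RATIO / L1_TARGET_THROUGHPUT

print(f"Base:        {capacity_share(150_000_000, 2, 3):.1%}")  # 65.1%
print(f"World Chain: {capacity_share(60_000_000, 2, 2):.1%}")   # 39.1%
print(f"Mode:        {capacity_share(30_000_000, 2, 2):.1%}")   # 19.5%
```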

Comment on lines +73 to +71
as the sum total of all transaction's gas used. However, it is proposed to just repurpose a block's `gas_used` field to
hold the maximum over both resources' totals:
Contributor

I would expect gas_used header fields that don't match the sum of the gas used by each transaction to break something somewhere. I don't know what would break, but I'd be surprised if nothing is relying on that invariant.

Member Author

It would certainly have an impact on some users, especially data analytics services. This is touched upon briefly in the "Impact on Developer Experience" section. It doesn't seem to break anything inside node software, though: current implementations just work with this change and apparently don't rely on this invariant.

I have added two alternatives to the Alternatives section. We may introduce a new field `gasMetered` (like in the ethresearch post that inspired this proposal) or repurpose the currently unused `blobGasUsed` field instead. I'm beginning to like the latter alternative.

Contributor

Yeah, `blobGasUsed` seems like the safest approach to me, and we could always change it later if some sort of L2 blob feature comes along. The downside is that the name of the field will be confusing, but I think that's fine. Adding a new header field is the cleanest, but I think we'd be likely to break things since we've never added a header field before.

Member Author

We are internally aligning on using `blobGasUsed`. I suggest we leave this design doc describing the initial approach, to preserve the history of how this feature evolved. The `blobGasUsed` variant is mentioned in the alternatives.

@sebastianst sebastianst force-pushed the seb/calldata-block-limit branch from 87ed36d to 51b008b on September 22, 2025 21:20
@mslipper mslipper dismissed geoknee’s stale review September 23, 2025 20:22

approved on slack

@mslipper mslipper merged commit 706cfee into main Sep 23, 2025
4 checks passed
@mslipper mslipper deleted the seb/calldata-block-limit branch September 23, 2025 20:22
Successfully merging this pull request may close these issues.

[DA Footprint Limit] Write design doc
6 participants