add: columns to Eth2Processor and BlockProcessor #6862

Closed · wants to merge 55 commits

Commits
eebbdc5
init: columns to the block/eth2 processor
agnxsh Jan 19, 2025
3ff4c33
add columns to message router
agnxsh Jan 20, 2025
222cff0
add columns to initializers of Eth2 and BlockProcessor
agnxsh Jan 20, 2025
21d771a
save progress
agnxsh Jan 20, 2025
d3309d5
save progress 2
agnxsh Jan 20, 2025
d4139e0
add column to block verifier
agnxsh Jan 20, 2025
1197f09
save progress, need to rework untrusted syncing
agnxsh Jan 21, 2025
e73379e
add column support to light forward sync
agnxsh Jan 21, 2025
7c75875
save progress test sync manager
agnxsh Jan 21, 2025
c1a2013
fix createDataColumns
agnxsh Jan 21, 2025
38b0421
fix more
agnxsh Jan 21, 2025
8a1825b
added fulu message handlers for column subnets
agnxsh Jan 21, 2025
3dfb2af
activated data column sidecar processing at Fulu
agnxsh Jan 21, 2025
7d79166
fix compilation issues
agnxsh Jan 22, 2025
9756cce
added to T list
agnxsh Jan 22, 2025
d527648
other fixes
agnxsh Jan 22, 2025
2f5e216
fix test
agnxsh Jan 22, 2025
1fc210a
fix result situation in get data column sidecars
agnxsh Jan 24, 2025
514bb3c
fix message router issue
agnxsh Jan 24, 2025
4819e43
gate blob publishing upto deneb
agnxsh Jan 24, 2025
71fdf66
fix message router blob and column progressions
agnxsh Jan 24, 2025
8128944
drop dataColumnOpt from message router
agnxsh Jan 24, 2025
9b5feb6
reversing rman blockVerifier order
agnxsh Jan 25, 2025
3592d34
fixes
agnxsh Jan 25, 2025
e7bc436
several fixes
agnxsh Jan 25, 2025
4077bb4
added debug logs for devnet testing
agnxsh Jan 25, 2025
749a5a9
add blobsOpt isSome check
agnxsh Jan 25, 2025
ceff705
fix copyright years
agnxsh Jan 25, 2025
d82c3f5
couple of fixes and debug logs
agnxsh Jan 26, 2025
9351c26
fix issue
agnxsh Jan 27, 2025
1fef674
resolved review comments, enabled more debug logs, fixed a couple of …
agnxsh Jan 27, 2025
7b3304b
fix indentation
agnxsh Jan 27, 2025
97a190f
limit processBlobSidecar < Fulu
agnxsh Jan 27, 2025
947a71b
try to gate a few operations to < Fulu
agnxsh Jan 27, 2025
3762c2c
gate more
agnxsh Jan 27, 2025
0afb4a2
halt rman blob loop post fulu fork epoch
agnxsh Jan 28, 2025
9a6f749
removed debugEchoes
agnxsh Jan 28, 2025
bdd3d50
don't ignore data column sidecars, even if you already have block
agnxsh Jan 28, 2025
609ed1d
typo
agnxsh Jan 28, 2025
80c36a7
fix upgrade to fulu function
agnxsh Jan 28, 2025
302407d
modify upgrade to fulu
agnxsh Jan 28, 2025
656ce11
gate blob publishing upto < Fulu
agnxsh Jan 28, 2025
c6ef55a
fix indentation
agnxsh Jan 28, 2025
6cc5ae2
fix fulu state upgrade
agnxsh Jan 28, 2025
5d9badc
updated processDataColumnSidecar ordering
agnxsh Jan 28, 2025
bebe9cb
add upto Capella message handlers and data column sidecars topics in …
agnxsh Jan 28, 2025
4cd6413
fix message handlers
agnxsh Jan 29, 2025
90ad075
fix copyright year
agnxsh Jan 29, 2025
7c7bc22
another fix in upgrade to fulu
agnxsh Jan 29, 2025
c434daf
refine DA checking more
agnxsh Jan 29, 2025
c5461a1
log out time taken to reconstruct
agnxsh Feb 1, 2025
07ce6ef
log out sidecar comms and proof len for testing
agnxsh Feb 1, 2025
a7af551
address review comments
agnxsh Feb 5, 2025
27f2723
fix typo
agnxsh Feb 5, 2025
4cf8a58
use disjoint
agnxsh Feb 5, 2025
73 changes: 70 additions & 3 deletions beacon_chain/beacon_chain_file.nim
@@ -1,5 +1,5 @@
# beacon_chain
# Copyright (c) 2018-2024 Status Research & Development GmbH
# Copyright (c) 2018-2025 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
@@ -79,6 +79,8 @@ const
int(ConsensusFork.Phase0) .. int(high(ConsensusFork))
BlobForkCodeRange =
MaxForksCount .. (MaxForksCount + int(high(ConsensusFork)) - int(ConsensusFork.Deneb))
DataColumnForkCodeRange =
MaxForksCount * 2 .. (MaxForksCount * 2 + int(high(ConsensusFork)) - int(ConsensusFork.Fulu))

func getBlockForkCode(fork: ConsensusFork): uint64 =
uint64(fork)
@@ -94,6 +96,13 @@ func getBlobForkCode(fork: ConsensusFork): uint64 =
of ConsensusFork.Phase0 .. ConsensusFork.Capella:
raiseAssert "Blobs are not supported for the fork"

func getDataColumnForkCode(fork: ConsensusFork): uint64 =
case fork
of ConsensusFork.Fulu:
uint64(MaxForksCount)
Contributor: Because of the invalid code range, this produces codes that overlap with the blob range.

Contributor (author): resolved in a7af551

of ConsensusFork.Phase0 .. ConsensusFork.Electra:
raiseAssert "Data columns are not supported for the fork"
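To make the reviewer's point concrete, here is a small Python model of the three chunk-kind code ranges. The value of `MaxForksCount` and the fork enum ordering are assumptions for illustration, not taken from the PR:

```python
# Illustrative model of the code ranges in beacon_chain_file.nim.
# MAX_FORKS_COUNT and the enum values are assumed for this sketch.
MAX_FORKS_COUNT = 16384
PHASE0, ALTAIR, BELLATRIX, CAPELLA, DENEB, ELECTRA, FULU = range(7)
HIGH = FULU  # high(ConsensusFork)

BLOCK_RANGE = range(PHASE0, HIGH + 1)
BLOB_RANGE = range(MAX_FORKS_COUNT, MAX_FORKS_COUNT + (HIGH - DENEB) + 1)
COLUMN_RANGE = range(2 * MAX_FORKS_COUNT, 2 * MAX_FORKS_COUNT + (HIGH - FULU) + 1)

def data_column_fork_code_original(fork):
    # The version shown above: Fulu maps to MaxForksCount, which is the
    # *start of the blob range* -- exactly the overlap the reviewer flagged.
    assert fork == FULU
    return MAX_FORKS_COUNT

def data_column_fork_code_corrected(fork):
    # One possible fix: offset by MaxForksCount * 2 so the code lands
    # inside DataColumnForkCodeRange.
    assert fork == FULU
    return 2 * MAX_FORKS_COUNT
```

Under these assumptions the original code collides with the blob range while the corrected offset does not, which is why `checkKind` can otherwise misclassify a data-column chunk as a blob chunk.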

proc init(t: typedesc[ChainFileError], k: ChainFileErrorType,
m: string): ChainFileError =
ChainFileError(kind: k, message: m)
@@ -134,7 +143,8 @@ proc checkKind(kind: uint64): Result[void, string] =
if res > uint64(high(int)):
return err("Unsupported chunk kind value")
int(res)
if (hkind in BlockForkCodeRange) or (hkind in BlobForkCodeRange):
if (hkind in BlockForkCodeRange) or (hkind in BlobForkCodeRange) or
(hkind in DataColumnForkCodeRange):
ok()
else:
err("Unsupported chunk kind value")
@@ -260,6 +270,12 @@ template getBlobChunkKind(kind: ConsensusFork, last: bool): uint64 =
else:
getBlobForkCode(kind)

template getDataColumnChunkKind(kind: ConsensusFork, last: bool): uint64 =
if last:
maskKind(getDataColumnForkCode(kind))
else:
getDataColumnForkCode(kind)

proc getBlockConsensusFork(header: ChainFileHeader): ConsensusFork =
let hkind = unmaskKind(header.kind)
if int(hkind) in BlockForkCodeRange:
@@ -275,6 +291,10 @@ template isBlob(h: ChainFileHeader | ChainFileFooter): bool =
let hkind = unmaskKind(h.kind)
int(hkind) in BlobForkCodeRange

template isDataColumn(h: ChainFileHeader | ChainFileFooter): bool =
let hkind = unmaskKind(h.kind)
int(hkind) in DataColumnForkCodeRange

template isLast(h: ChainFileHeader | ChainFileFooter): bool =
h.kind.isLast()

@@ -291,7 +311,8 @@ proc setTail*(chandle: var ChainFileHandle, bdata: BlockData) =
chandle.data.tail = Opt.some(bdata)

proc store*(chandle: ChainFileHandle, signedBlock: ForkedSignedBeaconBlock,
blobs: Opt[BlobSidecars]): Result[void, string] =
blobs: Opt[BlobSidecars], dataColumns: Opt[DataColumnSidecars]):
Result[void, string] =
let origOffset =
updateFilePos(chandle.handle, 0'i64, SeekPosition.SeekEnd).valueOr:
return err(ioErrorMsg(error))
@@ -342,6 +363,36 @@ proc store*(chandle: ChainFileHandle, signedBlock: ForkedSignedBeaconBlock,
discard fsync(chandle.handle)
return err(IncompleteWriteError)

if dataColumns.isSome():
let dataColumnSidecars =
dataColumns.get
for index, dataColumn in dataColumnSidecars.pairs():
let
kind =
getDataColumnChunkKind(signedBlock.kind, (index + 1) ==
len(dataColumnSidecars))
(data, plainSize) =
block:
let res = SSZ.encode(dataColumn[])
(snappy.encode(res), len(res))
slot = dataColumn[].signed_block_header.message.slot
buffer = Chunk.init(kind, uint64(slot), uint32(plainSize), data)

setFilePos(chandle.handle, 0'i64, SeekPosition.SeekEnd).isOkOr:
discard truncate(chandle.handle, origOffset)
discard fsync(chandle.handle)
return err(ioErrorMsg(error))

let
wrote = writeFile(chandle.handle, buffer).valueOr:
discard truncate(chandle.handle, origOffset)
discard fsync(chandle.handle)
return err(ioErrorMsg(error))
if wrote != uint(len(buffer)):
discard truncate(chandle.handle, origOffset)
discard fsync(chandle.handle)
return err(IncompleteWriteError)

fsync(chandle.handle).isOkOr:
discard truncate(chandle.handle, origOffset)
return err(ioErrorMsg(error))
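The write loop above follows a consistent pattern: remember the original end-of-file, and on any failed or incomplete write, truncate back to it so the chain file never retains a half-written chunk. A Python sketch of that pattern, with illustrative names and simplified error handling:

```python
import os

def append_chunks(path, chunks):
    """Append encoded chunks to a chain file; on any incomplete write,
    truncate back to the original end-of-file so no partial chunk
    survives. Mirrors the truncate-on-error pattern in store()."""
    with open(path, "ab") as f:
        f.seek(0, os.SEEK_END)
        orig_offset = f.tell()  # rollback point, like origOffset in store()
        try:
            for chunk in chunks:
                if f.write(chunk) != len(chunk):
                    raise OSError("incomplete write")
            f.flush()
            os.fsync(f.fileno())  # durably commit the whole batch
        except OSError:
            f.flush()
            f.truncate(orig_offset)  # discard the partial append
            os.fsync(f.fileno())
            raise
```

The key property is atomicity at chunk-batch granularity: a reader that respects the file length either sees all of the appended sidecars or none of them.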
@@ -550,6 +601,22 @@ proc decodeBlob(
return err("Incorrect blob format")
ok(blob)

proc decodeDataColumn(
header: ChainFileHeader,
data: openArray[byte],
): Result[DataColumnSidecar, string] =
if header.plainSize > uint32(MaxChunkSize):
return err("Size of data column is enormously big")

let
decompressed = snappy.decode(data, uint32(header.plainSize))
dataColumn =
try:
SSZ.decode(decompressed, DataColumnSidecar)
except SerializationError:
return err("Incorrect data column format")
ok(dataColumn)
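The decode path is the mirror image of the store path: enforce the size cap before allocating, decompress, then parse. A hedged Python sketch, where zlib stands in for snappy and "parsing" is reduced to a length check against the advertised plain size (the real code SSZ-decodes a `DataColumnSidecar`):

```python
import zlib

MAX_CHUNK_SIZE = 64 * 1024  # illustrative cap; the real MaxChunkSize differs

def decode_data_column(plain_size, data):
    """Sketch of decodeDataColumn: size cap, decompress, parse.
    Returns (value, None) on success or (None, error_string) on failure."""
    if plain_size > MAX_CHUNK_SIZE:
        return None, "data column exceeds the maximum chunk size"
    try:
        decompressed = zlib.decompress(data)
    except zlib.error:
        return None, "incorrect data column format"
    if len(decompressed) != plain_size:
        # advertised size disagrees with actual payload
        return None, "incorrect data column format"
    return decompressed, None
```

Checking `plain_size` before decompressing matters: it bounds memory use on untrusted input before any allocation proportional to the claimed size.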

proc getChainFileTail*(handle: IoHandle): Result[Opt[BlockData], string] =
var sidecars: BlobSidecars
while true:
3 changes: 2 additions & 1 deletion beacon_chain/consensus_object_pools/block_pools_types.nim
@@ -1,5 +1,5 @@
# beacon_chain
# Copyright (c) 2018-2024 Status Research & Development GmbH
# Copyright (c) 2018-2025 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
@@ -288,6 +288,7 @@ type
BlockData* = object
blck*: ForkedSignedBeaconBlock
blob*: Opt[BlobSidecars]
dataColumn*: Opt[DataColumnSidecars]

OnBlockAdded*[T: ForkyTrustedSignedBeaconBlock] = proc(
blckRef: BlockRef, blck: T, epochRef: EpochRef,
49 changes: 39 additions & 10 deletions beacon_chain/consensus_object_pools/blockchain_list.nim
@@ -1,5 +1,5 @@
# beacon_chain
# Copyright (c) 2018-2024 Status Research & Development GmbH
# Copyright (c) 2018-2025 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
@@ -9,6 +9,7 @@

import std/sequtils, stew/io2, chronicles, chronos, metrics,
../spec/forks,
../spec/peerdas_helpers,
../[beacon_chain_file, beacon_clock],
../sszdump

@@ -128,16 +129,17 @@ proc setTail*(clist: ChainListRef, bdata: BlockData) =
clist.handle = Opt.some(handle)

proc store*(clist: ChainListRef, signedBlock: ForkedSignedBeaconBlock,
blobs: Opt[BlobSidecars]): Result[void, string] =
blobs: Opt[BlobSidecars], dataColumns: Opt[DataColumnSidecars]):
Result[void, string] =
if clist.handle.isNone():
let
filename = clist.path.chainFilePath()
flags = {ChainFileFlag.Repair, ChainFileFlag.OpenAlways}
handle = ? ChainFileHandle.init(filename, flags)
clist.handle = Opt.some(handle)
store(handle, signedBlock, blobs)
store(handle, signedBlock, blobs, dataColumns)
else:
store(clist.handle.get(), signedBlock, blobs)
store(clist.handle.get(), signedBlock, blobs, dataColumns)

proc checkBlobs(signedBlock: ForkedSignedBeaconBlock,
blobsOpt: Opt[BlobSidecars]): Result[void, VerifierError] =
@@ -167,9 +169,31 @@ proc checkBlobs(signedBlock: ForkedSignedBeaconBlock,
return err(VerifierError.Invalid)
ok()

proc checkDataColumns*(signedBlock: ForkedSignedBeaconBlock,
dataColumnsOpt: Opt[DataColumnSidecars]):
Result[void, VerifierError] =
withBlck(signedBlock):
when consensusFork >= ConsensusFork.Fulu:
if dataColumnsOpt.isSome:
let dataColumns = dataColumnsOpt.get()
if dataColumns.len > 0:
for i in 0..<dataColumns.len:
let r =
verify_data_column_sidecar_kzg_proofs(dataColumns[i][])
if r.isErr:
debug "Data column validation failed",
blockRoot = shortLog(forkyBlck.root),
dataColumn = shortLog(dataColumns[i][]),
blck = shortLog(forkyBlck.message),
signature = shortLog(forkyBlck.signature),
msg = r.error()
return err(VerifierError.Invalid)
ok()


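The verification loop in `checkDataColumns` follows the same shape as `checkBlobs`: verify each sidecar in order and surface the first failure. A Python sketch of that control flow, where `verify_proofs` stands in for `verify_data_column_sidecar_kzg_proofs` and `None` means valid:

```python
def check_data_columns(columns, verify_proofs):
    """Sketch of checkDataColumns: run proof verification over each
    sidecar and report the first failure. Names are illustrative."""
    if columns is None:
        return None  # no columns supplied, nothing to verify
    for i, column in enumerate(columns):
        err = verify_proofs(column)
        if err is not None:
            # fail fast: one invalid sidecar invalidates the batch
            return f"data column {i} invalid: {err}"
    return None
```

Failing fast on the first bad sidecar matches the Nim code's early `return err(VerifierError.Invalid)` and avoids paying KZG-verification cost for the remaining columns of a block already known to be invalid.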
proc addBackfillBlockData*(
clist: ChainListRef, signedBlock: ForkedSignedBeaconBlock,
blobsOpt: Opt[BlobSidecars]): Result[void, VerifierError] =
blobsOpt: Opt[BlobSidecars], dataColumnsOpt: Opt[DataColumnSidecars]):
Result[void, VerifierError] =
doAssert(not(isNil(clist)))

logScope:
@@ -182,15 +206,17 @@

if clist.tail.isNone():
? checkBlobs(signedBlock, blobsOpt)
? checkDataColumns(signedBlock, dataColumnsOpt)

let storeBlockTick = Moment.now()

store(clist, signedBlock, blobsOpt).isOkOr:
store(clist, signedBlock, blobsOpt, dataColumnsOpt).isOkOr:
fatal "Unexpected failure while trying to store data",
filename = chainFilePath(clist.path), reason = error
quit 1

let bdata = BlockData(blck: signedBlock, blob: blobsOpt)
let bdata = BlockData(blck: signedBlock, blob: blobsOpt,
dataColumn: dataColumnsOpt)
clist.setTail(bdata)
if clist.head.isNone():
clist.setHead(bdata)
@@ -219,10 +245,11 @@ proc addBackfillBlockData*(
return err(VerifierError.MissingParent)

? checkBlobs(signedBlock, blobsOpt)
? checkDataColumns(signedBlock, dataColumnsOpt)

let storeBlockTick = Moment.now()

store(clist, signedBlock, blobsOpt).isOkOr:
store(clist, signedBlock, blobsOpt, dataColumnsOpt).isOkOr:
fatal "Unexpected failure while trying to store data",
filename = chainFilePath(clist.path), reason = error
quit 1
@@ -231,17 +258,19 @@
verify_block_duration = shortLog(storeBlockTick - verifyBlockTick),
store_block_duration = shortLog(Moment.now() - storeBlockTick)

clist.setTail(BlockData(blck: signedBlock, blob: blobsOpt))
clist.setTail(BlockData(blck: signedBlock, blob: blobsOpt, dataColumn: dataColumnsOpt))

ok()

proc untrustedBackfillVerifier*(
clist: ChainListRef,
signedBlock: ForkedSignedBeaconBlock,
blobs: Opt[BlobSidecars],
dataColumns: Opt[DataColumnSidecars],
maybeFinalized: bool
): Future[Result[void, VerifierError]] {.
async: (raises: [CancelledError], raw: true).} =
let retFuture = newFuture[Result[void, VerifierError]]()
retFuture.complete(clist.addBackfillBlockData(signedBlock, blobs))
retFuture.complete(clist.addBackfillBlockData(signedBlock, blobs,
dataColumns))
retFuture
13 changes: 6 additions & 7 deletions beacon_chain/consensus_object_pools/data_column_quarantine.nim
@@ -1,5 +1,5 @@
# beacon_chain
# Copyright (c) 2018-2024 Status Research & Development GmbH
# Copyright (c) 2018-2025 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
@@ -139,13 +139,12 @@ func hasMissingDataColumns*(quarantine: DataColumnQuarantine,
index: idx)
if dc_identifier notin quarantine.data_columns:
inc col_counter
if quarantine.supernode and col_counter != NUMBER_OF_COLUMNS:
return false
elif quarantine.supernode == false and
col_counter != max(SAMPLES_PER_SLOT, CUSTODY_REQUIREMENT):
return false
else:
if quarantine.supernode and col_counter == NUMBER_OF_COLUMNS:
return true
if quarantine.supernode == false and
col_counter == max(SAMPLES_PER_SLOT, CUSTODY_REQUIREMENT):
return true
false
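The rewritten branch above replaces the negated early-return chain with two positive checks and a final `false`. A Python model of the resulting predicate; the constants are the Fulu spec parameters, with values assumed here for illustration:

```python
# Assumed spec constants for this sketch (real values come from presets).
NUMBER_OF_COLUMNS = 128
SAMPLES_PER_SLOT = 8
CUSTODY_REQUIREMENT = 4

def has_missing_data_columns(missing_count, supernode):
    """Mirrors the simplified logic above: report missing columns only
    when the count of absent columns reaches the node's full custody
    target -- every column for a supernode, otherwise
    max(SAMPLES_PER_SLOT, CUSTODY_REQUIREMENT)."""
    if supernode and missing_count == NUMBER_OF_COLUMNS:
        return True
    if not supernode and missing_count == max(SAMPLES_PER_SLOT, CUSTODY_REQUIREMENT):
        return True
    return False
```

Flattening the `if/elif/else` into independent positive checks makes each custody mode's threshold explicit and leaves `false` as the single fall-through result.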

func hasEnoughDataColumns*(quarantine: DataColumnQuarantine,
blck: fulu.SignedBeaconBlock): bool =