doc: advice provider layout for recursive verifier #2056
base: next
Conversation
bobbinth left a comment
Looks good! Thank you. I left a couple of questions inline.
/// | Position | Content              | Size (in Felt) | Description                                            |
/// | -------- | -------------------- | -------------- | ------------------------------------------------------ |
/// | 0        | variable_len_pi_size | 1              | Size of variable length PI in Felt                     |
/// | 1-n      | public_inputs_data   | varies         | Input/output stacks + Program digest + kernel digests  |
/// | n+1-n+4  | aux_randomness       | 4              | β = (β₀, β₁), ɑ = (ɑ₀, ɑ₁) (filled later)              |
What does "filled later" mean here? (and same question for line 55 below).
What I was trying to say, in a succinct way, is that these values are out-of-order with respect to the interaction transcript. Thus, in the advice tape, we push placeholders for these values and only later on in the interaction can we fill these placeholders with the appropriate values.
Probably "filled later" is just confusing without enough context, so I have just removed it.
What does "placeholder" mean in this context? My understanding was that we put real values onto the advice stack, but then use them "optimistically" - and only later confirm that the values we used were indeed derived correctly. Or does it mean something else?
This is used to signify that it is an out-of-sequence value on the stack that is filled later on in the interaction.
This is probably just confusing and not informative, so I have removed it
Ah - so it meant that we insert these values into the advice stack later (e.g., using something like the adv.push_mapval instruction)?
Just on the Rust side: we basically push, say, a [0, 0] and record the index before this push. Later on, when we can generate the random challenge, we can do something like advice_tape[index] = alpha_0 and advice_tape[index + 1] = alpha_1.
Overall, this is just a Rust implementation detail, i.e., it could be done differently.
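In case it helps, here is a minimal Rust sketch of that pattern (purely illustrative: the function names are made up, and a plain u64 stands in for Felt):

```rust
// Illustrative sketch only: a plain Vec<u64> stands in for the advice tape of Felt values,
// and the function names are hypothetical, not part of the actual codebase.
fn reserve_aux_randomness(advice_tape: &mut Vec<u64>) -> usize {
    // Record where the placeholder starts, then push [0, 0] as a stand-in for
    // (alpha_0, alpha_1), which cannot be computed yet at this point in the transcript.
    let index = advice_tape.len();
    advice_tape.extend_from_slice(&[0, 0]);
    index
}

fn fill_aux_randomness(advice_tape: &mut [u64], index: usize, alpha_0: u64, alpha_1: u64) {
    // Later, once the random challenges are known, overwrite the placeholders in place.
    advice_tape[index] = alpha_0;
    advice_tape[index + 1] = alpha_1;
}
```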
/// | Position | Content                | Size (in Felt) | Description                                            |
/// | -------- | ---------------------- | -------------- | ------------------------------------------------------ |
/// | 0        | variable_len_pi_size   | 1              | Size of variable length PI in Felt                     |
/// | 1-n      | public_inputs_data     | varies         | Input/output stacks + Program digest + kernel digests  |
/// | n+1-n+4  | aux_randomness         | 4              | β = (β₀, β₁), ɑ = (ɑ₀, ɑ₁)                             |
/// | n+5      | num_kernel_procedures  | 1              | Count of kernel procedure digests                      |
/// | n+6-m    | trace_commitments      | 4 or 8         | Main/auxiliary trace segment commitments               |
/// | m+1-p    | constraint_commitment  | 4              | Constraint composition commitment                      |
/// | p+1-p+2  | alpha_deep_placeholder | 2              | Deep composition randomness                            |
/// | p+3-q    | ood_evaluations        | varies         | Out-of-domain trace and constraint evals               |
/// | q+1-r    | fri_commitments        | varies         | FRI layer commitment digests                           |
/// | r+1-s    | fri_remainder_poly     | varies         | FRI remainder polynomial coefficients                  |
/// | s+1      | pow_nonce              | 1              | Proof-of-work nonce                                    |
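For orientation, a hedged Rust sketch of how a host-side helper might lay this data out in the documented order (all type and field names here are illustrative, not the actual types in the repo; u64 stands in for Felt and [u64; 4] for a digest):

```rust
// Hypothetical sketch of assembling the advice data in the order documented above.
struct ProofAdviceParts {
    variable_len_pi_size: u64,
    public_inputs_data: Vec<u64>,
    num_kernel_procedures: u64,
    trace_commitments: Vec<u64>,       // 4 or 8 elements (main only, or main + aux)
    constraint_commitment: [u64; 4],
    ood_evaluations: Vec<u64>,
    fri_commitments: Vec<u64>,
    fri_remainder_poly: Vec<u64>,
    pow_nonce: u64,
}

fn to_advice_tape(parts: &ProofAdviceParts) -> Vec<u64> {
    let mut tape = Vec::new();
    tape.push(parts.variable_len_pi_size);                 // position 0
    tape.extend_from_slice(&parts.public_inputs_data);     // 1 - n
    tape.extend_from_slice(&[0, 0, 0, 0]);                 // n+1 - n+4: aux_randomness (filled later)
    tape.push(parts.num_kernel_procedures);                // n+5
    tape.extend_from_slice(&parts.trace_commitments);      // n+6 - m
    tape.extend_from_slice(&parts.constraint_commitment);  // m+1 - p
    tape.extend_from_slice(&[0, 0]);                       // p+1 - p+2: alpha_deep placeholder
    tape.extend_from_slice(&parts.ood_evaluations);        // p+3 - q
    tape.extend_from_slice(&parts.fri_commitments);        // q+1 - r
    tape.extend_from_slice(&parts.fri_remainder_poly);     // r+1 - s
    tape.push(parts.pow_nonce);                            // s+1
    tape
}
```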
Not for this PR, but a few thoughts on these (specifically in the context of #2045):
It is not yet clear to me how these would get onto the advice stack. Generally, there are two options for accomplishing this:
- There could be a specialized event that, based on the data on the operand stack, would identify the relevant data in the advice map. Based on this data it would construct the advice stack.
- There would be an entry in the advice map with this data. The key for this entry would be something we can derive deterministically from the data on the operand stack.
Some of the data seems to be specific to Miden VM proofs. For example, num_kernel_procedures is probably not something that's relevant for a generic STARK proof.
It may make sense to remove things like variable_len_pi_size and public_inputs_data from the advice stack. Instead, these could be loaded onto the advice stack using their commitments (via the adv.push_mapvaln instruction). For example, if the commitment to public inputs is on the operand stack, and there is an entry in the advice map for PUB_INPUTS_COMMITMENT |-> [PUB INPUTS DATA], we could load the public inputs onto the advice stack from within MASM.
Lastly, I wonder if it is possible to have the data on the advice stack basically be the transcript of the interaction between the prover and the verifier. Then, the verifier would be able to read it off the advice stack, absorb it into the hasher state, and then, at the end of proof verification, compare it to the transcript "commitment" that it may get via regular inputs (or in some other way).
Not for this PR, but a few thoughts on these (specifically in the context of #2045):
It is not yet clear to me how these would get onto the advice stack. Generally, there are two options for accomplishing this:
1. There could be a specialized event that, based on the data on the operand stack, would identify the relevant data in the advice map. Based on this data it would construct the advice stack.
Could you elaborate more here? Especially the "identify the relevant data" part?
2. There would be an entry in the advice map with this data. The key for this entry would be something we can derive deterministically from the data on the operand stack.
Indeed, this would be similar to the proposal in #2045, where the key is derived from the public inputs, AIR commitment, and protocol identifier.
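Roughly, that second option could look something like the following on the host side (a hedged sketch of the #2045-style derivation; the names, key width, and hash are placeholders, not the actual APIs):

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Purely illustrative: DefaultHasher stands in for whatever hash the protocol actually uses,
// and u64 values stand in for field elements / digest words.
fn derive_advice_map_key(public_inputs: &[u64], air_commitment: &[u64], protocol_id: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    public_inputs.hash(&mut hasher);
    air_commitment.hash(&mut hasher);
    protocol_id.hash(&mut hasher);
    hasher.finish()
}

// The host inserts the flattened proof data under the derived key; the verifier can
// re-derive the same key from the operand stack and load the data from the advice map.
fn insert_proof_advice(map: &mut HashMap<u64, Vec<u64>>, key: u64, proof_data: Vec<u64>) {
    map.insert(key, proof_data);
}
```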
Some of the data seems to be specific to Miden VM proofs. For example, num_kernel_procedures is probably not something that's relevant for a generic STARK proof.
Not really, num_kernel_procedures is nothing but the number of messages/interactions in the first (and only) group/table of variable length public inputs. So as part of the transformation from Proof to its parts in the advice provider, we have to count the number of messages per table in the variable length public inputs and add this number to the advice stack.
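A tiny Rust sketch of that counting step (illustrative only: the function name is hypothetical, and a kernel procedure digest is modeled as a 4-element word of u64s):

```rust
// Illustrative only: push the count of messages in the (only) variable-length
// public-input table, determined while flattening the Proof into advice data.
fn push_num_kernel_procedures(advice_tape: &mut Vec<u64>, kernel_proc_digests: &[[u64; 4]]) {
    advice_tape.push(kernel_proc_digests.len() as u64);
}
```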
It may make sense to remove things like variable_len_pi_size and public_inputs_data from the advice stack. Instead, these could be loaded onto the advice stack using their commitments (via the adv.push_mapvaln instruction). For example, if the commitment to public inputs is on the operand stack, and there is an entry in the advice map for PUB_INPUTS_COMMITMENT |-> [PUB INPUTS DATA], we could load the public inputs onto the advice stack from within MASM.
I think it is cleaner to keep them here, as it doesn't make sense, either conceptually or performance-wise, to commit to them together with the public inputs.
Lastly, I wonder if it is possible to have the data on the advice stack basically be the transcript of the interaction between the prover and the verifier. Then, the verifier would be able to read it off the advice stack, absorb it into the hasher state, and then, at the end of proof verification, compare it to the transcript "commitment" that it may get via regular inputs (or in some other way).
Not sure I follow completely. Is the intention here to have the data on the advice stack be exactly the data resulting from the interactions between prover and verifier, without, for example, any out-of-order advice data?
If so, then I think this is a significant limitation and, in my mind, unnecessary.
Describe your changes
Closes #1918
Putting in draft until we conclude the discussions in #2045 and #2008
A few things left to fix:
- stdlib/tests/crypto/stark/verifier_recursive/mod.rs in order to accommodate the move to the generic verifier model. I think the main thing that we need to do is encapsulate the logic for handling of public inputs, which will need to be specific to each "specific" recursive verifier.

Checklist before requesting a review
next according to naming convention.