
Conversation

@amkCha (Collaborator) commented Dec 8, 2025

bit_ror64 is tested against all the test vectors generated by ror.go from the issue.

It now uses the same recursion as bit_shr.
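
For reference, these are the rotate semantics the vectors encode and the identity the recursion relies on (standard rotate-right definitions, nothing specific to this patch):

  ror64(x, n) = ((x >> n) | (x << (64 - n))) mod 2^64

and, since rotations compose additively, writing n = 32*b + m with b a single bit and m < 32:

  ror64(x, n) = ror64(ror64(x, 32*b), m)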


Note

Adds a u64 rotate-right zkasm routine, bit_ror64, with tests and vectors, and fixes a comment typo in bit_shl.

  • ASM Util:
    • Add testdata/asm/util/bit_ror64.zkasm implementing bit_ror64 (u64 rotate-right) via recursive bit decomposition (u5→u1) with word-split handling.
    • Add exhaustive test vectors in testdata/asm/util/bit_ror64.accepts.
    • Integrate test in pkg/test/assembly_util_test.go to run asm/util/bit_ror64 (BLS12_377).
  • Misc:
    • Fix comment typo "decompoise"→"decompose" in testdata/asm/util/bit_shl.zkasm.

Written by Cursor Bugbot for commit bcf4b73.

@amkCha marked this pull request as ready for review December 8, 2025 14:57
@DavePearce (Collaborator) left a comment

My opinion on this is that I understand why you've done it this way. However, adding two new functions, bit_shr64 and bit_shl64, is definitely not ideal when it could be done directly with a single function, bit_ror64.

The structure of bit_ror64 follows that of bit_shr, except that we don't discard the shifted bits. Specifically, consider this:

fn bit_shr256_u6(word u256, n u6) -> (res u256) {
  var b u1
  var m u5
  var msw u256
  var lsw u32
  ;; decompose shift
  b,m = n
  ;;
  if b!=0 goto apply
  res = bit_shr256_u5(word,m)
  return
apply:
  msw, lsw = word
  res = bit_shr256_u5(msw,m)
  return
}

Here, the lsw is simply discarded when, in fact, we need to put it back "on the top". Very roughly it would be something like this:

  msw, lsw = word
  res = bit_ror64_u5((2^32*lsw) + msw,m)
  return
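
Filling that in, the whole u6 level of the rotate would look roughly as follows. This is a sketch only: the u32/u32 split widths are my reading of the word split for the 64-bit case, and the surrounding scaffolding is copied from the bit_shr example above rather than taken from the actual routine.

fn bit_ror64_u6(word u64, n u6) -> (res u64) {
  var b u1
  var m u5
  var msw u32
  var lsw u32
  ;; decompose shift
  b,m = n
  ;;
  if b!=0 goto apply
  res = bit_ror64_u5(word,m)
  return
apply:
  ;; rotate by 32: put the low word back on top instead of discarding it
  msw, lsw = word
  res = bit_ror64_u5((2^32*lsw) + msw,m)
  return
}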

@amkCha (Collaborator, Author) commented Dec 9, 2025

> My opinion on this is that I understand why you've done it this way. However, adding two new functions, bit_shr64 and bit_shl64, is definitely not ideal when it could be done directly with a single function, bit_ror64. […]

💯 I refactored bit_ror64 to use the same recursion as bit_shr and removed the two extra util functions that had been added.
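
Concretely, each level below halves the split width, so the same pattern repeats down to u1. Roughly, for the u5 level (again a sketch under the assumed syntax, with the msw/lsw widths following the declared types; the merged routine in testdata/asm/util/bit_ror64.zkasm may differ):

fn bit_ror64_u5(word u64, n u5) -> (res u64) {
  var b u1
  var m u4
  var msw u48
  var lsw u16
  ;; decompose shift
  b,m = n
  ;;
  if b!=0 goto apply
  res = bit_ror64_u4(word,m)
  return
apply:
  ;; rotate by 16: move the low 16 bits back on top
  msw, lsw = word
  res = bit_ror64_u4((2^48*lsw) + msw,m)
  return
}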

@DavePearce (Collaborator) left a comment

Some suggestions around unnecessary carry lines, but otherwise looks good.

@amkCha enabled auto-merge (squash) December 10, 2025 09:02
@amkCha disabled auto-merge December 10, 2025 09:03
@amkCha enabled auto-merge (squash) December 10, 2025 09:06
@amkCha merged commit e84a8d7 into main Dec 10, 2025
26 checks passed
@amkCha deleted the feat/bit_ror branch December 10, 2025 09:44