update api to be compat with js-sdk#4
Conversation
📝 Walkthrough

This PR releases version 2.0.0 of the Reclaim Python SDK, introducing dynamic attestor-based proof verification, expanded configuration options (modal customization, redirect URLs, claim creation type overrides), lifecycle session management, comprehensive JSON serialization support, and JavaScript SDK parity. Breaking changes include a unified Context schema; deprecated methods are forwarded with warnings.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ReclaimProofRequest
    participant Backend as Backend<br/>(Reclaim)
    participant Attestors as Attestors<br/>(Pool)

    Note over Client,Attestors: Request Creation & Verification Flow (v2.0.0)

    rect rgba(100, 150, 200, 0.5)
        Note over Client,ReclaimProofRequest: Step 1: Initialization
        Client->>ReclaimProofRequest: init(appId, appSecret, providerId)
        ReclaimProofRequest->>Backend: POST /session (signature, timestamp)
        Backend-->>ReclaimProofRequest: sessionId, resolvedProviderVersion
        ReclaimProofRequest->>ReclaimProofRequest: _generate_signature()
        ReclaimProofRequest-->>Client: request ready
    end

    rect rgba(200, 150, 100, 0.5)
        Note over Client,ReclaimProofRequest: Step 2: Configuration & URL Generation
        Client->>ReclaimProofRequest: set_context(address, message)
        Client->>ReclaimProofRequest: set_redirect_url(url, method, body)
        Client->>ReclaimProofRequest: set_cancel_redirect_url(url)
        Client->>ReclaimProofRequest: set_modal_options(options)
        Client->>ReclaimProofRequest: get_request_url()
        ReclaimProofRequest-->>Client: QR/deep link with context, redirects, modal config
    end

    rect rgba(150, 200, 100, 0.5)
        Note over Client,ReclaimProofRequest: Step 3: Session Management
        Client->>ReclaimProofRequest: start_session(params)
        loop Poll Status
            ReclaimProofRequest->>Backend: GET /status/{sessionId}
            Backend-->>ReclaimProofRequest: session status, proofs[]
        end
        ReclaimProofRequest-->>Client: callback(proof) on completion
    end

    rect rgba(200, 100, 150, 0.5)
        Note over ReclaimProofRequest,Attestors: Step 4: Proof Verification (Attestor-Based)
        Client->>ReclaimProofRequest: verify_proof(proof)
        ReclaimProofRequest->>Attestors: GET /attestors (queries pool)
        Attestors-->>ReclaimProofRequest: list of attestor addresses
        ReclaimProofRequest->>ReclaimProofRequest: assert_valid_signed_claim(proof, attestors)
        ReclaimProofRequest->>ReclaimProofRequest: recover_signers() from signatures
        ReclaimProofRequest->>ReclaimProofRequest: check: ∃ signer ∈ attestors?
        ReclaimProofRequest-->>Client: boolean (verified / not verified)
    end
```
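The Step 4 membership check ("∃ signer ∈ attestors?") reduces to a case-insensitive set intersection between the recovered signer addresses and the attestor pool. A minimal sketch of that comparison (`is_verified` is a hypothetical helper, not the SDK's actual function; signature recovery itself is out of scope here):

```python
def is_verified(recovered_signers: list[str], attestors: list[str]) -> bool:
    # Ethereum-style addresses are hex strings whose letter case is not
    # significant, so normalize both sides before comparing.
    signer_set = {s.lower() for s in recovered_signers}
    attestor_set = {a.lower() for a in attestors}
    # The proof passes if at least one signature came from a known attestor.
    return bool(signer_set & attestor_set)

attestors = ["0xAbC123", "0xDeF456"]
print(is_verified(["0xabc123"], attestors))  # → True
print(is_verified(["0x999999"], attestors))  # → False
```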
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

📝 Documentation updates detected! New suggestion: Add Python SDK documentation with full JS SDK API parity

Tip: Filter the Dashboard by labels or assignees to focus on what matters to you
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
README.md (2)
318-331: ⚠️ Potential issue | 🟡 Minor

Flask route uses `async def`, but Flask doesn't support async natively. Standard Flask doesn't support async route handlers out of the box; users would need `pip install "flask[async]"` (Flask 2.0+) or Quart instead. Consider using a synchronous function or adding a note about async requirements.

🔧 Suggested fix using sync function

```diff
 @app.route('/receive-proofs', methods=['POST'])
-async def receive_proofs():
+def receive_proofs():
     proof_payload = request.get_json(silent=True)
     if not proof_payload:
         return jsonify({'status': 'error', 'message': 'Invalid JSON payload'}), 400
-    is_verified = await handle_proof_webhook(proof_payload)
+    is_verified = asyncio.run(handle_proof_webhook(proof_payload))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 318 - 331, The route handler receive_proofs is declared async but standard Flask route functions must be synchronous; either convert receive_proofs to a regular def and call the async handler via an event loop (e.g., asyncio.run or asyncio.get_event_loop().run_until_complete(handle_proof_webhook(...))) or explicitly document that the project requires an async-capable server (Flask 2+ with flask[async] or Quart) and replace Flask with Quart; update the README to reflect the chosen approach and adjust the call site to handle_proof_webhook accordingly so awaiting works in the chosen execution model.
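The synchronous-wrapper option from this comment can be sketched without Flask itself: a plain `def` endpoint delegates to the async verifier via `asyncio.run`. Here `handle_proof_webhook` is a stand-in for the README's handler, and the Flask routing/request plumbing is reduced to comments:

```python
import asyncio

async def handle_proof_webhook(payload: dict) -> bool:
    # Stand-in for the README's async verifier; assume the real one
    # awaits verify_proof() against the attestor pool.
    await asyncio.sleep(0)  # placeholder for real async work
    return bool(payload.get("proof"))

def receive_proofs(payload: dict) -> dict:
    # Synchronous entry point, as plain Flask requires. In a real app this
    # would be the @app.route('/receive-proofs', methods=['POST']) handler
    # and `payload` would come from request.get_json(silent=True).
    if not payload:
        return {"status": "error", "message": "Invalid JSON payload"}
    is_verified = asyncio.run(handle_proof_webhook(payload))
    return {"status": "ok" if is_verified else "rejected"}

print(receive_proofs({"proof": {"claimData": {}}}))  # → {'status': 'ok'}
```

`asyncio.run` spins up an event loop per request, which is fine for a demo; a production deployment would more likely use `flask[async]` or Quart as the comment suggests.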
277-277: ⚠️ Potential issue | 🟡 Minor

Example uses deprecated `add_context` method. According to the CHANGELOG, `add_context` is deprecated in favor of `set_context`. Update the example to use the recommended method.

🔧 Proposed fix

```diff
-proof_request.add_context('0x00000000000', 'Example context')
+proof_request.set_context('0x00000000000', 'Example context')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 277, Update the README example to stop using the deprecated method add_context and call the new set_context method on the same object (proof_request.set_context) instead; locate the example line with proof_request.add_context('0x00000000000', 'Example context') and replace it with an invocation of proof_request.set_context using the same arguments so the docs reflect the CHANGELOG recommendation.

src/reclaim_python_sdk/reclaim.py (1)
84-95: ⚠️ Potential issue | 🔴 Critical

Restore the `claimInfo` → `identifier` integrity check here. `assert_valid_signed_claim()` only proves that an attestor signed the claim payload. It no longer verifies that `claimData.identifier` still matches `claimData.provider`/`claimData.parameters`/`claimData.context`, so tampered claim metadata can now pass as long as the identifier/signature pair is valid. `transform_for_onchain()` still treats those fields as separate `claimInfo`, so this check is load-bearing.

Suggested fix

```diff
 attestors = await get_attestors()
 claim_data: ProviderClaimData = proof.claimData
+expected_identifier = get_identifier_from_claim_info(
+    ClaimInfo(
+        provider=claim_data.provider,
+        parameters=claim_data.parameters,
+        context=claim_data.context,
+    )
+)
+if expected_identifier != claim_data.identifier:
+    raise ProofNotVerifiedError("Identifier mismatch")
+
 signed_claim = SignedClaim(
     claim=claim_data,
     signatures=[
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/reclaim_python_sdk/reclaim.py` around lines 84 - 95, Reintroduce the integrity check that ensures claimData.identifier matches the identifier derived from its metadata: after obtaining claim_data (ProviderClaimData) and before/after constructing SignedClaim, compute the expected identifier from claim_data.provider, claim_data.parameters, and claim_data.context using the same algorithm/serialization used by transform_for_onchain(), then compare it to claim_data.identifier and raise an error (or assert) if they differ; keep the call to assert_valid_signed_claim(attestors) as-is but ensure this identifier check runs alongside it so tampered metadata cannot pass even with a valid signature.
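The restored check follows a recompute-and-compare pattern: derive the identifier from the claim metadata and reject the proof if it differs from the one that was signed. The sketch below illustrates the pattern with `hashlib.sha256` over a canonical JSON serialization as a stand-in hash; the SDK's real `get_identifier_from_claim_info` uses its own encoding and hash function, so treat the helper names and algorithm here as hypothetical:

```python
import hashlib
import json

def compute_identifier(provider: str, parameters: str, context: str) -> str:
    # Canonical serialization: fixed key order and separators, so the same
    # metadata always hashes to the same identifier.
    canonical = json.dumps(
        {"provider": provider, "parameters": parameters, "context": context},
        separators=(",", ":"), sort_keys=True,
    )
    return "0x" + hashlib.sha256(canonical.encode()).hexdigest()

def check_identifier(claim: dict) -> None:
    # Recompute from metadata and compare to the signed identifier.
    expected = compute_identifier(
        claim["provider"], claim["parameters"], claim["context"]
    )
    if expected != claim["identifier"]:
        raise ValueError("Identifier mismatch: claim metadata was tampered with")

claim = {"provider": "http", "parameters": "{}", "context": "{}"}
claim["identifier"] = compute_identifier(
    claim["provider"], claim["parameters"], claim["context"]
)
check_identifier(claim)               # passes: identifier matches metadata
claim["context"] = '{"evil": true}'   # tamper with metadata after "signing"
# check_identifier(claim) would now raise ValueError
```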
🧹 Nitpick comments (6)
Makefile (1)
7-9: Consider keeping the Node-dependent checks out of the default `test` target.

`make test` now requires npm/Node even when someone only wants the Python suite, and `deploy` inherits that requirement transitively. A separate `test-js-compat` target, composed explicitly in CI/release flows, would keep local Python workflows less brittle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` around lines 7 - 9, The default Makefile test target currently runs Node/npm steps (the test target), which forces npm/Node for Python-only runs and for targets that depend on test (e.g., deploy); remove the Node-dependent lines from the test target and create a new phony target named test-js-compat that contains the Node steps (cd tests/js_compat && npm install); update any targets that should run JS checks in CI/release (e.g., replace invocations of test with test-js-compat where needed) and ensure deploy (or other targets) no longer transitively depend on the original Node step by referencing only the trimmed test target or adding explicit dependencies as appropriate.

tests/test_request.py (2)
69-69: Clean up internal reasoning comment. This comment appears to be internal development notes rather than meaningful documentation. Consider removing or rewriting it.
🔧 Proposed fix
```diff
-"claimCreationType": "STANDALONE",  # In JS JS sets "createClaim" if STANDALONE was passed, wait we copied JS. Wait Python JS test sets ClaimCreationType.STANDALONE, which evaluates to "createClaim"? Let's just output createClaim or STANDALONE. Python has STANDALONE.
+"claimCreationType": "STANDALONE",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_request.py` at line 69, The inline internal reasoning comment next to the "claimCreationType": "STANDALONE" test value is developer noise; remove that sentence or replace it with a concise, meaningful comment explaining why "STANDALONE" is the expected value in this test (e.g., "Use STANDALONE to assert Python enum serializes to expected string"), locating the comment near the "claimCreationType": "STANDALONE" entry in tests/test_request.py and updating it accordingly.
173-181: Test normalization may mask unintended differences.

The test modifies `original_request` during comparison (removing `timestamp`, overwriting `sdkVersion`). While this handles known cross-platform differences, it could also mask real bugs. Consider documenting why these normalizations are needed or using a separate expected dict.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_request.py` around lines 173 - 181, The test currently mutates original_request (removing "timestamp" and overwriting "sdkVersion") which can hide regressions; instead create a separate expected dict (e.g., expected_request = original_request.copy() or deep copy) and apply the normalization steps to that copy (remove "timestamp" only from expected_request if missing in request_json, set expected_request["sdkVersion"] from request_json, and apply any _options mapping fallback) then assertEqual(request_json, expected_request); also add a short comment explaining why each normalization (timestamp, sdkVersion, _options) is necessary to avoid masking real bugs.

src/reclaim_python_sdk/utils/session_utils.py (1)
72-73: Remove redundant in-function imports.
`StatusUrlError` and `StatusUrlResponse` are already imported at the module level (lines 4-5). These local imports are unnecessary.

🔧 Proposed fix

```diff
 async def fetch_status_url(session_id: str) -> StatusUrlResponse:
-    from .errors import StatusUrlError
-    from .types import StatusUrlResponse
-
     validate_function_params([
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/reclaim_python_sdk/utils/session_utils.py` around lines 72 - 73, Remove the redundant in-function imports of StatusUrlError and StatusUrlResponse: the module already imports these symbols at top-level, so delete the local "from .errors import StatusUrlError" and "from .types import StatusUrlResponse" lines inside the function (e.g., in the function that currently re-imports them) and use the module-level StatusUrlError and StatusUrlResponse directly.

tests/js_compat/generate_js.js (1)
4-11: Mock fetch response may be missing fields expected by the SDK.

The mock returns only `sessionId` and `providerId`, but based on the Python SDK's `InitSessionResponse` (which requires `resolvedProviderVersion`), there may be a mismatch. While this works for the current test since you override `sessionId` directly after initialization, consider aligning the mock response with the actual API contract.

💡 Suggested enhancement

```diff
 global.fetch = async (...args) => {
   return {
     ok: true,
     status: 200,
-    json: async () => ({ sessionId: 'mocked-session', providerId: '123' }),
-    text: async () => '{"sessionId": "mocked-session", "providerId": "123"}'
+    json: async () => ({ sessionId: 'mocked-session', providerId: '123', resolvedProviderVersion: '' }),
+    text: async () => '{"sessionId": "mocked-session", "providerId": "123", "resolvedProviderVersion": ""}'
   };
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/js_compat/generate_js.js` around lines 4 - 11, The mock global.fetch in tests/js_compat/generate_js.js returns only sessionId and providerId which can mismatch the SDK's expected InitSessionResponse (notably the resolvedProviderVersion field); update the mocked JSON/text responses provided by global.fetch to include resolvedProviderVersion (and any other required fields from InitSessionResponse) so code paths that read resolvedProviderVersion, sessionId, or providerId (e.g., during initialization/InitSessionResponse parsing) see a realistic API contract.

tests/js_compat/test_compat.py (1)
42-58: Tolerance logic has a no-op branch and asymmetric handling.

Lines 43-44 have a conditional that does nothing (`pass`). Also, the timestamp fallback check on lines 55-56 only checks if `timestamp` differs while `timeStamp` exists in `d2_clean`, not vice versa. This could miss mismatches in the other direction.

🔧 Suggested improvement

```diff
 def assertDictsMatchTolerance(self, d1, d2, name):
     d1_clean = self.clean_dict(d1)
     d2_clean = self.clean_dict(d2)
-
-    # JS timeStamp/timestamp legacy fallback
-    if 'timeStamp' in d1_clean and 'timestamp' in d1_clean and d1_clean['timeStamp'] == d1_clean['timestamp']:
-        pass

     keys1 = set(d1_clean.keys())
     keys2 = set(d2_clean.keys())
     if keys1 != keys2:
         self.fail(f"[{name}] Key mismatch!\nJS Keys only: {keys1 - keys2}\nPY Keys only: {keys2 - keys1}")
     for k in keys1:
         if d1_clean[k] != d2_clean[k]:
             # allow for timestamp vs timeStamp fallback difference
-            if k == 'timestamp' and 'timeStamp' in d2_clean:
-                continue
+            if k in ('timestamp', 'timeStamp'):
+                continue  # Already normalized before calling this method
             self.fail(f"[{name}] Value mismatch for {k}:\nJS={d1_clean[k]}\nPY={d2_clean[k]}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/js_compat/test_compat.py` around lines 42 - 58, The current tolerance logic contains a no-op when both 'timeStamp' and 'timestamp' exist with the same value and also only skips mismatches asymmetrically; update the code that currently checks those keys in tests/js_compat/test_compat.py to normalize duplicates before key comparison and make the mismatch skip symmetric: when both 'timeStamp' and 'timestamp' exist in either d1_clean or d2_clean and their values are equal, remove the redundant 'timeStamp' key from that dict (e.g., delete d1_clean['timeStamp'] or d2_clean['timeStamp'] as appropriate) so keys1/keys2 reflect the canonical set, and change the value-compare conditional that currently checks k == 'timestamp' and 'timeStamp' in d2_clean to instead allow the skip when ('timeStamp' in d1_clean or 'timeStamp' in d2_clean) so both directions are tolerated.
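The symmetric normalization this comment asks for can be done once, before any key or value comparison, by collapsing the legacy `timeStamp` spelling into `timestamp` when both are present with the same value. A small sketch (the helper name is illustrative, not from the test suite):

```python
def canonicalize_timestamp_keys(d: dict) -> dict:
    # If both spellings are present with equal values, drop the legacy one
    # so key-set comparison sees a single canonical key. Applied to both
    # dicts, this makes the tolerance symmetric.
    out = dict(d)
    if "timeStamp" in out and "timestamp" in out and out["timeStamp"] == out["timestamp"]:
        del out["timeStamp"]
    return out

js_output = {"timestamp": "123", "timeStamp": "123", "appId": "0xabc"}
py_output = {"timestamp": "123", "appId": "0xabc"}
print(canonicalize_timestamp_keys(js_output) == py_output)  # → True
```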
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 3: The banner <img> tag is missing an alt attribute; update the <img
src="https://raw.githubusercontent.com/reclaimprotocol/.github/main/assets/banners/Python-SDK.png"
/> element to include a meaningful alt text (e.g., alt="Reclaim Protocol Python
SDK banner") so the image is accessible to screen readers and meets
accessibility requirements.
- Around line 342-355: The try/except block around the call to verify_proof has
incorrect indentation causing a SyntaxError; fix it by indenting the lines that
belong inside the try block (the call to verify_proof and subsequent if/else
handling of is_valid) so they are nested under try, keeping the except Exception
as e: aligned with try; ensure variables like is_valid and the prints/returns
remain inside the try and the except only contains the error handling print and
return False.
In `@src/reclaim_python_sdk.egg-info/PKG-INFO`:
- Around line 374-389: The example in handle_proof_webhook has incorrect
indentation: move the call to verify_proof (is_valid = await
verify_proof(proof)) inside the try block and ensure the corresponding if/else
and the except Exception as e: block are aligned correctly under that try so the
function is valid Python; locate the snippet in handle_proof_webhook and adjust
indentation accordingly (Proof.from_json, verify_proof, handle_proof_webhook are
the symbols to find), and then update the original source README/long
description used to generate this PKG-INFO so the fixed example is regenerated
for PyPI consumers.
In `@src/reclaim_python_sdk/reclaim.py`:
- Around line 485-491: The set_context logic currently catches all exceptions
and always raises AddContextError, preventing the documented InvalidParamError
from escaping; update the exception handling in set_context so that
InvalidParamError (the validation raised when address or message are missing) is
allowed to propagate—e.g., in the except block check for isinstance(e,
InvalidParamError) and re-raise it, and only convert/log other exceptions to
AddContextError; reference Context, InvalidParamError, AddContextError, logger,
and self._context when making the change.
- Around line 556-646: The start_session method uses asyncio (asyncio.sleep and
asyncio.create_task) but asyncio is not imported, causing a NameError; fix by
importing asyncio (preferably at the module top) or adding "import asyncio" at
the top of this file so start_session, its inner _poll_session coroutine, and
the asyncio.create_task(...) call can run; locate the start_session function and
ensure asyncio is imported before any use of asyncio.sleep or
asyncio.create_task.
- Around line 577-584: The async callbacks shown in the docs are never awaited
when invoked from _poll_session (and related call sites), so update the
invocation logic in reclaim.py (where on_success/on_error/on_timeout are called
inside _poll_session) to check if the callable result is awaitable (e.g., using
inspect.isawaitable or isinstance(result, Awaitable)) and await it when needed;
keep calling sync functions normally. Also update StartSessionParams type hints
in types.py (currently Callable[["Proof"], None] and Callable[[Exception],
None]) to accept both sync and async handlers (e.g., Callable[[...], Union[None,
Awaitable[None]]] or Optional[Callable[..., Awaitable[None]] | Callable[...,
None]]), and import typing.Awaitable/Coroutine as required so static type
checkers reflect the change.
- Around line 607-610: The code currently treats
SessionStatus.PROOF_GENERATION_FAILED as non-terminal and keeps
sleeping/retrying; change this in the polling loop (where status_v2 is compared
to SessionStatus.PROOF_GENERATION_FAILED) so that instead of awaiting
sleep/continue it treats it as terminal: update the session state (or record the
failure) and raise ProviderFailedError (including relevant context like
session_id and any provider error info) to break out of start_session()’s
background task rather than polling forever; remove the endless sleep/continue
and ensure any cleanup or task cancellation logic after raising is exercised.
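The dual sync/async callback handling described in the reclaim.py notes above is a standard pattern: invoke the callback, then await the result only if it is awaitable. A self-contained sketch (names are illustrative, not the SDK's):

```python
import asyncio
import inspect
from typing import Any, Awaitable, Callable, Union

# Accepts both `def on_success(proof)` and `async def on_success(proof)`.
ProofCallback = Callable[[dict], Union[None, Awaitable[None]]]

async def dispatch(callback: ProofCallback, proof: dict) -> None:
    result = callback(proof)
    if inspect.isawaitable(result):
        # Only coroutines/awaitables need awaiting; sync callbacks
        # already ran to completion and returned None.
        await result

received: list[Any] = []

def sync_cb(proof: dict) -> None:
    received.append(("sync", proof["id"]))

async def async_cb(proof: dict) -> None:
    received.append(("async", proof["id"]))

async def main() -> None:
    await dispatch(sync_cb, {"id": 1})
    await dispatch(async_cb, {"id": 2})

asyncio.run(main())
print(received)  # → [('sync', 1), ('async', 2)]
```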
In `@src/reclaim_python_sdk/utils/proof_utils.py`:
- Around line 74-76: The except block in proof_utils.py currently catches
Exception and re-raises a generic Exception, losing the original traceback;
replace this by raising a domain-specific error (either an existing class from
errors.py or add a new AttestorFetchError) and preserve the exception chain
using "from err" (i.e., raise AttestorFetchError("...") from err), and keep the
logger call but change its level if desired; locate the except block that
references logger.info('Error getting attestors...') and modify it to raise the
domain-specific error from the caught err.
- Around line 57-76: The verification currently calls get_attestors() which
fetches the live global attestor list; change the verification flow to anchor
witnesses to the claim by first using the Proof object's witnesses field
(Proof.witnesses) if present, and only fall back to a deterministic selection
routine (call fetch_witness_list_for_claim(claim) which uses
claim.epoch/identifier/timestamp) when proof.witnesses is absent, removing
reliance on get_attestors() for claim verification; update any function that
currently invokes get_attestors() during verification to use proof.witnesses or
fetch_witness_list_for_claim(...) instead and ensure types/normalization
(lowercasing) match existing expectations.
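The exception-chaining fix recommended for proof_utils.py above hinges on `raise ... from err`, which preserves the original traceback as `__cause__`. In this sketch, `AttestorFetchError` and the failing fetch are illustrative stand-ins, but the chaining mechanics are exactly what the review describes:

```python
class AttestorFetchError(Exception):
    """Domain-specific error raised when the attestor pool cannot be fetched."""

def fetch_attestors_raw() -> list:
    # Stand-in for the real network call that can fail.
    raise ConnectionError("network unreachable")

def get_attestors() -> list:
    try:
        return fetch_attestors_raw()
    except Exception as err:
        # `from err` keeps the original exception as __cause__, so the
        # full traceback chain survives for debugging.
        raise AttestorFetchError("Error getting attestors") from err

try:
    get_attestors()
except AttestorFetchError as e:
    print(type(e.__cause__).__name__)  # → ConnectionError
```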
In `@src/reclaim_python_sdk/utils/session_utils.py`:
- Around line 93-96: The except block for Status URL retrieval catches Exception
as err but loses the original traceback when re-raising; update the handler in
session_utils.py (the except block that references session_id, logger, and
StatusUrlError) to re-raise the new StatusUrlError using exception chaining
(raise StatusUrlError(f'Error fetching status URL for sessionId: {session_id}')
from err) and log the original exception (e.g., use logger.exception or include
err) so the original context is preserved.
- Around line 79-83: The requests.get call constructing the status fetch (using
DEFAULT_RECLAIM_STATUS_URL and session_id in session_utils.py) lacks a timeout
and can hang; add a timeout argument (e.g., timeout=5 or a configurable
DEFAULT_REQUEST_TIMEOUT constant) to the requests.get(...) call and update the
surrounding try/except to explicitly catch requests.exceptions.Timeout and
requests.exceptions.RequestException so you can log/handle timeouts and other
network errors cleanly rather than blocking indefinitely.
- Around line 66-69: The except block in update session handling loses the
original exception context; change the logging to include the traceback (e.g.,
use logger.exception or logger.error with str(err)) and re-raise the custom
UpdateSessionError using exception chaining (raise UpdateSessionError(f'Error
updating session with sessionId: {session_id}') from err) so the original
exception is preserved — mirror the fix used in fetch_status_url and update the
block referencing session_id, logger, and UpdateSessionError accordingly.
---
Outside diff comments:
In `@README.md`:
- Around line 318-331: The route handler receive_proofs is declared async but
standard Flask route functions must be synchronous; either convert
receive_proofs to a regular def and call the async handler via an event loop
(e.g., asyncio.run or
asyncio.get_event_loop().run_until_complete(handle_proof_webhook(...))) or
explicitly document that the project requires an async-capable server (Flask 2+
with flask[async] or Quart) and replace Flask with Quart; update the README to
reflect the chosen approach and adjust the call site to handle_proof_webhook
accordingly so awaiting works in the chosen execution model.
- Line 277: Update the README example to stop using the deprecated method
add_context and call the new set_context method on the same object
(proof_request.set_context) instead; locate the example line with
proof_request.add_context('0x00000000000', 'Example context') and replace it
with an invocation of proof_request.set_context using the same arguments so the
docs reflect the CHANGELOG recommendation.
In `@src/reclaim_python_sdk/reclaim.py`:
- Around line 84-95: Reintroduce the integrity check that ensures
claimData.identifier matches the identifier derived from its metadata: after
obtaining claim_data (ProviderClaimData) and before/after constructing
SignedClaim, compute the expected identifier from claim_data.provider,
claim_data.parameters, and claim_data.context using the same
algorithm/serialization used by transform_for_onchain(), then compare it to
claim_data.identifier and raise an error (or assert) if they differ; keep the
call to assert_valid_signed_claim(attestors) as-is but ensure this identifier
check runs alongside it so tampered metadata cannot pass even with a valid
signature.
---
Nitpick comments:
In `@Makefile`:
- Around line 7-9: The default Makefile test target currently runs Node/npm
steps (the test target), which forces npm/Node for Python-only runs and for
targets that depend on test (e.g., deploy); remove the Node-dependent lines from
the test target and create a new phony target named test-js-compat that contains
the Node steps (cd tests/js_compat && npm install); update any targets that
should run JS checks in CI/release (e.g., replace invocations of test with
test-js-compat where needed) and ensure deploy (or other targets) no longer
transitively depend on the original Node step by referencing only the trimmed
test target or adding explicit dependencies as appropriate.
In `@src/reclaim_python_sdk/utils/session_utils.py`:
- Around line 72-73: Remove the redundant in-function imports of StatusUrlError
and StatusUrlResponse: the module already imports these symbols at top-level, so
delete the local "from .errors import StatusUrlError" and "from .types import
StatusUrlResponse" lines inside the function (e.g., in the function that
currently re-imports them) and use the module-level StatusUrlError and
StatusUrlResponse directly.
In `@tests/js_compat/generate_js.js`:
- Around line 4-11: The mock global.fetch in tests/js_compat/generate_js.js
returns only sessionId and providerId which can mismatch the SDK's expected
InitSessionResponse (notably the resolvedProviderVersion field); update the
mocked JSON/text responses provided by global.fetch to include
resolvedProviderVersion (and any other required fields from InitSessionResponse)
so code paths that read resolvedProviderVersion, sessionId, or providerId (e.g.,
during initialization/InitSessionResponse parsing) see a realistic API contract.
In `@tests/js_compat/test_compat.py`:
- Around line 42-58: The current tolerance logic contains a no-op when both
'timeStamp' and 'timestamp' exist with the same value and also only skips
mismatches asymmetrically; update the code that currently checks those keys in
tests/js_compat/test_compat.py to normalize duplicates before key comparison and
make the mismatch skip symmetric: when both 'timeStamp' and 'timestamp' exist in
either d1_clean or d2_clean and their values are equal, remove the redundant
'timeStamp' key from that dict (e.g., delete d1_clean['timeStamp'] or
d2_clean['timeStamp'] as appropriate) so keys1/keys2 reflect the canonical set,
and change the value-compare conditional that currently checks k == 'timestamp'
and 'timeStamp' in d2_clean to instead allow the skip when ('timeStamp' in
d1_clean or 'timeStamp' in d2_clean) so both directions are tolerated.
In `@tests/test_request.py`:
- Line 69: The inline internal reasoning comment next to the
"claimCreationType": "STANDALONE" test value is developer noise; remove that
sentence or replace it with a concise, meaningful comment explaining why
"STANDALONE" is the expected value in this test (e.g., "Use STANDALONE to assert
Python enum serializes to expected string"), locating the comment near the
"claimCreationType": "STANDALONE" entry in tests/test_request.py and updating it
accordingly.
- Around line 173-181: The test currently mutates original_request (removing
"timestamp" and overwriting "sdkVersion") which can hide regressions; instead
create a separate expected dict (e.g., expected_request =
original_request.copy() or deep copy) and apply the normalization steps to that
copy (remove "timestamp" only from expected_request if missing in request_json,
set expected_request["sdkVersion"] from request_json, and apply any _options
mapping fallback) then assertEqual(request_json, expected_request); also add a
short comment explaining why each normalization (timestamp, sdkVersion,
_options) is necessary to avoid masking real bugs.
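The separate-expected-dict approach suggested for `tests/test_request.py` can be sketched as below; the field names mirror the ones named in the comment, and the fixture values are made up for illustration:

```python
import copy

def normalized_expected(original_request: dict, request_json: dict) -> dict:
    # Deep-copy so the original fixture is never mutated by the comparison.
    expected = copy.deepcopy(original_request)
    # timestamp is generated at request time, so it can never match a fixture.
    if "timestamp" not in request_json:
        expected.pop("timestamp", None)
    # sdkVersion differs per environment; take it from the actual output.
    if "sdkVersion" in request_json:
        expected["sdkVersion"] = request_json["sdkVersion"]
    return expected

original = {"appId": "0xabc", "timestamp": "111", "sdkVersion": "js-2.0.0"}
actual = {"appId": "0xabc", "sdkVersion": "py-2.0.0"}
print(normalized_expected(original, actual) == actual)  # → True
print("timestamp" in original)                          # → True (fixture untouched)
```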
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 69a91cd9-a5b3-4a39-b0a3-120a9f295963
⛔ Files ignored due to path filters (3)
- `dist/reclaim_python_sdk-1.0.3-py3-none-any.whl` is excluded by `!**/dist/**`
- `dist/reclaim_python_sdk-1.0.3.tar.gz` is excluded by `!**/dist/**`, `!**/*.gz`
- `tests/js_compat/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (34)
- CHANGELOG.md
- LICENSE
- Makefile
- README.md
- build/lib/reclaim_python_sdk/__init__.py
- build/lib/reclaim_python_sdk/contract_data/abi.py
- build/lib/reclaim_python_sdk/reclaim.py
- build/lib/reclaim_python_sdk/smart_contract.py
- build/lib/reclaim_python_sdk/utils/__init__.py
- build/lib/reclaim_python_sdk/utils/constants.py
- build/lib/reclaim_python_sdk/utils/errors.py
- build/lib/reclaim_python_sdk/utils/interfaces.py
- build/lib/reclaim_python_sdk/utils/logger.py
- build/lib/reclaim_python_sdk/utils/proof_utils.py
- build/lib/reclaim_python_sdk/utils/session_utils.py
- build/lib/reclaim_python_sdk/utils/types.py
- build/lib/reclaim_python_sdk/utils/validation_utils.py
- build/lib/reclaim_python_sdk/utils/validators.py
- build/lib/reclaim_python_sdk/witness.py
- setup.py
- src/reclaim_python_sdk.egg-info/PKG-INFO
- src/reclaim_python_sdk.egg-info/SOURCES.txt
- src/reclaim_python_sdk/reclaim.py
- src/reclaim_python_sdk/utils/constants.py
- src/reclaim_python_sdk/utils/errors.py
- src/reclaim_python_sdk/utils/proof_utils.py
- src/reclaim_python_sdk/utils/session_utils.py
- tests/__init__.py
- tests/js_compat/.gitignore
- tests/js_compat/__init__.py
- tests/js_compat/generate_js.js
- tests/js_compat/package.json
- tests/js_compat/test_compat.py
- tests/test_request.py
💤 Files with no reviewable changes (15)
- build/lib/reclaim_python_sdk/utils/session_utils.py
- build/lib/reclaim_python_sdk/contract_data/abi.py
- build/lib/reclaim_python_sdk/reclaim.py
- build/lib/reclaim_python_sdk/utils/validators.py
- build/lib/reclaim_python_sdk/witness.py
- build/lib/reclaim_python_sdk/utils/constants.py
- build/lib/reclaim_python_sdk/smart_contract.py
- build/lib/reclaim_python_sdk/utils/proof_utils.py
- build/lib/reclaim_python_sdk/utils/types.py
- build/lib/reclaim_python_sdk/__init__.py
- build/lib/reclaim_python_sdk/utils/__init__.py
- build/lib/reclaim_python_sdk/utils/errors.py
- build/lib/reclaim_python_sdk/utils/interfaces.py
- build/lib/reclaim_python_sdk/utils/logger.py
- build/lib/reclaim_python_sdk/utils/validation_utils.py
```diff
@@ -1,6 +1,14 @@
 <div>
   <div>
     <img src="https://raw.githubusercontent.com/reclaimprotocol/.github/main/assets/banners/Python-SDK.png" />
```
Add alt text to the banner image for accessibility.
The image is missing alternate text, which is required for accessibility compliance.
🔧 Proposed fix
```diff
-<img src="https://raw.githubusercontent.com/reclaimprotocol/.github/main/assets/banners/Python-SDK.png" />
+<img src="https://raw.githubusercontent.com/reclaimprotocol/.github/main/assets/banners/Python-SDK.png" alt="Reclaim Protocol Python SDK Banner" />
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```html
<img src="https://raw.githubusercontent.com/reclaimprotocol/.github/main/assets/banners/Python-SDK.png" alt="Reclaim Protocol Python SDK Banner" />
```
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 3-3: Images should have alternate text (alt text)
(MD045, no-alt-text)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 3, The banner <img> tag is missing an alt attribute;
update the <img
src="https://raw.githubusercontent.com/reclaimprotocol/.github/main/assets/banners/Python-SDK.png"
/> element to include a meaningful alt text (e.g., alt="Reclaim Protocol Python
SDK banner") so the image is accessible to screen readers and meets
accessibility requirements.
```python
try:
# Verify a single proof or array of proofs
is_valid = await verify_proof(proof)

if is_valid:
    print("Proof is valid and was signed by Reclaim Protocol attestors!")
    return True
else:
    print("Proof is invalid or signatures failed verification.")
    return False
except Exception as e:
    print(f"Error verifying proof: {e}")
    return False
```
Fix indentation error in try/except block.
The code inside the try block has incorrect indentation which would cause a syntax error.
🔧 Proposed fix
try:
- # Verify a single proof or array of proofs
- is_valid = await verify_proof(proof)
-
- if is_valid:
- print("Proof is valid and was signed by Reclaim Protocol attestors!")
- return True
- else:
- print("Proof is invalid or signatures failed verification.")
- return False
+ # Verify a single proof or array of proofs
+ is_valid = await verify_proof(proof)
+
+ if is_valid:
+ print("Proof is valid and was signed by Reclaim Protocol attestors!")
+ return True
+ else:
+ print("Proof is invalid or signatures failed verification.")
+ return False
except Exception as e:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 342 - 355, The try/except block around the call to
verify_proof has incorrect indentation causing a SyntaxError; fix it by
indenting the lines that belong inside the try block (the call to verify_proof
and subsequent if/else handling of is_valid) so they are nested under try,
keeping the except Exception as e: aligned with try; ensure variables like
is_valid and the prints/returns remain inside the try and the except only
contains the error handling print and return False.
```python
async def handle_proof_webhook(proof_json_request):
    proof = Proof.from_json(proof_json_request)

    try:
    # Verify a single proof or array of proofs
    is_valid = await verify_proof(proof)

    if is_valid:
        print("Proof is valid and was signed by Reclaim Protocol attestors!")
        return True
    else:
        print("Proof is invalid or signatures failed verification.")
        return False
    except Exception as e:
        print(f"Error verifying proof: {e}")
        return False
```
Fix the indentation in this published example.
`is_valid = await verify_proof(proof)` is outside the try block as rendered, so this snippet is invalid Python. Since this file is generated, please fix the source README/long description too so PyPI users can copy it verbatim.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk.egg-info/PKG-INFO` around lines 374 - 389, The example
in handle_proof_webhook has incorrect indentation: move the call to verify_proof
(is_valid = await verify_proof(proof)) inside the try block and ensure the
corresponding if/else and the except Exception as e: block are aligned correctly
under that try so the function is valid Python; locate the snippet in
handle_proof_webhook and adjust indentation accordingly (Proof.from_json,
verify_proof, handle_proof_webhook are the symbols to find), and then update the
original source README/long description used to generate this PKG-INFO so the
fixed example is regenerated for PyPI consumers.
```diff
 try:
     if not address or not message:
         raise InvalidParamError("Address and message are required")

     self._context = Context(contextAddress=address, contextMessage=message)
 except Exception as e:
-    logger.info(f"Error adding context: {str(e)}")
-    raise AddContextError("Error adding context") from e
+    logger.info(f"Error setting context: {str(e)}")
+    raise AddContextError("Error setting context") from e
```
Let the documented InvalidParamError escape from set_context.
Missing address / message currently raises InvalidParamError, but the blanket except Exception immediately converts it to AddContextError. Callers can’t catch the validation error your docstring advertises anymore.
Suggested fix
- try:
- if not address or not message:
- raise InvalidParamError("Address and message are required")
+ if not address or not message:
+ raise InvalidParamError("Address and message are required")
+ try:
self._context = Context(contextAddress=address, contextMessage=message)
except Exception as e:
logger.info(f"Error setting context: {str(e)}")
        raise AddContextError("Error setting context") from e
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 487-487: Abstract raise to an inner function
(TRY301)
[warning] 487-487: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 490-490: Use explicit conversion flag
Replace with conversion flag
(RUF010)
[warning] 491-491: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/reclaim.py` around lines 485 - 491, The set_context
logic currently catches all exceptions and always raises AddContextError,
preventing the documented InvalidParamError from escaping; update the exception
handling in set_context so that InvalidParamError (the validation raised when
address or message are missing) is allowed to propagate—e.g., in the except
block check for isinstance(e, InvalidParamError) and re-raise it, and only
convert/log other exceptions to AddContextError; reference Context,
InvalidParamError, AddContextError, logger, and self._context when making the
change.
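The fix above follows a general pattern: validate before entering the try block so the documented validation error propagates, and wrap only unexpected failures. A minimal runnable sketch of that pattern — the exception classes and the dict standing in for `Context` here are illustrative, not the SDK's actual imports:

```python
class InvalidParamError(ValueError):
    pass

class AddContextError(Exception):
    pass

def set_context(address, message):
    # Validate BEFORE the try block so InvalidParamError escapes untouched.
    if not address or not message:
        raise InvalidParamError("Address and message are required")
    try:
        # Stand-in for Context(contextAddress=..., contextMessage=...).
        return {"contextAddress": address, "contextMessage": message}
    except Exception as e:
        # Only unexpected construction failures get wrapped.
        raise AddContextError("Error setting context") from e
```

With this shape, callers can catch `InvalidParamError` for bad input and `AddContextError` for everything else, matching the docstring's contract.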
```python
async def start_session(self, params: StartSessionParams) -> None:
    """
    Starts the proof request session and monitors for proof submission

    This method begins polling the session status to detect when
    a proof has been generated and submitted. It handles both default Reclaim callbacks
    and custom callback URLs.

    For default callbacks: Verifies proofs automatically and passes them to onSuccess
    For custom callbacks: Monitors submission status and notifies via onSuccess when complete.

    Args:
        params (StartSessionParams): Contains on_success and on_error callbacks.

    Raises:
        SessionNotStartedError: When session ID is not defined
        ProofNotVerifiedError: When proof verification fails (default callback only)
        ProofSubmissionFailedError: When proof submission fails (custom callback only)
        ProviderFailedError: When proof generation fails with timeout

    Example:
        >>> async def handle_success(proof):
        ...     print('Proof received:', proof)
        >>> async def handle_error(error):
        ...     print('Error:', error)
        >>> await proof_request.start_session(StartSessionParams(
        ...     on_success=handle_success,
        ...     on_error=handle_error
        ... ))
    """
    if not self._session_id:
        message = "Session can't be started due to undefined value of sessionId"
        logger.info(message)
        raise SessionNotStartedError(message)

    logger.info('Starting session')
    session_update_polling_interval = 3

    from .utils.errors import ProviderFailedError, ErrorDuringVerificationError, ProofSubmissionFailedError

    # In JS: setInterval. Here we use an asyncio background task polling.
    async def _poll_session():
        while True:
            try:
                status_url_response = await fetch_status_url(self._session_id)
                if not status_url_response.session:
                    await asyncio.sleep(session_update_polling_interval)
                    continue

                status_v2 = status_url_response.session.statusV2

                if status_v2 == SessionStatus.PROOF_GENERATION_FAILED:
                    # ignoring complicated timeout logic for python since we just poll endlessly
                    await asyncio.sleep(session_update_polling_interval)
                    continue

                if status_v2 in [SessionStatus.PROOF_SUBMISSION_FAILED, "ERROR_SUBMITTED", "ERROR_SUBMISSION_FAILED"]:
                    raise ErrorDuringVerificationError()

                is_default_callback_url = self.get_app_callback_url() == f"{DEFAULT_RECLAIM_CALLBACK_URL}{self._session_id}"

                if is_default_callback_url:
                    if status_url_response.session.proofs and len(status_url_response.session.proofs) > 0:
                        proofs = status_url_response.session.proofs
                        if self._claim_creation_type == "STANDALONE":
                            verified = await verify_proof(proofs)
                            if not verified:
                                logger.info(f"Proofs not verified: {proofs}")
                                raise ProofNotVerifiedError()

                        if len(proofs) == 1:
                            params.on_success(proofs[0])
                        else:
                            params.on_success(proofs)
                        break
                else:
                    if status_v2 == SessionStatus.PROOF_SUBMISSION_FAILED:
                        raise ProofSubmissionFailedError()
                    if status_v2 in [SessionStatus.PROOF_SUBMITTED, "AI_PROOF_SUBMITTED"]:
                        if params.on_success:
                            params.on_success([])
                        break

            except Exception as e:
                if params.on_error:
                    params.on_error(e)
                break
            await asyncio.sleep(session_update_polling_interval)

    # Start background polling task and return immediately (or wait depending on design? In python we'll use create_task)
    asyncio.create_task(_poll_session())
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
rg -n '^\s*(import asyncio|from asyncio import)' src/reclaim_python_sdk/reclaim.py
rg -n '\basyncio\.(sleep|create_task)\b' src/reclaim_python_sdk/reclaim.py
```
Repository: reclaimprotocol/reclaim-python-sdk
Import asyncio before scheduling the poller.
This method uses asyncio.sleep() at lines 602, 609, 643 and asyncio.create_task() at line 646, but asyncio is never imported in this file. The code will raise NameError on the first start_session() call.
Suggested fix
+import asyncio
import json
import time
 import warnings
🧰 Tools
🪛 Ruff (0.15.5)
[error] 602-602: Undefined name asyncio
(F821)
[error] 609-609: Undefined name asyncio
(F821)
[warning] 613-613: Abstract raise to an inner function
(TRY301)
[warning] 624-624: Abstract raise to an inner function
(TRY301)
[warning] 633-633: Abstract raise to an inner function
(TRY301)
[warning] 639-639: Do not catch blind exception: Exception
(BLE001)
[error] 643-643: Undefined name asyncio
(F821)
[error] 646-646: Undefined name asyncio
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/reclaim.py` around lines 556 - 646, The start_session
method uses asyncio (asyncio.sleep and asyncio.create_task) but asyncio is not
imported, causing a NameError; fix by importing asyncio (preferably at the
module top) or adding "import asyncio" at the top of this file so start_session,
its inner _poll_session coroutine, and the asyncio.create_task(...) call can
run; locate the start_session function and ensure asyncio is imported before any
use of asyncio.sleep or asyncio.create_task.
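Beyond the missing import, the background-polling pattern the method relies on can be shown end-to-end. A self-contained sketch, assuming `import asyncio` at module top and with `fetch_status_url` replaced by a hypothetical in-memory stub so it runs without a backend:

```python
import asyncio

# Hypothetical stand-in for fetch_status_url: reports PENDING twice,
# then a terminal status.
async def fake_fetch_status(state):
    state["calls"] += 1
    return "PROOF_SUBMITTED" if state["calls"] >= 3 else "PENDING"

async def start_session(state, on_success, interval=0.01):
    async def _poll():
        while True:
            status = await fake_fetch_status(state)
            if status == "PROOF_SUBMITTED":
                on_success([])
                break
            await asyncio.sleep(interval)
    # Schedule the poller in the background; keep the task so callers
    # can await or cancel it.
    return asyncio.create_task(_poll())

async def main():
    state = {"calls": 0}
    results = []
    task = await start_session(state, results.append)
    await task
    return state["calls"], results

calls, results = asyncio.run(main())
```

Returning the task (rather than dropping the `create_task` result, as the diff does) also avoids the task being garbage-collected mid-poll, which is worth considering for the real method.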
```diff
 async def get_attestors() -> List[str]:
     """
-    Retrieves the list of witnesses for a given claim
+    Retrieves the list of witnesses (attestors) from the backend
     """
+    from .constants import DEFAULT_ATTESTORS_URL
     try:
-        beacon = await make_beacon()
-        if not beacon:
-            logger.info('No beacon available for getting witnesses')
-            raise Exception('No beacon available')
-
-        state = await beacon.get_state(epoch)
-        witness_list = fetch_witness_list_for_claim(state, identifier, timestamp_s)
-        witnesses = [w.id.lower() for w in witness_list]
-        return witnesses
+        async with httpx.AsyncClient() as client:
+            response = await client.get(DEFAULT_ATTESTORS_URL)
+            if response.status_code != 200:
+                response.read()
+                raise Exception(f"Failed to fetch witness addresses: {response.status_code}")
+
+            res_data = response.json()
+            # The JS SDK expects data: { address: string }[] but handles res.data
+            address_list = res_data.get("data", [])
+            witnesses = [w["address"].lower() for w in address_list if "address" in w]
+            return witnesses
     except Exception as err:
-        logger.info(f'Error getting witnesses for claim: {str(err)}')
-        raise Exception(f'Error getting witnesses for claim: {str(err)}')
+        logger.info(f'Error getting attestors: {str(err)}')
+        raise Exception(f'Error getting attestors: {str(err)}')
```
🧩 Analysis chain
🏁 Scripts executed:
```bash
#!/bin/bash
set -euo pipefail
git rev-parse --verify origin/main >/dev/null
git diff --unified=80 origin/main...HEAD -- src/reclaim_python_sdk/utils/proof_utils.py
echo
echo "Current witness/attestor call sites:"
rg -n -C4 'get_witnesses_for_claim|get_attestors|fetch_witness_list_for_claim|assert_valid_signed_claim|verify_proof\(' src
cat -n src/reclaim_python_sdk/utils/proof_utils.py | head -120
rg -n 'assert_valid_signed_claim|def verify_proof|def assert_valid' src/reclaim_python_sdk/utils/proof_utils.py
rg -n 'claim\.' src/reclaim_python_sdk/utils/proof_utils.py -A 2 -B 2 | head -80
rg -n 'get_attestors\(' src -A 3 -B 3
rg -n 'assert_valid_signed_claim\(' src -A 3 -B 3
rg -n 'verify_proof\|verify_signed_claim' src/reclaim_python_sdk/utils/proof_utils.py
grep -n 'def verify' src/reclaim_python_sdk/utils/proof_utils.py
sed -n '75,105p' src/reclaim_python_sdk/reclaim.py
rg -n 'class SignedClaim|class ProviderClaimData' src
rg -n 'def fetch_witness_list_for_claim' src
sed -n '24,50p' src/reclaim_python_sdk/witness.py
sed -n '11,40p' src/reclaim_python_sdk/utils/interfaces.py
rg -n 'fetch_witness_list_for_claim' src
sed -n '81,100p' src/reclaim_python_sdk/utils/types.py
rg -n 'proof\.witnesses|\.witnesses' src/reclaim_python_sdk/reclaim.py
sed -n '70,110p' src/reclaim_python_sdk/reclaim.py
cat -n src/reclaim_python_sdk/utils/interfaces.py | sed -n '15,30p'
rg -n 'DEFAULT_ATTESTORS_URL|ATTESTORS' src/reclaim_python_sdk/utils/constants.py
```
Verification now depends on the current global attestor list rather than claim-specific witness data.
The code fetches the live attestor list at verification time instead of using the claim's embedded witness set or deterministically deriving it from the claim's epoch and identifier. This introduces temporal dependency: after attestor rotation, previously valid proofs will fail. The Proof object carries a witnesses field that is ignored, and fetch_witness_list_for_claim() exists but is unused. To maintain claim-anchored verification, either use proof.witnesses directly or restore deterministic witness selection based on the claim's epoch, identifier, and timestamp.
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 67-67: Abstract raise to an inner function
(TRY301)
[warning] 67-67: Create your own exception
(TRY002)
[warning] 67-67: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 74-74: Do not catch blind exception: Exception
(BLE001)
[warning] 75-75: Use explicit conversion flag
Replace with conversion flag
(RUF010)
[warning] 76-76: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
[warning] 76-76: Create your own exception
(TRY002)
[warning] 76-76: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 76-76: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/utils/proof_utils.py` around lines 57 - 76, The
verification currently calls get_attestors() which fetches the live global
attestor list; change the verification flow to anchor witnesses to the claim by
first using the Proof object's witnesses field (Proof.witnesses) if present, and
only fall back to a deterministic selection routine (call
fetch_witness_list_for_claim(claim) which uses claim.epoch/identifier/timestamp)
when proof.witnesses is absent, removing reliance on get_attestors() for claim
verification; update any function that currently invokes get_attestors() during
verification to use proof.witnesses or fetch_witness_list_for_claim(...) instead
and ensure types/normalization (lowercasing) match existing expectations.
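The claim-anchored fallback described above can be sketched as follows. `witnesses_for` and the `derive` callback are hypothetical stand-ins for the SDK's `Proof.witnesses` field and `fetch_witness_list_for_claim`; the point is only the ordering of the two sources:

```python
# Hedged sketch: prefer the proof's embedded witness set, fall back to a
# deterministic derivation from the claim, and never consult the live
# attestor pool at verification time.
def witnesses_for(proof: dict, derive) -> list:
    embedded = proof.get("witnesses") or []
    if embedded:
        # Normalize casing to match the existing lowercase comparisons.
        return [w.lower() for w in embedded]
    # `derive` stands in for fetch_witness_list_for_claim(claim), which
    # uses the claim's epoch, identifier, and timestamp.
    return [w.lower() for w in derive(proof)]
```

Because both branches depend only on data carried by (or derivable from) the claim, proofs stay verifiable after attestor rotation.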
```diff
 except Exception as err:
-    logger.info(f'Error getting witnesses for claim: {str(err)}')
-    raise Exception(f'Error getting witnesses for claim: {str(err)}')
+    logger.info(f'Error getting attestors: {str(err)}')
+    raise Exception(f'Error getting attestors: {str(err)}')
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
rg -n -C3 'get_attestors\(|raise Exception\(|except Exception as err|ProofNotVerifiedError|ErrorDuringVerificationError' src tests
```
Use a domain-specific error and preserve the exception chain.
Catching Exception and re-raising a new generic Exception loses the original traceback and makes it impossible for SDK callers to distinguish network, parsing, or data structure failures. Either use an existing error class from errors.py or create a domain-specific one for attestor-fetching failures, and chain it with from err to preserve the stack trace.
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 74-74: Do not catch blind exception: Exception
(BLE001)
[warning] 75-75: Use explicit conversion flag
Replace with conversion flag
(RUF010)
[warning] 76-76: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
[warning] 76-76: Create your own exception
(TRY002)
[warning] 76-76: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 76-76: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/utils/proof_utils.py` around lines 74 - 76, The except
block in proof_utils.py currently catches Exception and re-raises a generic
Exception, losing the original traceback; replace this by raising a
domain-specific error (either an existing class from errors.py or add a new
AttestorFetchError) and preserve the exception chain using "from err" (i.e.,
raise AttestorFetchError("...") from err), and keep the logger call but change
its level if desired; locate the except block that references logger.info('Error
getting attestors...') and modify it to raise the domain-specific error from the
caught err.
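The suggested shape, sketched with a hypothetical `AttestorFetchError` (the real class would live in `errors.py`) and an injected `fetch` callable standing in for the HTTP request so the example runs offline:

```python
class AttestorFetchError(Exception):
    """Raised when the attestor list cannot be fetched from the backend."""

def get_attestors_from(fetch):
    try:
        # `fetch` stands in for the HTTP call returning address strings.
        return [addr.lower() for addr in fetch()]
    except Exception as err:
        # `from err` keeps the original traceback on __cause__, so callers
        # can still distinguish network, parsing, or data-shape failures.
        raise AttestorFetchError("Error getting attestors") from err

def failing_fetch():
    raise ConnectionError("backend unreachable")
```

A caller who catches `AttestorFetchError` can inspect `err.__cause__` (here a `ConnectionError`) to see the root cause, which a bare `raise Exception(...)` would discard.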
```diff
 except Exception as err:
     error_message = f'Failed to update session with sessionId: {session_id}'
     logger.info(f'{error_message}\n{str(err)}')
-    raise UpdateSessionError(f'Error updating session with sessionId: {session_id}')
\ No newline at end of file
+    raise UpdateSessionError(f'Error updating session with sessionId: {session_id}')
```
Preserve exception context when re-raising.
Same issue as fetch_status_url - the original exception should be chained for better debugging.
🔧 Proposed fix
except Exception as err:
error_message = f'Failed to update session with sessionId: {session_id}'
- logger.info(f'{error_message}\n{str(err)}')
- raise UpdateSessionError(f'Error updating session with sessionId: {session_id}')
+ logger.info(f'{error_message}\n{err!s}')
+ raise UpdateSessionError(f'Error updating session with sessionId: {session_id}') from err
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 66-66: Do not catch blind exception: Exception
(BLE001)
[warning] 68-68: Use explicit conversion flag
Replace with conversion flag
(RUF010)
[warning] 69-69: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
[warning] 69-69: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/utils/session_utils.py` around lines 66 - 69, The
except block in update session handling loses the original exception context;
change the logging to include the traceback (e.g., use logger.exception or
logger.error with str(err)) and re-raise the custom UpdateSessionError using
exception chaining (raise UpdateSessionError(f'Error updating session with
sessionId: {session_id}') from err) so the original exception is preserved —
mirror the fix used in fetch_status_url and update the block referencing
session_id, logger, and UpdateSessionError accordingly.
```python
try:
    response = requests.get(
        f'{DEFAULT_RECLAIM_STATUS_URL}{session_id}',
        headers={'Content-Type': 'application/json'}
    )
```
Add timeout to HTTP request to prevent indefinite hangs.
The requests.get call lacks a timeout parameter, which can cause the application to hang indefinitely if the server doesn't respond.
🔧 Proposed fix
response = requests.get(
f'{DEFAULT_RECLAIM_STATUS_URL}{session_id}',
- headers={'Content-Type': 'application/json'}
+ headers={'Content-Type': 'application/json'},
+ timeout=30
)
🧰 Tools
🪛 Ruff (0.15.5)
[error] 80-80: Probable use of requests call without timeout
(S113)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/utils/session_utils.py` around lines 79 - 83, The
requests.get call constructing the status fetch (using
DEFAULT_RECLAIM_STATUS_URL and session_id in session_utils.py) lacks a timeout
and can hang; add a timeout argument (e.g., timeout=5 or a configurable
DEFAULT_REQUEST_TIMEOUT constant) to the requests.get(...) call and update the
surrounding try/except to explicitly catch requests.exceptions.Timeout and
requests.exceptions.RequestException so you can log/handle timeouts and other
network errors cleanly rather than blocking indefinitely.
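A sketch of the suggested shape. `DEFAULT_REQUEST_TIMEOUT`, `StatusUrlError`, and the injected `get` callable are illustrative names; in the SDK the call would be `requests.get(..., timeout=...)` with `requests.exceptions.Timeout` caught, but the builtin `TimeoutError` stands in here so the sketch needs no network:

```python
DEFAULT_REQUEST_TIMEOUT = 30  # seconds; a configurable module-level default

class StatusUrlError(Exception):
    pass

def fetch_status(get, url, timeout=DEFAULT_REQUEST_TIMEOUT):
    try:
        # With requests this would be requests.get(url, timeout=timeout);
        # requests raises requests.exceptions.Timeout on expiry.
        return get(url, timeout=timeout)
    except TimeoutError as err:
        # Convert the hang into a bounded, typed failure the caller can handle.
        raise StatusUrlError(f"Timed out fetching {url}") from err

def slow_get(url, timeout):
    # Simulates a transport that never answers within the deadline.
    raise TimeoutError(f"no response within {timeout}s")
```

Note that `timeout` bounds connect and read phases separately in requests, so a single request can still take up to roughly twice the value; a `(connect, read)` tuple gives finer control.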
```python
except Exception as err:
    error_message = f'Failed to fetch status URL for sessionId: {session_id}'
    logger.info(f'{error_message}\n{str(err)}')
    raise StatusUrlError(f'Error fetching status URL for sessionId: {session_id}')
```
Preserve exception context when re-raising.
The original exception is caught but not chained when re-raising `StatusUrlError`, which loses valuable debugging information. Use `from err` to preserve the exception chain.
🔧 Proposed fix
except Exception as err:
error_message = f'Failed to fetch status URL for sessionId: {session_id}'
- logger.info(f'{error_message}\n{str(err)}')
- raise StatusUrlError(f'Error fetching status URL for sessionId: {session_id}')
+ logger.info(f'{error_message}\n{err!s}')
+ raise StatusUrlError(f'Error fetching status URL for sessionId: {session_id}') from err
🧰 Tools
🪛 Ruff (0.15.5)
[warning] 93-93: Do not catch blind exception: Exception
(BLE001)
[warning] 95-95: Use explicit conversion flag
Replace with conversion flag
(RUF010)
[warning] 96-96: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
[warning] 96-96: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/reclaim_python_sdk/utils/session_utils.py` around lines 93 - 96, The
except block for Status URL retrieval catches Exception as err but loses the
original traceback when re-raising; update the handler in session_utils.py (the
except block that references session_id, logger, and StatusUrlError) to re-raise
the new StatusUrlError using exception chaining (raise StatusUrlError(f'Error
fetching status URL for sessionId: {session_id}') from err) and log the original
exception (e.g., use logger.exception or include err) so the original context is
preserved.
Summary by CodeRabbit
Release Notes
New Features
Changed
Deprecated
- add_context() → use set_context()
- set_callback_url() → use set_app_callback_url()
Documentation
Chores