Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Adds a complete Onshape connector Odoo module: package manifest and docs, an API adapter (HMAC/OAuth2) with retry/quota handling, connector components (importers, exporter, binder, mapper, listener), webhook controller, backend/document/product/BOM models and views, cron/queue_job records, wizards, and tests.
Sequence Diagram(s)

sequenceDiagram
participant User
participant Backend as Onshape Backend
participant Importer as Batch Importer
participant Adapter as Onshape API Adapter
participant Model as Odoo Models
participant DB as Database
User->>Backend: action_import_documents()
Backend->>Importer: run(backend)
Importer->>Adapter: search_documents()
Adapter-->>Importer: documents
loop per document
Importer->>Model: create/update onshape.document
Model->>DB: write document
Importer->>Adapter: read_document_elements(document)
Adapter-->>Importer: elements
loop per element
Importer->>Model: create/update onshape.document.element
Model->>DB: write element
end
end
Importer-->>Backend: done
sequenceDiagram
participant Onshape as Onshape Platform
participant Webhook as Webhook Controller
participant Validator as Signature Validator
participant Handler as Event Handler
participant Backend as Onshape Backend
participant Job as Delayed Job
Onshape->>Webhook: POST /connector_onshape/webhook/<backend_id>
Webhook->>Validator: _validate_signature(payload)
Validator-->>Webhook: valid/invalid
Webhook->>Handler: _get_event_handler(event_type)
alt metadata change
Handler->>Job: schedule product import
Job->>Backend: action_import_products()
else workflow transition
Handler->>Backend: update binding states
else revision created
Handler->>Backend: mark released
end
Webhook-->>Onshape: 200/4xx
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 16
🧹 Nitpick comments (19)
connector_onshape/readme/USAGE.rst (1)
35-40: Add job-monitoring hint for automatic exports.

Since export is async, documenting where to check failures/retries will reduce support friction.
Suggested doc patch
 Automatic Export
 ~~~~~~~~~~~~~~~~
 @@
 When a product's ``default_code`` or ``name`` is changed in Odoo, a
 background job is automatically queued to export the update to Onshape
 (if the product is bound to an Onshape part).
+You can monitor queued/failed jobs in **Queue Jobs** (Technical menu),
+and retry failed exports from there.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/readme/USAGE.rst` around lines 35 - 40, Update the "Automatic Export" section to add a brief job-monitoring hint: state that exports are performed asynchronously when a product's default_code or name changes and instruct users to check Odoo's background job/queue dashboard (Queue Jobs / Settings > Technical > Jobs or the project's job-queue UI) for failure/retry status, review worker logs for error details, and verify the product's Onshape binding/part association to troubleshoot stuck exports; reference the automatic export/job terminology (automatic export, default_code, name, export job) so readers can locate and correlate the docs with the system UI and logs.

connector_onshape/readme/CONFIGURE.rst (1)
97-107: Add queue-job runner prerequisite to scheduled sync docs.

Cron activation alone is not enough if job workers/runner are not configured; imports may queue but never execute.
Suggested doc patch
 Scheduled Sync (Cron Jobs)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 Three cron jobs are created (disabled by default):
 @@
 Enable them in **Settings > Technical > Automation > Scheduled Actions**
 when you're ready for automatic background synchronization.
+
+.. note::
+   This connector uses ``queue_job`` for async processing. Ensure your Odoo
+   deployment has the queue job runner/workers configured, otherwise scheduled
+   imports may be enqueued but not processed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/readme/CONFIGURE.rst` around lines 97 - 107, Update the "Scheduled Sync (Cron Jobs)" section to add a prerequisite note that enabling the cron jobs ("Onshape: Import Documents", "Onshape: Import Products", "Onshape: Import BOMs") is not sufficient by itself — a queue/job runner or worker process must be configured and running for queued import jobs to actually execute; include a short bullet describing how to configure/start the job runner (e.g., enable the worker service, point it at the app's job queue, and ensure it runs on boot) and a reminder to verify the jobs in Settings > Technical > Automation > Scheduled Actions after starting the worker.

connector_onshape/data/ir_cron_data.xml (1)
4-38: Consider staggering cron start times when enabling jobs.
``ir_cron_import_onshape_documents`` and ``ir_cron_import_onshape_products`` share the same 6-hour cadence; enabling both simultaneously can create API bursts and increase rate-limit pressure.
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/data/ir_cron_data.xml` around lines 4 - 38, Stagger the cron start times to avoid simultaneous API bursts: for records ir_cron_import_onshape_documents (model.cron_import_documents) and ir_cron_import_onshape_products (model.cron_import_products) add distinct nextcall values so they don't start at the same moment (e.g., offset one by a few hours), and optionally set a different nextcall for ir_cron_import_onshape_boms (model.cron_import_boms) if needed; implement this by adding a <field name="nextcall" eval="..."> with appropriate datetime offsets for each record so enabling them won't trigger concurrent calls.

connector_onshape/tests/test_wizard.py (1)
36-53: ``test_documents_only_import`` doesn't verify the import action.

The patched ``mock_import`` at Line 44 is unused, and the test body only asserts wizard field state without calling ``wizard.action_import()``. Move the patch into the test body using a context manager and assert that ``wizard.action_import()`` invokes the mocked backend method.

Proposed test hardening
-    @patch.object(
-        type(
-            OnshapeTestCase.env["onshape.backend"]
-            if hasattr(OnshapeTestCase, "env")
-            else object
-        ),
-        "action_import_documents",
-    )
-    def test_documents_only_import(self, mock_import):
-        # Just verify the wizard can be created and doesn't crash
+    def test_documents_only_import(self):
         wizard = self.env["onshape.import.wizard"].create(
             {
                 "backend_id": self.backend.id,
                 "import_type": "documents",
             }
         )
+        with patch.object(type(self.env["onshape.backend"]), "action_import_documents") as mock_import:
+            wizard.action_import()
+            mock_import.assert_called_once()
         self.assertEqual(wizard.import_type, "documents")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_wizard.py` around lines 36 - 53, The test test_documents_only_import currently patches action_import_documents but never uses the mock or calls the wizard import; update the test to patch the backend method within the test body (e.g., using patch.object as a context manager on the same target used now), create the wizard via self.env["onshape.import.wizard"].create(...), then call wizard.action_import() and assert the patched method (action_import_documents) was called (use mock_import.assert_called_once() or similar) to verify the import action is invoked; keep references to the existing symbols: test_documents_only_import, action_import_documents, wizard.action_import(), and the backend retrieval via OnshapeTestCase.env/ self.env to locate the code.

connector_onshape/tests/test_adapter.py (1)
78-89: Consider moving the ``OnshapeQuotaError`` import to the module level.

The import on line 80 inside the test method body works, but placing it alongside other imports at the top of the file (lines 4-7) would be more conventional and consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_adapter.py` around lines 78 - 89, Move the OnshapeQuotaError import out of the test body and into the module-level imports so it’s declared alongside other imports at the top of connector_onshape/tests/test_adapter.py; update the test_quota_exhausted_raises to reference the module-level OnshapeQuotaError (used when adapter._request("GET", "/api/v6/documents") raises) instead of importing it inside the function.

connector_onshape/tests/test_listener.py (1)
10-47: Tests only verify no exception, not that export was actually skipped.

All three tests rely on "no error means it worked," but they don't assert that ``export_record`` (or ``with_delay().export_record()``) was never called. If the listener had a bug that queued exports anyway (without raising), these tests would still pass.

Consider patching ``export_record`` or ``with_delay`` and asserting ``.assert_not_called()`` for at least the key scenario:

Example for test_connector_no_export_context_skips
+from unittest.mock import patch
+
 class TestProductListener(OnshapeTestCase):
     ...
     def test_connector_no_export_context_skips(self):
+        with patch.object(
+            type(self.env["onshape.product.product"]),
+            "export_record",
+        ) as mock_export:
             self.product_bolt.with_context(connector_no_export=True).write(
                 {"default_code": "KF-BOLT-002"}
             )
+        mock_export.assert_not_called()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_listener.py` around lines 10 - 47, Update the three tests (test_connector_no_export_context_skips, test_unlinked_product_no_error, test_irrelevant_field_no_export) to assert that no export was queued by patching the export entry point: mock the model method that triggers the export (e.g., export_record on the Onshape listener or the recordset's with_delay().export_record chain) using unittest.mock.patch or monkeypatch, perform the write operation under test, and then call assert_not_called() on the mock; ensure you patch the exact symbol used by the listener (export_record or the with_delay proxy) so the test fails if an export is enqueued without raising.

connector_onshape/components/listener.py (2)
26-26: Prefix unused lambda parameters with ``_`` to satisfy ARG005.

``record`` and ``**kwargs`` are structurally required by the ``skip_if`` decorator to match the wrapped function's signature, but their values go unused. Use underscore-prefixed names to make the intent explicit and silence the linter warning.

♻️ Proposed fix
-@skip_if(lambda self, record, **kwargs: self.env.context.get("connector_no_export"))
+@skip_if(lambda self, _record, **_kwargs: self.env.context.get("connector_no_export"))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/listener.py` at line 26, The skip_if decorator lambda currently declares unused parameters record and **kwargs which triggers ARG005; update the lambda in the `@skip_if`(...) call to prefix unused parameters with an underscore (e.g., use lambda self, _record, **_kwargs: self.env.context.get("connector_no_export")) so the signature still matches the wrapped function while silencing the linter warning for the skip_if decorator usage.
24-24: Suppress RUF012 via ``ClassVar`` annotation on ``_apply_on``.

Ruff flags ``_apply_on = ["product.product"]`` as a mutable default class attribute (RUF012). Annotating with ``ClassVar`` is the idiomatic fix used in strict OCA setups.

♻️ Proposed fix
+from typing import ClassVar
+
 class OnshapeProductListener(Component):
     ...
-    _apply_on = ["product.product"]
+    _apply_on: ClassVar = ["product.product"]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/listener.py` at line 24, Ruff flags the class attribute _apply_on as a mutable default; annotate it with typing.ClassVar to suppress RUF012 by changing the declaration of _apply_on to use a ClassVar of a string-list type (e.g. ClassVar[List[str]] or ClassVar[list[str]] depending on your typing style) and add the required import(s) (ClassVar and List or use built-in list with forward-compatible typing) at the top of connector_onshape/components/listener.py so _apply_on remains a class-level constant.

connector_onshape/components/binder.py (2)
14-20: Suppress RUF012 via ``ClassVar`` on mutable class attributes.

Both ``_inherit`` (line 14) and ``_apply_on`` (lines 17–20) are mutable list class attributes flagged by Ruff RUF012. Annotate with ``ClassVar`` for lint compliance.

♻️ Proposed fix
+from typing import ClassVar
+
 class OnshapeBinder(Component):
     ...
-    _inherit = ["onshape.base", "base.binder"]
+    _inherit: ClassVar = ["onshape.base", "base.binder"]
     _usage = "binder"
     _external_field = "external_id"
-    _apply_on = [
+    _apply_on: ClassVar = [
         "onshape.product.product",
         "onshape.mrp.bom",
     ]

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 14 - 20, The mutable class attributes _inherit and _apply_on in binder.py are flagged by RUF012; update their annotations to use typing.ClassVar (e.g., ClassVar[List[str]]) and import the required names (ClassVar, List) from typing so these remain class-level constants rather than instance attributes; modify the class definition where _inherit and _apply_on are declared to use the ClassVar[List[str]] annotation for each.
52-56: ``if part_id:`` treats empty string same as ``None``.

``make_compound_id("d", "e", "")`` silently returns ``"d/e"`` rather than raising or including a trailing slash. If callers can pass ``part_id=""`` (e.g., from a model field that is falsy-empty), the intent should be explicit. Using ``if part_id is not None:`` makes the contract clearer.

♻️ Proposed fix
 @staticmethod
 def make_compound_id(document_id, element_id, part_id=None):
-    if part_id:
+    if part_id is not None:
         return f"{document_id}/{element_id}/{part_id}"
     return f"{document_id}/{element_id}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 52 - 56, The function make_compound_id treats empty-string part_id as None because it checks `if part_id:`, which drops trailing slashes; change the predicate to `if part_id is not None:` so that an explicit empty string produces a compound id with a trailing slash (i.e., `f"{document_id}/{element_id}/{part_id}"`) while None still results in the two-segment form; update the method make_compound_id accordingly to use `is not None` to make the contract explicit.

connector_onshape/tests/test_export_product.py (1)
56-78: Missing test for malformed/1-part compound ID in ``split_compound_id``.

The binder's ``split_compound_id`` has a silent fallback for inputs with fewer than 2 ``/``-separated parts, returning only ``{"document_id": external_id}``. This code path has no test coverage. Additionally, consider moving the inline ``from ..components.binder import OnshapeBinder`` imports to the class or module level to avoid repetition.

Would you like me to generate a test case for the malformed ID fallback path?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_export_product.py` around lines 56 - 78, Add a unit test that exercises OnshapeBinder.split_compound_id for the malformed / single-part fallback path (e.g., pass "doc1" and assert it returns {"document_id": "doc1"} and also test empty string behavior if desired) to cover the code branch that returns only {"document_id": external_id}; also refactor the repeated inline imports in test_make_compound_id_with_part/test_make_compound_id_without_part/test_split_compound_id by moving "from ..components.binder import OnshapeBinder" to the module or test class level so the binder is imported once for all tests.

connector_onshape/models/onshape_backend.py (3)
177-193: Cron methods run imports synchronously — consider using ``with_delay()``.

Each cron method calls ``action_import_*()`` synchronously in a loop. If any backend's import takes long or fails, it blocks subsequent backends and could exceed the cron timeout. Given that queue_job infrastructure is already in place, delegate to the job queue.

♻️ Suggested approach
 @api.model
 def cron_import_documents(self):
     backends = self.search([("state", "=", "active")])
     for backend in backends:
-        backend.action_import_documents()
+        backend.with_delay().action_import_documents()

Apply the same pattern to ``cron_import_products`` and ``cron_import_boms``.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_backend.py` around lines 177 - 193, The cron methods currently call action_import_documents/action_import_products/action_import_boms synchronously in a loop over backends found by search([("state","=","active")]); change each direct call to enqueue the work with the job queue by using backend.with_delay().action_import_documents(), backend.with_delay().action_import_products(), and backend.with_delay().action_import_boms() respectively so each backend import runs as a queued job instead of blocking the cron.
66-79: Count computations load all related records into memory.
``len(rec.document_ids)`` triggers a full ORM read of every related record just to count them. For backends with hundreds or thousands of documents/bindings, this is wasteful.

♻️ Suggested: use ``search_count`` or ``read_group``
 @api.depends("document_ids")
 def _compute_document_count(self):
-    for rec in self:
-        rec.document_count = len(rec.document_ids)
+    data = self.env["onshape.document"].read_group(
+        [("backend_id", "in", self.ids)],
+        ["backend_id"],
+        ["backend_id"],
+    )
+    mapped = {d["backend_id"][0]: d["backend_id_count"] for d in data}
+    for rec in self:
+        rec.document_count = mapped.get(rec.id, 0)

 @api.depends("product_binding_ids")
 def _compute_product_binding_count(self):
-    for rec in self:
-        rec.product_binding_count = len(rec.product_binding_ids)
+    data = self.env["onshape.product.product"].read_group(
+        [("backend_id", "in", self.ids)],
+        ["backend_id"],
+        ["backend_id"],
+    )
+    mapped = {d["backend_id"][0]: d["backend_id_count"] for d in data}
+    for rec in self:
+        rec.product_binding_count = mapped.get(rec.id, 0)

 @api.depends("bom_binding_ids")
 def _compute_bom_binding_count(self):
-    for rec in self:
-        rec.bom_binding_count = len(rec.bom_binding_ids)
+    data = self.env["onshape.mrp.bom"].read_group(
+        [("backend_id", "in", self.ids)],
+        ["backend_id"],
+        ["backend_id"],
+    )
+    mapped = {d["backend_id"][0]: d["backend_id_count"] for d in data}
+    for rec in self:
+        rec.bom_binding_count = mapped.get(rec.id, 0)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_backend.py` around lines 66 - 79, The three compute methods (_compute_document_count, _compute_product_binding_count, _compute_bom_binding_count) currently call len(rec.document_ids / product_binding_ids / bom_binding_ids) which forces loading all related records; change each to use a count query against the related model (e.g. self.env['onshape.document'].search_count([('backend_id','=', rec.id)]) or equivalent domain) or use read_group to get counts for all recs at once, then assign rec.document_count / product_binding_count / bom_binding_count from those count results to avoid materializing related recordsets.
131-136: ``action_export_part_numbers`` loads all bindings eagerly.

``self.product_binding_ids.filtered(...)`` fetches every product binding and its related ``odoo_id`` into memory. For large backends, consider a domain-based search instead.

♻️ Suggested fix
 def action_export_part_numbers(self):
     self.ensure_one()
     self._check_active()
-    bindings = self.product_binding_ids.filtered(lambda b: b.odoo_id.default_code)
+    bindings = self.env["onshape.product.product"].search([
+        ("backend_id", "=", self.id),
+        ("odoo_id.default_code", "!=", False),
+    ])
     for binding in bindings:
         binding.with_delay().export_record()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_backend.py` around lines 131 - 136, action_export_part_numbers currently loads all product_binding_ids (and their odoo_id) into memory via self.product_binding_ids.filtered(...); replace this with a domain search on the binding model to fetch only bindings for this backend that have an odoo_id.default_code and iterate them to call binding.with_delay().export_record(); specifically, keep ensure_one() and _check_active(), then use env['connector.onshape.product.binding'].search(domain) where domain targets the current backend (e.g. [('backend_id','=', self.id)]) and non-empty default_code (e.g. [('odoo_id.default_code','!=', False)]) before calling export_record on each binding.connector_onshape/components/importer.py (3)
396-398: MD5 is flagged by static analysis — consider SHA256 for BOM hashing.

While MD5 here is not used for security (just change detection), ``hashlib.sha256`` is equally fast in practice and avoids linter warnings and future audit questions. It's a low-effort swap.

♻️ Minimal fix
-    bom_hash = hashlib.md5(
+    bom_hash = hashlib.sha256(
         json.dumps(bom_items, sort_keys=True).encode()
     ).hexdigest()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 396 - 398, Replace the MD5 usage when computing bom_hash with SHA256: change the call to hashlib.md5(...) to hashlib.sha256(...) while keeping the same json.dumps(bom_items, sort_keys=True).encode() and .hexdigest() call; update any variable/comment referencing MD5 accordingly so the BOM hash generation (bom_hash from bom_items) uses hashlib.sha256 to satisfy the linter/audit requirement.
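For reference, the change-detection idea can be sketched standalone (the helper name ``bom_fingerprint`` is illustrative, not the module's actual API):

```python
import hashlib
import json


def bom_fingerprint(bom_items):
    """Hash a BOM payload for change detection (not a security use).

    json.dumps(..., sort_keys=True) makes serialization deterministic,
    so logically identical payloads always produce the same digest.
    """
    payload = json.dumps(bom_items, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```

Because the keys are sorted before hashing, dict ordering differences never register as a change.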
463-483: Silently skipping unmatched BOM components may confuse users.
``_build_bom_lines`` drops components that can't be matched to an Odoo product without any logging. Combined with the match_score, users can see that components are missing but not which ones. A debug-level log per skipped item would aid troubleshooting.

♻️ Suggested improvement
 product = self._find_component_product(backend, item)
 if not product:
+    _logger.debug(
+        "BOM component not matched, skipping: %s (partNumber=%s)",
+        item.get("name", ""),
+        item.get("partNumber", ""),
+    )
     continue

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 463 - 483, The _build_bom_lines function currently drops BOM items when _find_component_product returns no product; add a debug-level log for each skipped item so users can see which components were unmatched and why. In _build_bom_lines, after "if not product: continue", insert a debug log call (using the connector's existing logger, e.g., self.logger or self._logger) that outputs identifying item details (name/part number), the attempted match inputs, and any available match_score or reason from _find_component_product; ensure the log message is concise and only at debug level so normal runs aren’t noisy.
106-113: Bare ``except Exception`` catches mask different failure modes.

All three API call sites catch ``Exception`` broadly. A transient network error, an authentication failure (401/403), and a rate-limit response (402/429) all get the same treatment: a warning log and skip. Authentication failures especially should propagate rather than be silently swallowed, as they indicate a systemic configuration issue that will affect every subsequent call.

Consider catching a more specific exception hierarchy (e.g., ``requests.RequestException`` or a custom adapter exception) and letting auth/config errors propagate.

Also applies to: 186-194, 381-389
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 106 - 113, The current bare "except Exception" around the API calls (the blocks that log "Could not fetch elements for document %s" and return, referencing document.onshape_document_id and workspace_id) should be replaced with targeted exception handling: catch requests.RequestException (or your connector's specific HTTP client error) to handle transient network errors and log/return, but catch requests.HTTPError (or inspect exception.response.status_code) and re-raise for authentication errors (401/403) so they propagate, and treat rate-limit codes (429/402) separately (e.g., log + backoff/retry or return), ensuring you import requests (or the adapter) and avoid swallowing unexpected exceptions.connector_onshape/controllers/webhook.py (1)
96-125: ``_handle_metadata_change`` re-imports all products for the backend, not just the affected document.

``backend.with_delay().action_import_products()`` triggers a full product batch import across all documents, not scoped to the document identified by ``doc_id``. For backends with many documents, this is unnecessarily expensive for a single-document metadata change. Consider passing the document context to scope the re-import.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 96 - 125, The current _handle_metadata_change uses backend.with_delay(...).action_import_products() which triggers a full import across all documents; instead scope the job to the specific document by invoking the import on the document record or passing the document context/ID to the import task. Replace backend.with_delay(...).action_import_products() with a call that targets the document (e.g., document.with_delay(priority=10, description=f"Re-import products for doc {document.name}").action_import_products() or backend.with_delay(...).action_import_products(document_id=document.id) depending on the existing task signature) so only the affected document is re-imported.

connector_onshape/models/onshape_mrp_bom.py (1)
67-79: URL may contain empty path segments when workspace or element is missing.

When ``doc.onshape_default_workspace_id`` or ``rec.onshape_element_id`` is falsy, the fallback ``or ""`` produces paths like ``/documents/.../w//e/`` which are malformed. The ``onshape_product_product.py`` snippet has the same pattern, so this may be a broader issue, but consider gating the URL construction on the presence of workspace and element IDs, or at minimum the workspace.

♻️ Suggested fix
 def _compute_onshape_url(self):
     for rec in self:
         doc = rec.onshape_document_id
-        if doc and doc.backend_id.base_url and doc.onshape_document_id:
-            workspace = doc.onshape_default_workspace_id or ""
-            elem = rec.onshape_element_id or ""
+        if (
+            doc
+            and doc.backend_id.base_url
+            and doc.onshape_document_id
+            and doc.onshape_default_workspace_id
+        ):
+            workspace = doc.onshape_default_workspace_id
+            elem = rec.onshape_element_id or ""
             rec.onshape_url = (
                 f"{doc.backend_id.base_url}"
                 f"/documents/{doc.onshape_document_id}"
                 f"/w/{workspace}/e/{elem}"
             )
         else:
             rec.onshape_url = False

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_mrp_bom.py` around lines 67 - 79, The _compute_onshape_url method builds URLs with empty path segments when workspace or element are missing; update it to first ensure rec.onshape_document_id and doc.backend_id.base_url exist, then build the URL starting with f"{doc.backend_id.base_url}/documents/{doc.onshape_document_id}" and append "/w/{workspace}" only if doc.onshape_default_workspace_id is truthy and append "/e/{elem}" only if rec.onshape_element_id is truthy; set rec.onshape_url = False if required doc/base_url/document id is missing. Ensure you reference the existing symbols: _compute_onshape_url, onshape_document_id, onshape_default_workspace_id, onshape_element_id, and onshape_url.
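The gating logic the review describes can be expressed as a standalone helper (the function name and argument order are illustrative assumptions, and the rule that an element segment only appears under a workspace follows Onshape's usual URL shape):

```python
def build_onshape_url(base_url, document_id, workspace_id=None, element_id=None):
    """Build a document URL without ever emitting empty /w// or /e// segments.

    Returns None when the base URL or document ID is missing; the element
    segment is only appended when a workspace is present, since the element
    path nests under the workspace.
    """
    if not (base_url and document_id):
        return None
    url = f"{base_url}/documents/{document_id}"
    if workspace_id:
        url += f"/w/{workspace_id}"
        if element_id:
            url += f"/e/{element_id}"
    return url
```

Missing IDs shorten the URL instead of leaving malformed empty segments behind.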
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/components/adapter.py`:
- Around line 171-177: The Retry-After header parsing can raise ValueError and
abort the 429 retry path; update the block that computes wait (using resp,
retry_after, wait, RETRY_BACKOFF, attempt) to guard parsing: attempt to convert
retry_after to an int inside a try/except (or coerce via int(float(...))) and on
any TypeError/ValueError fall back to RETRY_BACKOFF * (attempt + 1); ensure the
final wait variable always gets a valid numeric fallback so retries continue.
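A minimal standalone sketch of that guard, assuming ``RETRY_BACKOFF`` is a small base delay in seconds (the adapter's real constant may differ):

```python
RETRY_BACKOFF = 2  # assumed base backoff in seconds


def retry_wait(retry_after, attempt, backoff=RETRY_BACKOFF):
    """Compute a retry delay from a Retry-After header value.

    int(float(...)) accepts both "30" and "30.5"; anything unparsable
    (None, or the HTTP-date form of Retry-After) raises TypeError or
    ValueError and falls back to a linear backoff, so the 429 retry
    path never aborts on a bad header.
    """
    try:
        return int(float(retry_after))
    except (TypeError, ValueError):
        return backoff * (attempt + 1)
```

The fallback grows with the attempt number, so repeated 429s without a usable header still back off progressively.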
- Around line 9-10: The nonce generation currently uses random.choice (imported
via "import random") which is not cryptographically secure; replace usages of
random.choice with secrets.choice and add "import secrets" (remove or keep
random only if used elsewhere). Update the nonce generation site(s) (the
expression using random.choice and string) to use secrets.choice (or better, use
secrets.token_urlsafe/token_hex if preferred) so HMAC nonces are
cryptographically secure; ensure any tests or callers still reference the same
variable name (e.g., nonce) and that imports are adjusted accordingly.
In `@connector_onshape/components/binder.py`:
- Around line 58-69: split_compound_id currently calls external_id.split("/")
without guarding against None and returns inconsistent dict shapes for malformed
IDs; update split_compound_id to validate input and normalize outputs: if
external_id is None or not a str raise a ValueError (or TypeError) with a clear
message, split into parts, and always return the same keys ("document_id",
"element_id", "part_id") populating missing values with None (or explicitly
raise on wrong segment counts if you prefer strictness) so callers receive a
predictable dict; reference function name split_compound_id and the keys
"document_id", "element_id", "part_id" when making the change.
In `@connector_onshape/components/exporter.py`:
- Around line 59-60: The early return when items is falsy prevents updating the
stored sync_date; instead, ensure you still call binding.write({"sync_date":
<current timestamp>}) before returning so an empty items response advances the
sync cursor. Locate the block in exporter.py where items is checked (the `if not
items: return`), replace the silent return with a call to binding.write(...)
using the same timestamp logic used after successful exports (or compute now via
the same helper used elsewhere), then return; this preserves existing behavior
while ensuring `sync_date` is updated even when the items list is empty.
In `@connector_onshape/components/importer.py`:
- Around line 498-504: The "Try by name" fallback incorrectly searches the
default_code field; update the search domain used in the block that references
item_name and product (self.env["product.product"].search) to query the name
field instead of default_code (i.e., use ("name", "=", item_name)) so the lookup
truly attempts a product name match; keep the limit=1 and return behavior
unchanged.
In `@connector_onshape/controllers/webhook.py`:
- Around line 43-56: The webhook currently skips HMAC verification when
backend.webhook_secret is empty, but handlers (e.g., code paths reading
payload.get("documentId") and payload.get("transitionName")) trust that data and
act on it; change behavior so webhook events cannot be forged by a guessed
backend_id by either: (A) enforcing a webhook secret requirement in webhook.py
by returning an error if backend.webhook_secret is not set (i.e., refuse
processing before calling _validate_signature), or (B) keeping optional HMAC but
making all handlers re-fetch authoritative state from the Onshape API (use the
Onshape client to retrieve document/element/transition state instead of using
payload.get(...) values) before modifying binding state; pick one option and
apply consistently to functions that currently trust the payload.
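Option (A) can be sketched as a small validator that refuses webhooks when no secret is configured. The SHA-256 HMAC over the raw body and the plain-hex signature header are assumptions for illustration, not Onshape's documented signing scheme:

```python
import hashlib
import hmac


def validate_webhook(raw_body, secret, signature_header):
    """Return True only when the payload carries a valid HMAC signature.

    A backend without a configured secret rejects every webhook
    instead of silently trusting the payload (option A above).
    """
    if not secret:
        # No shared secret configured: refuse rather than accept blindly.
        return False
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_header or "")
```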
- Around line 127-159: The handlers _handle_workflow_transition (and likewise
_handle_revision_created) currently update all product bindings for a document;
change the search to narrow to the affected element/part by reading
payload.get("elementId") and payload.get("partId") and adding corresponding
search domain clauses (e.g., filter by the binding's element id and/or part id
fields in addition to backend_id and onshape_document_id) so only bindings
matching the elementId and/or partId are updated; keep the existing fallback to
document-only updates if elementId/partId are absent.
- Around line 30-61: The webhook handler currently uses `@http.route` with
type="json" and optionally skips HMAC validation when backend.webhook_secret is
empty; update the webhook to use type="http" (make intent explicit for plain
JSON), require or strongly log/warn when backend.webhook_secret is not set
(reject requests or return an error instead of silently accepting them), ensure
_validate_signature(raw_body, backend.webhook_secret, signature) is always
invoked when a secret exists and fail fast on invalid signatures, and
clarify/reinforce the docstring behavior by either re-fetching event details
from Onshape before trusting payload data or explicitly documenting that
payloads are treated as untrusted; locate these changes in the webhook method,
the route decorator, and the use of backend.webhook_secret/_validate_signature
to implement.
In `@connector_onshape/data/queue_job_function_data.xml`:
- Around line 4-34: The CI fails because the field related_action_enable is
invalid on model queue.job.function; remove the <field
name="related_action_enable" eval="False" /> line from each record referencing
queue.job.function (records job_function_import_documents,
job_function_import_products, job_function_import_boms, and
job_function_export_product) in
connector_onshape/data/queue_job_function_data.xml so the records only include
valid fields (model_id, method, channel_id, retry_pattern).
In `@connector_onshape/models/onshape_document.py`:
- Around line 80-86: Replace the _compute_display_name computed-field approach
with an override of name_get() for Odoo 16 compatibility: implement a
name_get(self) method that iterates over records in self, builds the label using
rec.name and rec.onshape_document_id (use the first 8 chars of
onshape_document_id when present, e.g. "[{rec.onshape_document_id[:8]}]
{rec.name}"), and returns a list of (rec.id, label) tuples; then remove or
disable the `@api.depends/_compute_display_name` implementation so display_name
resolution consistently uses name_get across Odoo 16 code paths.
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 91-103: The _compute_onshape_url method builds malformed URLs when
either onshape_default_workspace_id or onshape_element_id is missing; update
_compute_onshape_url to only construct the URL when doc and
doc.backend_id.base_url are set AND both doc.onshape_default_workspace_id and
rec.onshape_element_id are truthy, otherwise set rec.onshape_url = False;
reference the symbols _compute_onshape_url, onshape_document_id,
onshape_default_workspace_id, onshape_element_id, onshape_url, and
backend_id.base_url when making the change.
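The guarded construction can be factored into a pure helper along these lines (the function name and the all-or-nothing policy are illustrative; the compute method would call it and assign `rec.onshape_url`):

```python
def build_onshape_url(base_url, document_id, workspace_id, element_id):
    """Build the Onshape deep link only when every segment is available.

    Returns False (Odoo's "no value") instead of emitting broken
    ".../w//e/" links when any segment is missing.
    """
    if not (base_url and document_id and workspace_id and element_id):
        return False
    return f"{base_url}/documents/{document_id}/w/{workspace_id}/e/{element_id}"
```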
In `@connector_onshape/models/product_template.py`:
- Around line 15-23: The dependency decorator on _compute_onshape_document_count
only tracks product_variant_ids.onshape_bind_ids but the method reads
bind.onshape_document_id; update the `@api.depends` for the
_compute_onshape_document_count method to include the nested field
(product_variant_ids.onshape_bind_ids.onshape_document_id) so changes to a
binding's onshape_document_id will invalidate the cache and trigger
recomputation of onshape_document_count.
In `@connector_onshape/static/description/index.html`:
- Line 372: Update the AGPL-3 badge href in the README.rst source
(connector_onshape/README.rst) to use HTTPS instead of HTTP: replace
"http://www.gnu.org/licenses/agpl-3.0-standalone.html" with
"https://www.gnu.org/licenses/agpl-3.0-standalone.html" (the change will then be
picked up by oca-gen-addon-readme and regenerate the index.html).
In `@connector_onshape/tests/test_import_product.py`:
- Around line 49-57: The test test_mcmaster_catalog_match currently unpacks
match_type but doesn't assert it and the comment wrongly states Strategy 1;
update the test to assert match_type equals the McMaster match value used by the
mapper (match_product) — i.e., use the same constant/string other tests assert
for McMaster matches (copy the exact value used elsewhere, e.g.,
mapper.MATCH_TYPE_MCMASTER or 'mcmaster') and change the inline comment to say
the name is matched by MCMASTER_RE extraction (Strategy 3) because
VERSION_SUFFIX_RE only strips numeric .#### suffixes and does not remove the
"_Grade 5 Steel Bolt" part, so base_no_ver still contains the underscore string
and the code falls back to the MCMASTER_RE extraction.
In `@connector_onshape/views/onshape_backend_views.xml`:
- Around line 113-117: The oauth2_token field in the view is rendered in plain
text; change its field tag to mark it as a password so it's masked in the UI by
adding password="True" to the <field name="oauth2_token" /> declaration (same
pattern used for oauth2_client_secret and api_secret), ensuring the form view
for the related model now renders oauth2_token masked.
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 47-48: The wizard currently only writes True and never resets the
backend flag; update the write to persist the actual checkbox state by writing
the boolean value of self.auto_create_products (instead of only writing True) so
backend.write receives True or False accordingly—modify the block that contains
self.auto_create_products and backend.write({"auto_create_products": ...}) to
pass the real boolean state (e.g., using bool(self.auto_create_products) or the
checkbox value) so the backend flag is cleared when unchecked.
---
Nitpick comments:
In `@connector_onshape/components/binder.py`:
- Around line 14-20: The mutable class attributes _inherit and _apply_on in
binder.py are flagged by RUF012; update their annotations to use typing.ClassVar
(e.g., ClassVar[List[str]]) and import the required names (ClassVar, List) from
typing so these remain class-level constants rather than instance attributes;
modify the class definition where _inherit and _apply_on are declared to use the
ClassVar[List[str]] annotation for each.
- Around line 52-56: The function make_compound_id treats empty-string part_id
as None because it checks `if part_id:`, which drops trailing slashes; change
the predicate to `if part_id is not None:` so that an explicit empty string
produces a compound id with a trailing slash (i.e.,
`f"{document_id}/{element_id}/{part_id}"`) while None still results in the
two-segment form; update the method make_compound_id accordingly to use `is not
None` to make the contract explicit.
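The `is not None` contract above can be sketched directly; an explicit empty-string `part_id` now survives as a trailing slash instead of being silently dropped:

```python
def make_compound_id(document_id, element_id, part_id=None):
    """Join id segments into a compound Onshape id.

    `is not None` (rather than truthiness) keeps an explicit
    empty-string part_id, yielding a trailing slash, while None
    still produces the two-segment form.
    """
    if part_id is not None:
        return f"{document_id}/{element_id}/{part_id}"
    return f"{document_id}/{element_id}"
```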
In `@connector_onshape/components/importer.py`:
- Around line 396-398: Replace the MD5 usage when computing bom_hash with
SHA256: change the call to hashlib.md5(...) to hashlib.sha256(...) while keeping
the same json.dumps(bom_items, sort_keys=True).encode() and .hexdigest() call;
update any variable/comment referencing MD5 accordingly so the BOM hash
generation (bom_hash from bom_items) uses hashlib.sha256 to satisfy the
linter/audit requirement.
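The SHA-256 swap keeps the existing canonicalization untouched; a standalone sketch (helper name is illustrative, the `json.dumps(..., sort_keys=True)` part mirrors the code under review):

```python
import hashlib
import json


def compute_bom_hash(bom_items):
    """SHA-256 fingerprint of a BOM, replacing the flagged MD5 call.

    Sorting keys keeps the hash stable across dict ordering, so two
    logically identical BOMs always hash the same.
    """
    payload = json.dumps(bom_items, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```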
- Around line 463-483: The _build_bom_lines function currently drops BOM items
when _find_component_product returns no product; add a debug-level log for each
skipped item so users can see which components were unmatched and why. In
_build_bom_lines, after "if not product: continue", insert a debug log call
(using the connector's existing logger, e.g., self.logger or self._logger) that
outputs identifying item details (name/part number), the attempted match inputs,
and any available match_score or reason from _find_component_product; ensure the
log message is concise and only at debug level so normal runs aren’t noisy.
- Around line 106-113: The current bare "except Exception" around the API calls
(the blocks that log "Could not fetch elements for document %s" and return,
referencing document.onshape_document_id and workspace_id) should be replaced
with targeted exception handling: catch requests.RequestException (or your
connector's specific HTTP client error) to handle transient network errors and
log/return, but catch requests.HTTPError (or inspect
exception.response.status_code) and re-raise for authentication errors (401/403)
so they propagate, and treat rate-limit codes (429/402) separately (e.g., log +
backoff/retry or return), ensuring you import requests (or the adapter) and
avoid swallowing unexpected exceptions.
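The triage described above can be isolated in a pure status-code classifier that the `except` blocks dispatch on; this is a sketch of the decision table only, not the connector's actual error handling:

```python
def classify_api_error(status_code):
    """Map an HTTP status to a handling strategy for element imports.

    - auth failures (401/403) must propagate so misconfiguration is visible
    - quota / rate-limit codes (402/429) should back off and retry
    - anything else is treated as transient: log and skip the document
    """
    if status_code in (401, 403):
        return "reraise"
    if status_code in (402, 429):
        return "backoff"
    return "skip"
```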
In `@connector_onshape/components/listener.py`:
- Line 26: The skip_if decorator lambda currently declares unused parameters
record and **kwargs which triggers ARG005; update the lambda in the
`@skip_if`(...) call to prefix unused parameters with an underscore (e.g., use
lambda self, _record, **_kwargs: self.env.context.get("connector_no_export")) so
the signature still matches the wrapped function while silencing the linter
warning for the skip_if decorator usage.
- Line 24: Ruff flags the class attribute _apply_on as a mutable default;
annotate it with typing.ClassVar to suppress RUF012 by changing the declaration
of _apply_on to use a ClassVar of a string-list type (e.g. ClassVar[List[str]]
or ClassVar[list[str]] depending on your typing style) and add the required
import(s) (ClassVar and List or use built-in list with forward-compatible
typing) at the top of connector_onshape/components/listener.py so _apply_on
remains a class-level constant.
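The `ClassVar` annotation pattern for both findings looks like this (the base class is omitted here since the real listener inherits from `Component`; Python 3.9+ is assumed for the `list[str]` subscript):

```python
from typing import ClassVar


class OnshapeProductListener:
    """Sketch of the RUF012 fix: class-level constants via ClassVar."""

    _name: ClassVar[str] = "onshape.product.listener"
    _inherit: ClassVar[list[str]] = ["base.connector.listener"]
    _apply_on: ClassVar[list[str]] = ["onshape.product.product"]
```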
In `@connector_onshape/controllers/webhook.py`:
- Around line 96-125: The current _handle_metadata_change uses
backend.with_delay(...).action_import_products() which triggers a full import
across all documents; instead scope the job to the specific document by invoking
the import on the document record or passing the document context/ID to the
import task. Replace backend.with_delay(...).action_import_products() with a
call that targets the document (e.g., document.with_delay(priority=10,
description=f"Re-import products for doc
{document.name}").action_import_products() or
backend.with_delay(...).action_import_products(document_id=document.id)
depending on the existing task signature) so only the affected document is
re-imported.
In `@connector_onshape/data/ir_cron_data.xml`:
- Around line 4-38: Stagger the cron start times to avoid simultaneous API
bursts: for records ir_cron_import_onshape_documents
(model.cron_import_documents) and ir_cron_import_onshape_products
(model.cron_import_products) add distinct nextcall values so they don't start at
the same moment (e.g., offset one by a few hours), and optionally set a
different nextcall for ir_cron_import_onshape_boms (model.cron_import_boms) if
needed; implement this by adding a <field name="nextcall" eval="..."> with
appropriate datetime offsets for each record so enabling them won't trigger
concurrent calls.
In `@connector_onshape/models/onshape_backend.py`:
- Around line 177-193: The cron methods currently call
action_import_documents/action_import_products/action_import_boms synchronously
in a loop over backends found by search([("state","=","active")]); change each
direct call to enqueue the work with the job queue by using
backend.with_delay().action_import_documents(),
backend.with_delay().action_import_products(), and
backend.with_delay().action_import_boms() respectively so each backend import
runs as a queued job instead of blocking the cron.
- Around line 66-79: The three compute methods (_compute_document_count,
_compute_product_binding_count, _compute_bom_binding_count) currently call
len(rec.document_ids / product_binding_ids / bom_binding_ids) which forces
loading all related records; change each to use a count query against the
related model (e.g.
self.env['onshape.document'].search_count([('backend_id','=', rec.id)]) or
equivalent domain) or use read_group to get counts for all recs at once, then
assign rec.document_count / product_binding_count / bom_binding_count from those
count results to avoid materializing related recordsets.
- Around line 131-136: action_export_part_numbers currently loads all
product_binding_ids (and their odoo_id) into memory via
self.product_binding_ids.filtered(...); replace this with a domain search on the
binding model to fetch only bindings for this backend that have an
odoo_id.default_code and iterate them to call
binding.with_delay().export_record(); specifically, keep ensure_one() and
_check_active(), then use
env['connector.onshape.product.binding'].search(domain) where domain targets the
current backend (e.g. [('backend_id','=', self.id)]) and non-empty default_code
(e.g. [('odoo_id.default_code','!=', False)]) before calling export_record on
each binding.
In `@connector_onshape/models/onshape_mrp_bom.py`:
- Around line 67-79: The _compute_onshape_url method builds URLs with empty path
segments when workspace or element are missing; update it to first ensure
rec.onshape_document_id and doc.backend_id.base_url exist, then build the URL
starting with f"{doc.backend_id.base_url}/documents/{doc.onshape_document_id}"
and append "/w/{workspace}" only if doc.onshape_default_workspace_id is truthy
and append "/e/{elem}" only if rec.onshape_element_id is truthy; set
rec.onshape_url = False if required doc/base_url/document id is missing. Ensure
you reference the existing symbols: _compute_onshape_url, onshape_document_id,
onshape_default_workspace_id, onshape_element_id, and onshape_url.
In `@connector_onshape/readme/CONFIGURE.rst`:
- Around line 97-107: Update the "Scheduled Sync (Cron Jobs)" section to add a
prerequisite note that enabling the cron jobs ("Onshape: Import Documents",
"Onshape: Import Products", "Onshape: Import BOMs") is not sufficient by itself
— a queue/job runner or worker process must be configured and running for queued
import jobs to actually execute; include a short bullet describing how to
configure/start the job runner (e.g., enable the worker service, point it at the
app's job queue, and ensure it runs on boot) and a reminder to verify the jobs
in Settings > Technical > Automation > Scheduled Actions after starting the
worker.
In `@connector_onshape/readme/USAGE.rst`:
- Around line 35-40: Update the "Automatic Export" section to add a brief
job-monitoring hint: state that exports are performed asynchronously when a
product's default_code or name changes and instruct users to check Odoo's
background job/queue dashboard (Queue Jobs / Settings > Technical > Jobs or the
project's job-queue UI) for failure/retry status, review worker logs for error
details, and verify the product's Onshape binding/part association to
troubleshoot stuck exports; reference the automatic export/job terminology
(automatic export, default_code, name, export job) so readers can locate and
correlate the docs with the system UI and logs.
In `@connector_onshape/tests/test_adapter.py`:
- Around line 78-89: Move the OnshapeQuotaError import out of the test body and
into the module-level imports so it’s declared alongside other imports at the
top of connector_onshape/tests/test_adapter.py; update the
test_quota_exhausted_raises to reference the module-level OnshapeQuotaError
(used when adapter._request("GET", "/api/v6/documents") raises) instead of
importing it inside the function.
In `@connector_onshape/tests/test_export_product.py`:
- Around line 56-78: Add a unit test that exercises
OnshapeBinder.split_compound_id for the malformed / single-part fallback path
(e.g., pass "doc1" and assert it returns {"document_id": "doc1"} and also test
empty string behavior if desired) to cover the code branch that returns only
{"document_id": external_id}; also refactor the repeated inline imports in
test_make_compound_id_with_part/test_make_compound_id_without_part/test_split_compound_id
by moving "from ..components.binder import OnshapeBinder" to the module or test
class level so the binder is imported once for all tests.
In `@connector_onshape/tests/test_listener.py`:
- Around line 10-47: Update the three tests
(test_connector_no_export_context_skips, test_unlinked_product_no_error,
test_irrelevant_field_no_export) to assert that no export was queued by patching
the export entry point: mock the model method that triggers the export (e.g.,
export_record on the Onshape listener or the recordset's
with_delay().export_record chain) using unittest.mock.patch or monkeypatch,
perform the write operation under test, and then call assert_not_called() on the
mock; ensure you patch the exact symbol used by the listener (export_record or
the with_delay proxy) so the test fails if an export is enqueued without
raising.
In `@connector_onshape/tests/test_wizard.py`:
- Around line 36-53: The test test_documents_only_import currently patches
action_import_documents but never uses the mock or calls the wizard import;
update the test to patch the backend method within the test body (e.g., using
patch.object as a context manager on the same target used now), create the
wizard via self.env["onshape.import.wizard"].create(...), then call
wizard.action_import() and assert the patched method (action_import_documents)
was called (use mock_import.assert_called_once() or similar) to verify the
import action is invoked; keep references to the existing symbols:
test_documents_only_import, action_import_documents, wizard.action_import(), and
the backend retrieval via OnshapeTestCase.env/ self.env to locate the code.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
connector_onshape/security/ir.model.access.csv is excluded by !**/*.csv
connector_onshape/static/description/icon.png is excluded by !**/*.png
connector_onshape/static/description/icon.svg is excluded by !**/*.svg
📒 Files selected for processing (51)
connector_onshape/README.rst
connector_onshape/__init__.py
connector_onshape/__manifest__.py
connector_onshape/components/__init__.py
connector_onshape/components/adapter.py
connector_onshape/components/binder.py
connector_onshape/components/core.py
connector_onshape/components/exporter.py
connector_onshape/components/importer.py
connector_onshape/components/listener.py
connector_onshape/components/mapper.py
connector_onshape/controllers/__init__.py
connector_onshape/controllers/webhook.py
connector_onshape/data/ir_cron_data.xml
connector_onshape/data/queue_job_channel_data.xml
connector_onshape/data/queue_job_function_data.xml
connector_onshape/models/__init__.py
connector_onshape/models/mrp_bom.py
connector_onshape/models/onshape_backend.py
connector_onshape/models/onshape_document.py
connector_onshape/models/onshape_mrp_bom.py
connector_onshape/models/onshape_product_product.py
connector_onshape/models/product_product.py
connector_onshape/models/product_template.py
connector_onshape/readme/CONFIGURE.rst
connector_onshape/readme/CONTRIBUTORS.rst
connector_onshape/readme/DESCRIPTION.rst
connector_onshape/readme/USAGE.rst
connector_onshape/security/onshape_security.xml
connector_onshape/static/description/index.html
connector_onshape/tests/__init__.py
connector_onshape/tests/common.py
connector_onshape/tests/test_adapter.py
connector_onshape/tests/test_backend.py
connector_onshape/tests/test_export_product.py
connector_onshape/tests/test_import_bom.py
connector_onshape/tests/test_import_product.py
connector_onshape/tests/test_importer_flow.py
connector_onshape/tests/test_listener.py
connector_onshape/tests/test_webhook.py
connector_onshape/tests/test_wizard.py
connector_onshape/views/mrp_bom_views.xml
connector_onshape/views/onshape_backend_views.xml
connector_onshape/views/onshape_document_views.xml
connector_onshape/views/onshape_product_views.xml
connector_onshape/views/product_template_views.xml
connector_onshape/wizards/__init__.py
connector_onshape/wizards/onshape_import_wizard.py
connector_onshape/wizards/onshape_import_wizard_views.xml
setup/connector_onshape/odoo/addons/connector_onshape
setup/connector_onshape/setup.py
Actionable comments posted: 3
♻️ Duplicate comments (6)
connector_onshape/components/adapter.py (2)
58-60: Use `secrets.choice` for cryptographically secure HMAC nonce generation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 58 - 60, The nonce generation using random.choice in the adapter (the variable named "nonce") is not cryptographically secure; replace the current random.choice(...) loop with a secrets-based generator (e.g., use secrets.choice or preferably secrets.token_urlsafe/truncation) so the HMAC nonce is cryptographically secure. Locate the nonce assignment in adapter.py (the block that builds the 25-char nonce) and swap in secrets.* API calls and import secrets at top of file, preserving the 25-character length and character set requirements.
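A drop-in replacement for the flagged `random.choice` loop, preserving the 25-character alphanumeric shape but sourcing from `secrets` (the exact character set is assumed from the description above):

```python
import secrets
import string

NONCE_CHARS = string.ascii_letters + string.digits


def generate_nonce(length=25):
    """Cryptographically secure nonce for the HMAC request signature."""
    return "".join(secrets.choice(NONCE_CHARS) for _ in range(length))
```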
171-177: `int(retry_after)` can raise `ValueError` on non-integer `Retry-After` headers, aborting the retry path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 171 - 177, The Retry-After handling may raise ValueError when the header is not an integer; update the block that reads resp.headers.get("Retry-After") (inside the resp.status_code == 429 branch) to robustly handle both integer seconds and HTTP-date formats: attempt int(retry_after) inside a try/except ValueError, and on ValueError parse the HTTP-date (e.g. via email.utils.parsedate_to_datetime) to compute seconds until that date, and if parsing fails fall back to RETRY_BACKOFF * (attempt + 1); ensure the final value is assigned to wait and that missing Retry-After still uses the existing backoff logic.
connector_onshape/models/onshape_product_product.py (1)
137-149: Malformed Onshape URL when workspace or element ID is missing.
`workspace` and `elem` can be empty strings, producing broken links like `/w//e/` that are surfaced via `widget="url"` in the form view.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_product_product.py` around lines 137 - 149, _compute_onshape_url builds URLs that include empty "/w//e/" segments when onshape_default_workspace_id or onshape_element_id are missing; update the logic in _compute_onshape_url to only append the workspace segment ("/w/{workspace}") if onshape_default_workspace_id is truthy and only append the element segment ("/e/{elem}") if onshape_element_id is truthy, ensuring the base "{doc.backend_id.base_url}/documents/{doc.onshape_document_id}" is used when both are missing, and keep setting rec.onshape_url = False when onshape_document_id or backend_id/base_url are absent.
connector_onshape/components/importer.py (1)
520-526: "Try by name" fallback incorrectly searches `default_code` instead of `name`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 520 - 526, The fallback "Try by name" block in importer.py is searching the wrong field—replace the domain on the self.env["product.product"].search call to match the product name instead of default_code (i.e. use ("name", "=", item_name)); ensure the check still uses item_name and returns product as before (the search invocation and variable names product and item_name should be left intact).
connector_onshape/tests/test_import_product.py (1)
49-58: `match_type` is not asserted and the inline comment is incorrect.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` around lines 49 - 58, The test test_mcmaster_catalog_match currently ignores match_type and has an incorrect inline comment; update the test to assert the returned match_type from mapper.match_product(part_data) against the expected value used by the match logic (use the actual match_type constant/enum or string that match_product returns, e.g., the MatchType or constant used in the mapper implementation), and correct the inline comment to accurately describe the matching mechanism (referencing mapper.match_product and self.product_mcmaster / default_code to ensure the comment matches the asserted match_type).
connector_onshape/components/exporter.py (1)
57-58: `sync_date` is never updated when `items` is empty.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` around lines 57 - 58, The early return at "if not items: return" prevents updating sync_date when there are no items; modify the logic so that before returning you still persist the new sync position (e.g., call the existing update/persist function or set the sync_date field) — locate the block with "if not items: return" in exporter.py and insert a call to the component that records the sync cursor (or assign self.sync_date / sync_state.sync_date) with the computed sync_date, then return.
🧹 Nitpick comments (4)
connector_onshape/components/importer.py (1)
101-113: The `backend` parameter is accepted but never used in `_import_elements`. Remove it from the signature to avoid confusion, and update the call site accordingly.
♻️ Proposed fix
- def _import_elements(self, backend, document, workspace_id):
+ def _import_elements(self, document, workspace_id):
- self._import_elements(backend, document, workspace_id)
+ self._import_elements(document, workspace_id)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 101 - 113, The _import_elements function currently accepts an unused backend parameter; remove backend from the signature of _import_elements(document, workspace_id) and update any callers to stop passing backend, keeping the body unchanged (it should still obtain adapter via self.component(usage="backend.adapter") and call adapter.read_document_elements(document.onshape_document_id, workspace_id)); ensure tests and all call sites that invoked _import_elements(..., backend, ...) are updated to call _import_elements(document, workspace_id) so there are no mismatched arguments.
connector_onshape/components/mapper.py (1)
63-63: Unused `element` parameter.
`element` is accepted by `map_record` but never referenced. Prefix it with `_` to signal intentional non-use.
♻️ Proposed fix
- def map_record(self, part_data, document=None, element=None):
+ def map_record(self, part_data, document=None, _element=None):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` at line 63, The map_record method declares an unused parameter element; rename it to _element (or prefix with an underscore) in the map_record signature to indicate intentional non-use and update any internal references (none expected) and callers if they rely on exact signature matching so the function remains clear that element is unused.
connector_onshape/components/exporter.py (1)
72-72: Redundant membership check before dictionary access.
`prop_name in export_values and export_values[prop_name]` performs two lookups; use `dict.get` instead.
♻️ Proposed fix
- if prop_name in export_values and export_values[prop_name]:
+ if export_values.get(prop_name):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` at line 72, Replace the redundant membership-and-index check "prop_name in export_values and export_values[prop_name]" with a single lookup using export_values.get(prop_name) (or export_values.get(prop_name, default)) inside the exporter code; locate the occurrence using the symbols prop_name and export_values in connector_onshape/components/exporter.py and change the condition to use dict.get to avoid double lookup and handle falsy/missing values appropriately.
connector_onshape/tests/test_import_product.py (1)
95-95: Redundant import path `from ..tests.common` inside the `tests/` package.
The file is already inside `connector_onshape/tests/`, so the correct relative import is `from .common import MOCK_MASS_PROPERTIES`.
♻️ Proposed fix
- from ..tests.common import MOCK_MASS_PROPERTIES
+ from .common import MOCK_MASS_PROPERTIES
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` at line 95, The test file imports MOCK_MASS_PROPERTIES using an incorrect relative path; replace the redundant import "from ..tests.common import MOCK_MASS_PROPERTIES" with the correct package-local relative import "from .common import MOCK_MASS_PROPERTIES" so the module resolves inside the connector_onshape/tests package and avoids traversing up into tests twice; update the import line where MOCK_MASS_PROPERTIES is referenced.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/components/adapter.py`:
- Around line 95-105: Wrap the json.loads call in _oauth2_headers in a
try/except that catches json.JSONDecodeError (and possibly ValueError) when
parsing backend.oauth2_token returned by _get_backend(); on exception set
token_data = {} (so access_token becomes ""), and emit a warning via an
available logger (e.g., self.logger.warning) indicating the malformed token and
fallback, then continue to build headers as before—this prevents uncaught
crashes from a corrupted backend.oauth2_token while preserving normal request
behavior.
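The tolerant parse can be sketched as a small helper the header builder calls (names are illustrative; `json.JSONDecodeError` is a subclass of `ValueError`, so one clause covers both):

```python
import json
import logging

_logger = logging.getLogger(__name__)


def parse_oauth2_token(raw_token):
    """Parse the stored OAuth2 token blob without crashing the request.

    On malformed JSON, warn and fall back to an empty dict so the
    caller builds headers with an empty access token.
    """
    try:
        return json.loads(raw_token or "{}")
    except ValueError:
        _logger.warning("Malformed oauth2_token; falling back to empty token")
        return {}
```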
In `@connector_onshape/components/mapper.py`:
- Around line 127-130: The loop in map_record that sums
total_mass/total_volume/total_area uses body_data.get("mass", [0.0])[0] etc.,
which raises IndexError when the API returns an empty list; update the
extraction to safely handle empty lists by checking the returned list before
indexing (e.g., retrieve vals = body_data.get("mass") and use vals[0] if vals
and len(vals)>0 else 0.0), and apply the same pattern for "volume" and
"periphery" so total_mass, total_volume, and total_area never attempt to index
an empty list.
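The safe extraction pattern, applied uniformly to all three totals (the `[value, min, max]` list shape is taken from the finding above; helper names are illustrative):

```python
def extract_scalar(body_data, key):
    """Safely pull the first value of a mass-properties list.

    An empty or missing list yields 0.0 instead of an IndexError.
    """
    vals = body_data.get(key)
    return vals[0] if vals else 0.0


def sum_mass_properties(bodies):
    """Accumulate mass/volume/periphery across all returned bodies."""
    total_mass = sum(extract_scalar(b, "mass") for b in bodies)
    total_volume = sum(extract_scalar(b, "volume") for b in bodies)
    total_area = sum(extract_scalar(b, "periphery") for b in bodies)
    return total_mass, total_volume, total_area
```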
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 132-136: The computed, stored field onshape_url is missing
onshape_document_id.onshape_default_workspace_id from its `@api.depends`, so
changes to a document's default workspace won't trigger recomputation; update
the `@api.depends` on the compute method (the decorator above the onshape_url
compute function) to include "onshape_document_id.onshape_default_workspace_id"
alongside the existing dependencies ("onshape_document_id.backend_id.base_url",
"onshape_document_id.onshape_document_id", "onshape_element_id") so stored
values are refreshed when the document workspace changes.
---
Duplicate comments:
In `@connector_onshape/components/adapter.py`:
- Around line 58-60: The nonce generation using random.choice in the adapter
(the variable named "nonce") is not cryptographically secure; replace the
current random.choice(...) loop with a secrets-based generator (e.g., use
secrets.choice or preferably secrets.token_urlsafe/truncation) so the HMAC nonce
is cryptographically secure. Locate the nonce assignment in adapter.py (the
block that builds the 25-char nonce) and swap in secrets.* API calls and import
secrets at top of file, preserving the 25-character length and character set
requirements.
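A sketch of the secrets-based replacement; the alphanumeric character set is an assumption, so preserve whatever charset the adapter actually uses:

```python
import secrets
import string

# Assumed charset (alphanumerics); keep the adapter's real character set.
NONCE_CHARS = string.ascii_letters + string.digits
NONCE_LENGTH = 25


def make_nonce():
    """Return a 25-character nonce drawn from a cryptographically secure RNG."""
    return "".join(secrets.choice(NONCE_CHARS) for _ in range(NONCE_LENGTH))
```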
- Around line 171-177: The Retry-After handling may raise ValueError when the
header is not an integer; update the block that reads
resp.headers.get("Retry-After") (inside the resp.status_code == 429 branch) to
robustly handle both integer seconds and HTTP-date formats: attempt
int(retry_after) inside a try/except ValueError, and on ValueError parse the
HTTP-date (e.g. via email.utils.parsedate_to_datetime) to compute seconds until
that date, and if parsing fails fall back to RETRY_BACKOFF * (attempt + 1);
ensure the final value is assigned to wait and that missing Retry-After still
uses the existing backoff logic.
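The fallback chain above can be sketched as a standalone function; `RETRY_BACKOFF` is a placeholder for the adapter's real constant:

```python
import email.utils
from datetime import datetime, timezone

RETRY_BACKOFF = 2  # placeholder for the adapter's actual backoff constant


def parse_retry_after(header_value, attempt):
    """Seconds to wait after a 429: integer seconds, HTTP-date, or backoff fallback."""
    if not header_value:
        return RETRY_BACKOFF * (attempt + 1)
    try:
        return int(header_value)
    except ValueError:
        pass
    try:
        # HTTP-date form, e.g. "Wed, 21 Oct 2026 07:28:00 GMT"
        retry_at = email.utils.parsedate_to_datetime(header_value)
        return max(0, (retry_at - datetime.now(timezone.utc)).total_seconds())
    except (TypeError, ValueError):
        # Unparseable header: fall back to the existing backoff logic
        return RETRY_BACKOFF * (attempt + 1)
```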
In `@connector_onshape/components/exporter.py`:
- Around line 57-58: The early return at "if not items: return" prevents
updating sync_date when there are no items; modify the logic so that before
returning you still persist the new sync position (e.g., call the existing
update/persist function or set the sync_date field) — locate the block with "if
not items: return" in exporter.py and insert a call to the component that
records the sync cursor (or assign self.sync_date / sync_state.sync_date) with
the computed sync_date, then return.
In `@connector_onshape/components/importer.py`:
- Around line 520-526: The fallback "Try by name" block in importer.py is
searching the wrong field—replace the domain on the
self.env["product.product"].search call to match the product name instead of
default_code (i.e. use ("name", "=", item_name)); ensure the check still uses
item_name and returns product as before (the search invocation and variable
names product and item_name should be left intact).
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 137-149: _compute_onshape_url builds URLs that include empty
"/w//e/" segments when onshape_default_workspace_id or onshape_element_id are
missing; update the logic in _compute_onshape_url to only append the workspace
segment ("/w/{workspace}") if onshape_default_workspace_id is truthy and only
append the element segment ("/e/{elem}") if onshape_element_id is truthy,
ensuring the base
"{doc.backend_id.base_url}/documents/{doc.onshape_document_id}" is used when
both are missing, and keep setting rec.onshape_url = False when
onshape_document_id or backend_id/base_url are absent.
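The conditional assembly can be sketched in isolation (function name and argument order are illustrative):

```python
def build_onshape_url(base_url, document_id, workspace_id=None, element_id=None):
    """Build a document deep link, omitting segments whose IDs are missing."""
    if not (base_url and document_id):
        return False  # mirrors setting rec.onshape_url = False
    url = f"{base_url}/documents/{document_id}"
    if workspace_id:
        url += f"/w/{workspace_id}"
    if element_id:
        url += f"/e/{element_id}"
    return url
```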
In `@connector_onshape/tests/test_import_product.py`:
- Around line 49-58: The test test_mcmaster_catalog_match currently ignores
match_type and has an incorrect inline comment; update the test to assert the
returned match_type from mapper.match_product(part_data) against the expected
value used by the match logic (use the actual match_type constant/enum or string
that match_product returns, e.g., the MatchType or constant used in the mapper
implementation), and correct the inline comment to accurately describe the
matching mechanism (referencing mapper.match_product and self.product_mcmaster /
default_code to ensure the comment matches the asserted match_type).
---
Nitpick comments:
In `@connector_onshape/components/exporter.py`:
- Line 72: Replace the redundant membership-and-index check "prop_name in
export_values and export_values[prop_name]" with a single lookup using
export_values.get(prop_name) (or export_values.get(prop_name, default)) inside
the exporter code; locate the occurrence using the symbols prop_name and
export_values in connector_onshape/components/exporter.py and change the
condition to use dict.get to avoid double lookup and handle falsy/missing values
appropriately.
In `@connector_onshape/components/importer.py`:
- Around line 101-113: The _import_elements function currently accepts an unused
backend parameter; remove backend from the signature of
_import_elements(document, workspace_id) and update any callers to stop passing
backend, keeping the body unchanged (it should still obtain adapter via
self.component(usage="backend.adapter") and call
adapter.read_document_elements(document.onshape_document_id, workspace_id));
ensure tests and all call sites that invoked _import_elements(..., backend, ...)
are updated to call _import_elements(document, workspace_id) so there are no
mismatched arguments.
In `@connector_onshape/components/mapper.py`:
- Line 63: The map_record method declares an unused parameter element; rename it
to _element (or prefix with an underscore) in the map_record signature to
indicate intentional non-use and update any internal references (none expected)
and callers if they rely on exact signature matching so the function remains
clear that element is unused.
In `@connector_onshape/tests/test_import_product.py`:
- Line 95: The test file imports MOCK_MASS_PROPERTIES using an incorrect
relative path; replace the redundant import "from ..tests.common import
MOCK_MASS_PROPERTIES" with the correct package-local relative import "from
.common import MOCK_MASS_PROPERTIES" so the module resolves inside the
connector_onshape/tests package and avoids traversing up into tests twice;
update the import line where MOCK_MASS_PROPERTIES is referenced.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- connector_onshape/components/adapter.py
- connector_onshape/components/exporter.py
- connector_onshape/components/importer.py
- connector_onshape/components/mapper.py
- connector_onshape/models/onshape_product_product.py
- connector_onshape/tests/common.py
- connector_onshape/tests/test_export_product.py
- connector_onshape/tests/test_import_product.py
- connector_onshape/views/onshape_product_views.xml
🚧 Files skipped from review as they are similar to previous changes (2)
- connector_onshape/tests/common.py
- connector_onshape/views/onshape_product_views.xml
@cursor review
Skipping Bugbot: Bugbot is disabled for this repository. Visit the Bugbot dashboard to update your settings.
Actionable comments posted: 7
♻️ Duplicate comments (11)
connector_onshape/wizards/onshape_import_wizard.py (1)
47-47: `auto_create_products` is now correctly persisted bidirectionally.
`backend.write({"auto_create_products": self.auto_create_products})` correctly propagates both `True` and `False`, resolving the previous one-way toggle issue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/wizards/onshape_import_wizard.py` at line 47, The write call now correctly persists the boolean toggle; update the OnshapeImportWizard implementation to use backend.write({"auto_create_products": self.auto_create_products}) (in the method where the wizard commits settings) and ensure the corresponding backend field auto_create_products exists on the target model so both True and False are stored; confirm no leftover one-way toggle logic (e.g., any previous conditional that inverted or only set True) remains in methods like OnshapeImportWizard.<method handling save> or related helper functions.
connector_onshape/controllers/webhook.py (2)
133-177: `_find_bindings` correctly narrows by `elementId`/`partId` — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 133 - 177, No change required: the _find_bindings function already narrows by elementId and partId correctly; leave _handle_workflow_transition, _handle_revision_created, and _find_bindings as-is (they correctly build the domain using ("onshape_element_id","=", elem_id) and ("onshape_part_id","=", part_id) and call request.env["onshape.product.product"].sudo().search(domain)).
43-62: HMAC validation remains optional with warning — no change from previous finding.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 43 - 62, The code currently only warns when backend.webhook_secret is missing and continues processing; change this to enforce HMAC validation: if backend.webhook_secret is falsy, return an error response (e.g., {"status":"error","message":"Webhook secret not configured"}) instead of logging a warning, and ensure the existing validation branch using request.httprequest.get_data(), request.httprequest.headers.get("X-Onshape-Webhook-Signature", ""), and self._validate_signature(raw_body, backend.webhook_secret, signature) remains unchanged so incoming requests are rejected when signature is missing or invalid.
connector_onshape/components/binder.py (1)
58-71: Input validation and malformed-ID guard are now in place — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 58 - 71, The split_compound_id function already includes input validation and guards for malformed IDs; no code changes required—keep the existing implementation of split_compound_id (including the ValueError checks and the parsing into document_id, element_id, and optional part_id) as reviewed.
connector_onshape/models/onshape_document.py (1)
80-88: `name_get()` correctly overrides display label for Odoo 16 — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_document.py` around lines 80 - 88, No changes required: the name_get method correctly overrides the record display label for Odoo 16 by returning tuples of (rec.id, name) and handling rec.onshape_document_id; leave the name_get function as-is (function name_get in onshape_document.py).
connector_onshape/components/exporter.py (1)
57-59: `sync_date` is now updated before the early return — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` around lines 57 - 59, The early-return branch in exporter.py currently updates the record's sync_date via binding.write({"sync_date": fields.Datetime.now()}) before returning when items is falsy; leave this as-is (no code change required) since the sync_date should be refreshed on an empty export, so ensure the binding.write call in the block containing the items check remains intact and unchanged.
connector_onshape/models/product_template.py (1)
15-26: Granular `@api.depends` now includes the nested `onshape_document_id` path — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/product_template.py` around lines 15 - 26, The updated `@api.depends` correctly adds the nested path and the compute method _compute_onshape_document_count already collects unique onshape_document_id values from product_variant_ids -> onshape_bind_ids and sets onshape_document_count; no code change required—keep the depends decorator as ("product_variant_ids.onshape_bind_ids","product_variant_ids.onshape_bind_ids.onshape_document_id") and retain the existing _compute_onshape_document_count implementation that builds a set of bind.onshape_document_id.id and assigns len(doc_ids) to rec.onshape_document_count.
connector_onshape/models/onshape_product_product.py (1)
132-155: URL guard and missing `@api.depends` dependency are both resolved — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_product_product.py` around lines 132 - 155, The review contains duplicated non-code review markers — remove the duplicate review/comment tokens from the PR text (e.g., the repeated "[duplicate_comment]" or duplicate approval lines) so there is only one approval note; no change is needed in the _compute_onshape_url method or its `@api.depends`, just clean up the duplicated comment metadata in the PR/review.
connector_onshape/components/mapper.py (1)
128-130: Empty-list `IndexError` guard is now in place — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` around lines 128 - 130, The current index-guard pattern using (body_data.get("mass") or [0.0])[0] is fragile/duplicated; create a small helper like _first_or_zero(data: dict, key: str) -> float that returns the first element or 0.0 (e.g., v = data.get(key); return (v[0] if v else 0.0)), then replace the three spots updating total_mass, total_volume, total_area to use _first_or_zero(body_data, "mass") etc.; this centralizes the safe-access logic and removes repetition in mapper.py.
connector_onshape/tests/test_import_product.py (1)
49-57: McMaster `match_type` assertion is now in place — LGTM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` around lines 49 - 57, The review contains a duplicate approval comment rather than a code issue; no test changes are required for test_mcmaster_catalog_match or mapper.match_product/match_type — remove the duplicate review comment from the PR thread (clean up the redundant "[duplicate_comment]"/extra approval entry) and leave the test assertions as-is.
connector_onshape/components/importer.py (1)
520-526: Name-based fallback is now correct. The past concern about `_find_component_product` searching `default_code` instead of `name` has been addressed; the domain now correctly uses ("name", "=", item_name).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 520 - 526, The name-based fallback in _find_component_product is now correct (using self.env["product.product"].search([("name","=", item_name)], limit=1)); ensure any remaining references or legacy code that searched by "default_code" are removed or updated to avoid confusion, keep the domain as ("name","=", item_name) in _find_component_product, and update or add a unit/integration test that verifies finding products by name to prevent regressions.
🧹 Nitpick comments (11)
connector_onshape/wizards/onshape_import_wizard.py (2)
49-50: Tautological condition — simplify or remove.
`import_type` is a `required` selection field whose only valid values are exactly ("documents", "products", "boms", "full"). The `in (...)` guard on line 49 is therefore always `True` and adds no protection.
♻️ Proposed simplification

```diff
-        if self.import_type in ("documents", "products", "boms", "full"):
-            backend.action_import_documents()
+        backend.action_import_documents()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/wizards/onshape_import_wizard.py` around lines 49 - 50, The if-check on the import call is tautological because the selection field import_type only permits "documents","products","boms","full", so remove the redundant guard and call backend.action_import_documents() unconditionally (or replace the entire block with a direct call). Update the call site in onshape_import_wizard.py where import_type and backend.action_import_documents() are used so the code simply invokes backend.action_import_documents() without the if-check.
66-68: Merge adjacent string literals into one. Python silently concatenates the two adjacent string literals, but a single string (or explicit `+`) is clearer and avoids a potential maintenance hazard if the strings are later separated by a variable.
♻️ Proposed fix

```diff
-                "message": _(
-                    "Import jobs have been queued. " "Check the job queue for progress."
-                ),
+                "message": _(
+                    "Import jobs have been queued. Check the job queue for progress."
+                ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/wizards/onshape_import_wizard.py` around lines 66 - 68, The message value currently uses two adjacent string literals inside the _(...) call for the "message" key; merge them into a single string literal so the _("...") contains "Import jobs have been queued. Check the job queue for progress." (or explicitly join with +) to avoid silent concatenation—update the string inside the _("...") call where "message" is defined in onshape_import_wizard.py.connector_onshape/components/exporter.py (1)
70-79: Simplify the redundant key-existence check (Ruff RUF019).
`prop_name in export_values and export_values[prop_name]` checks the key twice. Use `export_values.get(prop_name)` instead.
♻️ Proposed fix

```diff
-            if prop_name in export_values and export_values[prop_name]:
+            if export_values.get(prop_name):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/exporter.py` around lines 70 - 79, The loop building properties_update redundantly checks membership and value; replace the condition "prop_name in export_values and export_values[prop_name]" with a single truthy lookup using export_values.get(prop_name) (e.g., value = export_values.get(prop_name) and if value: ...), then use that value when constructing the dict for propertyId and value; update the code paths around part_item, prop_name, export_values, and properties_update accordingly.
connector_onshape/components/mapper.py (2)
53-61: Annotate `PROPERTY_FIELD_MAP` as `ClassVar` to satisfy Ruff RUF012. Class-level dicts/lists that aren't meant to be instance attributes should be declared with `typing.ClassVar` to communicate intent and silence the warning.
♻️ Proposed fix

```diff
+from typing import ClassVar
 ...
-    PROPERTY_FIELD_MAP = {
+    PROPERTY_FIELD_MAP: ClassVar[dict] = {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` around lines 53 - 61, PROPERTY_FIELD_MAP is a class-level dictionary and should be annotated as a typing.ClassVar to satisfy Ruff RUF012; update the declaration of PROPERTY_FIELD_MAP in mapper.py to have a ClassVar type annotation (e.g., ClassVar[Dict[str, str]] or ClassVar[dict[str, str]]) and add the required import from typing (ClassVar and Dict if using that form), leaving the dictionary literal itself unchanged and keeping it at class scope (refer to the PROPERTY_FIELD_MAP symbol and its enclosing class in mapper.py).
63-63: Unused `element` parameter (Ruff ARG002).
`element` is accepted but never read inside `map_record`. Either use it or prefix with `_` to document the intentional non-use.
♻️ Proposed fix

```diff
-    def map_record(self, part_data, document=None, element=None):
+    def map_record(self, part_data, document=None, _element=None):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` at line 63, The map_record method declares an unused parameter element which triggers ARG002; either use the parameter where needed inside map_record or rename it to _element (or prefix with an underscore) to indicate intentional non-use and silence the linter. Locate the map_record function definition and update the parameter name from element to _element or incorporate element into the mapping logic (e.g., pass it to any helper or include it in returned data) so the symbol is referenced.connector_onshape/tests/test_import_product.py (2)
95-95: Redundant relative import path: use `.common` instead of `..tests.common`. The file is already inside connector_onshape/tests/, so `..tests.common` resolves to the same package as `.common` — the path suggests going up and back in, which is confusing.
♻️ Proposed fix

```diff
-        from ..tests.common import MOCK_MASS_PROPERTIES
+        from .common import MOCK_MASS_PROPERTIES
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` at line 95, Replace the redundant relative import "from ..tests.common import MOCK_MASS_PROPERTIES" with a direct package-relative import "from .common import MOCK_MASS_PROPERTIES" in the test_import_product module so it references the tests.common module within the same package; update the import statement accordingly to use .common.
86-86: Move `import json` to the module top level. In-method imports are unusual in tests and signal the import was added in a hurry. Hoist it to the top of the file alongside the other imports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/tests/test_import_product.py` at line 86, The in-test import of the json module should be hoisted to the module top-level: remove the "import json" from inside the test in connector_onshape/tests/test_import_product.py and add "import json" with the other imports at the top of the file so that json is available module-wide (no runtime in-method import in the test function).connector_onshape/models/onshape_document.py (1)
62-68: Annotate `_sql_constraints` with `ClassVar` to satisfy Ruff RUF012. Applies to both `OnshapeDocument` (lines 62–68) and `OnshapeDocumentElement` (lines 116–122).
♻️ Proposed fix

```diff
+from typing import ClassVar
 ...
-    _sql_constraints = [
+    _sql_constraints: ClassVar[list] = [
         (
             "unique_document",
             ...
         ),
     ]
```

Apply the same pattern to `OnshapeDocumentElement._sql_constraints`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/models/onshape_document.py` around lines 62 - 68, Annotate the class-level _sql_constraints attributes with typing.ClassVar to satisfy Ruff RUF012: update OnshapeDocument._sql_constraints and OnshapeDocumentElement._sql_constraints to be declared as ClassVar[...] so the linter recognizes them as class variables; locate the _sql_constraints definitions in the OnshapeDocument and OnshapeDocumentElement classes and add the ClassVar typing annotation (import typing.ClassVar if missing).connector_onshape/components/binder.py (1)
14-20: Annotate component-framework list attributes with `ClassVar` to satisfy Ruff RUF012.
`_inherit` and `_apply_on` are read-only registry hints, not instance data. Annotating them silences the warning without behavioral change.
♻️ Proposed fix

```diff
+from typing import ClassVar
 ...
-    _inherit = ["onshape.base", "base.binder"]
+    _inherit: ClassVar[list] = ["onshape.base", "base.binder"]
 ...
-    _apply_on = [
+    _apply_on: ClassVar[list] = [
         "onshape.product.product",
         "onshape.mrp.bom",
     ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/binder.py` around lines 14 - 20, The attributes _inherit and _apply_on in binder.py are class-level registry hints and should be annotated with typing.ClassVar to silence Ruff RUF012; import ClassVar from typing (or typing_extensions if needed), then change the declarations for _inherit and _apply_on to use ClassVar[list[str]] (or ClassVar[list[str | str]] as appropriate) so they remain class-level constants rather than instance fields, keeping their current values and behavior.connector_onshape/components/adapter.py (1)
96-100: Prefer `_logger.exception` over `_logger.error` inside `except` blocks.
`_logger.error` discards the exception traceback; `_logger.exception` logs it automatically and is the idiomatic choice here.
♻️ Proposed change

```diff
         except (ValueError, TypeError):
-            _logger.error("oauth2_token for backend %s is not valid JSON", backend.id)
+            _logger.exception(
+                "oauth2_token for backend %s is not valid JSON", backend.id
+            )
             token_data = {}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 96 - 100, In the except block that handles json.loads(backend.oauth2_token or "{}") (catching ValueError and TypeError), change the logging call to use _logger.exception instead of _logger.error so the exception traceback is captured; leave token_data = {} unchanged and keep the same exception types to preserve behavior, referencing backend.oauth2_token, token_data and _logger.exception to locate the change.connector_onshape/components/importer.py (1)
418-420: Suppress Ruff S324 by declaring MD5 is not used for security. MD5 is fine for BOM change-detection; annotate it so linters and future readers agree.
♻️ Proposed change
```diff
         bom_hash = hashlib.md5(
-            json.dumps(bom_items, sort_keys=True).encode()
+            json.dumps(bom_items, sort_keys=True).encode(),
+            usedforsecurity=False,
         ).hexdigest()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 418 - 420, Replace the insecure-linter warning by calling hashlib.md5 with the non-security flag and a short clarifying comment: change the bom_hash computation to use hashlib.md5(json.dumps(bom_items, sort_keys=True).encode(), usedforsecurity=False).hexdigest() and add a brief inline comment on the same line (e.g., "# MD5 used only for BOM change-detection, not for security") to satisfy Ruff S324 and future readers; target the existing bom_hash assignment in importer.py.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/components/adapter.py`:
- Around line 341-343: The current code sets body["filter"] =
json.dumps({"teamId": backend.team_id}) which produces an invalid Onshape filter
and uses a non-existent $TeamId variable; instead, remove the filter assignment
and set the company/org scope directly by adding body["companyId"] =
backend.team_id (or the appropriate company id field) before calling
self._request("POST", "/api/v6/webhooks", json_body=body) in adapter.py so the
webhook is registered at the organization level rather than trying to use an
unsupported filter expression.
- Around line 341-343: The webhook registration incorrectly sets body["filter"]
= json.dumps({"teamId": backend.team_id}) and POSTs it via self._request("POST",
"/api/v6/webhooks", json_body=body); change this to use Onshape's company
scoping by setting body["companyId"] = backend.team_id (use the raw
backend.team_id value, not json.dumps) and remove the filter field so the
request body contains companyId at the top level before calling self._request;
update any related references to body["filter"] to reflect removal.
- Around line 316-324: The read_thumbnail method currently returns bytes because
base64.b64encode(resp.content) yields bytes, which will not serialize for Odoo
Binary fields or JSON-RPC; update read_thumbnail to return a text-safe base64
string by base64-encoding resp.content and then decoding to a str (e.g.,
.decode('ascii'/'utf-8')) before returning, while preserving the None return
when status_code != 200; reference the method name read_thumbnail and the resp
handling to locate the change.
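The fix amounts to decoding the Base64 bytes to text before returning; a sketch with the response handling elided (function name is illustrative):

```python
import base64


def encode_thumbnail(content):
    """Base64-encode raw image bytes into a str that Binary fields and JSON-RPC accept."""
    return base64.b64encode(content).decode("ascii")
```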
In `@connector_onshape/components/mapper.py`:
- Around line 219-221: On export you write "Weight" in
OnshapeProductExportMapper.map_record but the import mapper lacks a
corresponding entry, causing round-trip loss; add "Weight" to KNOWN_PROPERTIES
and add a PROPERTY_FIELD_MAP entry mapping "Weight" -> "onshape_mass" (or the
chosen binding field) and implement unit conversion in the import path so the
imported value populates the binding field (e.g., convert Onshape mass units to
kg and store in onshape_mass or the binding's weight field) to make
export/import symmetric.
In `@connector_onshape/controllers/webhook.py`:
- Around line 127-131: The current call to
backend.with_delay(...).action_import_products() triggers a full backend resync
for every metadata webhook; change this to only re-import products for the
specific changed document by either (A) adding a targeted method like
onshape.document.action_import_document_products(document.id) and invoking
backend.with_delay(...).action_import_document_products(document.id), or (B)
update the existing action_import_products to accept a document_id/filter
argument and call
backend.with_delay(...).action_import_products(document_id=document.id); update
the webhook handler (where document is already resolved) to pass the document ID
so only that document’s products are re-imported and avoid queuing full resyncs.
- Around line 85-91: The _validate_signature function is using hex digests and
omitting the timestamp; change it to build the signed message as
"<timestamp>.<raw_body>" (use the X-onshape-webhook-timestamp header value
passed into the function), compute the HMAC-SHA256 over the raw bytes of that
message using secret.encode("utf-8"), compare the raw binary digests (use
.digest()) and decode the incoming signature from Base64
(base64.b64decode(signature)) before calling hmac.compare_digest to perform a
constant-time comparison; update references in _validate_signature accordingly
to operate on bytes and the timestamp.
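As a standalone sketch of that scheme (the "<timestamp>.<raw_body>" layout and Base64 signature encoding follow the description above; verify them against Onshape's webhook documentation before relying on this):

```python
import base64
import hashlib
import hmac


def validate_signature(raw_body, secret, signature_b64, timestamp):
    """Constant-time check of an HMAC-SHA256 signature over "<timestamp>.<raw_body>"."""
    message = timestamp.encode("utf-8") + b"." + raw_body
    expected = hmac.new(secret.encode("utf-8"), message, hashlib.sha256).digest()
    try:
        received = base64.b64decode(signature_b64)
    except (ValueError, TypeError):
        return False
    # Compare raw binary digests in constant time
    return hmac.compare_digest(expected, received)
```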
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 61-70: The notification text in onshape_import_wizard.py is
misleading because the wizard calls backend.action_import_documents(),
backend.action_import_products(), and backend.action_import_boms() synchronously
(they call importer.run(...) directly), so either change the wizard to dispatch
those calls asynchronously using the same with_delay() pattern as the
cron_import_* methods or update the notification message to reflect synchronous
execution; specifically, adjust the "message" and/or "title" in the return dict
so it no longer says "queued" (e.g., "Import started - running synchronously;
check the job log for results" or similar) or replace the direct calls to
action_import_documents/action_import_products/action_import_boms with delayed
invocations (backend.with_delay().action_import_documents(), etc.) to match the
message.
---
Duplicate comments:
In `@connector_onshape/components/binder.py`:
- Around line 58-71: The split_compound_id function already includes input
validation and guards for malformed IDs; no code changes required—keep the
existing implementation of split_compound_id (including the ValueError checks
and the parsing into document_id, element_id, and optional part_id) as reviewed.
In `@connector_onshape/components/exporter.py`:
- Around line 57-59: The early-return branch in exporter.py currently updates
the record's sync_date via binding.write({"sync_date": fields.Datetime.now()})
before returning when items is falsy; leave this as-is (no code change required)
since the sync_date should be refreshed on an empty export, so ensure the
binding.write call in the block containing the items check remains intact and
unchanged.
In `@connector_onshape/components/importer.py`:
- Around line 520-526: The name-based fallback in _find_component_product is now
correct (using self.env["product.product"].search([("name","=", item_name)],
limit=1)); ensure any remaining references or legacy code that searched by
"default_code" are removed or updated to avoid confusion, keep the domain as
("name","=", item_name) in _find_component_product, and update or add a
unit/integration test that verifies finding products by name to prevent
regressions.
In `@connector_onshape/components/mapper.py`:
- Around line 128-130: The current index-guard pattern using
(body_data.get("mass") or [0.0])[0] is fragile/duplicated; create a small helper
like _first_or_zero(data: dict, key: str) -> float that returns the first
element or 0.0 (e.g., v = data.get(key); return (v[0] if v else 0.0)), then
replace the three spots updating total_mass, total_volume, total_area to use
_first_or_zero(body_data, "mass") etc.; this centralizes the safe-access logic
and removes repetition in mapper.py.
In `@connector_onshape/controllers/webhook.py`:
- Around line 133-177: No change required: the _find_bindings function already
narrows by elementId and partId correctly; leave _handle_workflow_transition,
_handle_revision_created, and _find_bindings as-is (they correctly build the
domain using ("onshape_element_id","=", elem_id) and ("onshape_part_id","=",
part_id) and call request.env["onshape.product.product"].sudo().search(domain)).
- Around line 43-62: The code currently only warns when backend.webhook_secret
is missing and continues processing; change this to enforce HMAC validation: if
backend.webhook_secret is falsy, return an error response (e.g.,
{"status":"error","message":"Webhook secret not configured"}) instead of logging
a warning, and ensure the existing validation branch using
request.httprequest.get_data(),
request.httprequest.headers.get("X-Onshape-Webhook-Signature", ""), and
self._validate_signature(raw_body, backend.webhook_secret, signature) remains
unchanged so incoming requests are rejected when signature is missing or
invalid.
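A minimal sketch of the enforced guard, factored as a pure function so it can be exercised outside Odoo (names and response shapes follow the review comment; the real controller would pass in `request.httprequest` data and its own `_validate_signature`):

```python
def check_webhook_request(secret, raw_body, signature, validate):
    """Reject requests when no secret is configured or the HMAC check fails."""
    if not secret:
        # Previously this case only logged a warning and kept processing.
        return {"status": "error", "message": "Webhook secret not configured"}
    if not validate(raw_body, secret, signature):
        return {"status": "error", "message": "Invalid signature"}
    return None  # caller proceeds to dispatch the event


# With no secret configured, processing must stop instead of logging a warning.
result = check_webhook_request("", b"{}", "sig", lambda *args: True)
# → {"status": "error", "message": "Webhook secret not configured"}
```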
In `@connector_onshape/models/onshape_document.py`:
- Around line 80-88: No changes required: the name_get method correctly
overrides the record display label for Odoo 16 by returning tuples of (rec.id,
name) and handling rec.onshape_document_id; leave the name_get function as-is
(function name_get in onshape_document.py).
In `@connector_onshape/models/onshape_product_product.py`:
- Around line 132-155: The review contains duplicated non-code review markers —
remove the duplicate review/comment tokens from the PR text (e.g., the repeated
"[duplicate_comment]" or duplicate approval lines) so there is only one approval
note; no change is needed in the _compute_onshape_url method or its
`@api.depends`, just clean up the duplicated comment metadata in the PR/review.
In `@connector_onshape/models/product_template.py`:
- Around line 15-26: The updated `@api.depends` correctly adds the nested path and
the compute method _compute_onshape_document_count already collects unique
onshape_document_id values from product_variant_ids -> onshape_bind_ids and sets
onshape_document_count; no code change required—keep the depends decorator as
("product_variant_ids.onshape_bind_ids","product_variant_ids.onshape_bind_ids.onshape_document_id")
and retain the existing _compute_onshape_document_count implementation that
builds a set of bind.onshape_document_id.id and assigns len(doc_ids) to
rec.onshape_document_count.
In `@connector_onshape/tests/test_import_product.py`:
- Around line 49-57: The review contains a duplicate approval comment rather
than a code issue; no test changes are required for test_mcmaster_catalog_match
or mapper.match_product/match_type — remove the duplicate review comment from
the PR thread (clean up the redundant "[duplicate_comment]"/extra approval
entry) and leave the test assertions as-is.
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Line 47: The write call now correctly persists the boolean toggle; update the
OnshapeImportWizard implementation to use backend.write({"auto_create_products":
self.auto_create_products}) (in the method where the wizard commits settings)
and ensure the corresponding backend field auto_create_products exists on the
target model so both True and False are stored; confirm no leftover one-way
toggle logic (e.g., any previous conditional that inverted or only set True)
remains in methods like OnshapeImportWizard.<method handling save> or related
helper functions.
---
Nitpick comments:
In `@connector_onshape/components/adapter.py`:
- Around line 96-100: In the except block that handles
json.loads(backend.oauth2_token or "{}") (catching ValueError and TypeError),
change the logging call to use _logger.exception instead of _logger.error so the
exception traceback is captured; leave token_data = {} unchanged and keep the
same exception types to preserve behavior, referencing backend.oauth2_token,
token_data and _logger.exception to locate the change.
In `@connector_onshape/components/binder.py`:
- Around line 14-20: The attributes _inherit and _apply_on in binder.py are
class-level registry hints and should be annotated with typing.ClassVar to
silence Ruff RUF012; import ClassVar from typing (or typing_extensions if
needed), then change the declarations for _inherit and _apply_on to use
ClassVar[list[str]] (or ClassVar[str] where the attribute is a plain string) so they remain
class-level constants rather than instance fields, keeping their current values
and behavior.
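Annotated, the declarations would read roughly as below (the class name and attribute values are illustrative stand-ins, not the module's actual inheritance chain):

```python
from typing import ClassVar


class OnshapeBinder:  # stands in for the Component subclass in binder.py
    # ClassVar tells Ruff (RUF012) these are class-level registry hints,
    # not mutable instance attributes.
    _name: ClassVar[str] = "onshape.binder"
    _inherit: ClassVar[list[str]] = ["base.binder", "base.onshape.connector"]
    _apply_on: ClassVar[list[str]] = ["onshape.product.product"]
```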
In `@connector_onshape/components/exporter.py`:
- Around line 70-79: The loop building properties_update redundantly checks
membership and value; replace the condition "prop_name in export_values and
export_values[prop_name]" with a single truthy lookup using
export_values.get(prop_name) (e.g., value = export_values.get(prop_name) and if
value: ...), then use that value when constructing the dict for propertyId and
value; update the code paths around part_item, prop_name, export_values, and
properties_update accordingly.
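The simplification could look like this self-contained sketch (the property/dict shapes are assumed from the review comment, not taken from the actual exporter):

```python
def build_properties_update(part_item: dict, export_values: dict) -> list:
    """Collect non-empty property values into Onshape-style update entries."""
    properties_update = []
    for prop in part_item.get("properties", []):
        prop_name = prop.get("name")
        value = export_values.get(prop_name)  # single lookup replaces the
        if value:                             # "in" check plus indexing
            properties_update.append(
                {"propertyId": prop.get("propertyId"), "value": value}
            )
    return properties_update


part_item = {"properties": [{"name": "Part Number", "propertyId": "p1"},
                            {"name": "Vendor", "propertyId": "p2"}]}
updates = build_properties_update(part_item, {"Part Number": "PN-001", "Vendor": ""})
# Only "Part Number" survives: empty values are skipped like missing keys.
```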
In `@connector_onshape/components/importer.py`:
- Around line 418-420: Replace the insecure-linter warning by calling
hashlib.md5 with the non-security flag and a short clarifying comment: change
the bom_hash computation to use hashlib.md5(json.dumps(bom_items,
sort_keys=True).encode(), usedforsecurity=False).hexdigest() and add a brief
inline comment on the same line (e.g., "# MD5 used only for BOM
change-detection, not for security") to satisfy Ruff S324 and future readers;
target the existing bom_hash assignment in importer.py.
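For reference, the suggested assignment would look as follows (note that the `usedforsecurity` keyword requires Python 3.9+; the sample `bom_items` list is invented):

```python
import hashlib
import json

bom_items = [{"name": "bracket", "quantity": 2}, {"name": "bolt", "quantity": 8}]

# MD5 used only for BOM change-detection, not for security (silences Ruff S324).
bom_hash = hashlib.md5(
    json.dumps(bom_items, sort_keys=True).encode(), usedforsecurity=False
).hexdigest()
```

`sort_keys=True` keeps the hash stable regardless of dict key order, so only real BOM changes produce a new digest.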
In `@connector_onshape/components/mapper.py`:
- Around line 53-61: PROPERTY_FIELD_MAP is a class-level dictionary and should
be annotated as a typing.ClassVar to satisfy Ruff RUF012; update the declaration
of PROPERTY_FIELD_MAP in mapper.py to have a ClassVar type annotation (e.g.,
ClassVar[Dict[str, str]] or ClassVar[dict[str, str]]) and add the required
import from typing (ClassVar and Dict if using that form), leaving the
dictionary literal itself unchanged and keeping it at class scope (refer to the
PROPERTY_FIELD_MAP symbol and its enclosing class in mapper.py).
- Line 63: The map_record method declares an unused parameter element which
triggers ARG002; either use the parameter where needed inside map_record or
rename it to _element (or prefix with an underscore) to indicate intentional
non-use and silence the linter. Locate the map_record function definition and
update the parameter name from element to _element or incorporate element into
the mapping logic (e.g., pass it to any helper or include it in returned data)
so the symbol is referenced.
In `@connector_onshape/models/onshape_document.py`:
- Around line 62-68: Annotate the class-level _sql_constraints attributes with
typing.ClassVar to satisfy Ruff RUF012: update OnshapeDocument._sql_constraints
and OnshapeDocumentElement._sql_constraints to be declared as ClassVar[...] so
the linter recognizes them as class variables; locate the _sql_constraints
definitions in the OnshapeDocument and OnshapeDocumentElement classes and add
the ClassVar typing annotation (import typing.ClassVar if missing).
In `@connector_onshape/tests/test_import_product.py`:
- Line 95: Replace the redundant relative import "from ..tests.common import
MOCK_MASS_PROPERTIES" with a direct package-relative import "from .common import
MOCK_MASS_PROPERTIES" in the test_import_product module so it references the
tests.common module within the same package; update the import statement
accordingly to use .common.
- Line 86: The in-test import of the json module should be hoisted to the module
top-level: remove the "import json" from inside the test in
connector_onshape/tests/test_import_product.py and add "import json" with the
other imports at the top of the file so that json is available module-wide (no
runtime in-method import in the test function).
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 49-50: The if-check on the import call is tautological because the
selection field import_type only permits "documents","products","boms","full",
so remove the redundant guard and call backend.action_import_documents()
unconditionally (or replace the entire block with a direct call). Update the
call site in onshape_import_wizard.py where import_type and
backend.action_import_documents() are used so the code simply invokes
backend.action_import_documents() without the if-check.
- Around line 66-68: The message value currently uses two adjacent string
literals inside the _(...) call for the "message" key; merge them into a single
string literal so the _("...") contains "Import jobs have been queued. Check the
job queue for progress." (or explicitly join with +) to avoid silent
concatenation—update the string inside the _("...") call where "message" is
defined in onshape_import_wizard.py.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- connector_onshape/components/adapter.py
- connector_onshape/components/binder.py
- connector_onshape/components/exporter.py
- connector_onshape/components/importer.py
- connector_onshape/components/mapper.py
- connector_onshape/controllers/webhook.py
- connector_onshape/data/queue_job_function_data.xml
- connector_onshape/models/onshape_backend.py
- connector_onshape/models/onshape_document.py
- connector_onshape/models/onshape_product_product.py
- connector_onshape/models/product_template.py
- connector_onshape/tests/test_export_product.py
- connector_onshape/tests/test_import_product.py
- connector_onshape/views/onshape_backend_views.xml
- connector_onshape/wizards/onshape_import_wizard.py
🚧 Files skipped from review as they are similar to previous changes (4)
- connector_onshape/tests/test_export_product.py
- connector_onshape/views/onshape_backend_views.xml
- connector_onshape/models/onshape_backend.py
- connector_onshape/data/queue_job_function_data.xml
bugbot run

Skipping Bugbot: Bugbot is disabled for this repository. Visit the Bugbot dashboard to update your settings.
bugbot run |
Cursor Bugbot has reviewed your changes and found 5 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
Actionable comments posted: 2
♻️ Duplicate comments (2)
connector_onshape/components/adapter.py (1)
316-324: ⚠️ Potential issue | 🟡 Minor

`base64.b64encode` returns `bytes`, but Odoo Binary fields require a string. This issue was flagged in a previous review but remains unaddressed. When the return value is used with Odoo's Binary fields or serialized to JSON, `bytes` will fail.

🐛 Proposed fix

```diff
 if resp.status_code == 200:
-    return base64.b64encode(resp.content)
+    return base64.b64encode(resp.content).decode("utf-8")
 return None
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 316 - 324, read_thumbnail currently returns bytes because base64.b64encode returns bytes, which breaks Odoo Binary fields and JSON serialization; update read_thumbnail to return a str by decoding the base64 bytes (e.g., base64.b64encode(resp.content).decode('ascii')) so it returns a plain string when resp.status_code == 200 and still return None otherwise; locate this change in the read_thumbnail method to apply the decode step.

connector_onshape/controllers/webhook.py (1)
85-93: ⚠️ Potential issue | 🔴 Critical

Webhook signature validation missing timestamp in signed message.

According to Onshape's official webhook documentation, the signed message format must be `<X-onshape-webhook-timestamp>.<raw_body>`. The current implementation only uses `raw_body`, which causes signature validation to fail against real Onshape webhooks. The correct HMAC calculation is: `Base64(HMAC-SHA256(signingKey, timestamp + "." + rawBody))`. Additionally, the test file does not include timestamp validation, masking this critical defect.

🔧 Proposed fix

```diff
-    def _validate_signature(self, raw_body, secret, signature):
+    def _validate_signature(self, raw_body, secret, signature, timestamp):
+        # Onshape signs: "<timestamp>.<body>"
+        message = timestamp.encode("utf-8") + b"." + raw_body
         expected = base64.b64encode(
             hmac.new(
                 secret.encode("utf-8"),
-                raw_body,
+                message,
                 digestmod=hashlib.sha256,
             ).digest()
         ).decode("ascii")
         return hmac.compare_digest(expected, signature)
```

And update the caller to extract and pass the timestamp:

```diff
+        timestamp = request.httprequest.headers.get(
+            "X-onshape-webhook-timestamp", ""
+        )
         signature = request.httprequest.headers.get(
             "X-onshape-webhook-signature-primary", ""
         )
-        if not self._validate_signature(raw_body, backend.webhook_secret, signature):
+        if not self._validate_signature(
+            raw_body, backend.webhook_secret, signature, timestamp
+        ):
```

Update tests to include timestamp in signature calculation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 85 - 93, The _validate_signature function currently computes HMAC over raw_body only; change it to compute the HMAC over the string formed by "<timestamp>.<raw_body>" (timestamp + "." + raw_body) using the provided secret and Base64-encode the result, and update its signature to accept the timestamp (or update its caller to extract the X-onshape-webhook-timestamp header and pass it into _validate_signature). Also update any callers of _validate_signature (webhook request handler) to extract the X-onshape-webhook-timestamp header and pass it in, and modify the tests to compute and assert signatures using the timestamp-prefixed message so tests validate the real Onshape signing scheme.
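To make the scheme concrete, here is a standalone round trip of the timestamp-prefixed signing described above (the secret, timestamp, and body are dummy values; only the `<timestamp>.<raw_body>` format follows the documented scheme):

```python
import base64
import hashlib
import hmac


def sign(secret: str, timestamp: str, raw_body: bytes) -> str:
    """Base64(HMAC-SHA256(secret, "<timestamp>.<raw_body>")) as Onshape sends it."""
    message = timestamp.encode("utf-8") + b"." + raw_body
    digest = hmac.new(secret.encode("utf-8"), message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")


def validate_signature(raw_body: bytes, secret: str, signature: str, timestamp: str) -> bool:
    return hmac.compare_digest(sign(secret, timestamp, raw_body), signature)


body = b'{"event": "onshape.model.lifecycle.changed"}'
good = validate_signature(body, "key", sign("key", "1700000000", body), "1700000000")
bad = validate_signature(body, "key", sign("key", "1700000000", body), "1700000001")
# good is True; bad is False, because the timestamp is part of the signed message.
```

A body-only HMAC (the current implementation) would accept both cases, which is exactly the gap the review flags.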
🧹 Nitpick comments (8)
connector_onshape/components/adapter.py (1)
55-59: `datetime.utcnow()` is deprecated in Python 3.12+.

While Odoo 16 currently targets Python 3.8-3.10, consider future-proofing by using timezone-aware datetime.
♻️ Suggested fix
```diff
-from datetime import datetime
+from datetime import datetime, timezone
 ...
-    date = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
+    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/adapter.py` around lines 55 - 59, The code uses datetime.utcnow() to build the request date (variable date) which is deprecated; update the date generation to use a timezone-aware UTC datetime (e.g., datetime.now(timezone.utc)) and adjust the import to include timezone from datetime so the RFC-1123 formatted string remains the same; change the date assignment near the _get_backend()/method/nonce block (the variable name date in adapter.py) to use a timezone-aware now() call to future-proof against Python 3.12+.

connector_onshape/controllers/webhook.py (2)
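A quick standalone check of the replacement call (only the `strftime` pattern comes from the diff above; the regex is an illustrative sanity check):

```python
import re
from datetime import datetime, timezone

# Timezone-aware replacement for the deprecated datetime.utcnow().
date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

# Still an RFC-1123 style date header, e.g. "Mon, 01 Jan 2024 12:00:00 GMT".
assert re.fullmatch(
    r"[A-Z][a-z]{2}, \d{2} [A-Z][a-z]{2} \d{4} \d{2}:\d{2}:\d{2} GMT", date
)
```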
38-38: Unused `kwargs` parameter.

The `kwargs` parameter is never used. Consider removing it or prefixing with underscore to indicate intentional non-use.

♻️ Suggested fix

```diff
-    def webhook(self, backend_id, **kwargs):
+    def webhook(self, backend_id, **_kwargs):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` at line 38, The webhook function signature includes an unused kwargs parameter; update the webhook method in connector_onshape/controllers/webhook.py (the function named webhook) to remove the unused kwargs parameter or rename it to _kwargs to indicate intentional non‑use, and update any callers if necessary to match the new signature so there are no mismatched argument errors.
181-189: Unused `backend` parameter in `_handle_version_created`.

The `backend` parameter is not used in this handler. Consider prefixing with underscore for consistency, or document that it's kept for handler interface uniformity.

♻️ Suggested fix

```diff
-    def _handle_version_created(self, backend, payload):
+    def _handle_version_created(self, _backend, payload):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/controllers/webhook.py` around lines 181 - 189, The _handle_version_created handler declares an unused parameter named backend; to fix, either rename the parameter to _backend (or prefix it with an underscore) in the _handle_version_created signature to indicate it is intentionally unused, or add a short comment inside the _handle_version_created function documenting that backend is kept for handler interface uniformity; update any callers if they rely on the exact signature.

connector_onshape/components/mapper.py (2)
64-91: Unused `element` parameter.

The `element` parameter is accepted but never referenced. If kept for API consistency or future use, prefix with underscore to indicate it's intentionally unused.

♻️ Prefix unused parameter

```diff
-    def map_record(self, part_data, document=None, element=None):
+    def map_record(self, part_data, document=None, _element=None):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` around lines 64 - 91, The map_record method accepts an unused parameter element; to indicate it's intentionally unused and silence linters, rename the parameter to _element in the method signature of map_record (and update any internal references if any), or if you prefer to keep the original name for API compatibility, add a leading underscore usage (e.g., assign element to _ = element) at the start of map_record; reference the map_record function in mapper.py and ensure callers still pass the same argument shape if you change the signature.
54-62: Consider using a `ClassVar` type hint for the mapping dict.

The static analysis flags this dict as a mutable class attribute. While it's only read (never mutated), adding a type hint makes the intent explicit.
♻️ Optional type hint addition
```diff
+from typing import ClassVar
+
 class OnshapeProductImportMapper(Component):
     ...
     # Map of Onshape property name → binding field name
-    PROPERTY_FIELD_MAP = {
+    PROPERTY_FIELD_MAP: ClassVar[dict[str, str]] = {
         "Part Number": "onshape_part_number",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/mapper.py` around lines 54 - 62, PROPERTY_FIELD_MAP is currently a mutable-looking class attribute; annotate it with typing.ClassVar to mark it as an immutable/intentional class-level constant for static analysis. Import ClassVar from typing if not already imported, then change the declaration to include the ClassVar[...] type hint (e.g., PROPERTY_FIELD_MAP: ClassVar[dict[str, str]] = {...}) so linters recognize it as a non-instance mutable class constant in the mapper.py module.

connector_onshape/components/importer.py (3)
101-152: Consider catching specific exceptions for API calls.

The bare `except Exception` at line 108 is flagged by static analysis. While catching broadly for external API resilience is reasonable, consider catching more specific exceptions (e.g., `requests.RequestException`, `ConnectionError`) to avoid masking programming errors.

♻️ Example specific exception handling

```diff
+from requests.exceptions import RequestException
+
 try:
     elements = adapter.read_document_elements(
         document.onshape_document_id,
         workspace_id,
     )
-except Exception:
+except (RequestException, ConnectionError, TimeoutError) as exc:
     _logger.warning(
-        "Could not fetch elements for document %s",
+        "Could not fetch elements for document %s: %s",
         document.onshape_document_id,
+        exc,
     )
     return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 101 - 152, The bare except in _import_elements around adapter.read_document_elements should be narrowed to catch only expected network/API errors (e.g., requests.RequestException, requests.exceptions.Timeout, requests.exceptions.ConnectionError or the adapter's specific exception class) instead of Exception; update the try/except to import and catch those specific exceptions, retain the same _logger.warning and return behavior on those errors, and let other exceptions propagate so they are not masked (change the except Exception: block in the _import_elements method of connector_onshape/components/importer.py accordingly).
419-422: MD5 for change detection is acceptable but consider alternatives.

Static analysis flags MD5 as insecure, but here it's used for BOM content hashing (change detection), not security. This is a valid use case. However, for consistency with security best practices and to silence linters, consider using SHA-256 or adding a comment explaining the non-cryptographic use.
♻️ Option 1: Add explanatory comment
```diff
+    # MD5 used for change detection only (not security-sensitive)
     bom_hash = hashlib.md5(
         json.dumps(bom_items, sort_keys=True).encode()
     ).hexdigest()
```

♻️ Option 2: Use SHA-256
```diff
-    bom_hash = hashlib.md5(
+    bom_hash = hashlib.sha256(
         json.dumps(bom_items, sort_keys=True).encode()
     ).hexdigest()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 419 - 422, Replace the MD5-based BOM hash with SHA-256 to satisfy linters and follow best practices: change the hashlib.md5(...) call that assigns bom_hash to use hashlib.sha256(...) and keep the same json.dumps(..., sort_keys=True).encode() input and .hexdigest() output; alternatively, if you decide to keep MD5, add a clear comment immediately above the bom_hash assignment explaining MD5 is used only for non-cryptographic change detection so the linter warning can be ignored.
487-507: Silently skipping unmatched BOM components may warrant logging.

When `_find_component_product` returns `None` (line 494), the component is silently skipped. While `match_score` captures the ratio of matched lines, a debug log could help troubleshoot import issues.

♻️ Add debug logging for skipped components

```diff
 product = self._find_component_product(backend, item)
 if not product:
+    _logger.debug(
+        "Skipping BOM component: no match for %s (part: %s)",
+        item.get("name", ""),
+        item.get("partNumber", ""),
+    )
     continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/components/importer.py` around lines 487 - 507, In _build_bom_lines, when product = self._find_component_product(backend, item) returns None you should add a debug log before continuing that records identifying details of the skipped component (e.g. item.get("name"), item.get("part_number") or item.get("id"), and item.get("quantity")) and the backend/context so imports can be diagnosed; insert the log right inside the if not product: branch and use the repository's logger instance (e.g. self.logger.debug(...) or the class logger) so the message is emitted at debug level without changing control flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_onshape/controllers/webhook.py`:
- Around line 27-28: The docstring claims we "re-fetch from API" but
_handle_workflow_transition and _handle_revision_created still trust
payload.get("transitionName"); update implementation to match the docstring by,
after validating the HMAC in the webhook handler, calling the Onshape API to
fetch the authoritative workflow/revision state (use the same identifiers from
the payload to query the API) and derive the transition/state from that response
instead of payload.get("transitionName"); ensure _handle_workflow_transition and
_handle_revision_created accept and use the re-fetched API object (or adjust
their signatures to perform the fetch internally) so binding updates come only
from the API, or alternatively, if you prefer not to re-fetch, change the module
docstring to remove the "re-fetch from API" claim and document that handlers
trust the webhook payload.
In `@connector_onshape/wizards/onshape_import_wizard.py`:
- Around line 49-59: The jobs for import_type are currently queued independently
which can lead to out-of-order execution; update the sequence to chain jobs
using depends_on so each step waits for its prerequisite: call
backend.with_delay().action_import_documents() and store it (e.g., doc_job),
then when import_type includes products call
backend.with_delay(depends_on=doc_job).action_import_products() (store
prod_job), when import_type includes boms call
backend.with_delay(depends_on=prod_job).action_import_boms() (store bom_job),
and for full call
backend.with_delay(depends_on=bom_job).action_export_part_numbers(); ensure you
only reference the prior job variable when that prior step was created.
---
Duplicate comments:
In `@connector_onshape/components/adapter.py`:
- Around line 316-324: read_thumbnail currently returns bytes because
base64.b64encode returns bytes, which breaks Odoo Binary fields and JSON
serialization; update read_thumbnail to return a str by decoding the base64
bytes (e.g., base64.b64encode(resp.content).decode('ascii')) so it returns a
plain string when resp.status_code == 200 and still return None otherwise;
locate this change in the read_thumbnail method to apply the decode step.
In `@connector_onshape/controllers/webhook.py`:
- Around line 85-93: The _validate_signature function currently computes HMAC
over raw_body only; change it to compute the HMAC over the string formed by
"<timestamp>.<raw_body>" (timestamp + "." + raw_body) using the provided secret
and Base64-encode the result, and update its signature to accept the timestamp
(or update its caller to extract the X-onshape-webhook-timestamp header and pass
it into _validate_signature). Also update any callers of _validate_signature
(webhook request handler) to extract the X-onshape-webhook-timestamp header and
pass it in, and modify the tests to compute and assert signatures using the
timestamp-prefixed message so tests validate the real Onshape signing scheme.
---
Nitpick comments:
In `@connector_onshape/components/adapter.py`:
- Around line 55-59: The code uses datetime.utcnow() to build the request date
(variable date) which is deprecated; update the date generation to use a
timezone-aware UTC datetime (e.g., datetime.now(timezone.utc)) and adjust the
import to include timezone from datetime so the RFC-1123 formatted string
remains the same; change the date assignment near the
_get_backend()/method/nonce block (the variable name date in adapter.py) to use
a timezone-aware now() call to future-proof against Python 3.12+.
In `@connector_onshape/components/importer.py`:
- Around line 101-152: The bare except in _import_elements around
adapter.read_document_elements should be narrowed to catch only expected
network/API errors (e.g., requests.RequestException,
requests.exceptions.Timeout, requests.exceptions.ConnectionError or the
adapter's specific exception class) instead of Exception; update the try/except
to import and catch those specific exceptions, retain the same _logger.warning
and return behavior on those errors, and let other exceptions propagate so they
are not masked (change the except Exception: block in the _import_elements
method of connector_onshape/components/importer.py accordingly).
- Around line 419-422: Replace the MD5-based BOM hash with SHA-256 to satisfy
linters and follow best practices: change the hashlib.md5(...) call that assigns
bom_hash to use hashlib.sha256(...) and keep the same json.dumps(...,
sort_keys=True).encode() input and .hexdigest() output; alternatively, if you
decide to keep MD5, add a clear comment immediately above the bom_hash
assignment explaining MD5 is used only for non-cryptographic change detection so
the linter warning can be ignored.
- Around line 487-507: In _build_bom_lines, when product =
self._find_component_product(backend, item) returns None you should add a debug
log before continuing that records identifying details of the skipped component
(e.g. item.get("name"), item.get("part_number") or item.get("id"), and
item.get("quantity")) and the backend/context so imports can be diagnosed;
insert the log right inside the if not product: branch and use the repository's
logger instance (e.g. self.logger.debug(...) or the class logger) so the message
is emitted at debug level without changing control flow.
In `@connector_onshape/components/mapper.py`:
- Around line 64-91: The map_record method accepts an unused parameter element;
to indicate it's intentionally unused and silence linters, rename the parameter
to _element in the method signature of map_record (and update any internal
references if any), or if you prefer to keep the original name for API
compatibility, add a leading underscore usage (e.g., assign element to _ =
element) at the start of map_record; reference the map_record function in
mapper.py and ensure callers still pass the same argument shape if you change
the signature.
- Around line 54-62: PROPERTY_FIELD_MAP is currently a mutable-looking class
attribute; annotate it with typing.ClassVar to mark it as an
immutable/intentional class-level constant for static analysis. Import ClassVar
from typing if not already imported, then change the declaration to include the
ClassVar[...] type hint (e.g., PROPERTY_FIELD_MAP: ClassVar[dict[str, str]] =
{...}) so linters recognize it as a non-instance mutable class constant in the
mapper.py module.
In `@connector_onshape/controllers/webhook.py`:
- Line 38: The webhook function signature includes an unused kwargs parameter;
update the webhook method in connector_onshape/controllers/webhook.py (the
function named webhook) to remove the unused kwargs parameter or rename it to
_kwargs to indicate intentional non‑use, and update any callers if necessary to
match the new signature so there are no mismatched argument errors.
- Around line 181-189: The _handle_version_created handler declares an unused
parameter named backend; to fix, either rename the parameter to _backend (or
prefix it with an underscore) in the _handle_version_created signature to
indicate it is intentionally unused, or add a short comment inside the
_handle_version_created function documenting that backend is kept for handler
interface uniformity; update any callers if they rely on the exact signature.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- connector_onshape/components/adapter.py
- connector_onshape/components/importer.py
- connector_onshape/components/mapper.py
- connector_onshape/controllers/webhook.py
- connector_onshape/models/onshape_backend.py
- connector_onshape/wizards/onshape_import_wizard.py
🚧 Files skipped from review as they are similar to previous changes (1)
- connector_onshape/models/onshape_backend.py
Security: Validate HMAC signature, then re-fetch from API
(never trust webhook payload directly).
Docstring security claim does not match implementation.
The docstring states "re-fetch from API (never trust webhook payload directly)", but the handlers (_handle_workflow_transition, _handle_revision_created) trust and directly use payload.get("transitionName") to update binding states without re-fetching from the Onshape API. Either update the handlers to re-fetch authoritative state, or correct the docstring to accurately reflect the current behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/controllers/webhook.py` around lines 27 - 28, The docstring
claims we "re-fetch from API" but _handle_workflow_transition and
_handle_revision_created still trust payload.get("transitionName"); update
implementation to match the docstring by, after validating the HMAC in the
webhook handler, calling the Onshape API to fetch the authoritative
workflow/revision state (use the same identifiers from the payload to query the
API) and derive the transition/state from that response instead of
payload.get("transitionName"); ensure _handle_workflow_transition and
_handle_revision_created accept and use the re-fetched API object (or adjust
their signatures to perform the fetch internally) so binding updates come only
from the API, or alternatively, if you prefer not to re-fetch, change the module
docstring to remove the "re-fetch from API" claim and document that handlers
trust the webhook payload.
```python
if self.import_type in ("documents", "products", "boms", "full"):
    backend.with_delay().action_import_documents()

if self.import_type in ("products", "boms", "full"):
    backend.with_delay().action_import_products()

if self.import_type in ("boms", "full"):
    backend.with_delay().action_import_boms()

if self.import_type == "full":
    backend.with_delay().action_export_part_numbers()
```
🧩 Analysis chain
🌐 Web query:
OCA queue_job depends_on dependency mechanism
💡 Result:
In OCA queue_job, "dependencies" are implemented as a job graph:

- How dependencies are stored
  - Each `queue.job` record has:
    - `graph_uuid`: groups all jobs that belong to the same dependency graph. (odoo-connector.com)
    - `dependencies` (serialized/JSON) with two UUID lists:
      - `depends_on`: parent jobs
      - `reverse_depends_on`: children jobs (odoo-connector.com)
- How a dependency is created (in memory, before storing)
  - When you link jobs (e.g., via delayable graph primitives like `on_done`, `chain()`, `group()`), the job object calls `add_depends(jobs)`:
    - adds parent UUIDs to `depends_on`
    - adds the child UUID to each parent's `reverse_depends_on`
    - if any parent isn't `done`, the child job state becomes `wait_dependencies`. (odoo-connector.com)
  - When the job is stored, those UUID sets are persisted into the `dependencies` JSON. (odoo-connector.com)
- How waiting jobs get released
  - When a parent job finishes, `enqueue_waiting()` runs a SQL update that finds its children via `reverse_depends_on` and sets a child from `wait_dependencies` → `pending` only if all its parents are `done`. (odoo-connector.com)
- Behavior on failures
  - If a job in a graph fails, dependent jobs stay in `wait_dependencies` until the parent eventually becomes successful (retry succeeds or it's manually marked done); otherwise they can be canceled. (pypi.org)
Citations:
- 1: https://odoo-connector.com/12.0/_modules/odoo/addons/queue_job/models/queue_job.html
- 2: https://odoo-connector.com/12.0/_modules/odoo/addons/queue_job/models/queue_job.html
- 3: https://odoo-connector.com/12.0/_modules/odoo/addons/queue_job/job.html?utm_source=openai
- 4: https://odoo-connector.com/12.0/_modules/odoo/addons/queue_job/job.html
- 5: https://odoo-connector.com/12.0/_modules/odoo/addons/queue_job/job.html
- 6: https://pypi.org/project/odoo-addon-queue-job/?utm_source=openai
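The release rule summarized above (a child in `wait_dependencies` moves to `pending` only once every parent is `done`) can be sketched as a toy model; the field names mirror the description, but this is not the OCA implementation:

```python
# Toy model of the queue_job dependency-release rule: a waiting child becomes
# pending only when every parent job it depends_on is done. Not the OCA code.

def release_waiting(jobs):
    """Mutate `jobs` in place, releasing children whose parents are all done."""
    by_uuid = {job["uuid"]: job for job in jobs}
    for job in jobs:
        if job["state"] != "wait_dependencies":
            continue
        parents = [by_uuid[uuid] for uuid in job["depends_on"]]
        if all(parent["state"] == "done" for parent in parents):
            job["state"] = "pending"
    return jobs


jobs = [
    {"uuid": "docs", "state": "done", "depends_on": []},
    {"uuid": "products", "state": "wait_dependencies", "depends_on": ["docs"]},
    {"uuid": "boms", "state": "wait_dependencies", "depends_on": ["products"]},
]
release_waiting(jobs)
# "products" is released; "boms" still waits because "products" is not done yet
```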
🏁 Script executed:
# Search for with_delay implementation and queue_job usage patterns
rg "with_delay|queue_job" --type py -A 3 -B 1 | head -100Repository: kencove/connector
Length of output: 6476
🏁 Script executed:
# Find the connector_onshape module to understand the backend implementation
fd "queue_job" --type fRepository: kencove/connector
Length of output: 239
🏁 Script executed:
# Search for action_import_documents, action_import_products, action_import_boms, action_export_part_numbers implementations
rg "def action_import_documents|def action_import_products|def action_import_boms|def action_export_part_numbers" --type pyRepository: kencove/connector
Length of output: 494
🏁 Script executed:
cat -n connector_onshape/models/onshape_backend.py | sed -n '1,100p'Repository: kencove/connector
Length of output: 4445
🏁 Script executed:
# Find the action_import_* method definitions to understand their actual dependencies
rg "def action_import_documents|def action_import_products|def action_import_boms|def action_export_part_numbers" -A 20 connector_onshape/models/onshape_backend.pyRepository: kencove/connector
Length of output: 1951
🏁 Script executed:
cat -n connector/models/queue_job.pyRepository: kencove/connector
Length of output: 1901
🏁 Script executed:
# Check if with_delay is being used with any parameters like depends_on
rg "with_delay\(" -B 2 -A 5 connector_onshape/Repository: kencove/connector
Length of output: 4704
🏁 Script executed:
rg "class.*batch.*importer" -A 30 connector_onshape/components/importer.py | head -60Repository: kencove/connector
Length of output: 43
🏁 Script executed:
# Check if there's any error handling or dependency checks in the batch importers
rg "def run" -A 15 connector_onshape/components/importer.py | head -50Repository: kencove/connector
Length of output: 1680
Chain import jobs to enforce proper execution order.

The four jobs are queued independently, without dependencies. Since `action_import_products` requires documents to be imported first, and `action_import_boms` requires products, out-of-order execution could cause incomplete imports or failures. Use `depends_on` to enforce sequential execution:

Example

```python
doc_job = backend.with_delay().action_import_documents()
if self.import_type in ("products", "boms", "full"):
    prod_job = backend.with_delay(depends_on=doc_job).action_import_products()
if self.import_type in ("boms", "full"):
    bom_job = backend.with_delay(depends_on=prod_job).action_import_boms()
if self.import_type == "full":
    backend.with_delay(depends_on=bom_job).action_export_part_numbers()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_onshape/wizards/onshape_import_wizard.py` around lines 49 - 59, The
jobs for import_type are currently queued independently which can lead to
out-of-order execution; update the sequence to chain jobs using depends_on so
each step waits for its prerequisite: call
backend.with_delay().action_import_documents() and store it (e.g., doc_job),
then when import_type includes products call
backend.with_delay(depends_on=doc_job).action_import_products() (store
prod_job), when import_type includes boms call
backend.with_delay(depends_on=prod_job).action_import_boms() (store bom_job),
and for full call
backend.with_delay(depends_on=bom_job).action_export_part_numbers(); ensure you
only reference the prior job variable when that prior step was created.
🧹 Nitpick comments (1)
connector_onshape/__manifest__.py (1)
35-36: Consider marking this as a technical module in the Apps listing.

If this addon is intended as infrastructure rather than an end-user app, setting `application` to `False` reduces Apps menu noise.

Suggested tweak:

```diff
- "application": True,
+ "application": False,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_onshape/__manifest__.py` around lines 35 - 36, The manifest currently sets "application": True which exposes the addon as an end-user app; if this connector is infrastructure/technical-only, change the manifest's "application" flag to False in the __manifest__.py so the module is treated as a technical module (leave "installable": True as-is).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@connector_onshape/__manifest__.py`:
- Around line 35-36: The manifest currently sets "application": True which
exposes the addon as an end-user app; if this connector is
infrastructure/technical-only, change the manifest's "application" flag to False
in the __manifest__.py so the module is treated as a technical module (leave
"installable": True as-is).
Force-pushed 8bb5018 to d263c9e (Compare)
New OCA-style connector module for bidirectional synchronization between Odoo and the Onshape cloud CAD/PLM platform.

Features:
- HMAC and OAuth2 authentication with the Onshape API
- Document, element, and part discovery from the Onshape workspace
- Product binding with 4-strategy SKU matching (exact, extension strip, version strip, case-insensitive)
- Assembly BOM import with component match scoring
- Part number/description export from Odoo to Onshape metadata
- Webhook receiver for real-time change notifications
- Event listener for auto-export on product changes
- Queue job integration for async processing with retry patterns
- Cron jobs for scheduled background synchronization
- Import wizard for guided document/product/BOM imports
- Rate limit handling and API quota exhaustion detection (402)

Models: onshape.backend, onshape.document, onshape.document.element, onshape.product.product, onshape.mrp.bom

Test suite: 51 test methods covering backend, adapter, importers, exporters, mappers, binder, webhooks, listener, and wizard.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
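The four SKU-matching strategies named in the commit message could be sketched roughly as follows; the strategy order comes from the commit message, while the helper name, regexes, and suffix conventions are assumptions:

```python
import re

# Hedged sketch of 4-strategy SKU matching: exact, extension strip,
# version strip, case-insensitive. Regexes and names are illustrative.

def match_sku(onshape_name, skus):
    """Return the first SKU matched by any strategy, or None."""
    exact = set(skus)
    lower = {sku.lower(): sku for sku in skus}
    # Strategy 2: strip a trailing file extension such as ".step" or ".sldprt"
    no_ext = re.sub(r"\.[A-Za-z0-9]+$", "", onshape_name)
    # Strategy 3: additionally strip a version suffix such as "_v2" or "-Rev B"
    no_ver = re.sub(r"([ _-][vV]?\d+|[ _-][Rr]ev\s*\w+)$", "", no_ext)
    for name in (onshape_name, no_ext, no_ver):  # strategies 1-3
        if name in exact:
            return name
    for name in (onshape_name, no_ext, no_ver):  # strategy 4
        if name.lower() in lower:
            return lower[name.lower()]
    return None


skus = ["BRACKET-100", "plate-200"]
match_sku("BRACKET-100.step", skus)  # extension strip -> "BRACKET-100"
match_sku("BRACKET-100_v2", skus)    # version strip   -> "BRACKET-100"
match_sku("PLATE-200", skus)         # case-insensitive -> "plate-200"
```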
Add OAuth2 Authorization Code flow as an alternative to HMAC authentication. Includes token exchange, refresh, CSRF protection, and context-aware error handling for API quota (402) responses.
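The refresh part of that flow is a standard RFC 6749 `refresh_token` grant; a minimal sketch of the request body it would POST follows. The token URL shown is Onshape's commonly documented OAuth endpoint, and the helper is illustrative, not the module's code:

```python
# Sketch of an OAuth2 refresh-token request body (RFC 6749 section 6). The URL
# is Onshape's commonly documented token endpoint; credentials are placeholders.

ONSHAPE_TOKEN_URL = "https://oauth.onshape.com/oauth/token"


def build_refresh_request(client_id, client_secret, refresh_token):
    """Return (url, form_data) suitable for an HTTP POST with form encoding."""
    return ONSHAPE_TOKEN_URL, {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }


url, data = build_refresh_request("my-client-id", "my-secret", "stored-refresh-token")
```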
Force-pushed d263c9e to ce17d50 (Compare)
setuptools 82+ removed pkg_resources, which broke the setuptools-odoo-make-default and setuptools-odoo-get-requirements hooks. Version 3.3.2 pins setuptools<82 to restore compatibility. Co-Authored-By: Claude Opus 4.6 <[email protected]>
Force-pushed 48b7138 to f5dc996 (Compare)
Force-pushed f5dc996 to ff9de7b (Compare)
Summary
Models
- onshape.backend
- onshape.document
- onshape.product.product
- onshape.mrp.bom

Metadata Fields on Product Binding

- onshape_part_number
- onshape_description
- onshape_material
- onshape_appearance
- onshape_revision
- onshape_mass
- onshape_volume
- onshape_surface_area
- onshape_author
- onshape_designer
- onshape_vendor
- onshape_project
- onshape_custom_properties

Dependencies
All already installed in the Kencove 16.0 stack.
Deployment notes
After merging, add to `addons.yaml` under the `connector:` section.

No new Python dependencies required. Onshape API credentials (HMAC or OAuth2) must be configured in the backend form after installation.
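Assuming a Doodba-style `addons.yaml` (repo key mapping to a module list), the entry might look like:

```yaml
# Hypothetical addons.yaml fragment; key layout assumed from Doodba conventions
connector:
  - connector_onshape
```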
Test plan
`invoke test --modules=connector_onshape`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Tests
Note
Medium Risk
Large new integration surface (external API calls, webhooks, background jobs) that can affect product/BOM data and create records if misconfigured, but is isolated to a new addon with tests and explicit activation gating.
Overview
Adds a new `connector_onshape` Odoo addon that introduces bidirectional sync with Onshape via the OCA Connector framework, including backend configuration, queue-driven imports (documents/parts/BOMs), and product metadata export back to Onshape.

The PR implements an Onshape API adapter with HMAC and OAuth2 header support plus retry/rate-limit handling, product matching/auto-creation and BOM import with change detection + match scoring, and a webhook endpoint (HMAC-validated) to trigger re-imports and update lifecycle state. It also adds full UI (menus, forms, stat buttons), security groups/ACLs, cron + queue_job wiring, an import wizard, and a comprehensive automated test suite.
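The HMAC validation step mentioned here is generic enough to sketch; header extraction, the hex digest format, and the shared-secret handling are assumptions about the module:

```python
import hashlib
import hmac

# Generic constant-time HMAC-SHA256 webhook signature check. The hex-digest
# format and shared-secret handling are assumptions, not the module's code.

def signature_valid(secret, raw_body, received_sig):
    """Compare the expected hex HMAC of the raw body against the header value."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the match position via timing
    return hmac.compare_digest(expected, received_sig)
```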
Written by Cursor Bugbot for commit 339471f.