[ADD] connector_amazon: Amazon Seller Central connector#3
Conversation
📝 Walkthrough

Adds a new Odoo Amazon Selling Partner API connector (connector_amazon): module manifest, package exports, SP‑API adapters, backend/shop models and auth, feed/order/price/notification models, webhook SNS controller, views/menus/crons, docs, and an extensive test suite.
Sequence Diagram(s)

sequenceDiagram
participant Shop as Amazon Shop
participant Backend as amz.backend
participant OrdersAPI as OrdersAdapter
participant Mapper as OrderImportMapper
participant DB as Odoo DB
Shop->>Backend: request list_orders(marketplace_id, created_after)
Backend->>OrdersAPI: _call_sp_api -> list_orders endpoint
OrdersAPI-->>Backend: orders page + next_token
Backend-->>Shop: orders payload
Shop->>Mapper: map order payload -> mapped values
Mapper->>DB: create/update res.partner / sale.order / amz.sale.order
DB-->>Shop: created/updated records
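The "orders page + next_token" step in the diagram above is cursor pagination. A minimal sketch, assuming a generic `call_sp_api(endpoint, params)` helper and Orders-API-style `Orders`/`NextToken` keys — the adapter in this PR may name things differently:

```python
def fetch_all_orders(call_sp_api, marketplace_id, created_after):
    """Follow SP-API style NextToken pagination until exhausted.

    `call_sp_api` is a stand-in for the adapter's request helper;
    the real method names in this PR may differ.
    """
    orders = []
    next_token = None
    while True:
        if next_token:
            # When a NextToken is present, SP-API expects it alone.
            params = {"NextToken": next_token}
        else:
            params = {
                "MarketplaceIds": marketplace_id,
                "CreatedAfter": created_after,
            }
        page = call_sp_api("/orders/v0/orders", params)
        orders.extend(page.get("Orders", []))
        next_token = page.get("NextToken")
        if not next_token:
            break
    return orders
```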
sequenceDiagram
participant SNS as Amazon SNS
participant Controller as WebhookController
participant Cert as CertFetcher
participant Verifier as SignatureVerifier
participant Log as NotificationLog
participant Queue as Job Queue
SNS->>Controller: POST (SubscriptionConfirmation | Notification)
Controller->>Cert: fetch SigningCertURL (cached)
Cert-->>Controller: x509 certificate
Controller->>Verifier: verify signature (string-to-sign)
Verifier-->>Controller: verified
Controller->>Log: create notification log (raw+parsed)
alt SubscriptionConfirmation
Controller->>SNS: GET SubscribeURL (confirm)
else Notification
Controller->>Queue: enqueue Log.process_notification()
end
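The "string-to-sign" the verifier checks in this flow is built from specific SNS fields in a documented order. A stdlib-only sketch of that canonical string, per the AWS SNS SignatureVersion 1 verification docs — the controller in this PR may construct it differently:

```python
def sns_string_to_sign(msg: dict) -> str:
    """Build the canonical string that SNS signed (SignatureVersion 1).

    Field order follows the AWS SNS verification documentation; the
    verifier then checks `Signature` against this string using the
    certificate fetched from SigningCertURL (not shown here).
    """
    if msg.get("Type") == "Notification":
        keys = ["Message", "MessageId", "Subject", "Timestamp", "TopicArn", "Type"]
    else:  # SubscriptionConfirmation / UnsubscribeConfirmation
        keys = ["Message", "MessageId", "SubscribeURL", "Timestamp",
                "Token", "TopicArn", "Type"]
    parts = []
    for key in keys:
        if key in msg:  # Subject is optional on notifications
            parts.append(key)
            parts.append(msg[key])
    return "\n".join(parts) + "\n"
```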
sequenceDiagram
participant Shop as Amazon Shop
participant FeedComp as Feed model
participant FeedAdapter as FeedAdapter
participant S3 as Presigned S3
participant Backend as amz.backend
participant Monitor as FeedStatusChecker
Shop->>FeedComp: submit_feed(payload_xml)
FeedComp->>FeedAdapter: create_feed_document()
FeedAdapter->>Backend: POST /documents
Backend-->>FeedAdapter: feedDocumentId + presigned_url
FeedComp->>S3: PUT payload to presigned_url
S3-->>FeedComp: 200 OK
FeedComp->>FeedAdapter: create_feed(feedDocumentId)
FeedAdapter->>Backend: POST /feeds
Backend-->>FeedAdapter: feedId
FeedComp->>Monitor: schedule check_feed_status(feedId)
Monitor->>Backend: GET /feeds/{feedId}
Backend-->>Monitor: processingStatus
Monitor->>FeedComp: update state
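The three-step flow above (create document → PUT payload to the presigned URL → create feed) can be sketched with injected HTTP helpers. Endpoint paths and field names follow the public Feeds API 2021-06-30 shape and are assumptions, not this module's actual adapter API:

```python
def submit_feed(http_post, http_put, payload_xml: str, marketplace_ids):
    """Sketch of the three-step SP-API feed flow from the diagram.

    `http_post`/`http_put` are injected stand-ins for the adapter's
    HTTP helpers; check real paths/fields against the code.
    """
    # Step 1: ask Amazon for a feed document + presigned upload URL.
    doc = http_post(
        "/feeds/2021-06-30/documents",
        {"contentType": "text/xml; charset=UTF-8"},
    )
    # Step 2: upload the payload to the presigned S3 URL.
    status = http_put(doc["url"], payload_xml.encode("utf-8"))
    if status != 200:
        raise RuntimeError("presigned upload failed: %s" % status)
    # Step 3: create the feed referencing the uploaded document.
    feed = http_post("/feeds/2021-06-30/feeds", {
        "feedType": "POST_INVENTORY_AVAILABILITY_DATA",
        "marketplaceIds": marketplace_ids,
        "inputFeedDocumentId": doc["feedDocumentId"],
    })
    return feed["feedId"]
```

The same injection style also keeps the flow unit-testable without network access, which matters for the review note about uploading `feed_content` before `create_feed`.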
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Looks good
Actionable comments posted: 7
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (16)
connector_amazon/README.rst-10-10 (1)
10-10: ⚠️ Potential issue | 🟡 Minor — Replace placeholder digest before merge.
Line [10] still has sha256:PLACEHOLDER_DIGEST, which indicates the generated README metadata is incomplete.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/README.rst` at line 10, replace the placeholder digest string "sha256:PLACEHOLDER_DIGEST" in the generated README metadata with the actual computed SHA-256 digest (keeping the "sha256:<actual_digest>" format) so the metadata is complete before merging.

connector_amazon/readme/USAGE.rst-1-3 (1)
1-3: ⚠️ Potential issue | 🟡 Minor — USAGE content is incorrect for this addon.
Lines [1]–[3] describe a framework/base module, but this PR introduces an operational connector. This will mislead deployment teams.
📝 Proposed documentation fix

-This module does nothing on its own. It is a ground for developing
-advanced connector modules. For further information, please go on:
-http://odoo-connector.com
+Configure an Amazon backend, marketplaces, and at least one shop, then:
+
+1. Authorize SP-API credentials (LWA + IAM role).
+2. Run order import/catalog sync jobs (or scheduled crons).
+3. Use read-only mode for validation in sandbox/live dry runs.
+4. Enable webhook endpoint processing for near real-time updates.
+
+See CONFIGURATION and ARCHITECTURE docs for setup details.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/readme/USAGE.rst` around lines 1 - 3, replace the generic base-framework text with accurate usage for this addon: a one-line description of the connector, required configuration steps (credentials, Odoo settings, API keys), deployment notes, example configuration entries, and common commands to start/sync, referencing the module name (connector_amazon) and any configuration keys or cron jobs it uses so deployment teams can set it up correctly.

connector_amazon/README.rst-67-70 (1)
67-70: ⚠️ Potential issue | 🟡 Minor — Changelog entry is stale for this new 16.0 module.
Lines [67]–[70] reference a 12.0 migration from 2018, which does not reflect this addon's current release context.
📝 Proposed changelog update

-12.0.1.0.0 (2018-11-26)
-~~~~~~~~~~~~~~~~~~~~~~~
-
-* [MIGRATION] from 12.0 branched at rev. 324e006
+16.0.1.0.0 (2026-03-01)
+~~~~~~~~~~~~~~~~~~~~~~~
+
+* Initial release of Amazon Seller Central connector (SP-API).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/README.rst` around lines 67 - 70, remove the stale "12.0.1.0.0 (2018-11-26)" changelog entry and add a 16.0 changelog header with the current date and a brief, accurate note so the changelog reflects this addon's release context.

connector_amazon/readme/HISTORY.rst-14-17 (1)
14-17: ⚠️ Potential issue | 🟡 Minor — Version and date appear outdated.
The changelog shows version 12.0.1.0.0 dated 2018-11-26, but the module manifest declares version 16.0.1.0.0. Consider adding a 16.0.1.0.0 entry documenting the initial release of this connector, or updating the existing entry to reflect the current version.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/readme/HISTORY.rst` around lines 14 - 17, add a release header for 16.0.1.0.0 with the correct date and a brief note (e.g., "Initial release of Amazon connector for 16.0"), or replace the existing 12.0.1.0.0 entry, so the changelog matches the manifest version.

connector_amazon/readme/ARCHITECTURE.rst-16-17 (1)
16-17: ⚠️ Potential issue | 🟡 Minor — Architecture doc paths are out of sync with the actual module layout.
Lines 16-17 reference adapters.py and mappers.py, but this PR exposes backend_adapter.py and mapper.py.
Line 26 also says README.rst is "This file" while this document is ARCHITECTURE.rst.
📝 Proposed doc fix

-│ ├── adapters.py # API request adapters
-│ └── mappers.py # Data transformation mappers
+│ ├── backend_adapter.py # API request adapters
+│ └── mapper.py # Data transformation mappers
@@
-└── README.rst # This file
+└── README.rst # Module overview

Also applies to: 26-26

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/readme/ARCHITECTURE.rst` around lines 16 - 17, update ARCHITECTURE.rst so the listed module filenames match the code: replace references to adapters.py and mappers.py with backend_adapter.py and mapper.py (the actual module names exposed by the PR), correct the description that calls README.rst "This file" to refer to ARCHITECTURE.rst instead, and ensure any other occurrences in the document use the exact module names backend_adapter and mapper.

connector_amazon/tests/README.md-35-39 (1)
35-39: ⚠️ Potential issue | 🟡 Minor — README still uses legacy module/model naming.
Several sections reference amazon.* models and connector_amazon_spapi paths, which no longer match this PR's amz.* namespace and module naming.
📝 Suggested doc corrections (examples)

-Tests for the `amazon.backend` model covering authentication, token management, and API communication.
+Tests for the `amz.backend` model covering authentication, token management, and API communication.
-Tests for the `amazon.shop` model covering order synchronization and stock management.
+Tests for the `amz.shop` model covering order synchronization and stock management.
-Tests for `amazon.sale.order` and `amazon.sale.order.line` models covering order import and synchronization.
+Tests for `amz.sale.order` and `amz.sale.order.line` models covering order import and synchronization.
-cd /path/to/connector_amazon_spapi
+cd /path/to/connector_amazon
-odoo --test-enable -d test_db -i connector_amazon_spapi
+odoo --test-enable -d test_db -i connector_amazon

Also applies to: 67-99, 124-157

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/README.md` around lines 35 - 39 (and 67-99, 124-157), replace legacy references (amazon.backend, amazon.*, connector_amazon_spapi) with their new amz.* equivalents and the connector_amazon module name, and ensure test file names like test_backend.py still correctly describe the amz backend tests or are renamed to match the new convention.

connector_amazon/views/notification_log_view.xml-71-84 (1)
71-84: ⚠️ Potential issue | 🟡 Minor — Consider adding view-level group restrictions to payload and raw_message fields.
The amz.notification.log model is already restricted to base.group_system at the model ACL level, so access is limited to system/admin users. However, adding explicit groups="base.group_system" attributes to the view pages provides defense-in-depth and makes the restriction transparent at the UI layer.
Suggested hardening (lines 71-84, 98-103)

-<page string="Payload" name="payload">
+<page string="Payload" name="payload" groups="base.group_system">
   <field name="payload" widget="ace" options="{'mode': 'json'}" />
 </page>
-<page string="Raw Message" name="raw_message">
+<page string="Raw Message" name="raw_message" groups="base.group_system">
   <field name="raw_message" widget="ace" options="{'mode': 'json'}" />
 </page>

Apply the same pattern to lines 98-103 (raw_message in the popup form).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/views/notification_log_view.xml` around lines 71 - 84, add groups="base.group_system" to the payload and raw_message pages of the amz.notification.log form view (and to the popup form version of raw_message) so the restriction already enforced by the model ACL is also visible at the UI layer.

connector_amazon/models/feed.py-69-87 (1)
69-87: ⚠️ Potential issue | 🟡 Minor — Potential TypeError if payload_json is unset.
Lines 75-76 access self.payload_json[:1000] and len(self.payload_json) without checking if payload_json is set. If a feed is submitted with an empty payload, slicing the falsy field value raises a TypeError.
🛡️ Proposed defensive fix

 if self.backend_id.read_only_mode:
+    payload_preview = self.payload_json or ""
     _logger.info(
         "[READ-ONLY MODE] Feed %s (%s) would be submitted to Amazon. "
         "Payload preview:\n%s",
         self.id,
         self.feed_type,
-        self.payload_json[:1000]
-        + ("..." if len(self.payload_json) > 1000 else ""),
+        payload_preview[:1000]
+        + ("..." if len(payload_preview) > 1000 else ""),
     )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/feed.py` around lines 69 - 87, guard against payload_json being unset before slicing or measuring it in the read-only branch: compute a safe preview (empty string if falsy) and use it when building the log message, so the block cannot raise.

connector_amazon/tests/test_shop.py-10-10 (1)
10-10: ⚠️ Potential issue | 🟡 Minor — Missing @tagged decorator on test class.
Unlike other test classes in this PR, TestAmazonShop is missing the @tagged("post_install", "-at_install") decorator. This inconsistency may affect when tests run.
🔧 Suggested fix

+from odoo.tests import tagged
+
 from . import common

+@tagged("post_install", "-at_install")
 class TestAmazonShop(common.CommonConnectorAmazonSpapi):

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_shop.py` at line 10, place @tagged("post_install", "-at_install") immediately above the TestAmazonShop class definition so the test runs in the same phases as the other test classes.

connector_amazon/models/shop.py-666-670 (1)
666-670: ⚠️ Potential issue | 🟡 Minor — Silent exception swallowing in cron methods hinders debugging.
The try-except-continue blocks at lines 666-670 and 686-688 catch all exceptions without logging. This makes it difficult to diagnose issues when jobs fail to queue. Static analysis correctly flags this (S112).
🔧 Add logging for exceptions

 try:
     binding.with_delay().push_shipment()
 except Exception:
-    # let the job record the error; continue others
+    _logger.exception(
+        "Failed to queue shipment push for order %s", binding.external_id
+    )
     continue

 try:
     shop.with_delay().sync_competitive_prices(
         updated_since=shop.last_price_sync
     )
 except Exception:
-    # Let job queue record errors; continue to next shop
+    _logger.exception(
+        "Failed to queue competitive pricing sync for shop %s", shop.name
+    )
     continue

Also applies to: 686-688

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` around lines 666 - 670 (and 686-688), log the exception before continuing in both try/except blocks: catch Exception and call _logger.exception(...) with a message that includes the binding/shop identifier, then continue, so failures to queue jobs are recorded for debugging.

connector_amazon/models/notification_log.py-250-271 (1)
250-271: ⚠️ Potential issue | 🟡 Minor — Auto-creating bindings may have unintended side effects.
When handling LISTINGS_ITEM_STATUS_CHANGE, the code automatically creates new amz.product.binding records if a product with matching default_code exists (lines 252-271). This auto-creation enables sync_stock=True and sync_price=True by default, which could trigger unexpected stock/price pushes to Amazon for products the user didn't explicitly configure.
Consider making this behavior configurable at the backend level or defaulting the sync flags to False for auto-created bindings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/notification_log.py` around lines 250 - 271, change the LISTINGS_ITEM_STATUS_CHANGE handling so auto-created amz.product.binding records consult a backend-level setting (e.g., a boolean like backend_id.auto_create_bindings) and default sync_stock and sync_price to False, so auto-created records do not enable pushes by default.

connector_amazon/models/shop.py-426-426 (1)
426-426: ⚠️ Potential issue | 🟡 Minor — Unused expression: dead code.
Line 426 calls row.get("status", "").lower() but the result is never assigned or used. This appears to be leftover from development.
🔧 Remove dead code

 for row in reader:
     sku = row.get("seller-sku")
     asin = row.get("asin1")
-    row.get("status", "").lower()
     # Skip if no SKU or inactive
     if not sku:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` at line 426, either delete the stray row.get("status", "").lower() expression or assign its result (e.g., status = row.get("status", "").lower()) and use it in the inactive-row check, so no unused expression remains.

connector_amazon/models/backend.py-454-458 (1)
454-458: ⚠️ Potential issue | 🟡 Minor — Lambda captures loop variable by reference: potential bug.
The lambda at line 457 captures marketplace_id by reference. If used outside the loop iteration (e.g., in lazy evaluation), it would always use the last value of marketplace_id. While this specific usage appears safe since filtered() evaluates immediately, it's a code smell flagged by static analysis.
🔧 Fix by capturing value in lambda parameter

 # Prefer the already-linked marketplaces to avoid missing the
 # record when the database search ignores an unflushed cache.
-existing = self.marketplace_ids.filtered(
-    lambda m: m.marketplace_id == marketplace_id
-)
+existing = self.marketplace_ids.filtered(
+    lambda m, mp_id=marketplace_id: m.marketplace_id == mp_id
+)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/backend.py` around lines 454 - 458, bind the loop variable explicitly in the lambda passed to self.marketplace_ids.filtered (e.g., lambda m, mid=marketplace_id: m.marketplace_id == mid) so the closure no longer captures marketplace_id by reference.

connector_amazon/components/backend_adapter.py-616-627 (1)
616-627: ⚠️ Potential issue | 🟡 Minor — ARN parsing in create_destination may raise IndexError on malformed ARN.
Lines 621-622 parse the ARN by splitting on : and accessing indices [4] and [3]. If the ARN doesn't follow the expected format arn:partition:service:region:account-id:resource, this will raise an IndexError.
🔧 Add validation for ARN format

 elif resource_type == "EVENT_BRIDGE":
+    arn_parts = arn.split(":")
+    if len(arn_parts) < 5:
+        raise ValueError(
+            f"Invalid ARN format: {arn}. "
+            "Expected format: arn:partition:service:region:account-id:resource"
+        )
     payload = {
         "name": name,
         "resourceSpecification": {
             "eventBridge": {
-                "accountId": arn.split(":")[4],  # Extract account ID from ARN
-                "region": arn.split(":")[3],  # Extract region from ARN
+                "accountId": arn_parts[4],
+                "region": arn_parts[3],
             }
         },
     }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/backend_adapter.py` around lines 616 - 627, validate the ARN structure in the EVENT_BRIDGE branch of create_destination before extracting region/account (split on ":" and check the segment count), raise a clear ValueError mentioning the bad arn if invalid, and build the payload from the validated parts so debugging is easier.

connector_amazon/models/shop.py-779-816 (1)
779-816: ⚠️ Potential issue | 🟡 Minor — Inventory feed XML construction should escape special characters.
The _build_inventory_feed_xml method (lines 779-816) constructs XML by string concatenation. If merchant_id or seller_sku contain XML special characters (<, >, &, etc.), the generated XML would be malformed. Consider using proper XML escaping.
🔧 Use xml.sax.saxutils.escape for safety

+from xml.sax.saxutils import escape

 def _build_inventory_feed_xml(self, bindings):
     """Build XML feed for inventory updates per Amazon specification."""
-    merchant_id = self.backend_id.lwa_client_id
+    merchant_id = escape(self.backend_id.lwa_client_id or "")
     xml_lines = [
         '<?xml version="1.0" encoding="UTF-8"?>',
         # ...
     ]
     for idx, binding in enumerate(bindings, start=1):
         available_qty = max(0, binding.odoo_id.qty_available - binding.stock_buffer)
         xml_lines.extend(
             [
                 "  <Message>",
                 "    <MessageID>%d</MessageID>" % idx,
                 "    <Inventory>",
-                "      <SKU>%s</SKU>" % binding.seller_sku,
+                "      <SKU>%s</SKU>" % escape(binding.seller_sku),
                 "      <Quantity>",

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` around lines 779 - 816, import xml.sax.saxutils.escape and apply it to merchant_id, binding.seller_sku, and any other text-node inputs before inserting them into the XML lines (keeping numeric fields as integers, converted to str when joining), so the feed built by _build_inventory_feed_xml stays well-formed when values contain &, <, > or quotes.

connector_amazon/models/backend.py-303-310 (1)
303-310: ⚠️ Potential issue | 🟡 Minor — Use timezone-consistent datetime comparison.
Line 306 compares self.token_expires_at (an Odoo datetime field) with datetime.now() (naive local datetime). In Odoo, datetime fields are stored as UTC and should be compared using fields.Datetime.now() for consistency.
🔧 Proposed fix

 def _get_access_token(self):
     """Get valid access token, refreshing if necessary"""
     self.ensure_one()
     if (
         not self.access_token
         or not self.token_expires_at
-        or self.token_expires_at <= datetime.now()
+        or self.token_expires_at <= fields.Datetime.now()
     ):
         self._refresh_access_token()
     return self.access_token

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/backend.py` around lines 303 - 310, compare self.token_expires_at against fields.Datetime.now() (UTC) instead of the naive local datetime.now() when deciding whether to call self._refresh_access_token(), importing fields if needed.
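A quick illustration of why this finding prefers fields.Datetime.now(): Odoo stores datetimes as naive UTC, while datetime.now() is naive local time, so the expiry check can silently shift by the server's UTC offset — and mixing naive with timezone-aware values fails outright:

```python
from datetime import datetime, timezone

# Naive local time vs aware UTC time: Python refuses to compare them,
# which is the loud version of the bug; comparing two naive values
# from different timezones is the quiet version.
naive = datetime.now()
aware = datetime.now(timezone.utc)
try:
    naive < aware
except TypeError as exc:
    print("mixed comparison rejected:", exc)
```

Inside Odoo the safe pattern is fields.Datetime.now(), which returns naive UTC and therefore compares correctly with stored field values.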
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_amazon/components/backend_adapter.py`:
- Around line 209-229: The create_inventory_feed method currently ignores the
feed_content parameter; fix by uploading feed_content to the feed document
returned by feed_adapter.create_feed_document before calling create_feed.
Specifically, after calling feed_adapter.create_feed_document() and obtaining
feedDocumentId (and the returned upload URL/put info if present), call the
adapter method that uploads content (e.g., feed_adapter.upload_feed_document or
the appropriate upload function on the feed adapter) with feed_content and the
document info/URL, then proceed to call
feed_adapter.create_feed("POST_INVENTORY_AVAILABILITY_DATA", feedDocumentId,
marketplace_ids); ensure any needed serialization/encoding of feed_content is
applied and handle errors from the upload step.
In `@connector_amazon/data/ir_cron.xml`:
- Around line 4-66: cron_push_shipments currently is an instance method but
invoked as model.cron_push_shipments() — add the `@api.model` decorator and
refactor cron_push_shipments to iterate over shops (like cron_sync_orders() /
cron_sync_competitive_prices()) instead of accessing self.backend_id on a
multi-recordset; additionally, to avoid noisy failures, add explicit credential
validation at the start of each cron method (cron_push_shipments,
cron_sync_orders, cron_push_stock, cron_sync_competitive_prices,
cron_sync_catalog_bulk) to check required backend fields (e.g.,
backend_id.lwa_client_secret, backend_id.lwa_refresh_token) and skip/return when
credentials are missing, or alternatively mark the cron records inactive by
default in the XML if backends are unconfigured.
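The credential-validation guard suggested for the crons can be sketched as a pure function. The field names (lwa_client_secret, lwa_refresh_token) come from the finding above; the dict shape is an assumption for illustration, not the module's actual records:

```python
def runnable_shops(shops):
    """Filter out shops whose backend lacks SP-API credentials, as the
    review suggests doing at the top of each cron method.

    Each shop is a dict with a "backend" mapping here; in the module
    this would be shop.backend_id with the same required fields.
    """
    required = ("lwa_client_secret", "lwa_refresh_token")
    ready = []
    for shop in shops:
        backend = shop["backend"]
        if all(backend.get(field) for field in required):
            ready.append(shop)
        # Missing credentials: skip silently (or log), never raise,
        # so one unconfigured backend cannot break the whole cron run.
    return ready
```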
In `@connector_amazon/models/competitive_price.py`:
- Around line 202-213: The code writes self.listing_price into the pricelist
item without checking currency; ensure the amount matches
shop.pricelist_id.currency_id by checking if self.currency_id ==
shop.pricelist_id.currency_id and, if not, convert the amount before building
vals (use currency conversion such as
self.currency_id._convert(self.listing_price, shop.pricelist_id.currency_id,
shop.company_id or self.env.company, fields.Date.today()) or similar), then set
"fixed_price" to the converted value when creating/updating pricelist_item
(refer to vals, pricelist_item, self.listing_price, self.currency_id,
shop.pricelist_id.currency_id).
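A pure-Python analogue of the conversion this finding asks for — Odoo's res.currency._convert consults company rate tables; the rates dict here is a stand-in for illustration only:

```python
def convert_amount(amount, from_ccy, to_ccy, rates, precision=2):
    """Skip conversion when currencies match, otherwise go through a
    common base, mirroring the shape of res.currency._convert.

    `rates` maps currency code -> units per 1 unit of the base
    currency; this is an illustration, not Odoo's API.
    """
    if from_ccy == to_ccy:
        return round(amount, precision)
    in_base = amount / rates[from_ccy]
    return round(in_base * rates[to_ccy], precision)
```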
In `@connector_amazon/models/shop.py`:
- Around line 650-670: The cron_push_shipments method currently assumes a single
record via self.backend_id which breaks when called as a cron or on multiple
shops; change its scope and iterate shops explicitly: either make
cron_push_shipments an `@api.model` method and search for all shop records to
iterate (use shop.backend_id inside the loop), or keep it as a recordset method
and loop "for shop in self:" and use shop.backend_id when building the
amz.sale.order search and when calling shop.with_delay().push_shipment(); also
ensure you handle empty backend_id safely and preserve the existing
picking/tracking checks and exception swallowing.
In `@connector_amazon/tests/test_ee_coexistence.py`:
- Around line 18-25: The test should not unconditionally call
IrModule.create(...) for "sale_amazon"; instead, query IrModule (the IrModule
recordset) for an existing record with name "sale_amazon" and only create it if
not found to avoid integrity errors; update both occurrences that call
IrModule.create for "sale_amazon" (the block using IrModule =
self.env["ir.module.module"].sudo() and the second occurrence around lines
44-46) to perform a search/filter for name == "sale_amazon" and create the
record only when the search returns empty.
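The search-then-create pattern this prompt describes, as a generic sketch; `registry` is a plain dict standing in for IrModule.search()/create() in the actual test:

```python
def get_or_create(registry, name, make_vals):
    """Idempotent search-then-create: look the record up first and
    only create it when the lookup comes back empty, avoiding the
    integrity error from a duplicate insert.
    """
    record = registry.get(name)
    if record is None:
        record = make_vals(name)
        registry[name] = record
    return record
```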
In `@connector_amazon/tests/test_shop_sync.py`:
- Around line 352-361: Add a helper method named _create_done_picking to the
CommonConnectorAmazonSpapi test class (CommonConnectorAmazonSpapi) that accepts
the order Odoo id (e.g., order_id) and creates a stock.picking record via
self.env["stock.picking"].create(...) with appropriate fields (picking_type_id,
location_id, location_dest_id, origin or sale_id linking to the order, state set
to "done", carrier_id optional, and carrier_tracking_ref unset/None) so tests
like test_cron_push_shipments_skips_no_tracking can call
self._create_done_picking(order.odoo_id) without raising AttributeError; use the
same pattern seen in other tests (test_shop.py/test_order.py) when constructing
pickings.
In `@connector_amazon/views/competitive_price_view.xml`:
- Around line 149-158: The domain filters named "last_7_days" and "last_30_days"
in competitive_price_view.xml use datetime.timedelta; change them to use
relativedelta (from dateutil) for date arithmetic (e.g., replace uses of
datetime.timedelta(days=7/30) with relativedelta(days=7/30) for the
(context_today() - ...).strftime(...) expressions) and ensure relativedelta is
exposed to the domain safe_eval context (or alternatively mirror the simpler
pattern used in notification_log_view.xml that relies only on context_today()
without relative calculations).
---
Minor comments:
In `@connector_amazon/components/backend_adapter.py`:
- Around line 616-627: In create_destination, the EVENT_BRIDGE branch directly
indexes arn.split(":")[3] and [4], which can raise IndexError for malformed
ARNs; validate the ARN structure (e.g., split on ":" and ensure it has at least
6 segments or use a safe split with maxsplit and length check) before extracting
region/account, and if invalid raise a clear ValueError (or handle gracefully)
mentioning the bad arn; update the EVENT_BRIDGE payload construction to use the
validated values and reference the arn and resource_type in the error message so
debugging is easier.
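A defensive ARN parser along the lines of the proposed fix — the function name is illustrative; a full ARN has six colon-separated segments, with region at index 3 and account id at index 4:

```python
def parse_event_bridge_arn(arn: str):
    """Validate arn:partition:service:region:account-id:resource
    before indexing, then return (region, account_id)."""
    parts = arn.split(":")
    if len(parts) < 6 or parts[0] != "arn":
        raise ValueError(
            "Invalid ARN format: %r. Expected "
            "arn:partition:service:region:account-id:resource" % arn
        )
    return parts[3], parts[4]
```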
In `@connector_amazon/models/backend.py`:
- Around line 454-458: The lambda passed to self.marketplace_ids.filtered
captures the loop variable marketplace_id by reference which is a potential bug;
change the lambda to capture the current value explicitly (e.g., use a default
parameter like lambda m, mid=marketplace_id: m.marketplace_id == mid) or replace
the filtered call with an equivalent immediate-evaluating construct that binds
marketplace_id, so existing = self.marketplace_ids.filtered(...) no longer
relies on a closure over the loop variable.
- Around line 303-310: The code compares a possibly timezone-aware Odoo datetime
field (self.token_expires_at) to a naive datetime (datetime.now()), causing
incorrect expiry checks; update the comparison in the method that returns
self.access_token to use Odoo's timezone-aware now (fields.Datetime.now()) when
checking self.token_expires_at and call self._refresh_access_token() if expired
or missing, ensuring you import and use fields.Datetime.now() instead of
datetime.now().
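The closure-capture pitfall behind the `filtered` lambda comment can be demonstrated in plain Python (plain lists stand in for Odoo recordsets here):

```python
marketplace_ids = ["A1PA6795UKMFR9", "ATVPDKIKX0DER"]

# Late binding: each lambda reads the loop variable when *called*,
# so every one of them ends up comparing against the last id.
buggy = [lambda m: m == marketplace_id for marketplace_id in marketplace_ids]
assert buggy[0]("A1PA6795UKMFR9") is False  # surprising!

# Fix: bind the current value through a default argument.
fixed = [
    lambda m, mid=marketplace_id: m == mid
    for marketplace_id in marketplace_ids
]
assert fixed[0]("A1PA6795UKMFR9") is True
assert fixed[1]("ATVPDKIKX0DER") is True
```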
In `@connector_amazon/models/feed.py`:
- Around line 69-87: In the read-only branch inside the Feed model (the block
that checks self.backend_id.read_only_mode), guard against payload_json being
None before slicing or measuring it: compute a safe_preview from
self.payload_json (e.g., empty string if None) and use that when building the
log string and preview length check, then proceed to write the state as before;
update references to self.payload_json in this block so they use the safe value
to avoid a possible AttributeError.
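A minimal sketch of the None-safe preview; `safe_preview` is a hypothetical helper name:

```python
def safe_preview(payload_json, limit=500):
    """Build a log-safe preview of a payload that may be None."""
    text = payload_json or ""  # guard against None before slicing
    preview = text[:limit]
    if len(text) > limit:
        preview += "..."
    return preview
```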
In `@connector_amazon/models/notification_log.py`:
- Around line 250-271: The code auto-creates amz.product.binding with
sync_stock=True and sync_price=True which can push unintended changes; change
the creation logic in the LISTINGS_ITEM_STATUS_CHANGE handling (the block that
searches product by default_code and creates new_binding and sets
self.product_binding_id) to instead consult a backend-level setting (e.g., a new
boolean/selection on backend_id like auto_create_binding and/or
auto_create_binding_default_sync) and default the sync flags to False for
auto-created bindings; if such backend fields don't exist, add a backend field
(backend_id.auto_create_binding_default_sync or backend_id.auto_create_bindings)
and use it to set "sync_stock" and "sync_price" when creating
amz.product.binding so auto-created records do not enable pushes by default.
In `@connector_amazon/models/shop.py`:
- Around line 666-670: The exception handlers that silently swallow errors
around the calls to binding.with_delay().push_shipment() must log the exception
before continuing: change the bare except blocks to catch Exception as e and
call the module logger (e.g., logger.exception(...) or
process_logger.exception(...)) with a clear message that includes the binding
identifier/context and the exception, then continue; apply the same change to
both occurrences (the try/except around binding.with_delay().push_shipment() at
the two mentioned sites) so failures to queue jobs are recorded for debugging.
- Line 426: The expression row.get("status", "").lower() is evaluated but its
result is discarded (dead code). In the shop model's CSV/row parsing code,
either delete the stray expression or assign it to the variable the downstream
logic expects (e.g., status = row.get("status", "").lower()) and update the
uses of status accordingly, so no unused expression remains.
- Around line 779-816: The _build_inventory_feed_xml function builds XML by
concatenating strings and must escape special characters; import and use
xml.sax.saxutils.escape and apply it to merchant_id and binding.seller_sku (and
any other text node inputs) before inserting into the XML lines (e.g., replace
direct uses of merchant_id and binding.seller_sku with escape(str(merchant_id))
and escape(str(binding.seller_sku))); keep numeric fields (available_qty) as
integers but convert to str when joining; this ensures the XML produced by
_build_inventory_feed_xml is well-formed even when values contain &,<,> or
quotes.
In `@connector_amazon/README.rst`:
- Line 10: Replace the placeholder digest string "sha256:PLACEHOLDER_DIGEST" in
the generated README metadata with the actual SHA-256 digest value; locate the
source digest line that currently contains sha256:PLACEHOLDER_DIGEST and update
it to the real computed digest (ensuring the format remains
"sha256:<actual_digest>") so the README.rst metadata is complete before merging.
- Around line 67-70: The README.rst contains a stale changelog entry "12.0.1.0.0
(2018-11-26)" that is incorrect for this 16.0 module; replace or remove that
12.0 migration line and add an appropriate 16.0 changelog header (e.g.,
"16.0.0.0.0" with current date) and a brief, accurate migration note for this
addon so the changelog in README.rst reflects the current release context.
In `@connector_amazon/readme/ARCHITECTURE.rst`:
- Around line 16-17: Update ARCHITECTURE.rst so the listed module filenames
match the code: replace references to adapters.py and mappers.py with
backend_adapter.py and mapper.py (the actual module names exposed by the PR),
and correct the description that calls README.rst "This file" to refer to
ARCHITECTURE.rst instead; ensure any other occurrences in the document use the
exact module names backend_adapter and mapper for clarity.
In `@connector_amazon/readme/HISTORY.rst`:
- Around line 14-17: The HISTORY.rst entry is outdated (shows 12.0.1.0.0
2018-11-26) while the module manifest declares version 16.0.1.0.0; update
HISTORY.rst by adding a new release header for 16.0.1.0.0 with the correct date
and a brief note (e.g., "Initial release of Amazon connector for 16.0") or
replace the existing 12.0.1.0.0 entry with a 16.0.1.0.0 entry so the changelog
matches the manifest version string used in the module.
In `@connector_amazon/readme/USAGE.rst`:
- Around line 1-3: The USAGE.rst currently describes a base framework but this
addon is an operational Amazon connector; update
connector_amazon/readme/USAGE.rst to replace the generic text with accurate
usage for this addon: provide a clear one-line description of the connector,
required configuration steps (credentials, Odoo settings, API keys), deployment
notes (which services/modules to enable), example configuration entries and
common commands to start/sync, and any required credentials/environment
variables; reference the connector name (connector_amazon) and any configuration
keys or cron jobs used by this module so deployment teams can set it up
correctly.
In `@connector_amazon/tests/README.md`:
- Around line 35-39: Update README references from legacy module and model names
to the new amz.* namespace: replace occurrences of amazon.backend, amazon.*, and
connector_amazon_spapi with their new amz equivalents (e.g., amz.backend or the
correct amz module names) throughout the file, including the sections around
lines 35–39 and the other noted ranges (67–99, 124–157), and ensure test file
names like test_backend.py still correctly describe the amz backend tests or are
renamed to match the new naming convention.
In `@connector_amazon/tests/test_shop.py`:
- Line 10: Add the missing test tag decorator to the TestAmazonShop class so it
runs consistently with the other tests: place
`@tagged("post_install", "-at_install")` immediately above the TestAmazonShop
class definition to match the tagging convention of the other test classes and
ensure the test is included/excluded in the expected test phases.
In `@connector_amazon/views/notification_log_view.xml`:
- Around line 71-84: Add view-level group restrictions to the payload and
raw_message fields so only system/admin users see them: update the <page> or
<field> definitions for the payload and raw_message fields in the
amz.notification.log form view to include groups="base.group_system" (apply the
same change to the popup form version of raw_message as well). This enforces
defense-in-depth at the UI layer by restricting the payload and raw_message
pages/fields (named payload and raw_message) to base.group_system in addition to
the existing model ACL.
---
Nitpick comments:
In `@connector_amazon/components/binder.py`:
- Line 10: Replace the vague inline TODO in binder.py with an explicit action:
either remove it before release, or convert it into a tracked task by
referencing an issue/ticket ID and adding a short checklist of the remaining
work (design the multi-marketplace key strategy, implement helpers such as
generate_marketplace_key/normalize_marketplace_keys/merge_marketplace_keys,
and add unit tests) so reviewers can find and follow up on the work instead of
it being lost.
In `@connector_amazon/components/mapper.py`:
- Around line 206-212: The currency fallback silently uses
self.env.company.currency_id when listing_price_data.get("CurrencyCode") isn't
found; add a warning log to record this fallback for traceability by
initializing a logger at the top of the file with other imports and then, inside
the method where currency_code, listing_price_data, currency =
self.env["res.currency"]... and fallback to self.env.company.currency_id, call
logger.warning with a clear message including the missing currency_code and the
company currency used; ensure you reference currency_code, listing_price_data,
and self.env.company.currency_id in the log so it's easy to trace.
In `@connector_amazon/controllers/webhook.py`:
- Line 41: The function receive_notification declares an unused kwargs
parameter; either remove it or rename it to indicate intentional unused status
(e.g., _kwargs) to satisfy linters and clarify intent. Locate the
receive_notification definition and update the parameter list accordingly, and
if any callers rely on accepting extra keyword args, keep the parameter but
rename to _kwargs to suppress unused-argument warnings.
- Around line 394-396: The code currently queues a background job via
log.with_delay().process_notification() only when _create_notification_log
returns a truthy log; add a clear warning/info log when _create_notification_log
returns None so dropped notifications are recorded. Update the surrounding logic
that calls _create_notification_log and checks the log variable to emit a
message (e.g., using the same logger instance or processLogger) including
identifying details from the incoming notification (order id, notification type,
raw payload or headers) when log is falsy, and keep the existing if log:
log.with_delay().process_notification() unchanged.
- Around line 15-16: The global _cert_cache dictionary currently holds
certificates indefinitely; replace it with a bounded cache (e.g., use
functools.lru_cache with a sensible maxsize or implement a small TTL-based
eviction map) and update the _get_certificate function to call the new cached
loader instead of directly reading/writing _cert_cache; ensure the cached loader
accepts the certificate URL as its key and that cache misses fetch and parse the
remote certificate just as before, while evicted entries will allow fresh
fetches.
- Around line 178-192: The certificate URL validation contains redundant logic:
the sns.*.amazonaws.com pattern check (using parsed.hostname.startswith("sns.")
and parsed.hostname.endswith(".amazonaws.com")) already covers the explicit
valid_domains list, making the final return of parsed.hostname in valid_domains
unreachable; remove the explicit valid_domains list and the final return, and
instead return the boolean expression that checks both
parsed.hostname.startswith("sns.") and
parsed.hostname.endswith(".amazonaws.com") (or, if you prefer to keep a
whitelist, reverse the logic to check valid_domains first and only fall back to
the pattern if not found). Ensure you update any references to valid_domains and
keep parsed.hostname checks in the function performing the certificate URL
validation.
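The bounded certificate cache and the simplified hostname check can be combined in one sketch; `_fetch_certificate` is a placeholder for the real HTTPS fetch and x509 parse, and the function names are illustrative only:

```python
from functools import lru_cache
from urllib.parse import urlparse


def is_valid_sns_cert_url(url):
    """Accept only HTTPS URLs on an sns.<region>.amazonaws.com host."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    # This single pattern already covers any explicit whitelist of
    # regional SNS endpoints, so no separate valid_domains list is needed.
    return parsed.hostname.startswith("sns.") and parsed.hostname.endswith(
        ".amazonaws.com"
    )


def _fetch_certificate(cert_url):
    # Placeholder for the real HTTPS fetch + certificate parse.
    return "certificate for %s" % cert_url


@lru_cache(maxsize=32)
def get_certificate(cert_url):
    """Bounded cache keyed by URL; evicted entries simply re-fetch."""
    if not is_valid_sns_cert_url(cert_url):
        raise ValueError("Refusing to fetch certificate from %r" % cert_url)
    return _fetch_certificate(cert_url)
```

Unlike a module-level dict, `lru_cache(maxsize=32)` evicts the least recently used entries automatically, so the cache cannot grow without bound.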
In `@connector_amazon/models/competitive_price.py`:
- Around line 141-147: The unique constraint on _sql_constraints
(amz_competitive_price_unique) won't block duplicates when competitive_price_id
is NULL because PostgreSQL treats NULLs as distinct; either make the field
non-nullable (set competitive_price_id required=True and remove NULLs) or add
explicit DB indexes to enforce the intended uniqueness: implement a partial
unique index via module SQL (e.g., in a post_init_hook or override _auto_init in
the CompetitivePrice model) such as CREATE UNIQUE INDEX ... ON
competitive_price_table (product_binding_id, asin, fetch_date) WHERE
competitive_price_id IS NULL and/or CREATE UNIQUE INDEX ... WHERE
competitive_price_id IS NOT NULL (or a single index that fits your uniqueness
rule). Update or remove the original _sql_constraints accordingly so the DB
enforces the desired behavior.
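The partial-index idea can be demonstrated with SQLite, which, like PostgreSQL, supports partial unique indexes; in the addon the equivalent DDL would run from a post_init_hook or an _auto_init override, and the table/column names below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    """CREATE TABLE amz_competitive_price (
        id INTEGER PRIMARY KEY,
        product_binding_id INTEGER NOT NULL,
        asin TEXT NOT NULL,
        fetch_date TEXT NOT NULL,
        competitive_price_id TEXT
    )"""
)
# A plain UNIQUE constraint treats NULLs as distinct, so duplicate rows
# with a NULL competitive_price_id would slip through.  A partial unique
# index enforces uniqueness for exactly those rows.
cur.execute(
    """CREATE UNIQUE INDEX amz_competitive_price_null_uniq
       ON amz_competitive_price (product_binding_id, asin, fetch_date)
       WHERE competitive_price_id IS NULL"""
)
cur.execute(
    "INSERT INTO amz_competitive_price (product_binding_id, asin, fetch_date)"
    " VALUES (1, 'B000TEST01', '2024-01-01')"
)
try:
    cur.execute(
        "INSERT INTO amz_competitive_price"
        " (product_binding_id, asin, fetch_date)"
        " VALUES (1, 'B000TEST01', '2024-01-01')"
    )
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```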
In `@connector_amazon/models/feed.py`:
- Line 147: The import requests is nested inside the _upload_feed_content
function; move the requests import to the top-level module imports so the
dependency is resolved at import time, then remove the inline import statement
inside _upload_feed_content (delete the import requests line currently at line
where _upload_feed_content contains it) to keep imports consistent and fail fast
if requests is missing.
- Around line 220-228: check_feed_status's exception handler currently logs the
error and updates state but doesn't increment retry_count like submit_feed does,
making failure tracking inconsistent; modify the except block inside the
check_feed_status method to increment self.retry_count (and persist it via the
existing self.write call) when catching Exception e, so the write payload
includes an updated "retry_count": self.retry_count (or increment before
writing) along with "state", "last_state_message", and "last_status_update",
mirroring submit_feed's retry behavior.
In `@connector_amazon/models/marketplace.py`:
- Around line 88-93: The current derivation for vals["code"] can return empty or
single-character strings; change it to explicitly produce a 2-character code by
checking name_hint length first (use name_hint.upper()[:2] only when
len(name_hint) >= 2), otherwise fall back to using the first two chars of
vals.get("marketplace_id") when available, and if that is not at least 2 chars,
pad or default to "MK" to guarantee exactly two characters; update the logic
around name_hint, vals["marketplace_id"], and vals["code"] accordingly.
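A minimal sketch of the two-character code derivation; `derive_marketplace_code` is a hypothetical helper name:

```python
def derive_marketplace_code(name_hint, marketplace_id=None):
    """Always return exactly two uppercase characters."""
    if name_hint and len(name_hint) >= 2:
        return name_hint.upper()[:2]
    if marketplace_id and len(marketplace_id) >= 2:
        return marketplace_id.upper()[:2]
    return "MK"  # guaranteed two-character fallback
```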
In `@connector_amazon/models/notification_log.py`:
- Around line 193-211: When handling a new order (the branch that uses
order_binding, self.order_id, and self.backend_id.shop_ids) add an explicit
fallback for the case where self.backend_id.shop_ids is empty: emit a clear
warning log (include backend id and amz_order_id) and avoid silently returning;
optionally set self.order_id to False (or a sentinel) so callers know no order
was created. Modify the block around work_on("amz.sale.order") /
adapter.get_order and the call to amz.sale.order._create_or_update_from_amazon
to first check shop presence and log the missing-shop condition with context
instead of doing nothing.
In `@connector_amazon/models/order.py`:
- Around line 101-106: The lambda inside the loop over picking.move_ids captures
the loop variable by reference and uses an ambiguous iterator name; fix the
filtered call in the loop by capturing the current move as a default argument
(e.g., m=move) and rename the lambda param from l to a clearer name (e.g.,
order_line_item) so change the expression
self.odoo_id.order_line.filtered(lambda l: l.product_id == move.product_id)[:1]
to use a captured variable and explicit param name to avoid late binding and
lint warnings while keeping the same behavior.
- Around line 374-383: The chained use of and/or in carrier_name, tracking and
especially ship_method is unclear and error-prone; replace these expressions
with explicit conditional logic: set carrier_name = picking.carrier_id.name if
picking.carrier_id else "" and tracking = picking.carrier_tracking_ref or "";
for ship_method, evaluate marketplace_id and call
get_delivery_carrier_for_amazon_shipping(self.odoo_id.carrier_id.name) in an
if/else (or a clear ternary) and return self.odoo_id.carrier_id.name when both
conditions are true, otherwise "Standard"; reference the symbols carrier_name,
tracking, ship_method, marketplace_id, get_delivery_carrier_for_amazon_shipping,
and self.odoo_id.carrier_id.name to locate and update the code.
- Around line 704-708: The broad except on parsing unit_price masks unexpected
errors; in the block that parses raw_amount into unit_price (the unit_price
assignment using round(float(raw_amount), 2)), replace the generic "except
Exception" with specific exception types expected from float conversion (e.g.,
except (ValueError, TypeError)) so only bad input is caught and unit_price falls
back to 0.0, leaving other exceptions to surface for debugging; update the
except clause near the unit_price/ raw_amount code accordingly.
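The narrowed exception handling for the price parse can be sketched as:

```python
def parse_unit_price(raw_amount):
    """Parse an Amazon money amount; fall back to 0.0 only for bad input."""
    try:
        return round(float(raw_amount), 2)
    except (ValueError, TypeError):
        # Covers None, non-numeric strings, and wrong types; anything
        # else still propagates so real bugs surface for debugging.
        return 0.0
```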
In `@connector_amazon/models/product_binding.py`:
- Around line 178-218: The except block that catches requests.RequestException
currently only logs a warning (the "except requests.RequestException as exc"
block and the "_logger.warning" call) so users are silently redirected to the
search page; change this to also surface a brief user-facing notification by
raising a UserError (or returning a client warning action) with a concise
message that includes the SKU/brand and the exception message (keep the existing
_logger.warning for diagnostics), e.g. replace or augment the
"_logger.warning('Hub API call failed: %s', exc)" line with code that raises
UserError(_("Could not reach Amazon Hub for SKU '%(sku)s' and brand '%(brand)s':
%(err)s") % {"sku": self.seller_sku, "brand": brand_slug, "err": str(exc)}) so
the caller/user understands why the lookup fell back to search.
In `@connector_amazon/models/shop.py`:
- Line 614: The midnight checks use a timezone-unaware expression
datetime.now().hour == 0; replace these with a timezone-aware comparison using a
configured timezone (e.g., datetime.now(tz=zoneinfo.ZoneInfo(YOUR_TZ)) or
datetime.now(timezone.utc) depending on the app's convention) so both
occurrences that evaluate datetime.now().hour == 0 use the same tz-aware now;
alternatively centralize into a helper (e.g., is_midnight(tz)) and call it from
the places currently checking datetime.now().hour == 0 to ensure consistent,
documented timezone handling or to allow switching to a scheduler later.
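A testable version of the midnight check; `is_midnight` is a hypothetical helper that takes `now` as a parameter so tests can inject a fixed instant (the cron code would call it without arguments):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo


def is_midnight(now=None, tz="UTC"):
    """True when the current hour is 0 in the given timezone."""
    if now is None:
        now = datetime.now(tz=timezone.utc)
    return now.astimezone(ZoneInfo(tz)).hour == 0


# 00:30 UTC is within the midnight hour in UTC, but it is still the
# previous evening in New York, so the check disagrees across zones.
sample = datetime(2024, 6, 1, 0, 30, tzinfo=timezone.utc)
assert is_midnight(sample, "UTC") is True
assert is_midnight(sample, "America/New_York") is False
```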
In `@connector_amazon/readme/FEATURES.rst`:
- Line 10: Replace the hardcoded claim "* **283 Unit Tests**: Full mock
coverage, zero external API calls" in FEATURES.rst with a timeless, non-numeric
statement (e.g., "Comprehensive unit test suite with full mock coverage and no
external API calls") and move the exact test count to CI/reporting or release
notes so the feature list won't become stale; specifically locate the literal
string "* **283 Unit Tests**: Full mock coverage, zero external API calls" and
update it to the new wording.
In `@connector_amazon/static/description/index.html`:
- Around line 366-371: The generated HTML (produced by oca-gen-addon-readme)
contains HTTP links flagged by static analysis; update the source RST files
(e.g., readme/DESCRIPTION.rst and readme/USAGE.rst) to replace those HTTP URLs
with HTTPS where supported, then re-run the README generation
(oca-gen-addon-readme) so connector_amazon/static/description/index.html is
regenerated without HTTP links.
In `@connector_amazon/tests/common.py`:
- Around line 218-235: The broad except in _create_test_carrier around
carrier_model.create hides unrelated errors; narrow it to catch only DB-level
failures (e.g., psycopg2.Error or Odoo's DatabaseError) before falling back to
the raw SQL path, and log or re-raise other exceptions so they aren't silently
swallowed. Specifically, replace the generic except Exception that wraps
carrier_model.create with a targeted except (e.g., psycopg2.Error or
odoo.exceptions.DatabaseError) and keep the self.env.cr.execute fallback and
carrier_model.browse(carrier_id) unchanged.
In `@connector_amazon/tests/test_competitive_price.py`:
- Around line 236-252: The test test_archive_old_prices uses a hardcoded
fetch_date which can become brittle; change the old price creation to use a
relative date by computing fetch_date=datetime.now() - timedelta(days=40) (or
similar) when calling _create_competitive_price so the test consistently creates
an "old" record, keep the recent_price creation unchanged, and continue
asserting archive_old_prices(days=30) returns 1 and the active flags; update
imports if needed to include timedelta and ensure references to
test_archive_old_prices, _create_competitive_price, and archive_old_prices
remain intact.
- Around line 22-27: The broad except Exception in the fixture loader should be
tightened: replace the blind except with except KeyError to specifically handle
missing keys when walking
data["paths"]["/products/pricing/v0/price"]["get"]["responses"]["200"]["examples"]["application/json"],
and optionally add a debug log (e.g., using the same test logger) inside the
except KeyError block before returning {} to aid troubleshooting; keep the same
return {} behavior.
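The tightened fixture loader can be sketched as follows (the nested path mirrors the one quoted above):

```python
def load_price_examples(data):
    """Walk the nested OpenAPI fixture; return {} when keys are missing."""
    try:
        return data["paths"]["/products/pricing/v0/price"]["get"][
            "responses"]["200"]["examples"]["application/json"]
    except KeyError:
        # A debug log here would aid troubleshooting; the {} fallback
        # behavior stays the same as before.
        return {}
```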
In `@connector_amazon/tests/test_feed.py`:
- Around line 406-434: Update the test_upload_feed_content_uses_correct_headers
to explicitly mock the response.raise_for_status method on mock_put.return_value
so the call to response.raise_for_status() in _upload_feed_content is a no-op;
e.g. set mock_put.return_value.raise_for_status = mock.Mock() or
mock_put.return_value.raise_for_status.return_value = None before calling
feed._upload_feed_content("https://s3-test-url") to make the test robust and
self-documenting while keeping the existing assertions on headers and data
intact.
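A simplified illustration of the mocked `raise_for_status`; `upload_feed_content` below is a hypothetical stand-in for the module's `_upload_feed_content`, not its actual signature:

```python
from unittest import mock


def upload_feed_content(url, content, put):
    """Hypothetical, simplified upload: PUT the body, then check status."""
    response = put(
        url,
        data=content,
        headers={"Content-Type": "text/xml; charset=UTF-8"},
    )
    response.raise_for_status()  # the call the test must make a no-op
    return response


mock_put = mock.Mock()
# Explicit no-op so the happy path is self-documenting.
mock_put.return_value.raise_for_status.return_value = None
upload_feed_content("https://s3-test-url", "<xml/>", put=mock_put)
mock_put.assert_called_once()
assert mock_put.call_args.kwargs["headers"]["Content-Type"] == (
    "text/xml; charset=UTF-8"
)
```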
In `@connector_amazon/tests/test_order_shipment.py`:
- Around line 89-120: The test test_push_shipment_readonly_no_api_call defines
unused locals (mock_submit, binding, picking, carrier, feed) — rename or prefix
any variables that are intentionally unused with an underscore (e.g.,
_mock_submit, _picking, _carrier) and for the feed lookup use a meaningful
assertion variable or prefix it if not used further (e.g., _feed) so
linters/tests don’t flag unused-variable warnings; update references accordingly
in test_push_shipment_readonly_no_api_call and similarly in
test_push_shipment_no_done_picking_linked for any intentionally unused values
returned by _create_amazon_order or _setup_order_with_picking.
- Around line 55-78: The test test_push_shipment_creates_feed unpacks picking
and carrier from the helper _setup_order_with_picking but never uses them;
change the unpacked variable names to _picking and _carrier (or prefix them with
an underscore) when calling _setup_order_with_picking in
test_push_shipment_creates_feed to mark them as intentionally unused and satisfy
static analysis.
In `@connector_amazon/tests/test_order.py`:
- Around line 22-27: Replace the broad except Exception in the test fixture's
try/except block that accesses
data["paths"]["/orders/v0/orders"]["get"]["responses"]["200"]["examples"]["application/json"]["payload"]["Orders"]
with a narrower except (KeyError, TypeError) to avoid swallowing unrelated
errors; keep the same fallback return [] behavior so only missing or wrong-typed
keys are caught (mirrors the change suggested for test_competitive_price.py).
- Around line 717-826: The two picking tests in this file
(test_get_last_done_picking_returns_latest and
test_get_last_done_picking_ignores_non_done) duplicate coverage already provided
elsewhere; remove or consolidate them and rely on the canonical tests (or a
shared test helper) that exercise binding._get_last_done_picking; alternatively
refactor the duplicated setup into a reusable fixture/helper used by both test
modules and keep only unique assertions here, ensuring references to the tested
method binding._get_last_done_picking and the original tests
(test_get_last_done_picking_returns_latest /
test_get_last_done_picking_ignores_non_done) are removed from this file to avoid
maintenance duplication.
In `@connector_amazon/tests/test_shop.py`:
- Around line 797-808: The current test guards the assertion with conditional
checks that can skip the assertion; update the test to make definitive
assertions: assert that the patched method action_push_stock (mock_push) was
called (use mock_push.assert_called() or assert mock_push.called) after invoking
cron_push_stock(), then retrieve and assert the call arguments exist and include
hourly_shop (inspect mock_push.call_args and assert hourly_shop.id is in the
passed recordset ids). Target the patched symbol action_push_stock on the
amz.shop instance and the cron_push_stock invocation to ensure the test always
fails when the push was not performed.
In `@connector_amazon/tests/test_webhook.py`:
- Around line 265-295: The test currently invokes log._handle_order_change()
directly which bypasses the normal handler lifecycle and therefore doesn't
exercise or assert the log state transition; change the test to call the public
entry point log.process_notification() (with the same MagicMock work_on setup)
and then assert the expected final state on the log record (e.g. that log.state
changed from "received" to the expected state such as "done" or "failed"), or if
you intentionally want to call _handle_order_change(), add an explicit assertion
that the state remains "received" (or whatever the correct behavior is) after
the direct call; reference the test method test_handle_order_change_new_order,
the amz.notification.log record variable log, and the methods
process_notification() and _handle_order_change() when making the change.
In `@connector_amazon/views/shop_view.xml`:
- Around line 55-62: The XML has two adjacent button elements with no
whitespace/newline between them causing formatting issues; add a newline (or a
blank line) between the closing tag of the previous button (e.g., the "Push
Stock" button) and the opening tag of the button with
name="action_sync_competitive_prices" (string="Sync Competitive Pricing") so
each <button .../> sits on its own line for proper formatting and readability.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
connector_amazon/security/ir.model.access.csv is excluded by !**/*.csv
connector_amazon/static/description/icon.png is excluded by !**/*.png
📒 Files selected for processing (60)
connector_amazon/README.rst
connector_amazon/__init__.py
connector_amazon/__manifest__.py
connector_amazon/components/__init__.py
connector_amazon/components/backend_adapter.py
connector_amazon/components/binder.py
connector_amazon/components/mapper.py
connector_amazon/controllers/__init__.py
connector_amazon/controllers/webhook.py
connector_amazon/data/ir_cron.xml
connector_amazon/models/__init__.py
connector_amazon/models/backend.py
connector_amazon/models/competitive_price.py
connector_amazon/models/feed.py
connector_amazon/models/marketplace.py
connector_amazon/models/notification_log.py
connector_amazon/models/order.py
connector_amazon/models/product_binding.py
connector_amazon/models/res_partner.py
connector_amazon/models/shop.py
connector_amazon/readme/ARCHITECTURE.rst
connector_amazon/readme/CONFIGURATION.rst
connector_amazon/readme/CONTRIBUTORS.rst
connector_amazon/readme/DESCRIPTION.rst
connector_amazon/readme/FEATURES.rst
connector_amazon/readme/HISTORY.rst
connector_amazon/readme/USAGE.rst
connector_amazon/static/description/index.html
connector_amazon/tests/README.md
connector_amazon/tests/__init__.py
connector_amazon/tests/common.py
connector_amazon/tests/ordersV0.json
connector_amazon/tests/productPricingV0.json
connector_amazon/tests/test_adapters.py
connector_amazon/tests/test_backend.py
connector_amazon/tests/test_backend_subscriptions.py
connector_amazon/tests/test_competitive_price.py
connector_amazon/tests/test_ee_coexistence.py
connector_amazon/tests/test_feed.py
connector_amazon/tests/test_mapper.py
connector_amazon/tests/test_marketplace.py
connector_amazon/tests/test_notification_log.py
connector_amazon/tests/test_order.py
connector_amazon/tests/test_order_shipment.py
connector_amazon/tests/test_product_binding.py
connector_amazon/tests/test_shop.py
connector_amazon/tests/test_shop_sync.py
connector_amazon/tests/test_webhook.py
connector_amazon/views/amz_menu.xml
connector_amazon/views/backend_view.xml
connector_amazon/views/competitive_price_view.xml
connector_amazon/views/feed_view.xml
connector_amazon/views/marketplace_view.xml
connector_amazon/views/notification_log_view.xml
connector_amazon/views/order_view.xml
connector_amazon/views/product_binding_view.xml
connector_amazon/views/shop_view.xml
requirements.txt
setup/connector_amazon/odoo/addons/connector_amazon
setup/connector_amazon/setup.py
<record id="ir_cron_push_amz_stock" model="ir.cron">
    <field name="name">Amazon: Push Stock Updates</field>
    <field name="model_id" ref="model_amz_shop" />
    <field name="state">code</field>
    <field name="code">model.cron_push_stock()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">hours</field>
    <field name="numbercall">-1</field>
    <field name="doall" eval="False" />
    <field name="active" eval="True" />
</record>

<!-- Hourly cron job for order synchronization -->
<record id="ir_cron_sync_amz_orders" model="ir.cron">
    <field name="name">Amazon: Sync Orders</field>
    <field name="model_id" ref="model_amz_shop" />
    <field name="state">code</field>
    <field name="code">model.cron_sync_orders()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">hours</field>
    <field name="numbercall">-1</field>
    <field name="doall" eval="False" />
    <field name="active" eval="True" />
</record>

<!-- Hourly cron job for pushing shipment tracking -->
<record id="ir_cron_push_amz_shipments" model="ir.cron">
    <field name="name">Amazon: Push Shipments</field>
    <field name="model_id" ref="model_amz_shop" />
    <field name="state">code</field>
    <field name="code">model.cron_push_shipments()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">hours</field>
    <field name="numbercall">-1</field>
    <field name="doall" eval="False" />
    <field name="active" eval="True" />
</record>

<!-- Daily cron job for syncing competitive pricing -->
<record id="ir_cron_sync_amz_competitive_prices" model="ir.cron">
    <field name="name">Amazon: Sync Competitive Pricing</field>
    <field name="model_id" ref="model_amz_shop" />
    <field name="state">code</field>
    <field name="code">model.cron_sync_competitive_prices()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">days</field>
    <field name="numbercall">-1</field>
    <field name="doall" eval="False" />
    <field name="active" eval="True" />
</record>

<!-- Daily cron job for bulk catalog sync via Reports API -->
<record id="ir_cron_sync_amz_catalog_bulk" model="ir.cron">
    <field name="name">Amazon: Bulk Catalog Sync (Reports)</field>
    <field name="model_id" ref="model_amz_shop" />
    <field name="state">code</field>
    <field name="code">model.cron_sync_catalog_bulk()</field>
    <field name="interval_number">1</field>
    <field name="interval_type">days</field>
    <field name="numbercall">-1</field>
    <field name="doall" eval="False" />
    <field name="active" eval="True" />
</record>
🧩 Analysis chain

🏁 Scripts executed (repository: kencove/connector):

```shell
git ls-files | head -20
find . -name "*.py" -path "*amazon*" -type f | head -20
rg -l "class amz_shop|def cron_push_stock|def cron_sync_orders" --type=py
wc -l connector_amazon/models/shop.py
cat -n connector_amazon/models/shop.py
head -100 connector_amazon/models/backend.py
sed -n '650,671p' connector_amazon/models/shop.py
rg -n "@api.model" connector_amazon/models/shop.py | grep -E "(cron_|def )"
rg -n "def cron_(push_stock|sync_orders|push_shipments|sync_competitive_prices|sync_catalog_bulk)" connector_amazon/models/shop.py
sed -n '600,650p' connector_amazon/models/shop.py
sed -n '650,730p' connector_amazon/models/shop.py
sed -n '151,170p' connector_amazon/models/shop.py
sed -n '167,229p' connector_amazon/models/shop.py
sed -n '50,83p' connector_amazon/models/shop.py
rg -n "lwa_client_id|lwa_client_secret|lwa_refresh_token|aws_role_arn|seller_id" connector_amazon/models/shop.py connector_amazon/models/backend.py | head -20
sed -n '566,580p' connector_amazon/models/shop.py
sed -n '731,778p' connector_amazon/models/shop.py
sed -n '167,229p' connector_amazon/models/shop.py | grep -A 20 "with self.backend_id.work_on"
rg -n "work_on|def work_on" connector_amazon/models/backend.py -A 10 | head -40
rg -n "class.*Adapter|def list_orders|def get_competitive_pricing" connector_amazon/components/ -A 5 | head -50
wc -l connector_amazon/components/backend_adapter.py
head -50 connector_amazon/components/backend_adapter.py
sed -n '1,100p' connector_amazon/models/backend.py | grep -n "def \|class \|required\|lwa_client"
rg -n "_call_sp_api|def _call_sp_api" connector_amazon/models/backend.py -A 15
rg -n "def _get_access_token" connector_amazon/models/backend.py -A 30
sed -n '270,320p' connector_amazon/models/backend.py
rg -n "def work_on" connector_amazon/models/backend.py -A 10 | head -30
rg -n "cron_push_stock|cron_sync_orders|cron_sync_competitive_prices|cron_sync_catalog_bulk" connector_amazon/data/ir_cron.xml
cat connector_amazon/data/ir_cron.xml
rg -n "self.backend_id.id" connector_amazon/models/shop.py
rg -n "cron_push_shipments" connector_amazon/data/ir_cron.xml -A 10
```
Fix critical bug in `cron_push_shipments()` and add credential validation to all cron methods.

The cron methods have basic filters (checking active, sync flags, intervals) but have two issues:

- **Critical:** `cron_push_shipments()` (line 650) is an instance method but not decorated with `@api.model`. When invoked by the cron job as `model.cron_push_shipments()`, it receives a recordset of all shops. Accessing `self.backend_id.id` on a multi-record set will fail. Add the `@api.model` decorator and refactor to iterate over shops explicitly, like `cron_sync_orders()` and `cron_sync_competitive_prices()` do.
- **Risk of repeated API failures:** Cron methods don't verify that backend credentials (especially the optional fields `lwa_client_secret`, `lwa_refresh_token`) are configured before calling adapters. On fresh installs with unconfigured backends, all five crons will attempt API calls and fail hourly/daily, filling logs with errors. Either default these crons to inactive in the XML or add explicit credential validation (e.g., `if not self.backend_id.lwa_client_secret: return`) before making API calls.

Exception handling exists but does not prevent noisy failure spam.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_amazon/data/ir_cron.xml` around lines 4 - 66, cron_push_shipments
currently is an instance method but invoked as model.cron_push_shipments() — add
the `@api.model` decorator and refactor cron_push_shipments to iterate over shops
(like cron_sync_orders() / cron_sync_competitive_prices()) instead of accessing
self.backend_id on a multi-recordset; additionally, to avoid noisy failures, add
explicit credential validation at the start of each cron method
(cron_push_shipments, cron_sync_orders, cron_push_stock,
cron_sync_competitive_prices, cron_sync_catalog_bulk) to check required backend
fields (e.g., backend_id.lwa_client_secret, backend_id.lwa_refresh_token) and
skip/return when credentials are missing, or alternatively mark the cron records
inactive by default in the XML if backends are unconfigured.
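The credential validation suggested above can be kept testable by extracting a small pure helper that each cron calls before touching the adapters. This is a sketch under the assumption that the backend record exposes `lwa_client_secret` and `lwa_refresh_token` fields; the helper name `missing_credentials` is hypothetical, not part of the module.

```python
# Hypothetical helper: returns the names of required credential fields
# that are empty on a backend-like object, so each cron can skip early
# instead of making doomed API calls.
REQUIRED_CREDENTIAL_FIELDS = ("lwa_client_secret", "lwa_refresh_token")


def missing_credentials(backend, required=REQUIRED_CREDENTIAL_FIELDS):
    """Return the list of credential field names that are falsy on backend."""
    return [name for name in required if not getattr(backend, name, None)]
```

Each cron could then start with `if missing_credentials(shop.backend_id): continue`, optionally logging the skipped shop once instead of failing on every run.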
Full-featured Amazon Seller Central integration built on the OCA Connector framework.

Uses the `amz.*` model namespace (amz.backend, amz.marketplace, amz.shop, amz.sale.order, amz.product.binding, etc.) to coexist with the Odoo Enterprise `sale_amazon` module, which occupies the `amazon.*` namespace.

Features:
- SP-API authentication (LWA OAuth2 + STS role assumption)
- Order import with pagination, partner creation, line sync
- Stock push via Feeds API (POST_INVENTORY_AVAILABILITY_DATA)
- Shipment tracking push (Order Fulfillment feed)
- Competitive pricing sync (bulk ASIN batches)
- Bulk catalog sync via Reports API (GET_MERCHANT_LISTINGS_ALL_DATA)
- Real-time notifications via SNS webhooks (order, listing, feed, report)
- Read-only mode for safe testing against live credentials
- EE coexistence: detects sale_amazon, can disable its cron, deduplicates orders by checking sale.order.amazon_order_ref

283 unit tests across 15 test files.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Force-pushed `0def8db` to `a8cf291`
Hub integration:
- Backend: hub_base_url, hub_api_key, hub_brand_slug config fields
- Product binding: "View Hub Analysis" button calls Hub /products/lookup endpoint by SKU + brand slug, opens product detail in new tab
- Graceful fallback: opens search page with SKU if no API key

Lint fixes (pre-existing):
- W8113: Remove redundant string= attrs on fields
- C8107: Wrap UserError strings with _() for translation
- B950: Break long line in webhook token message

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Replace Hub-specific fields with a provider-agnostic enrichment
system supporting Amazon Hub, Jungle Scout, SellerApp, SmartScout,
Keepa, DataHawk, Rainforest API, or any custom endpoint.
- Selection field with onchange to pre-fill URL templates and auth
- Configurable API key location (header vs query param)
- URL templates with {asin}, {sku}, {marketplace}, {brand} placeholders
- Separate lookup (API) and dashboard (browser) URL templates
- Product binding button renamed to generic "Listing Analysis"
Co-Authored-By: Claude Opus 4.6 <[email protected]>
setuptools 82+ removed pkg_resources, which broke the setuptools-odoo-make-default and setuptools-odoo-get-requirements hooks. Version 3.3.2 pins setuptools<82 to restore compatibility.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Actionable comments posted: 17
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
♻️ Duplicate comments (4)
connector_amazon/models/shop.py (1)
**653-673:** ⚠️ Potential issue | 🔴 Critical

Make `cron_push_shipments` model-safe by iterating shops explicitly.

This method uses `self.backend_id.id` directly. When called as a cron on the model recordset, `self` is not a single shop, so the search scope is wrong and shipments can be skipped.

🔧 Proposed fix
```diff
+    @api.model
     def cron_push_shipments(self):
         """Cron job to push shipment tracking for shipped orders."""
-        order_bindings = self.env["amz.sale.order"].search(
-            [
-                ("backend_id", "=", self.backend_id.id),
-                ("shipment_confirmed", "=", False),
-            ]
-        )
-
-        for binding in order_bindings:
-            picking = binding._get_last_done_picking()
-            if not picking:
-                continue
-            # Only push if tracking is present
-            if not (picking.carrier_id and picking.carrier_tracking_ref):
-                continue
-            try:
-                binding.with_delay().push_shipment()
-            except Exception:
-                # let the job record the error; continue others
-                continue
+        shops = self.search([("active", "=", True), ("backend_id", "!=", False)])
+        for shop in shops:
+            order_bindings = self.env["amz.sale.order"].search(
+                [
+                    ("backend_id", "=", shop.backend_id.id),
+                    ("shop_id", "=", shop.id),
+                    ("shipment_confirmed", "=", False),
+                ]
+            )
+
+            for binding in order_bindings:
+                picking = binding._get_last_done_picking()
+                if not picking:
+                    continue
+                if not (picking.carrier_id and picking.carrier_tracking_ref):
+                    continue
+                try:
+                    binding.with_delay().push_shipment()
+                except Exception:
+                    _logger.exception(
+                        "Failed to queue shipment push for order %s",
+                        binding.external_id,
+                    )
+                    continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` around lines 653 - 673, cron_push_shipments is not model-safe because it uses self.backend_id.id on a potentially multi-record self; change it to iterate each shop in self and run the order search scoped to that shop (use shop.backend_id.id when searching amz.sale.order), then for each binding call binding._get_last_done_picking(), check picking.carrier_id and picking.carrier_tracking_ref and call binding.with_delay().push_shipment() inside the try/except as before; this ensures each shop’s backend_id is used correctly and no shops are skipped.connector_amazon/tests/common.py (1)
**9-10:** ⚠️ Potential issue | 🔴 Critical

Add `_create_done_picking` to the shared test scaffold.

`CommonConnectorAmazonSpapi` is missing `_create_done_picking`, but tests now call it (failing with `AttributeError`). This breaks the shipment cron test path.

🔧 Proposed fix
```diff
 class CommonConnectorAmazonSpapi(TransactionComponentCase):
 @@
     def _create_sample_pricing_data(self, asin=None):
         """Create sample competitive pricing data from Amazon API"""
         return {
 @@
             },
         }
+
+    def _create_done_picking(self, sale_order, carrier_id=False, tracking_ref=False):
+        """Create a done outgoing picking linked to a sale order."""
+        if isinstance(sale_order, int):
+            sale_order = self.env["sale.order"].browse(sale_order)
+
+        warehouse = (
+            sale_order.warehouse_id
+            or self.shop.warehouse_id
+            or self.env["stock.warehouse"].search(
+                [("company_id", "=", self.env.company.id)], limit=1
+            )
+        )
+        picking_type = warehouse.out_type_id or self.env["stock.picking.type"].search(
+            [("code", "=", "outgoing")], limit=1
+        )
+
+        return self.env["stock.picking"].create(
+            {
+                "picking_type_id": picking_type.id,
+                "location_id": picking_type.default_location_src_id.id,
+                "location_dest_id": picking_type.default_location_dest_id.id,
+                "partner_id": sale_order.partner_shipping_id.id,
+                "sale_id": sale_order.id,
+                "origin": sale_order.name,
+                "state": "done",
+                "carrier_id": carrier_id or False,
+                "carrier_tracking_ref": tracking_ref or False,
+                "date_done": datetime.now(),
+            }
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/common.py` around lines 9 - 10, Add a _create_done_picking method to the test scaffold class CommonConnectorAmazonSpapi that the tests expect; implement it to create and return a completed/“done” stock.picking (including related sale.order/moves/partner/carrier as the tests assume) so tests calling self._create_done_picking() no longer raise AttributeError—make the method name _create_done_picking and ensure its return shape matches existing tests (picking record and any objects they assert against).connector_amazon/models/competitive_price.py (1)
**199-210:** ⚠️ Potential issue | 🟠 Major

Currency mismatch risk when applying price to pricelist.

This issue was previously flagged. The method applies `self.listing_price` directly to the pricelist without verifying that `self.currency_id` matches `shop.pricelist_id.currency_id`. If the competitive price is in EUR but the pricelist uses USD, this results in incorrect pricing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/competitive_price.py` around lines 199 - 210, The code sets vals["fixed_price"] = self.listing_price without checking currency, causing mismatched prices; update the block that builds vals (near vars pricelist_item, shop.pricelist_id) to compare self.currency_id and shop.pricelist_id.currency_id and, if different, convert self.listing_price into the pricelist currency before assigning fixed_price (use the Odoo currency conversion helper such as currency._convert or env['res.currency']._convert with appropriate company/date/context), then proceed to write or create the pricelist_item with the converted value; also handle missing currency objects defensively.connector_amazon/components/backend_adapter.py (1)
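In the Odoo module this guard would use `res.currency._convert()`; outside the ORM the same logic can be illustrated with a pure function. This is a sketch only, with a hypothetical `rates` mapping (currency code to rate against a base currency) standing in for Odoo's rate lookup.

```python
# Illustrative currency guard: convert the competitive price into the
# pricelist currency before writing fixed_price. `rates` is a hypothetical
# mapping of currency code -> rate vs a base currency; in Odoo this would
# be res.currency._convert() with the proper company and date.
def price_in_pricelist_currency(listing_price, price_currency,
                                pricelist_currency, rates):
    if price_currency == pricelist_currency:
        return listing_price  # no conversion needed
    in_base = listing_price / rates[price_currency]
    return round(in_base * rates[pricelist_currency], 2)
```

Calling this before building `vals["fixed_price"]` ensures a EUR competitive price is never written verbatim into a USD pricelist.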
**209-229:** ⚠️ Potential issue | 🔴 Critical

`feed_content` parameter is never used - feed content won't be uploaded.

This issue was previously flagged. The method accepts `feed_content` but never uploads it to the feed document URL returned by `create_feed_document()`. The SP-API Feeds workflow requires: (1) create document, (2) upload content to presigned URL, (3) create feed with document ID.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/backend_adapter.py` around lines 209 - 229, create_inventory_feed currently never uploads the provided feed_content to the feed document returned by feed_adapter.create_feed_document(), so the content is never sent to SP-API. After obtaining doc_response (and feedDocumentId) call the feed adapter's upload method (e.g., feed_adapter.upload_feed_document or similar) using the presigned URL or doc_response and the feed_content to upload the XML, handle/upload errors, then call feed_adapter.create_feed with the feedDocumentId and marketplace_ids; update create_inventory_feed to perform create_feed_document -> upload content -> create_feed in that order.
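The three-step Feeds workflow described in the comment can be enforced structurally by a single function that always performs the steps in order. This is a sketch, not the module's confirmed API: the adapter methods `create_feed_document`, `upload_to_url`, and `create_feed` and their return shapes are assumptions modeled on the review text.

```python
def run_feed_workflow(adapter, feed_type, marketplace_ids, feed_content):
    """SP-API Feeds flow: create document, upload content, create feed."""
    # Step 1: create the feed document (returns an id + presigned upload URL)
    doc = adapter.create_feed_document(content_type="text/xml; charset=UTF-8")
    # Step 2: upload the actual feed body to the presigned URL
    adapter.upload_to_url(doc["url"], feed_content.encode("utf-8"))
    # Step 3: create the feed referencing the uploaded document
    return adapter.create_feed(
        feed_type=feed_type,
        marketplace_ids=marketplace_ids,
        input_feed_document_id=doc["feedDocumentId"],
    )
```

Because the upload happens between document creation and feed creation, the bug class flagged above (content silently dropped) cannot occur in this shape.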
🟡 Minor comments (4)
connector/static/description/index.html-6-6 (1)
**6-6:** ⚠️ Potential issue | 🟡 Minor

Title shows "README.rst" instead of module name.

The HTML title at line 6 displays `README.rst` rather than `Connector`. Since this is auto-generated, the source RST file may need a document title directive to render a proper page title.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector/static/description/index.html` at line 6, The page title currently shows "README.rst" in the title element; update the source reStructuredText to include a proper document title (e.g., add a top-level title directive such as "Connector" with an underline) so the generated HTML title tag becomes "Connector" instead of "README.rst"; alternatively, adjust the generator/template that produces the title element to prefer the document title over the source filename (look for the code/path that emits the HTML <title> and replace the fallback of "README.rst" with the document title or explicit "Connector").connector_amazon/static/description/index.html-399-401 (1)
**399-401:** ⚠️ Potential issue | 🟡 Minor

Consider updating the source to use HTTPS for odoo-connector.com.

The static analysis flagged `http://odoo-connector.com` as a plaintext HTTP link. Since this file is auto-generated by `oca-gen-addon-readme`, update the source RST files (likely in `connector_amazon/readme/`) to use HTTPS if the site supports it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/static/description/index.html` around lines 399 - 401, The README uses an insecure HTTP link "http://odoo-connector.com" in the generated static HTML; update the source reStructuredText in the README generator inputs (likely under connector_amazon/readme/) to use "https://odoo-connector.com" so the generated file (static/description/index.html) contains the HTTPS URL; regenerate the docs with oca-gen-addon-readme to produce the updated index.html containing the secure link.connector_amazon/tests/test_backend.py-173-177 (1)
**173-177:** ⚠️ Potential issue | 🟡 Minor

Mock setup may not trigger the HTTPError branch.

The test mocks `raise_for_status` to raise a generic `Exception`, but the backend code (`_call_sp_api`) has a specific `except requests.exceptions.HTTPError` clause. This mock will fall through to the generic exception handler instead of testing the HTTP error path.

🧪 Proposed fix
```diff
+from requests.exceptions import HTTPError
+
 # In test_call_sp_api_http_error:
 mock_response = mock.Mock()
 mock_response.status_code = 401
 mock_response.text = "Unauthorized"
-mock_response.raise_for_status.side_effect = Exception("401 Unauthorized")
+http_error = HTTPError(response=mock_response)
+mock_response.raise_for_status.side_effect = http_error
 mock_request.return_value = mock_response
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_backend.py` around lines 173 - 177, The test's mock of raise_for_status currently raises a generic Exception so the _call_sp_api HTTPError except block won't be exercised; change the mock to raise requests.exceptions.HTTPError (e.g., set mock_response.raise_for_status.side_effect = requests.exceptions.HTTPError("401 Unauthorized", response=mock_response)) and ensure the test imports requests.exceptions so the mocked exception matches the except requests.exceptions.HTTPError branch in _call_sp_api.connector_amazon/tests/test_competitive_price.py-463-466 (1)
**463-466:** ⚠️ Potential issue | 🟡 Minor

Add the missing `model_name` parameter to the mapper component retrieval.

The `_get_mapper()` method retrieves the mapper without specifying the model, which differs from the production code patterns in `product_binding.py` and `shop.py`. The mapper component requires the `model_name` parameter to be properly instantiated for the correct model. Update line 466 to include `model_name="amz.product.binding"`:

```python
with self.backend.work_on("amz.product.binding") as work:
    return work.component(usage="import.mapper", model_name="amz.product.binding")
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_competitive_price.py` around lines 463 - 466, The _get_mapper() method currently calls backend.work_on("amz.product.binding") and returns work.component(usage="import.mapper") without specifying the model; update the call to work.component to include model_name="amz.product.binding" so the mapper is instantiated for the correct model (i.e., modify the work.component invocation inside _get_mapper to pass model_name="amz.product.binding").
🧹 Nitpick comments (17)
README.md (1)
**26-26:** Missing `connector_amazon` entry in the addons table.

The PR adds a new `connector_amazon` module, but it's not listed in this addons table. Consider adding an entry for the new module to maintain consistency with other addons.

📝 Suggested addition after line 27

```diff
 [connector](connector/) | 16.0.1.0.1 | | Connector
+[connector_amazon](connector_amazon/) | 16.0.1.0.0 | | Amazon Seller Central integration for orders, stock, and prices
 [connector_base_product](connector_base_product/) | 16.0.1.0.0 | | Connector Base Product
```
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 26, Add a table row for the new module to the README addons table so it matches other addons; specifically add an entry for "connector_amazon" (use the module name connector_amazon and link format like [connector_amazon](connector_amazon/)) with the appropriate version string (e.g., 16.0.1.0.1) and a short description such as "Connector" so the README stays consistent with other entries.requirements.txt (1)
**1-4:** Consider pinning dependency versions for reproducible builds.

The dependencies `cryptography` and `requests` are added without version constraints. While this follows OCA conventions, unpinned dependencies can cause unexpected breakage when major versions are released. Consider adding minimum version constraints (e.g., `requests>=2.25.0`) for reproducibility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@requirements.txt` around lines 1 - 4, Update requirements.txt to pin or add minimum version constraints for the unpinned packages to ensure reproducible installs: change requests to at least requests>=2.25.0 and cryptography to at least cryptography>=3.4.7 (and optionally pin cachetools similarly or to a known compatible version). Ensure the entries for requests and cryptography are updated to the chosen version specifiers and run your dependency install/test to verify compatibility.connector_amazon/views/order_view.xml (1)
**21-42:** Consider organizing form fields into logical groups.

The form view places all fields in a single group. For better readability, consider separating into groups (e.g., "Amazon Details" vs "Sync Status"):

💡 Optional improvement

```diff
 <form string="Amazon Order">
     <sheet>
-        <group>
+        <group string="Amazon Details">
             <field name="external_id" />
             <field name="backend_id" />
             <field name="shop_id" />
             <field name="marketplace_id" />
             <field name="odoo_id" />
+        </group>
+        <group string="Status">
             <field name="status" />
             <field name="fulfillment_channel" />
             <field name="purchase_date" />
             <field name="last_update_date" />
             <field name="last_sync" />
         </group>
     </sheet>
 </form>
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/views/order_view.xml` around lines 21 - 42, The form view record view_amz_order_form for model amz.sale.order currently places all fields in a single <group>, making the UI crowded; split related fields into logical groups (e.g., an "Amazon Details" group containing external_id, backend_id, shop_id, marketplace_id, odoo_id, fulfillment_channel and purchase_date/last_update_date as needed, and a "Sync Status" group for status, last_sync) by replacing the single group with multiple <group> blocks (or nested <group>/<group col="2"> layouts) so the form is easier to scan and fields are semantically grouped.connector_amazon/tests/test_backend.py (1)
**304-322:** Test relies on a side effect of `_create_shop` creating the initial shop.

The test assumes `self.backend.shop_ids` already has one shop from the common setup, then creates `shop2` and expects `len == 2`. This implicit dependency on test fixture state could be made more explicit.

Consider adding a setup assertion or comment clarifying the expectation:

```python
# Assumes self.shop from CommonConnectorAmazonSpapi setup
self.assertEqual(len(self.backend.shop_ids), 1)  # Precondition
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_backend.py` around lines 304 - 322, The test test_backend_with_multiple_shops relies on a pre-existing shop created by the test fixture via CommonConnectorAmazonSpapi (referenced as self.shop) but doesn't assert that precondition; add an explicit precondition check before creating the second shop by asserting self.backend.shop_ids has length 1 and contains self.shop (using self.backend.shop_ids and self.shop) or add a clarifying comment so the dependency on _create_shop/common setup is explicit, then proceed to create shop2 and assert the final length is 2 and that both self.shop and shop2 are in self.backend.shop_ids.connector_amazon/tests/README.md (1)
**237-241:** Minor grammatical improvement for clarity.

The adverb placement can be improved for better readability.

📝 Suggested text fix

```diff
 ✅ **Mock External Dependencies**
 - Requests library mocked with `@mock.patch`
-- External API calls never actually made
+- External API calls are never actually made
 - Deterministic test behavior
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/README.md` around lines 237 - 241, Update the README checklist sentence "External API calls never actually made" to a grammatically correct phrasing (e.g., "External API calls are never actually made" or "No external API calls are made") so the adverb placement and readability improve; adjust the surrounding bullet to match tense consistently with the other items (e.g., "Requests library mocked with `@mock.patch`" → "Requests library is mocked with `@mock.patch`" if you choose present-tense consistency).connector_amazon/tests/test_order_shipment.py (1)
**56-66:** Unused unpacked variables `picking` and `carrier`.

Static analysis flags that `picking` and `carrier` are unpacked but never used in this test. Use underscores to indicate intentionally unused variables.

♻️ Prefix unused variables with underscore

```diff
 @mock.patch("odoo.addons.queue_job.models.base.DelayableRecordset.__getattr__")
 def test_push_shipment_creates_feed(self, mock_delay):
     """Test push_shipment creates a feed with correct XML."""
     mock_delay.return_value = mock.Mock()
-    binding, picking, carrier = self._setup_order_with_picking("SHIP-001")
+    binding, _picking, _carrier = self._setup_order_with_picking("SHIP-001")
     result = binding.push_shipment()
```

Same applies to line 96.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_order_shipment.py` around lines 56 - 66, The test unpacks binding, picking, carrier from _setup_order_with_picking in test_push_shipment_creates_feed but never uses picking and carrier; rename those unused variables to _picking and _carrier in that test (and any other tests in the same file with the same unused unpacking, e.g., the second occurrence noted in the review) so static analysis no longer flags them as unused while keeping binding as-is.connector_amazon/tests/test_competitive_price.py (1)
**16-28:** Avoid catching a blind exception when loading the sample data file.

The `except Exception:` on line 26 is too broad and could mask legitimate errors such as file not found, permission denied, or malformed JSON. This makes debugging harder when tests fail unexpectedly.

♻️ Proposed fix for specific exception handling

```diff
 def load_pricing_api_sample():
     here = os.path.dirname(__file__)
-    with open(os.path.join(here, "productPricingV0.json"), "r") as f:
-        data = json.load(f)
-    # Find the sample response for /products/pricing/v0/price
-    try:
-        return data["paths"]["/products/pricing/v0/price"]["get"]["responses"]["200"][
-            "examples"
-        ]["application/json"]
-    except Exception:
-        return {}
+    filepath = os.path.join(here, "productPricingV0.json")
+    if not os.path.exists(filepath):
+        return {}
+    with open(filepath, "r") as f:
+        data = json.load(f)
+    # Find the sample response for /products/pricing/v0/price
+    try:
+        return data["paths"]["/products/pricing/v0/price"]["get"]["responses"]["200"][
+            "examples"
+        ]["application/json"]
+    except (KeyError, TypeError, IndexError):
+        return {}
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_competitive_price.py` around lines 16 - 28, The helper load_pricing_api_sample currently swallows all exceptions with a broad except Exception; narrow this by removing the blanket except and only handle the expected missing-key case when extracting the nested response (e.g., catch KeyError and TypeError around the data["paths"]... lookup and return {} there), while allowing file I/O errors (FileNotFoundError, PermissionError) and JSON parsing errors (json.JSONDecodeError) to propagate so test failures surface real issues; reference the function load_pricing_api_sample and the nested lookup of data["paths"]["/products/pricing/v0/price"]["get"]["responses"]["200"]["examples"]["application/json"] when making the change.connector_amazon/tests/test_shop.py (1)
**797-808:** Test assertion is conditional and may silently pass without validating behavior.

The assertion at lines 803-808 only runs if `mock_push.called` is true, meaning the test could pass without actually verifying the expected behavior. This makes the test fragile and potentially misleading.

🔧 Suggested improvement

```diff
 with mock.patch.object(
     type(self.env["amz.shop"]), "action_push_stock"
 ) as mock_push:
     self.env["amz.shop"].cron_push_stock()

-    # Verify hourly shop was called - check if mock was called
-    if mock_push.called:
-        # Get the shops from the call
-        call_args = mock_push.call_args
-        if call_args and len(call_args.args) > 0:
-            called_shops = call_args.args[0]
-            self.assertIn(hourly_shop.id, called_shops.ids)
+    # Verify hourly shop was processed
+    mock_push.assert_called()
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_shop.py` around lines 797 - 808, The test currently only checks the call arguments if mock_push.called is true, letting the test silently pass; change the test to assert that action_push_stock was called (e.g. assertTrue/mock_push.called or assert_called_once) immediately after cron_push_stock(), then unconditionally inspect mock_push.call_args to extract the first positional arg and assert hourly_shop.id is in its .ids; reference the methods cron_push_stock and action_push_stock and the mock object mock_push and the fixture hourly_shop when making these assertions.connector_amazon/controllers/webhook.py (2)
**151-152:** SHA1 usage is mandated by the AWS SNS specification.

The static analysis flags SHA1 as insecure (S303), but AWS SNS requires SHA1 for signature verification per its specification. Consider adding a comment to clarify this is not a security weakness but a protocol requirement.

```diff
 # Verify the signature
 public_key = cert.public_key()
+# Note: SHA1 is required by AWS SNS signature verification spec
 public_key.verify(
     signature,
     string_to_sign.encode("utf-8"),
     padding.PKCS1v15(),
     hashes.SHA1(),
 )
```
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/controllers/webhook.py` around lines 151 - 152, The code is intentionally using hashes.SHA1() to comply with the AWS SNS signature spec but the static analyzer flags it as insecure; update the site where hashes.SHA1() is called (in webhook.py) to add a concise inline comment explaining that SHA1 is required by the AWS SNS specification for signature verification and therefore safe here, and add a linter suppression (e.g., # noqa: S303 or # nosec) alongside the comment so the static analysis is acknowledged and won't raise a false positive.
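For reference, the `string_to_sign` that the verify call above operates on is built from specific message fields in a documented order. A minimal builder following the AWS SNS canonical format (each signed field contributes a key line and a value line, and `Subject` is optional on `Notification` messages):

```python
# Builds the SNS canonical string-to-sign. Field sets follow the AWS SNS
# docs: Notification signs Message/MessageId/Subject/Timestamp/TopicArn/Type;
# SubscriptionConfirmation signs SubscribeURL and Token instead of Subject.
SNS_SIGNED_FIELDS = {
    "Notification": [
        "Message", "MessageId", "Subject", "Timestamp", "TopicArn", "Type",
    ],
    "SubscriptionConfirmation": [
        "Message", "MessageId", "SubscribeURL", "Timestamp", "Token",
        "TopicArn", "Type",
    ],
}


def sns_string_to_sign(message):
    fields = SNS_SIGNED_FIELDS[message["Type"]]
    parts = []
    for key in fields:
        if key == "Subject" and key not in message:
            continue  # Subject is optional on Notification messages
        parts.append(key)
        parts.append(message[key])
    return "\n".join(parts) + "\n"
```

The resulting string is what gets passed to `public_key.verify(...)` alongside the base64-decoded `Signature` field.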
**203-220:** Consider adding certificate cache expiration.

The certificate cache grows unboundedly and never expires certificates. While certificates are fetched from Amazon's URLs, adding a TTL or max-size limit would prevent potential memory issues in long-running processes.
Would you like me to propose a TTL-based cache implementation using a simple expiration mechanism?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/controllers/webhook.py` around lines 203 - 220, The _cert_cache currently stores certificates indefinitely; change it to store (cert, expiry_ts) entries and enforce a TTL (e.g., CERT_TTL_SECONDS) by: on lookup of cert_url check if entry exists and if expiry_ts > now—return cert, otherwise delete expired entry and proceed to fetch; when inserting after x509.load_pem_x509_certificate(), save (cert, now + TTL); also consider a simple max-size eviction (e.g., pop oldest when len(_cert_cache) exceeds MAX_CERT_CACHE) or use an LRU/TTL cache library—update references to _cert_cache and the cert fetch code paths (requests.get, x509.load_pem_x509_certificate) accordingly and ensure lookups remove expired entries before returning.connector_amazon/models/feed.py (2)
206-206: Defaulting unknown status to 'in_progress' may mask unexpected API responses. If Amazon introduces new status values, they'll be silently treated as in-progress, potentially causing feeds to poll indefinitely. Consider logging unknown statuses.
🛠️ Proposed fix
```diff
 new_state = state_mapping.get(processing_status, "in_progress")
+if processing_status not in state_mapping:
+    _logger.warning(
+        "Unknown feed processing status '%s' for feed %s, treating as in_progress",
+        processing_status,
+        self.external_feed_id,
+    )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/feed.py` at line 206, The code currently defaults unknown processing_status values to "in_progress" via state_mapping.get(processing_status, "in_progress"), which can hide new API statuses; update the logic around state_mapping and the assignment to new_state (and where processing_status is read) to detect when get() returns the fallback, log a warning including processing_status and any feed identifier, and choose a safer default (e.g., "unknown" or "error") or trigger retry/backoff behavior instead of silently marking as "in_progress". Ensure the log uses the existing logger and include identifiers (processing_status, new_state, and the feed id or request id) so unknown statuses are visible in logs for later handling.
141-156: Add retry logic with exponential backoff for S3 feed upload. Pipeline failures show network timeouts during feed operations. The S3 upload includes a 60-second timeout, but lacks retry logic for transient failures. Amazon's SP-API best practices recommend exponential backoff with jitter and a maximum retry count for resilience on 5xx errors and timeouts.
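A generic sketch of the backoff-with-jitter pattern described above. The `do_upload` callable, attempt count, and delays are illustrative assumptions; the real code would wrap the `requests.put` call and catch only transient errors such as timeouts and 5xx responses:

```python
import random
import time


def upload_with_retry(do_upload, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry a transient-failure-prone call with exponential backoff and jitter.

    ``do_upload`` should raise on timeouts/5xx; injecting ``sleep`` keeps the
    sketch testable without real delays.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return do_upload()
        except Exception:
            if attempt == max_attempts:
                raise  # surface the last failure to the caller
            # backoff doubles each attempt, plus a little random jitter
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            sleep(delay)
```

The jitter spreads retries out so that many workers failing at once do not hammer the endpoint in lockstep.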
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/feed.py` around lines 141 - 156, In _upload_feed_content implement retry logic with exponential backoff and jitter for transient failures: catch requests.exceptions.Timeout, requests.exceptions.ConnectionError, and HTTP 5xx responses from the requests.put call, retrying up to a configurable max (e.g., 3-5 attempts) with backoff doubling each attempt and a small random jitter between sleeps; preserve the 60s timeout per request, log each retry attempt, and after exhausting retries re-raise the last exception or call response.raise_for_status() so failures surface.

connector_amazon/components/mapper.py (1)
231-233: Add defensive handling for price conversion. The `float()` conversions could raise `ValueError` if the API returns non-numeric strings. While Amazon's API should return valid data, defensive coding prevents import failures from corrupted responses.

🛡️ Proposed fix
```diff
+def _safe_float(value, default=0.0):
+    try:
+        return float(value) if value else default
+    except (ValueError, TypeError):
+        return default
+
 return {
     "product_binding_id": product_binding.id,
     "asin": pricing_data.get("ASIN"),
     "marketplace_id": product_binding.marketplace_id.id,
     "competitive_price_id": comp_price.get("CompetitivePriceId"),
-    "landed_price": float(landed_price_data.get("Amount", 0)),
-    "listing_price": float(listing_price_data.get("Amount", 0)),
-    "shipping_price": float(shipping_data.get("Amount", 0)),
+    "landed_price": _safe_float(landed_price_data.get("Amount", 0)),
+    "listing_price": _safe_float(listing_price_data.get("Amount", 0)),
+    "shipping_price": _safe_float(shipping_data.get("Amount", 0)),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/mapper.py` around lines 231 - 233, The price mapping currently calls float(...) directly on landed_price_data.get("Amount", 0), listing_price_data.get("Amount", 0), and shipping_data.get("Amount", 0) which can raise ValueError for non-numeric strings; change this to use a small defensive helper (e.g., safe_float) or inline try/except that attempts float(...) and falls back to 0.0 (or None if preferred) on ValueError/TypeError, and update the mapped keys "landed_price", "listing_price", and "shipping_price" to use that safe conversion so corrupted API responses don't crash the import.

connector_amazon/components/backend_adapter.py (3)
314-344: Consider validating that marketplace IDs are provided. If both `marketplace_ids` and `marketplace_id` are `None`, line 335 will send `marketplaceIds=""`, which Amazon's API will reject. Adding a validation would provide a clearer error message.

♻️ Suggested improvement
```diff
 ids_list = marketplace_ids or ([marketplace_id] if marketplace_id else [])
+if not ids_list:
+    raise ValueError("At least one marketplace ID is required")
 params = {"marketplaceIds": ",".join(ids_list)}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/backend_adapter.py` around lines 314 - 344, The search_catalog_items method currently allows marketplace_ids and marketplace_id to be both None which results in marketplaceIds="" being sent; modify search_catalog_items to validate that at least one marketplace ID is provided (inspect ids_list after computing it) and raise a clear exception (e.g., ValueError with a descriptive message) or return an appropriate error before building params and calling _call_api; also ensure params construction only joins marketplaceIds when ids_list is non-empty to avoid sending an empty string in the request.
694-711: `patch_listings_item` hardcodes productType as "PRODUCT". Line 708 hardcodes `"productType": "PRODUCT"`, but Amazon has many product types (LUGGAGE, SHOES, etc.). This could cause issues when patching items with different product types. Consider making it a parameter like `put_listings_item` does.

♻️ Suggested fix
```diff
-def patch_listings_item(self, seller_sku, marketplace_ids, patches):
+def patch_listings_item(self, seller_sku, marketplace_ids, patches, product_type="PRODUCT"):
     """Partially update a listing

     Args:
         seller_sku: Seller SKU
         marketplace_ids: List of marketplace IDs
         patches: List of JSON Patch operations
+        product_type: Amazon product type (default: "PRODUCT")

     Returns:
         dict: Update response with status
     """
     endpoint = (
         f"/listings/2021-08-01/items/{self.backend_record.seller_id}/{seller_sku}"
     )
-    payload = {"productType": "PRODUCT", "patches": patches}
+    payload = {"productType": product_type, "patches": patches}
     params = {"marketplaceIds": ",".join(marketplace_ids)}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/backend_adapter.py` around lines 694 - 711, The patch_listings_item function currently hardcodes productType as "PRODUCT", which breaks cases for other product types; modify patch_listings_item to accept a product_type parameter (e.g., product_type=None or with same default behavior as put_listings_item) and use that value when constructing payload instead of the fixed string; update callers if needed to pass the correct product type and ensure the payload creation in patch_listings_item uses the product_type variable (refer to function patch_listings_item and mirror the parameter pattern used by put_listings_item).
616-625: Add ARN format validation to prevent IndexError on malformed input. Lines 621-622 assume the ARN follows the standard format (`arn:partition:service:region:account-id:resource`). A malformed ARN will raise an `IndexError`.

🛡️ Proposed defensive check
```diff
 elif resource_type == "EVENT_BRIDGE":
+    arn_parts = arn.split(":")
+    if len(arn_parts) < 5:
+        raise ValueError(f"Invalid ARN format: {arn}")
     payload = {
         "name": name,
         "resourceSpecification": {
             "eventBridge": {
-                "accountId": arn.split(":")[4],  # Extract account ID from ARN
-                "region": arn.split(":")[3],  # Extract region from ARN
+                "accountId": arn_parts[4],
+                "region": arn_parts[3],
             }
         },
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/backend_adapter.py` around lines 616 - 625, The EVENT_BRIDGE branch currently assumes arn.split(":")[3] and [4] exist and can raise IndexError on malformed ARNs; in the block handling resource_type == "EVENT_BRIDGE" (where payload is built using arn), validate the ARN first by splitting into parts (e.g., parts = arn.split(":")) and ensure len(parts) >= 6 before extracting region/accountId, otherwise handle the error gracefully (raise a clear ValueError or log and return a meaningful error response) and avoid constructing payload with missing fields; update the code that builds payload to use the validated parts (parts[3], parts[4]) so malformed input cannot cause an IndexError.

connector_amazon/models/competitive_price.py (1)
146-156: Price comparison ignores currency differences.
`price_difference` computes `listing_price - product.list_price`, but these may be in different currencies (`self.currency_id` vs. company currency). For cross-currency scenarios, this produces meaningless results.
♻️ Suggested fix with currency conversion
```diff
 @api.depends("listing_price", "product_binding_id.odoo_id.list_price")
 def _compute_price_difference(self):
     """Calculate difference between competitive price and our price"""
     for record in self:
-        if record.listing_price and record.product_binding_id.odoo_id.list_price:
+        product = record.product_binding_id.odoo_id
+        if record.listing_price and product.list_price:
+            # Convert our price to competitive price currency for comparison
+            our_price = product.list_price
+            if record.currency_id != record.env.company.currency_id:
+                our_price = record.env.company.currency_id._convert(
+                    product.list_price,
+                    record.currency_id,
+                    record.env.company,
+                    fields.Date.today(),
+                )
             record.price_difference = (
-                record.listing_price - record.product_binding_id.odoo_id.list_price
+                record.listing_price - our_price
             )
         else:
             record.price_difference = 0.0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/competitive_price.py` around lines 146 - 156, The _compute_price_difference method on CompetitivePrice currently subtracts product_binding_id.odoo_id.list_price from listing_price without converting currencies; update _compute_price_difference to convert one price into the other's currency (e.g., convert listing_price to the product/company currency or vice versa) before subtraction by using the currency conversion utilities (res.currency._convert or env['res.currency'].compute) and the relevant currency fields (record.currency_id and record.product_binding_id.odoo_id.currency_id or company currency); ensure the conversion uses the correct date (e.g., record.date or fields.Date.context_today(self)) and fallback to 0.0 when prices are missing.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `connector_amazon/security/ir.model.access.csv` is excluded by `!**/*.csv`
- `connector_amazon/static/description/icon.png` is excluded by `!**/*.png`
📒 Files selected for processing (65)
- README.md
- connector/README.rst
- connector/__manifest__.py
- connector/static/description/index.html
- connector/tests/test_mapper.py
- connector_amazon/README.rst
- connector_amazon/__init__.py
- connector_amazon/__manifest__.py
- connector_amazon/components/__init__.py
- connector_amazon/components/backend_adapter.py
- connector_amazon/components/binder.py
- connector_amazon/components/mapper.py
- connector_amazon/controllers/__init__.py
- connector_amazon/controllers/webhook.py
- connector_amazon/data/ir_cron.xml
- connector_amazon/models/__init__.py
- connector_amazon/models/backend.py
- connector_amazon/models/competitive_price.py
- connector_amazon/models/feed.py
- connector_amazon/models/marketplace.py
- connector_amazon/models/notification_log.py
- connector_amazon/models/order.py
- connector_amazon/models/product_binding.py
- connector_amazon/models/res_partner.py
- connector_amazon/models/shop.py
- connector_amazon/readme/ARCHITECTURE.rst
- connector_amazon/readme/CONFIGURATION.rst
- connector_amazon/readme/CONTRIBUTORS.rst
- connector_amazon/readme/DESCRIPTION.rst
- connector_amazon/readme/FEATURES.rst
- connector_amazon/readme/HISTORY.rst
- connector_amazon/readme/USAGE.rst
- connector_amazon/static/description/index.html
- connector_amazon/tests/README.md
- connector_amazon/tests/__init__.py
- connector_amazon/tests/common.py
- connector_amazon/tests/ordersV0.json
- connector_amazon/tests/productPricingV0.json
- connector_amazon/tests/test_adapters.py
- connector_amazon/tests/test_backend.py
- connector_amazon/tests/test_backend_subscriptions.py
- connector_amazon/tests/test_competitive_price.py
- connector_amazon/tests/test_ee_coexistence.py
- connector_amazon/tests/test_feed.py
- connector_amazon/tests/test_mapper.py
- connector_amazon/tests/test_marketplace.py
- connector_amazon/tests/test_notification_log.py
- connector_amazon/tests/test_order.py
- connector_amazon/tests/test_order_shipment.py
- connector_amazon/tests/test_product_binding.py
- connector_amazon/tests/test_shop.py
- connector_amazon/tests/test_shop_sync.py
- connector_amazon/tests/test_webhook.py
- connector_amazon/views/amz_menu.xml
- connector_amazon/views/backend_view.xml
- connector_amazon/views/competitive_price_view.xml
- connector_amazon/views/feed_view.xml
- connector_amazon/views/marketplace_view.xml
- connector_amazon/views/notification_log_view.xml
- connector_amazon/views/order_view.xml
- connector_amazon/views/product_binding_view.xml
- connector_amazon/views/shop_view.xml
- requirements.txt
- setup/connector_amazon/odoo/addons/connector_amazon
- setup/connector_amazon/setup.py
✅ Files skipped from review due to trivial changes (2)
- connector/__manifest__.py
- connector_amazon/readme/HISTORY.rst
🚧 Files skipped from review as they are similar to previous changes (18)
- connector_amazon/views/feed_view.xml
- connector_amazon/views/shop_view.xml
- connector_amazon/tests/test_marketplace.py
- connector_amazon/readme/USAGE.rst
- connector_amazon/readme/CONFIGURATION.rst
- connector_amazon/README.rst
- connector_amazon/components/__init__.py
- connector_amazon/models/__init__.py
- connector_amazon/tests/__init__.py
- connector_amazon/components/binder.py
- connector_amazon/tests/test_product_binding.py
- setup/connector_amazon/odoo/addons/connector_amazon
- connector_amazon/readme/DESCRIPTION.rst
- connector_amazon/readme/ARCHITECTURE.rst
- connector_amazon/views/amz_menu.xml
- connector_amazon/models/res_partner.py
- connector_amazon/views/product_binding_view.xml
- setup/connector_amazon/setup.py
Critical fixes:
- Upload feed_content to presigned S3 URL in create_inventory_feed
- Add @api.model to cron_push_shipments, iterate shops explicitly
- Add missing _create_done_picking and _create_sample_listings_tsv helpers to test common base class
- Replace datetime.timedelta with relativedelta in XML domain filters

Major fixes:
- Add currency validation/conversion in action_apply_to_pricelist
- Use search-or-create for sale_amazon module record in EE tests
- Move _set_qty_in_stock_location to common test base class

Minor doc fixes:
- Update USAGE.rst with actual connector setup instructions
- Update HISTORY.rst changelog from 12.0 to 16.0.1.0.0
- Fix ARCHITECTURE.rst file paths to match actual module layout

Co-Authored-By: Claude Opus 4.6 <[email protected]>
/ocabot merge
Critical fixes:
- Fix create_price_feed: use 3-step Feeds API flow (create doc, upload to presigned S3 URL, create feed) instead of passing content as doc ID
- Add SNS subscription tracking fields and implement action_manage_subscriptions() for webhook subscription management
- Fix order-item pagination: pass next_token to get_order_items()
- Remove fetch_date from competitive_price unique constraint to prevent duplicate key violations during bulk sync

Major fixes:
- Fix merchant_id: use seller_id instead of lwa_client_id in shipment and inventory feed XML builders
- Skip order lines without Amazon binding in shipment feed XML
- Remove premature shipment_confirmed=True from push_shipment()
- Downgrade PII email logging from INFO to DEBUG
- Move `import requests` to module level in feed.py
- Guard against None payload_json in read-only mode feed logging

Minor fixes:
- Validate ARN format before indexing in EventBridge destination
- Fix datetime.now() to fields.Datetime.now() for timezone awareness
- Fix lambda variable capture in marketplace filtering
- Remove dead code (unused row.get in shop listings import)
- Add exception logging in cron_sync_competitive_prices
- Add XML escaping for merchant_id and seller_sku in inventory feed
- Normalize API response shape in product_binding before iteration

Test fixes:
- Rewrite test_mapper.py: use backend.work_on() context manager to get mapper components instead of cls.env["component.name"] (Components are not Odoo models)
- Fix test_webhook.py: patch type(self.backend) instead of instance
- Add @tagged decorator to test_shop.py

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Actionable comments posted: 5
🧹 Nitpick comments (9)
connector_amazon/components/backend_adapter.py (1)
200-216: Move `requests` import to module level. The `requests` import is inside the function body at line 200 (and similarly at line 238). While this works, it's cleaner to import at module level since `requests` is already a module dependency.

♻️ Proposed fix
Add at the top of the file (after line 1):
`import requests`

Then remove the inline imports at lines 200 and 238.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/components/backend_adapter.py` around lines 200 - 216, Move the inline "import requests" statements to the module level: add a single "import requests" at the top of the file and remove the local imports inside the function(s) that call requests (the blocks around feed_adapter.create_feed_document / requests.put using upload_url and feed_content and the similar usage near line ~238); ensure the code continues to call requests.put(...) without any local import so the module-level import is used.

connector_amazon/models/product_binding.py (1)
100-110: Mapper is acquired inside `with` block but used outside it. The mapper component is obtained within the `work_on` context manager but then used outside that context (lines 106-110). While this may work in practice, it's safer to keep the usage within the context manager scope.

♻️ Proposed fix to keep mapper usage within context
```diff
 # Use mapper to transform API response via work_on context
 with self.backend_id.work_on("amz.product.binding") as work:
     mapper = work.component(
         usage="import.mapper", model_name="amz.product.binding"
     )
-
-competitive_price_vals_list = []
-for pricing_data in result:
-    vals = mapper.map_competitive_price(pricing_data, self)
-    if vals:
-        competitive_price_vals_list.append(vals)
+    competitive_price_vals_list = []
+    for pricing_data in result:
+        vals = mapper.map_competitive_price(pricing_data, self)
+        if vals:
+            competitive_price_vals_list.append(vals)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/product_binding.py` around lines 100 - 110, The mapper component is created inside the with self.backend_id.work_on("amz.product.binding") context but then used afterwards; move the loop that iterates over result and calls mapper.map_competitive_price(...) (which builds competitive_price_vals_list) into the with-block so mapper is used while the work_on context is active; locate the mapper acquisition (work.component with usage="import.mapper", model_name="amz.product.binding") and the loop that fills competitive_price_vals_list and place the loop body inside that same with scope to ensure safe resource/context lifetime.

connector_amazon/models/order.py (1)
374-383: Clarify operator precedence and simplify logic. The chained `and`/`or` expressions are hard to read and the static analyzer flags them. The `ship_method` logic is particularly convoluted.

♻️ Proposed refactor for clarity
```diff
-carrier_name = picking.carrier_id and picking.carrier_id.name or ""
+carrier_name = picking.carrier_id.name if picking.carrier_id else ""
 tracking = picking.carrier_tracking_ref or ""
-ship_method = (
-    self.marketplace_id
-    and self.marketplace_id.get_delivery_carrier_for_amazon_shipping(
-        self.odoo_id.carrier_id.name
-    )
-    and self.odoo_id.carrier_id.name
-    or "Standard"
-)
+ship_method = "Standard"
+if self.marketplace_id and self.odoo_id.carrier_id:
+    mapped_carrier = self.marketplace_id.get_delivery_carrier_for_amazon_shipping(
+        self.odoo_id.carrier_id.name
+    )
+    if mapped_carrier:
+        ship_method = self.odoo_id.carrier_id.name
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/order.py` around lines 374 - 383, Replace the unclear chained boolean expressions with explicit, readable conditionals: use carrier_name = getattr(picking.carrier_id, "name", "") and tracking = picking.carrier_tracking_ref or "" to safely read values, and set ship_method with a clear if/else (or a ternary) — e.g. if self.marketplace_id and self.marketplace_id.get_delivery_carrier_for_amazon_shipping(self.odoo_id.carrier_id.name): ship_method = self.odoo_id.carrier_id.name else: ship_method = "Standard" — referencing carrier_name, tracking, ship_method, marketplace_id, get_delivery_carrier_for_amazon_shipping and odoo_id.carrier_id.name to locate the changes.

connector_amazon/tests/common.py (1)
218-235: Raw SQL insert may cause ORM cache inconsistencies. The fallback SQL insert for `delivery.carrier` (lines 223-234) bypasses the ORM, which could cause cache inconsistencies if the carrier is later modified through ORM methods in the same test. Consider calling `self.env["delivery.carrier"].invalidate_recordset()` after the SQL insert, or ensure the browsed record is refreshed.

♻️ Proposed fix
```diff
 carrier_id = self.env.cr.fetchone()[0]
+self.env["delivery.carrier"].invalidate_recordset()
 return carrier_model.browse(carrier_id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/common.py` around lines 218 - 235, The raw SQL fallback using self.env.cr.execute inserts into delivery_carrier bypassing the ORM and can leave stale cache; after fetching carrier_id and before returning carrier_model.browse(carrier_id), call the ORM invalidation/refresh (e.g. self.env["delivery.carrier"].invalidate_recordset() or use carrier_model.invalidate_cache()/re-read the record) so subsequent ORM operations see the up-to-date row; update the try/except block around carrier_model.create and the path that uses self.env.cr.execute to perform this invalidation/refresh prior to returning the browsed record.

connector_amazon/models/shop.py (3)
293-296: Consider moving imports to module level. The `gzip`, `time`, and `requests` imports inside `sync_catalog_bulk` add overhead on each call. Since `requests` is already imported elsewhere in the codebase and the others are standard library modules, consider moving them to the module level for consistency and a slight performance improvement.
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` around lines 293 - 296, The imports gzip, time, and requests are currently inside the sync_catalog_bulk function causing repeated import overhead; move these three imports to the module level (top of connector_amazon/models/shop.py) and remove the in-function import statements so sync_catalog_bulk uses the module-level gzip, time, and requests symbols directly.
565-567: Redundant exception object in logging.exception call. Per static analysis, `logging.exception` already captures the exception, so passing `str(e)` is redundant.

♻️ Proposed fix
```diff
 except Exception as e:
-    _logger.exception("Error during catalog sync: %s", str(e))
+    _logger.exception("Error during catalog sync")
     raise UserError(_("Error syncing catalog: %s") % str(e)) from e
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` around lines 565 - 567, The logging call is passing the exception string redundantly to _logger.exception; update the except block to call _logger.exception("Error during catalog sync") (no str(e) argument) since logging.exception already records the exception/traceback, and keep the subsequent raise UserError(_("Error syncing catalog: %s") % str(e)) from e unchanged so the user-facing error still includes the message.
617-617: Timezone-naive midnight check may miss scheduled runs. The daily cron checks `datetime.now().hour == 0` to trigger daily shops, but this uses server-local time without timezone awareness. If the cron runs slightly before or after midnight (common with cron scheduling variance), daily shops might be skipped or processed twice.

Consider using a more robust approach, such as checking whether enough time has passed since `last_stock_sync` or `last_order_sync`, or storing and comparing dates rather than relying on exact hour checks.

Also applies to: 642-642
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/shop.py` at line 617, The check using datetime.now().hour == 0 is timezone‑naive and brittle; replace it with a timezone-aware date/elapsed-time comparison: get a timezone-aware "now" (e.g. use datetime.utcnow() with tz=timezone.utc or your app's timezone helper) and then either (a) compare dates (if last_stock_sync.date() != now.date() / last_order_sync.date() != now.date()) to ensure the job runs once per day, or (b) check elapsed time (now - last_stock_sync >= timedelta(hours=24) / now - last_order_sync >= timedelta(hours=24)) to tolerate cron jitter. Apply this change to the two occurrences using the expression datetime.now().hour == 0 and reference last_stock_sync and last_order_sync when implementing the new checks.connector_amazon/tests/test_shop.py (1)
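The elapsed-time variant suggested above can be sketched as follows; the function name, the 23-hour threshold, and the UTC default are illustrative assumptions, not the connector's actual code:

```python
from datetime import datetime, timedelta, timezone


def is_daily_sync_due(last_sync, now=None, min_interval=timedelta(hours=23)):
    """Decide whether a daily job should run, tolerating cron jitter.

    Compares elapsed time since ``last_sync`` instead of checking for an
    exact midnight hour; a threshold slightly under 24h absorbs the
    scheduling variance that makes ``hour == 0`` checks unreliable.
    """
    if last_sync is None:
        return True  # never synced: run now
    now = now or datetime.now(timezone.utc)
    return now - last_sync >= min_interval
```

A cron firing at 00:05 instead of 00:00 still triggers the sync, and a duplicate run minutes later is rejected because too little time has elapsed.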
652-684: Test creates order binding without `odoo_id`, which may be incomplete. The test at lines 655-663 creates an `amz.sale.order` without an `odoo_id` (linked `sale.order`). If `_get_last_done_picking` or `push_shipment` methods require a linked sale order, this test may not exercise the complete code path. Consider using the `self._create_amazon_order()` helper, which properly creates the linked sale order.

Also, line 666 uses `self.env.ref("delivery.free_delivery_carrier")`, which may not exist in minimal test databases.

♻️ Proposed fix using helper
```diff
 def test_cron_push_shipments(self):
     """Test cron_push_shipments queues shipment jobs for shipped orders."""
-    # Create order binding with tracking info
-    order = self.env["amz.sale.order"].create(
-        {
-            "external_id": "111-7777777-7777777",
-            "backend_id": self.backend.id,
-            "marketplace_id": self.marketplace.id,
-            "partner_id": self.partner.id,
-            "shipment_confirmed": False,
-        }
-    )
+    # Create order binding with linked sale.order
+    order = self._create_amazon_order(
+        external_id="111-7777777-7777777",
+        shipment_confirmed=False,
+    )

     # Create picking with tracking
-    carrier = self.env.ref("delivery.free_delivery_carrier")
+    carrier = self._create_test_carrier()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/tests/test_shop.py` around lines 652 - 684, The test_cron_push_shipments creates an amz.sale.order without a linked odoo_id and relies on a carrier reference that may be missing; update the test to create the order via the existing helper (use self._create_amazon_order() or equivalent) so the amz.sale.order has a proper linked sale.order (odoo_id) before patching type(order)._get_last_done_picking, and ensure the carrier exists in the minimal test DB (either obtain a guaranteed carrier via a test fixture/helper or create a carrier record instead of using self.env.ref("delivery.free_delivery_carrier")); keep the patch of _get_last_done_picking and then call self.shop.cron_push_shipments() to exercise the full push_shipment path.

connector_amazon/models/feed.py (1)
225-225: Use explicit conversion flag instead of `str(e)`. Per static analysis hint, the `str(e)` call is redundant inside an f-string. Use an explicit conversion flag for cleaner code.

♻️ Proposed fix
```diff
-"last_state_message": f"Status check failed: {str(e)}",
+"last_state_message": f"Status check failed: {e!s}",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@connector_amazon/models/feed.py` at line 225, Replace the redundant str(e) inside the f-string for the "last_state_message" assignment with an explicit conversion flag: change the interpolation to use {e!s} so the line becomes "last_state_message": f"Status check failed: {e!s}", ensuring cleaner, static-analysis-friendly string conversion.
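For reference, `{e!s}` and `{str(e)}` produce identical output; the `!s` conversion flag just makes the `str()` call explicit without the redundant function call inside the f-string (the `FeedError` class below is a hypothetical stand-in for whatever exception the feed code raises):

```python
class FeedError(Exception):
    """Hypothetical exception used only to illustrate the conversion."""

    def __str__(self):
        return "quota exceeded"


e = FeedError()
# !s applies str() explicitly; for exceptions this matches str(e)
msg_explicit = f"Status check failed: {e!s}"
msg_str = f"Status check failed: {str(e)}"
assert msg_explicit == msg_str == "Status check failed: quota exceeded"
```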
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@connector_amazon/components/backend_adapter.py`:
- Around line 209-216: The S3 upload via requests.put in the feed upload step
can fail silently and the code proceeds to create_feed with an empty document;
update the upload logic in the method that calls create_feed (reference
upload_url, feed_content, requests.put, and document_id) to check for exceptions
and non-2xx responses: wrap requests.put in a try/except, inspect
response.status_code (treat >=400 as failure), log or raise an error and
abort/return before calling create_feed so you don't submit an empty feed; apply
the identical change to the create_inventory_feed upload block (same
upload_url/feed_content/requests.put flow) so both upload paths validate success
before proceeding.
In `@connector_amazon/models/backend.py`:
- Around line 659-673: The code is passing an HTTP webhook URL
(self.webhook_url) as the arn to adapter.create_destination while using
resource_type="SQS", which is invalid; fix by either (A) requiring/using a real
SQS queue ARN: obtain the SQS queue ARN (e.g., from configuration or by creating
the queue), pass that ARN into adapter.create_destination and keep
resource_type="SQS" and then set up forwarding from SQS to the webhook, or (B)
switch to EventBridge: change resource_type to "EVENT_BRIDGE" and pass the
EventBridge event bus ARN (not the HTTP webhook URL) to
adapter.create_destination (and update any config field used to hold the
EventBridge ARN). Ensure the created destination id is still saved to
self.sns_destination_id and do not send the HTTP webhook URL as the arn
parameter.
In `@connector_amazon/models/feed.py`:
- Around line 167-171: The payload construction is dereferencing
self.marketplace_id.marketplace_id without checking self.marketplace_id (it can
be null due to ondelete="set null"); update the code that builds payload (the
block that references feed_type and marketplace_id) to first check if
self.marketplace_id is None and handle it explicitly—either raise a clear
ValueError/RuntimeError indicating missing marketplace id or provide a safe
fallback, then use self.marketplace_id.marketplace_id only after the null-check
so AttributeError cannot be raised.
In `@connector_amazon/models/order.py`:
- Around line 101-106: The loop over picking.move_ids uses a lambda that
miscaptures loop variable move and names its arg `l` ambiguously; fix by
renaming the lambda argument to a clear name (e.g., line or order_line_item) and
bind the current move's product id into the lambda default to avoid closure
issues, for example replace self.odoo_id.order_line.filtered(lambda l:
l.product_id == move.product_id)[:1] with either
self.odoo_id.order_line.filtered(lambda order_line_item, pid=move.product_id:
order_line_item.product_id == pid)[:1] or use a generator like next((ol for ol
in self.odoo_id.order_line if ol.product_id == move.product_id), None) and store
as order_line to ensure correct matching per move.
In `@connector_amazon/static/description/index.html`:
- Line 372: Update the insecure LGPL link in the HTML snippet by replacing the
http:// URL in the anchor whose image alt is "License: LGPL-3" (the <a> tag
wrapping the <img alt="License: LGPL-3">) so its href uses https:// instead of
http://; ensure the anchor href becomes
"https://www.gnu.org/licenses/lgpl-3.0-standalone.html".
---
Nitpick comments:
In `@connector_amazon/components/backend_adapter.py`:
- Around line 200-216: Move the inline "import requests" statements to the
module level: add a single "import requests" at the top of the file and remove
the local imports inside the function(s) that call requests (the blocks around
feed_adapter.create_feed_document / requests.put using upload_url and
feed_content and the similar usage near line ~238); ensure the code continues to
call requests.put(...) without any local import so the module-level import is
used.
In `@connector_amazon/models/feed.py`:
- Line 225: Replace the redundant str(e) inside the f-string for the
"last_state_message" assignment with an explicit conversion flag: change the
interpolation to use {e!s} so the line becomes "last_state_message": f"Status
check failed: {e!s}", ensuring cleaner, static-analysis-friendly string
conversion.
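The equivalence is easy to verify in isolation; the exception text below is invented for the example:

```python
# {e!s} is equivalent to str(e) inside an f-string, just expressed with
# an explicit conversion flag that linters such as Ruff (RUF010) prefer.
e = ValueError("timeout talking to SP-API")
message = f"Status check failed: {e!s}"
```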
In `@connector_amazon/models/order.py`:
- Around line 374-383: Replace the unclear chained boolean expressions with
explicit, readable conditionals: use carrier_name = getattr(picking.carrier_id,
"name", "") and tracking = picking.carrier_tracking_ref or "" to safely read
values, and set ship_method with a clear if/else (or a ternary) — e.g. if
self.marketplace_id and
self.marketplace_id.get_delivery_carrier_for_amazon_shipping(self.odoo_id.carrier_id.name):
ship_method = self.odoo_id.carrier_id.name else: ship_method = "Standard" —
referencing carrier_name, tracking, ship_method, marketplace_id,
get_delivery_carrier_for_amazon_shipping and odoo_id.carrier_id.name to locate
the changes.
In `@connector_amazon/models/product_binding.py`:
- Around line 100-110: The mapper component is created inside the with
self.backend_id.work_on("amz.product.binding") context but then used afterwards;
move the loop that iterates over result and calls
mapper.map_competitive_price(...) (which builds competitive_price_vals_list)
into the with-block so mapper is used while the work_on context is active;
locate the mapper acquisition (work.component with usage="import.mapper",
model_name="amz.product.binding") and the loop that fills
competitive_price_vals_list and place the loop body inside that same with scope
to ensure safe resource/context lifetime.
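The lifetime concern can be illustrated with a toy context manager standing in for `work_on`; every name here is illustrative, not the connector's actual API:

```python
# Sketch of the rule "use what you acquired inside the with-block while
# the block is still open". The dict plays the role of a component whose
# context is torn down on exit.
from contextlib import contextmanager

@contextmanager
def work_on(model_name):
    ctx = {"model": model_name, "open": True}
    try:
        yield ctx
    finally:
        ctx["open"] = False  # context torn down on exit

with work_on("amz.product.binding") as work:
    mapper = work          # acquired inside the context
    assert mapper["open"]  # safe to use here, while the context is active

# After the with-block the context is closed; using `mapper` now would
# rely on a torn-down resource, which is exactly what the review flags.
```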
In `@connector_amazon/models/shop.py`:
- Around line 293-296: The imports gzip, time, and requests are currently inside
the sync_catalog_bulk function causing repeated import overhead; move these
three imports to the module level (top of connector_amazon/models/shop.py) and
remove the in-function import statements so sync_catalog_bulk uses the
module-level gzip, time, and requests symbols directly.
- Around line 565-567: The logging call is passing the exception string
redundantly to _logger.exception; update the except block to call
_logger.exception("Error during catalog sync") (no str(e) argument) since
logging.exception already records the exception/traceback, and keep the
subsequent raise UserError(_("Error syncing catalog: %s") % str(e)) from e
unchanged so the user-facing error still includes the message.
- Line 617: The check using datetime.now().hour == 0 is timezone‑naive and
brittle; replace it with a timezone-aware date/elapsed-time comparison: get a
timezone-aware "now" (e.g. use datetime.utcnow() with tz=timezone.utc or your
app's timezone helper) and then either (a) compare dates (if
last_stock_sync.date() != now.date() / last_order_sync.date() != now.date()) to
ensure the job runs once per day, or (b) check elapsed time (now -
last_stock_sync >= timedelta(hours=24) / now - last_order_sync >=
timedelta(hours=24)) to tolerate cron jitter. Apply this change to the two
occurrences using the expression datetime.now().hour == 0 and reference
last_stock_sync and last_order_sync when implementing the new checks.
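A minimal sketch of the elapsed-time variant (option b), using hypothetical names rather than the module's own fields:

```python
# Hedged sketch: should_run_daily_sync and last_sync are illustrative
# names, not the connector's. Unlike `datetime.now().hour == 0`, an
# elapsed-time check tolerates cron jitter and is explicit about the
# timezone of both operands.
from datetime import datetime, timedelta, timezone

def should_run_daily_sync(last_sync, now=None):
    """Return True once at least 24 hours have passed since last_sync."""
    if now is None:
        now = datetime.now(timezone.utc)
    if last_sync is None:
        return True  # never synced: run immediately
    return now - last_sync >= timedelta(hours=24)
```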
In `@connector_amazon/tests/common.py`:
- Around line 218-235: The raw SQL fallback using self.env.cr.execute inserts
into delivery_carrier bypassing the ORM and can leave stale cache; after
fetching carrier_id and before returning carrier_model.browse(carrier_id), call
the ORM invalidation/refresh (e.g.
self.env["delivery.carrier"].invalidate_recordset() or use
carrier_model.invalidate_cache()/re-read the record) so subsequent ORM
operations see the up-to-date row; update the try/except block around
carrier_model.create and the path that uses self.env.cr.execute to perform this
invalidation/refresh prior to returning the browsed record.
In `@connector_amazon/tests/test_shop.py`:
- Around line 652-684: The test_cron_push_shipments creates an amz.sale.order
without a linked odoo_id and relies on a carrier reference that may be missing;
update the test to create the order via the existing helper (use
self._create_amazon_order() or equivalent) so the amz.sale.order has a proper
linked sale.order (odoo_id) before patching type(order)._get_last_done_picking,
and ensure the carrier exists in the minimal test DB (either obtain a guaranteed
carrier via a test fixture/helper or create a carrier record instead of using
self.env.ref("delivery.free_delivery_carrier")); keep the patch of
_get_last_done_picking and then call self.shop.cron_push_shipments() to exercise
the full push_shipment path.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- .pre-commit-config.yaml
- connector_amazon/README.rst
- connector_amazon/components/backend_adapter.py
- connector_amazon/models/backend.py
- connector_amazon/models/competitive_price.py
- connector_amazon/models/feed.py
- connector_amazon/models/order.py
- connector_amazon/models/product_binding.py
- connector_amazon/models/shop.py
- connector_amazon/readme/ARCHITECTURE.rst
- connector_amazon/readme/HISTORY.rst
- connector_amazon/readme/USAGE.rst
- connector_amazon/static/description/index.html
- connector_amazon/tests/common.py
- connector_amazon/tests/test_ee_coexistence.py
- connector_amazon/tests/test_mapper.py
- connector_amazon/tests/test_shop.py
- connector_amazon/tests/test_shop_sync.py
- connector_amazon/tests/test_webhook.py
- connector_amazon/views/backend_view.xml
- connector_amazon/views/competitive_price_view.xml
- connector_amazon/views/product_binding_view.xml
✅ Files skipped from review due to trivial changes (1)
- connector_amazon/tests/test_ee_coexistence.py
🚧 Files skipped from review as they are similar to previous changes (4)
- connector_amazon/views/backend_view.xml
- connector_amazon/tests/test_webhook.py
- connector_amazon/tests/test_mapper.py
- connector_amazon/README.rst
```python
# Step 2: Upload feed content to the presigned S3 URL
if upload_url and feed_content:
    requests.put(
        upload_url,
        data=feed_content.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=UTF-8"},
        timeout=60,
    )
```
Add error handling for S3 upload failure.
If the requests.put to the presigned URL fails, the method continues to create_feed with the document ID, submitting an empty feed. This could cause hard-to-debug issues with Amazon rejecting feeds silently.
🐛 Proposed fix
```diff
  # Step 2: Upload feed content to the presigned S3 URL
  if upload_url and feed_content:
-     requests.put(
+     upload_resp = requests.put(
          upload_url,
          data=feed_content.encode("utf-8"),
          headers={"Content-Type": "text/xml; charset=UTF-8"},
          timeout=60,
      )
+     upload_resp.raise_for_status()
```

The same fix should be applied to `create_inventory_feed` at lines 247-254.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_amazon/components/backend_adapter.py` around lines 209 - 216, The
S3 upload via requests.put in the feed upload step can fail silently and the
code proceeds to create_feed with an empty document; update the upload logic in
the method that calls create_feed (reference upload_url, feed_content,
requests.put, and document_id) to check for exceptions and non-2xx responses:
wrap requests.put in a try/except, inspect response.status_code (treat >=400 as
failure), log or raise an error and abort/return before calling create_feed so
you don't submit an empty feed; apply the identical change to the
create_inventory_feed upload block (same upload_url/feed_content/requests.put
flow) so both upload paths validate success before proceeding.
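The guard described above can be sketched as a standalone helper. `upload_feed_document` and `UploadError` are hypothetical names; the `put` callable stands in for `requests.put` so the sketch runs without network access:

```python
# Hedged sketch, not the adapter's actual code: treat any non-2xx status
# from the presigned-URL upload as a hard failure, so an empty or
# rejected document is never handed on to create_feed.
class UploadError(RuntimeError):
    pass

def upload_feed_document(put, upload_url, feed_content, timeout=60):
    resp = put(
        upload_url,
        data=feed_content.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=UTF-8"},
        timeout=timeout,
    )
    # Inspect the status code explicitly; >= 400 means the feed body
    # never reached S3 and the flow must abort here.
    if not 200 <= resp.status_code < 300:
        raise UploadError(f"S3 upload failed with HTTP {resp.status_code}")
    return resp
```

With the real `requests.put` passed as `put`, calling `resp.raise_for_status()` instead of the manual range check achieves the same effect.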
```python
# Ensure destination exists
if not self.sns_destination_id:
    dest_resp = adapter.create_destination(
        name=f"odoo-{self.code or self.id}",
        arn=self.webhook_url,
        resource_type="SQS",
    )
    dest_id = (dest_resp.get("payload") or {}).get("destinationId")
    if not dest_id:
        raise UserError(
            _(
                "Failed to create SNS destination: no destinationId returned."
            )
        )
    self.sns_destination_id = dest_id
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's search for ARN-related configuration and the create_destination method
rg -n "arn|ARN|sqs|eventbridge|webhook_url" connector_amazon/models/backend.py -i -A 2 -B 2
```

Repository: kencove/connector
Length of output: 2261
🏁 Script executed:
```shell
# Now let's look at the backend_adapter.py to understand create_destination signature and implementation
fd "backend_adapter.py" connector_amazon --exec cat {} +
```

Repository: kencove/connector
Length of output: 25183
🏁 Script executed:
```shell
# Let's also check if there's any SNS/SQS configuration fields in the backend model
rg -n "sns_destination|webhook" connector_amazon/models/backend.py -i -B 1 -A 1
```

Repository: kencove/connector
Length of output: 2566
Incorrect ARN passed to create_destination — webhook URL is not a valid SQS ARN.
The code passes self.webhook_url (an HTTP URL like https://example.com/amz/webhook/token) as the arn parameter to create_destination with resource_type="SQS". The adapter expects an actual SQS queue ARN (format: arn:aws:sqs:region:account:queue-name), not an HTTP endpoint. The adapter's own docstring explicitly states: "For HTTP/HTTPS webhooks, use EventBridge instead of SQS. Amazon SP-API doesn't support direct HTTP endpoints; you need either SQS or EventBridge as intermediary." You must either:
- Configure an actual SQS queue and pass its ARN, then set up a Lambda/service to forward notifications to your webhook
- Use EventBridge instead and pass the EventBridge event bus ARN with resource_type="EVENT_BRIDGE"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_amazon/models/backend.py` around lines 659 - 673, The code is
passing an HTTP webhook URL (self.webhook_url) as the arn to
adapter.create_destination while using resource_type="SQS", which is invalid;
fix by either (A) requiring/using a real SQS queue ARN: obtain the SQS queue ARN
(e.g., from configuration or by creating the queue), pass that ARN into
adapter.create_destination and keep resource_type="SQS" and then set up
forwarding from SQS to the webhook, or (B) switch to EventBridge: change
resource_type to "EVENT_BRIDGE" and pass the EventBridge event bus ARN (not the
HTTP webhook URL) to adapter.create_destination (and update any config field
used to hold the EventBridge ARN). Ensure the created destination id is still
saved to self.sns_destination_id and do not send the HTTP webhook URL as the arn
parameter.
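A guard along the lines of option (A) could reject a webhook URL before it ever reaches create_destination. The helper name and regex below are illustrative assumptions, not part of the connector; the ARN shape follows the arn:aws:sqs:region:account:queue-name format quoted in the finding:

```python
# Hedged sketch: fail fast when an HTTP URL is passed where an SQS queue
# ARN is expected, instead of letting SP-API reject the destination later.
import re

SQS_ARN_RE = re.compile(r"^arn:aws:sqs:[a-z0-9-]+:\d{12}:[\w.-]+$")

def validate_sqs_arn(value):
    """Return value if it looks like an SQS queue ARN, else raise."""
    if not SQS_ARN_RE.match(value or ""):
        raise ValueError(
            f"Expected an SQS queue ARN, got {value!r}; "
            "HTTP webhook URLs are not valid SP-API destinations."
        )
    return value
```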
```python
payload = {
    "feedType": self.feed_type,
    "marketplaceIds": [self.marketplace_id.marketplace_id],
    "inputFeedDocumentId": feed_document_id,
}
```
Potential null pointer dereference on marketplace_id access.
The marketplace_id field is defined with ondelete="set null" (line 23), meaning it can be None. However, line 169 accesses self.marketplace_id.marketplace_id without a null check, which will raise an AttributeError if marketplace_id is unset.
🛡️ Proposed fix to add null check
```diff
 def _create_feed(self, feed_document_id):
     """Create the feed with Amazon.

     POST /feeds/2021-06-30/feeds

     Args:
         feed_document_id: ID from create feed document response
     """
+    if not self.marketplace_id:
+        raise UserError(_("Marketplace is required to submit feed"))
     endpoint = "/feeds/2021-06-30/feeds"
     payload = {
         "feedType": self.feed_type,
         "marketplaceIds": [self.marketplace_id.marketplace_id],
         "inputFeedDocumentId": feed_document_id,
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_amazon/models/feed.py` around lines 167 - 171, The payload
construction is dereferencing self.marketplace_id.marketplace_id without
checking self.marketplace_id (it can be null due to ondelete="set null"); update
the code that builds payload (the block that references feed_type and
marketplace_id) to first check if self.marketplace_id is None and handle it
explicitly—either raise a clear ValueError/RuntimeError indicating missing
marketplace id or provide a safe fallback, then use
self.marketplace_id.marketplace_id only after the null-check so AttributeError
cannot be raised.
```python
for move in picking.move_ids.filtered(lambda m: m.state == "done"):
    # Try to find corresponding order line to get Amazon item ID
    order_line = self.odoo_id.order_line.filtered(
        lambda l: l.product_id == move.product_id
    )[:1]
```
Fix lambda closure bug and rename ambiguous variable.
Two issues here:
- The variable name `l` is ambiguous (E741).
- The lambda captures `move` by reference, not value. If the lambda outlived the loop iteration, every call would compare against the final `move` value from the loop.
🐛 Proposed fix
```diff
 # Add line items (shipped quantities)
 for move in picking.move_ids.filtered(lambda m: m.state == "done"):
     # Try to find corresponding order line to get Amazon item ID
     order_line = self.odoo_id.order_line.filtered(
-        lambda l: l.product_id == move.product_id
+        lambda line, mv=move: line.product_id == mv.product_id
     )[:1]
```

🧰 Tools
🪛 Flake8 (7.3.0)
[error] 104-104: ambiguous variable name 'l' (E741)

🪛 Ruff (0.15.2)
[error] 104-104: Ambiguous variable name: l (E741)
[warning] 104-104: Function definition does not bind loop variable move (B023)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_amazon/models/order.py` around lines 101 - 106, The loop over
picking.move_ids uses a lambda that miscaptures loop variable move and names its
arg `l` ambiguously; fix by renaming the lambda argument to a clear name (e.g.,
line or order_line_item) and bind the current move's product id into the lambda
default to avoid closure issues, for example replace
self.odoo_id.order_line.filtered(lambda l: l.product_id == move.product_id)[:1]
with either self.odoo_id.order_line.filtered(lambda order_line_item,
pid=move.product_id: order_line_item.product_id == pid)[:1] or use a generator
like next((ol for ol in self.odoo_id.order_line if ol.product_id ==
move.product_id), None) and store as order_line to ensure correct matching per
move.
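The B023 warning behind this fix can be reproduced in isolation; note that in the connector the lambda is evaluated immediately inside filtered(), so the default-argument binding mainly removes the latent hazard and makes the intent explicit:

```python
# Lambdas created in a loop close over the variable, not its value at
# creation time: by the time the "late" lambdas are called, the loop
# variable already holds its final value. Binding the value as a default
# argument freezes it per iteration.
late = [lambda: move for move in ("A", "B", "C")]
bound = [lambda mv=move: mv for move in ("A", "B", "C")]
```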
```html
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:PLACEHOLDER_DIGEST
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -->
<p><a class="reference external image-reference" href="https://odoo-community.org/page/development-status"><img alt="Beta" src="https://img.shields.io/badge/maturity-Beta-yellow.png" /></a> <a class="reference external image-reference" href="http://www.gnu.org/licenses/lgpl-3.0-standalone.html"><img alt="License: LGPL-3" src="https://img.shields.io/badge/licence-LGPL--3-blue.png" /></a> <a class="reference external image-reference" href="https://github.com/OCA/connector/tree/16.0/connector_amazon"><img alt="OCA/connector" src="https://img.shields.io/badge/github-OCA%2Fconnector-lightgray.png?logo=github" /></a> <a class="reference external image-reference" href="https://translation.odoo-community.org/projects/connector-16-0/connector-16-0-connector_amazon"><img alt="Translate me on Weblate" src="https://img.shields.io/badge/weblate-Translate%20me-F47D42.png" /></a> <a class="reference external image-reference" href="https://runboat.odoo-community.org/builds?repo=OCA/connector&target_branch=16.0"><img alt="Try me on Runboat" src="https://img.shields.io/badge/runboat-Try%20me-875A7B.png" /></a></p>
```
Use HTTPS for the LGPL badge URL.
Line 372 uses a plaintext http:// link for the LGPL page. Please switch it to https:// to avoid insecure-link warnings.
Suggested patch

```diff
- href="http://www.gnu.org/licenses/lgpl-3.0-standalone.html"
+ href="https://www.gnu.org/licenses/lgpl-3.0-standalone.html"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```html
<p><a class="reference external image-reference" href="https://odoo-community.org/page/development-status"><img alt="Beta" src="https://img.shields.io/badge/maturity-Beta-yellow.png" /></a> <a class="reference external image-reference" href="https://www.gnu.org/licenses/lgpl-3.0-standalone.html"><img alt="License: LGPL-3" src="https://img.shields.io/badge/licence-LGPL--3-blue.png" /></a> <a class="reference external image-reference" href="https://github.com/OCA/connector/tree/16.0/connector_amazon"><img alt="OCA/connector" src="https://img.shields.io/badge/github-OCA%2Fconnector-lightgray.png?logo=github" /></a> <a class="reference external image-reference" href="https://translation.odoo-community.org/projects/connector-16-0/connector-16-0-connector_amazon"><img alt="Translate me on Weblate" src="https://img.shields.io/badge/weblate-Translate%20me-F47D42.png" /></a> <a class="reference external image-reference" href="https://runboat.odoo-community.org/builds?repo=OCA/connector&target_branch=16.0"><img alt="Try me on Runboat" src="https://img.shields.io/badge/runboat-Try%20me-875A7B.png" /></a></p>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@connector_amazon/static/description/index.html` at line 372, Update the
insecure LGPL link in the HTML snippet by replacing the http:// URL in the
anchor whose image alt is "License: LGPL-3" (the <a> tag wrapping the <img
alt="License: LGPL-3">) so its href uses https:// instead of http://; ensure the
anchor href becomes "https://www.gnu.org/licenses/lgpl-3.0-standalone.html".
- Add _sync_order_from_api() to amz.sale.order (called by notification_log but never defined)
- test_mapper: use mapper._options instead of read-only .options property, add odoo_id to product binding creates, use _create_amazon_order helper for proper sale.order creation
- test_adapters: update assertion to include next_token=None parameter added in order line pagination fix
- test_order/test_order_shipment: remove shipment_confirmed assertions (push_shipment no longer sets this flag)
- test_webhook: change invalid UNKNOWN_TYPE to valid "other" selection value, patch with_delay on type() not instance
- test_notification_log: use try/except instead of assertRaises to avoid Odoo savepoint rollback of error-state writes

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Summary

- Uses the `amz.*` model namespace to coexist with Odoo EE `sale_amazon` (`amazon.*` namespace)
- Based on the `connector_amazon_spapi` module (OCA PR draft: [16.0][ADD] connector_amazon_spapi OCA/connector#512, now closed)

Key features

- Coexists with `sale_amazon`: can disable its cron, deduplicates orders

Model namespace (`amz.*`)

| Model | Table |
| --- | --- |
| amz.backend | amz_backend |
| amz.marketplace | amz_marketplace |
| amz.shop | amz_shop |
| amz.sale.order | amz_sale_order |
| amz.sale.order.line | amz_sale_order_line |
| amz.product.binding | amz_product_binding |
| amz.competitive.price | amz_competitive_price |
| amz.feed | amz_feed |
| amz.notification.log | amz_notification_log |

Test plan

- `python -m py_compile` passes on all .py files
- `sale_amazon` — no model conflicts
- `read_only_mode=True`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
UI
Models
Documentation
Tests