
feat(api): add import preflight validation #1633

Closed

JSONbored wants to merge 18 commits into we-promise:main from JSONbored:codex/feat-api-import-preflight

Conversation

@JSONbored
Contributor

@JSONbored JSONbored commented May 1, 2026

Summary

  • Adds a non-mutating import preflight endpoint for CSV and Sure NDJSON content.
  • Returns counts, headers, validation errors, and warnings without creating imports, rows, attachments, or jobs.

What changed

  • Adds POST /api/v1/imports/preflight.
  • Reuses existing import type configuration, CSV parsing, Sure NDJSON size/type constraints, and SureImport count helpers.
  • Adds Minitest coverage plus docs-only rswag/OpenAPI updates.
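As a hedged illustration of how a client might call the endpoint, the sketch below builds the request without sending it. Only the path and the parameter names (raw_file_content, type) come from this PR; the host, port, API-key header name, and the commented response shape are assumptions.

```ruby
require "net/http"
require "uri"

# Hypothetical host/port; the real deployment URL will differ.
uri = URI("http://localhost:3000/api/v1/imports/preflight")

csv = "date,amount,name\n2026-01-01,-12.50,Coffee\n"

request = Net::HTTP::Post.new(uri)
request["X-Api-Key"] = "YOUR_API_KEY" # placeholder credential; header name is assumed
request.set_form_data(
  "raw_file_content" => csv,          # inline content instead of a multipart file upload
  "type" => "TransactionImport"
)

# Sending is left commented out; per this PR, a successful call returns
# 200 with { data: { type, valid, content, stats, headers, errors, warnings } }.
# response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
```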

Why

API clients and self-hosted operators need a safe way to inspect import content before starting a state-changing import workflow.

Validation

  • docker compose -f .devcontainer/docker-compose.yml exec -T app bash -lc 'cd /workspace && bundle exec rails test test/controllers/api/v1/imports_controller_test.rb'
  • docker compose -f .devcontainer/docker-compose.yml exec -T app bash -lc 'cd /workspace && RAILS_ENV=test bundle exec rake rswag:specs:swaggerize'
  • docker compose -f .devcontainer/docker-compose.yml exec -T app bash -lc 'cd /workspace && bundle exec rubocop app/controllers/api/v1/imports_controller.rb test/controllers/api/v1/imports_controller_test.rb spec/requests/api/v1/imports_spec.rb spec/swagger_helper.rb'
  • git diff --check

Summary by CodeRabbit

  • New Features
    • Added an import preflight endpoint to validate CSV and NDJSON uploads and preview validity, errors/warnings, header detection, row/entity counts, and file metadata without creating import records.
  • Documentation
    • OpenAPI docs updated with new preflight schemas and endpoint.
  • Bug Fixes
    • Stricter CSV/NDJSON validation, MIME/size checks, clearer missing-header and empty-file handling, and application of import defaults before header validation.
  • Tests
    • Added integration tests covering success, malformed input, auth, missing content, and edge cases.

@coderabbitai
Contributor

coderabbitai Bot commented May 1, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds POST /api/v1/imports/preflight and controller logic to validate CSV or Sure NDJSON input without persisting imports; normalizes import type, enforces size/type, parses content, computes headers/row/entity counts, and returns structured preflight payload with errors/warnings. Routes, OpenAPI schemas, request specs, controller tests, and SureImport validation updated.

Changes

Import Preflight Flow

  • Data Shape / Schemas (docs/api/openapi.yaml, spec/swagger_helper.rb): Adds OpenAPI schemas: ImportPreflightContent, ImportPreflightError, ImportPreflightStats, ImportPreflight, ImportPreflightResponse.
  • Routing (config/routes.rb): Adds POST /api/v1/imports/preflight as a collection route under api/v1 imports.
  • Controller: Entry / Authorization (app/controllers/api/v1/imports_controller.rb): Adds the preflight action; expands ensure_read_scope to include :preflight; normalizes params[:type], validates it against PREFLIGHT_IMPORT_TYPES, dispatches CSV vs SureImport flows, and rescues RecordNotFound/CSV::MalformedCSVError/StandardError.
  • Core CSV Preflight Logic (app/controllers/api/v1/imports_controller.rb): Validates CSV presence/size/MIME from file or raw_file_content; builds a non-persisted family.imports record with raw_file_str; optionally assigns an account; applies MintImport default column mappings; rejects non-CSV-workflow types; runs import.valid?; parses CSV (honoring rows_to_skip); computes headers, row counts, required/missing headers, errors, and warnings; returns a structured JSON payload.
  • Core SureImport (NDJSON) Preflight Logic (app/controllers/api/v1/imports_controller.rb, app/models/sure_import.rb): Validates NDJSON presence/size/content-type; parses nonblank NDJSON lines as JSON objects requiring type and data; collects per-type counts and parse/shape errors; derives entity totals via SureImport.dry_run_totals_from_ndjson; sets valid based on errors and the presence of rows; emits warnings for unsupported record types, no importable records, and exceeding publish limits. SureImport is tightened to access keys only when the parsed line is a Hash.
  • Helpers / Validation (app/controllers/api/v1/imports_controller.rb): New helpers for upload selection, max-size/type enforcement, CSV parsing/skipping, default filename/content-type for raw content, applying Mint defaults, header normalization, and missing-required-header detection.
  • Tests / Specs (test/controllers/api/v1/imports_controller_test.rb, spec/requests/api/v1/imports_spec.rb): Adds controller tests for CSV and NDJSON preflight success/failure (missing headers, invalid NDJSON, non-object records, empty content, auth, account-not-found, Mint defaults); adds request specs and Swagger examples for the new endpoint.
  • Minor Model Change (app/models/sure_import.rb): Tightens NDJSON validation: per-type counts only increment when the parsed JSON is a Hash with a truthy "type"; first-line validation now requires a Hash with "type" and "data".

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant API as ImportsController/preflight
  participant DB as Accounts
  participant CSV as CSVParser
  participant Sure as SureImportLogic

  Client->>API: POST /api/v1/imports/preflight (file or raw_file_content, type, account_id, options)
  API->>DB: lookup account_id (optional)
  alt type == TransactionImport (CSV)
    API->>CSV: parse content (skip rows, extract headers, count rows)
    CSV-->>API: headers, row_count, parse_errors
    API->>API: build non-persisted Import, apply Mint defaults, validate headers
  else type == SureImport (NDJSON)
    API->>Sure: stream lines, parse JSON objects, validate shape
    Sure-->>API: per-type counts, entity totals, parse_errors
  end
  API-->>Client: 200 { data: { type, valid, content, stats, headers, errors, warnings } } or 4xx error
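The CSV branch of the diagram above (skip rows, extract headers, count rows, detect missing required headers) can be approximated with Ruby's stdlib CSV. The required-header list, rows_to_skip handling, and return shape are assumptions for illustration, not the controller's real implementation.

```ruby
require "csv"

# Simplified CSV preflight: drop leading rows, parse with headers, and
# report which required columns are absent.
def csv_preflight(raw, rows_to_skip: 0, required: %w[date amount])
  lines = raw.lines.drop(rows_to_skip).join
  table = CSV.parse(lines, headers: true)

  headers = (table.headers || []).compact.map { |h| h.strip.downcase }
  missing = required - headers

  {
    headers: headers,
    rows_count: table.size,
    missing_required_headers: missing,
    valid: missing.empty?
  }
end
```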

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

codex

Suggested reviewers

  • jjmata
  • sokie

Poem

🐰 I hop through rows and count each line,
I sniff the headers, check their sign.
No seeds are planted, no records stay—
Just gentle checks to light the way.
A whisker twitch — the preflight's fine!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 4.35%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check (✅ Passed): The title 'feat(api): add import preflight validation' accurately and concisely describes the main change: adding a new preflight validation endpoint to the API.
  • Linked Issues check (✅ Passed): Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Skipped because no linked issues were found for this pull request.
  • Description check (✅ Passed): Skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.





Comment @coderabbitai help to get the list of available commands and usage tips.

@JSONbored JSONbored force-pushed the codex/feat-api-import-preflight branch from 7a053e1 to 9b893d0 Compare May 2, 2026 00:52
@JSONbored JSONbored marked this pull request as ready for review May 2, 2026 06:08
@brin-security-scanner brin-security-scanner Bot added the contributor:flagged (Contributor flagged for review by trust analysis) and pr:verified (PR passed security analysis) labels and removed the contributor:flagged label May 2, 2026

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9b893d0cd5

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Line 8: The before_action currently applies :ensure_write_scope to both create
and preflight; change it so preflight uses a read-only guard instead: keep
:ensure_write_scope for the create action and add (or replace for preflight) a
:ensure_read_scope before_action for the preflight action so preflight does not
require write credentials; update the controller to call ensure_write_scope only
for :create and ensure_read_scope only for :preflight (referencing the
before_action declarations and the ensure_write_scope and ensure_read_scope
methods).
- Around line 146-149: The 500 response in ImportsController currently returns a
generic message; update the rescue/render block in
app/controllers/api/v1/imports_controller.rb (the code that renders status
:internal_server_error) to match the API v1 pattern by including the exception
message, e.g. set message to "Error: #{e.message}" (ensure the rescue uses the
exception variable named e or adjust accordingly) while keeping error:
"internal_server_error" and status: :internal_server_error so response format
matches other controllers.

In `@spec/requests/api/v1/imports_spec.rb`:
- Around line 299-355: The OpenAPI schema only documents raw_file_content but
the controller also accepts an uploaded file via params[:file] for the preflight
action; add a multipart/form-data file parameter so clients can send CSV/NDJSON
files. Update the request body schema (the same parameter block that defines
raw_file_content and type) to include a file property (name "file") with type:
string and format: binary (or add a multipart/form-data content section with
schema having a "file" property type: string, format: binary), and keep
raw_file_content as an alternative; reference the preflight endpoint and
params[:file] when making the change so the generated spec documents file
uploads correctly.

In `@test/controllers/api/v1/imports_controller_test.rb`:
- Around line 408-530: Add two behavioral tests to
test/controllers/api/v1/imports_controller_test.rb that exercise the Preflight
action: one that issues a POST to preflight_api_v1_imports_url without any auth
headers to assert an HTTP 401/unauthorized response and the expected error
payload for unauthenticated requests, and another that posts a valid CSV/params
but with a non-existent account_id (or invalid id) to assert an HTTP 404
response and the controller's "record_not_found" error behavior; use the same
request shape as existing preflight tests
(raw_file_content/date_col_label/amount_col_label/etc.) and mirror the existing
assertion style (assert_response and JSON.parse(response.body) checks) so they
integrate with the suite alongside the tests that use `@api_key` and
`@read_only_api_key`.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1ec06a87-d4e4-4f7f-b004-0a8413814ecf

📥 Commits

Reviewing files that changed from the base of the PR and between 268f5ae and 9b893d0cd51dfc7e1a5feefd4371ead13b362bcf.

📒 Files selected for processing (6)
  • app/controllers/api/v1/imports_controller.rb
  • config/routes.rb
  • docs/api/openapi.yaml
  • spec/requests/api/v1/imports_spec.rb
  • spec/swagger_helper.rb
  • test/controllers/api/v1/imports_controller_test.rb

@brin-security-scanner brin-security-scanner Bot added the contributor:flagged (Contributor flagged for review by trust analysis) label May 2, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
app/controllers/api/v1/imports_controller.rb (1)

146-149: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

The generic 500 message is still inconsistent with the established API v1 pattern.

All other rescue => e handlers in this file (and across TransactionsController, AccountsController, ValuationsController) expose "Error: #{e.message}" so clients get actionable diagnostics. This one still returns a static string.

Based on learnings: API v1 controllers should consistently use message: "Error: #{e.message}" for 500 responses.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 146 - 149, In the
rescue block that currently renders a static 500 payload in
imports_controller.rb, replace the static message with the same pattern used
elsewhere: include the exception message via message: "Error: #{e.message}" so
the JSON response becomes { error: "internal_server_error", message: "Error:
#{e.message}" }, keeping the status: :internal_server_error and leaving the
surrounding rescue => e handler intact.
🧹 Nitpick comments (1)
app/controllers/api/v1/imports_controller.rb (1)

400-403: ⚡ Quick win

Hardcoded supported-type list duplicates the implicit list in SureImport.dry_run_totals_from_ndjson.

Both the unsupported_types derivation here and the hash construction in dry_run_totals_from_ndjson rely on the same 10 type strings ("Account", "Category", …). If a new type is added to dry_run_totals_from_ndjson, the unsupported_types warning will silently flag it as unsupported.

Extract a constant in SureImport and reference it in both places:

♻️ Proposed refactor

In app/models/sure_import.rb, add:

+  SUPPORTED_RECORD_TYPES = %w[
+    Account Category Tag Merchant Transaction Trade
+    Valuation Budget BudgetCategory Rule
+  ].freeze

Then in sure_import_preflight_payload (controller, lines 401-403):

-      unsupported_types = line_counts.keys - %w[
-        Account Category Tag Merchant Transaction Trade Valuation Budget BudgetCategory Rule
-      ]
+      unsupported_types = line_counts.keys - SureImport::SUPPORTED_RECORD_TYPES
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 400 - 403, Extract
the hardcoded list of supported import types into a single constant on the
SureImport model (e.g. SureImport::SUPPORTED_TYPES) and update
SureImport.dry_run_totals_from_ndjson to build its hash/logic from that
constant; then replace the inline array in the controller where
unsupported_types is computed (currently using %w[Account Category Tag Merchant
Transaction Trade Valuation Budget BudgetCategory Rule]) with a reference to
SureImport::SUPPORTED_TYPES so both locations use the same canonical list.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 146-149: In the rescue block that currently renders a static 500
payload in imports_controller.rb, replace the static message with the same
pattern used elsewhere: include the exception message via message: "Error:
#{e.message}" so the JSON response becomes { error: "internal_server_error",
message: "Error: #{e.message}" }, keeping the status: :internal_server_error and
leaving the surrounding rescue => e handler intact.

---

Nitpick comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 400-403: Extract the hardcoded list of supported import types into
a single constant on the SureImport model (e.g. SureImport::SUPPORTED_TYPES) and
update SureImport.dry_run_totals_from_ndjson to build its hash/logic from that
constant; then replace the inline array in the controller where
unsupported_types is computed (currently using %w[Account Category Tag Merchant
Transaction Trade Valuation Budget BudgetCategory Rule]) with a reference to
SureImport::SUPPORTED_TYPES so both locations use the same canonical list.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 22ed766a-9181-4d1b-b480-1769b2313f86

📥 Commits

Reviewing files that changed from the base of the PR and between 9b893d0cd51dfc7e1a5feefd4371ead13b362bcf and 849e83facbc3a4807de6bd66c82897edf1f4bb83.

📒 Files selected for processing (5)
  • app/controllers/api/v1/imports_controller.rb
  • app/models/sure_import.rb
  • docs/api/openapi.yaml
  • spec/requests/api/v1/imports_spec.rb
  • test/controllers/api/v1/imports_controller_test.rb
🚧 Files skipped from review as they are similar to previous changes (2)
  • spec/requests/api/v1/imports_spec.rb
  • docs/api/openapi.yaml

Collaborator

jjmata commented May 2, 2026

Code Review – feat(api): add import preflight validation

Overview

This PR adds POST /api/v1/imports/preflight, a non-mutating endpoint that validates CSV and Sure NDJSON content before a real import is created. The idea is sound and useful for API clients. The SureImport model bug-fix (guarding record.is_a?(Hash) before accessing record["type"]) is a clear improvement and is correct. The OpenAPI and rswag coverage is thorough and follows the project's API consistency pattern.

That said, there are several issues worth addressing before this merges.


1. Skinny Controllers convention is significantly violated

The project convention (CLAUDE.md) is explicit: business logic belongs in app/models/. This PR adds ~290 lines of private controller methods implementing CSV parsing, NDJSON line-counting, header normalization, and validation logic entirely inside ImportsController. If this logic ever needs to be reused elsewhere or unit-tested in isolation, callers will have no way to reach it.

The cleaner path is to extract a PORO or model method, e.g.:

# app/models/import/preflight.rb
class Import::Preflight
  def self.call(type:, content:, filename:, content_type:, import_params:, family:) = ...
end

Then the controller action becomes a few lines that delegate to the model and render the result.


2. Silent type fallback in normalized_import_type

def normalized_import_type
  type = params[:type].to_s
  Import::TYPES.include?(type) ? type : "TransactionImport"
end

If a client passes type: "Typo" the API silently validates it as a TransactionImport with no indication that the type was ignored. This is a subtle foot-gun. Better to return an explicit 422 for unrecognised types:

def normalized_import_type
  type = params[:type].presence || "TransactionImport"
  return type if Import::TYPES.include?(type)
  render json: { error: "invalid_type", message: "Unknown import type: #{type}" }, status: :unprocessable_entity
  nil
end

Then guard with return unless type in the preflight action.


3. Import::TYPES includes PdfImport and QifImport, but normalized_import_type passes them through

Import::TYPES = %w[TransactionImport TradeImport AccountImport MintImport CategoryImport RuleImport PdfImport QifImport SureImport]

A client that sends type: "PdfImport" will have it accepted by normalized_import_type, routed into preflight_csv_import, and then rejected with unsupported_import_type after building an unsaved model. The OpenAPI enum also omits these two types, creating a mismatch between the docs and the code. The type allowlist for preflight should be explicitly defined (e.g. PREFLIGHT_TYPES) rather than relying on requires_csv_workflow? as a late gate.
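A minimal sketch of that explicit allowlist, assuming the supported set is Import::TYPES minus PdfImport and QifImport. The constant name follows the review's PREFLIGHT_TYPES suggestion, and this sketch returns nil where the real method would render a 422.

```ruby
# Types eligible for preflight; PdfImport and QifImport are deliberately excluded.
PREFLIGHT_TYPES = %w[
  TransactionImport TradeImport AccountImport MintImport
  CategoryImport RuleImport SureImport
].freeze

def normalized_import_type(raw)
  type = raw.to_s.empty? ? "TransactionImport" : raw.to_s
  # nil signals the caller to render an invalid_type 422 and return early.
  PREFLIGHT_TYPES.include?(type) ? type : nil
end
```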


4. CSV row-level stats conflate import-level and row-level errors

stats: {
  rows_count: row_count,
  valid_rows_count: errors.empty? ? row_count : 0,
  invalid_rows_count: errors.empty? ? 0 : row_count
}

When a CSV has a missing required header (an import configuration error), every row is reported as invalid even though the data rows themselves parsed fine. A file with 500 rows and a missing header column reports { valid_rows_count: 0, invalid_rows_count: 500 }, which is misleading — the issue is with the mapping config, not the row data. valid_rows_count / invalid_rows_count should reflect actual per-row parsing outcomes, or the field names should make clear they reflect import-level validity.
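One way to keep the two concerns separate is sketched below, with a hypothetical row_errors input standing in for per-row parse results that the current controller does not compute:

```ruby
# Per-row counts derive only from per-row outcomes; import-level
# configuration problems are reported in a separate field.
def preflight_stats(row_count:, row_errors:, import_errors:)
  {
    rows_count: row_count,
    valid_rows_count: row_count - row_errors.size,
    invalid_rows_count: row_errors.size,
    import_errors: import_errors # e.g. missing required header; not per-row
  }
end
```

With this shape, the 500-row file with a missing header reports 500 valid rows plus one import-level error, which matches what actually happened.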


5. valid semantics are inconsistent between CSV and SureImport

  • Empty CSV: valid: true (with a warning about zero rows)
  • Empty NDJSON: valid: false

Clients consuming both types will get different behaviour for the same "empty file" scenario. Pick one convention and document it.


6. SureImport.new.max_row_count instantiates a model for a class constant

warnings << "Row count exceeds this import type's publish limit." if nonblank_rows_count > SureImport.new.max_row_count

SureImport is being instantiated solely to call an instance method that is effectively a constant. Either expose MAX_ROW_COUNT as a class constant on SureImport, or make max_row_count a class method. Allocating a model object here is wasteful.


7. Backtrace logging concatenates onto a single line

Rails.logger.error e.backtrace.join("\n")

join("\n") produces one large log entry rather than separate structured lines. Most log aggregators (and the Rails logger itself) handle per-line output better:

e.backtrace.each { |line| Rails.logger.error "  #{line}" }

8. Missing test coverage

The Minitest suite is a good start, but several paths have no test:

  • rows_to_skip is respected by preflight_csv_content — no test exercises this path.
  • PdfImport or QifImport as the type param — no test verifies the unsupported_import_type response.
  • File-size rejection (file.size > Import::MAX_CSV_SIZE) — no test for the 413-style 422 path.
  • CSV::MalformedCSVError rescue in preflight — the rescue exists but no test triggers it.
  • col_sep: ";" for semicolon-delimited CSV — no test confirms the separator is forwarded correctly.
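For instance, the untested CSV::MalformedCSVError path can be triggered with an unclosed quoted field. This standalone sketch assumes a simplified payload shape and stands in for the controller's actual rescue:

```ruby
require "csv"

# Parse a CSV string, converting a malformed-CSV exception into a
# structured error entry instead of a 500.
def preflight_parse(raw)
  { rows: CSV.parse(raw, headers: true).size, errors: [] }
rescue CSV::MalformedCSVError => e
  { rows: 0, errors: ["malformed_csv: #{e.message}"] }
end
```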

Minor nits

  • apply_preflight_import_defaults only handles MintImport. If other types gain preset defaults, this method will silently skip them. Consider a pattern where each import subclass exposes its own preflight_defaults method.
  • sure_import_preflight_payload iterates content.each_line and then calls SureImport.dry_run_totals_from_ndjson(content), which iterates again. For 10 MB files this is a double pass. Consolidating the two into a single pass would be cleaner.
  • The preflight_required_header_labels / preflight_missing_required_headers / preflight_normalized_header trio of helpers are generic enough that they belong on the model, not the controller.

Summary

The feature direction is correct and the model-level guard fixes are good. The main asks before merging are:

  1. Move preflight logic to the model layer per project convention.
  2. Surface an error (not a silent fallback) for unrecognised type params.
  3. Fix the misleading row-level stats when only import-level config errors are present.
  4. Pick a consistent valid convention for empty files across CSV and NDJSON.
  5. Add the missing Minitest cases listed above.

Generated by Claude Code

@JSONbored JSONbored force-pushed the codex/feat-api-import-preflight branch from b930968 to 26e1f00 Compare May 3, 2026 05:43
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (5)
app/controllers/api/v1/imports_controller.rb (5)

143-144: 💤 Low value

Split backtrace into separate log entries for better aggregator compatibility.

e.backtrace.join("\n") produces one large log entry. Most log aggregators index per-line, making this harder to search and filter.

♻️ Suggested fix
     Rails.logger.error "ImportsController#preflight error: #{e.message}"
-    Rails.logger.error e.backtrace.join("\n")
+    e.backtrace&.each { |line| Rails.logger.error line }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 143 - 144, Replace
the single multi-line backtrace log in ImportsController#preflight with per-line
logging: keep the existing error message using the exception variable e (include
e.class and e.message for clarity), then iterate over e.backtrace and emit one
Rails.logger.error call per backtrace line (using the same logger) instead of
using e.backtrace.join("\n") so log aggregators can index each stack frame
separately.

407-407: ⚡ Quick win

Avoid model instantiation for constant lookup.

SureImport.new.max_row_count allocates a model instance just to access a fixed value (100_000). Consider exposing this as a class constant or class method.

♻️ Suggested approach

In app/models/sure_import.rb:

MAX_ROW_COUNT = 100_000

def max_row_count
  MAX_ROW_COUNT
end

Then use SureImport::MAX_ROW_COUNT in the controller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` at line 407, Replace the
dynamic instance allocation used to read the import row limit by exposing it as
a class-level constant and accessor: add MAX_ROW_COUNT = 100_000 to the
SureImport model, have the existing instance method max_row_count read that
constant (or add a class method self.max_row_count returning MAX_ROW_COUNT), and
update the controller usage from SureImport.new.max_row_count to reference the
constant/class method (e.g., SureImport::MAX_ROW_COUNT or
SureImport.max_row_count) in the imports controller where the row-count check
occurs.

248-252: ⚖️ Poor tradeoff

Row stats conflate import-level errors with per-row validity.

When any error exists (including import-level issues like missing headers), valid_rows_count becomes 0 and invalid_rows_count becomes row_count. This doesn't reflect actual per-row parsing outcomes.

Consider either:

  1. Computing actual per-row parse errors separately, or
  2. Renaming these fields to clarify they reflect overall import validity (e.g., processable_rows_count when no import-level errors exist)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 248 - 252, The
stats block is using the import-level errors (errors) to set valid_rows_count
and invalid_rows_count, conflating import-wide issues with per-row parsing
results; update the controller to compute per-row validity instead: collect or
use the per-row parsing results (e.g., a per_row_errors or parsed_row_statuses
array produced by the import/parsing logic), set valid_rows_count to the count
of rows without per-row errors and invalid_rows_count to row_count - that count,
and keep the import-level errors (errors) separate in the response (or rename
the fields to `processable_rows_count` if you prefer the alternative). Ensure
you update the stats hash (the keys valid_rows_count and invalid_rows_count) to
use these per-row counts rather than errors.empty? and leave row_count and
errors untouched.

206-260: 🏗️ Heavy lift

Consider extracting preflight logic to the model layer.

Per coding guidelines, business logic should be placed in app/models/. The preflight parsing, validation, header normalization, and row counting logic (~290 lines) could be extracted to a PORO or model service (e.g., Import::Preflight) that the controller delegates to.

This would also make unit testing easier and keep the controller focused on request/response handling.

Also applies to: 363-423

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 206 - 260, The
controller method preflight_csv_import contains business logic (CSV parsing,
validation, header normalization, row counting) that should be extracted to a
model/service; create a PORO (e.g., Import::Preflight) that accepts the Import
instance and raw content and exposes methods for parse_csv, headers,
missing_required_headers, row_count, warnings, and errors (replacing uses of
preflight_csv_content, preflight_missing_required_headers,
preflight_required_header_labels and the inline Import.parse_csv_str/validation
logic), then refactor preflight_csv_import to build the Import, delegate all
parsing/validation calls to Import::Preflight, and render the response from the
service payload; ensure the new service returns the payload keys used by the
controller (content payload, stats, headers, required_headers,
missing_required_headers, errors, warnings) so unit tests can target the PORO
and the controller remains thin.

400-400: ⚡ Quick win

Double iteration over content is inefficient.

content.each_line (lines 369-398) already builds line_counts with per-type counts, then SureImport.dry_run_totals_from_ndjson(content) iterates again to compute the same type counts.

Reuse the existing line_counts to derive entity totals:

♻️ Suggested approach

Extract a helper or inline the mapping:

entity_counts = {
  accounts: line_counts["Account"] || 0,
  categories: line_counts["Category"] || 0,
  tags: line_counts["Tag"] || 0,
  merchants: line_counts["Merchant"] || 0,
  transactions: line_counts["Transaction"] || 0,
  trades: line_counts["Trade"] || 0,
  valuations: line_counts["Valuation"] || 0,
  budgets: line_counts["Budget"] || 0,
  budget_categories: line_counts["BudgetCategory"] || 0,
  rules: line_counts["Rule"] || 0
}

Or consider adding a class method SureImport.entity_counts_from_type_counts(line_counts) to keep the mapping in the model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` at line 400, The current
implementation performs two separate iterations over the content variable: once
in the content.each_line loop (lines 369-398) to build line_counts, and again in
the SureImport.dry_run_totals_from_ndjson(content) call. Replace the
SureImport.dry_run_totals_from_ndjson(content) call with a direct mapping from
the already-computed line_counts hash. You can either inline the mapping to
create entity_counts as a hash with keys like accounts, categories, tags,
merchants, transactions, trades, valuations, budgets, budget_categories, and
rules mapped from the corresponding type count values in line_counts (using
line_counts["Account"], line_counts["Category"], etc.), or extract a class
method SureImport.entity_counts_from_type_counts(line_counts) in the model that
encapsulates this mapping logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Line 411: The NDJSON validity check uses "valid: errors.empty? &&
nonblank_rows_count.positive?" which conflicts with the CSV path that currently
treats an empty CSV as valid with a warning; harmonize semantics by deciding
whether empty files should be allowed or rejected and updating this expression
accordingly (either remove the nonblank_rows_count.positive? requirement so
NDJSON mirrors CSV and add a warning entry to errors when nonblank_rows_count ==
0, or enforce nonblank_rows_count.positive? for CSV as well); adjust the "valid"
assignment and the related warning generation so both CSV and NDJSON use the
same rule, referencing the "valid" field, "errors" collection, and
"nonblank_rows_count" variable in imports_controller.rb.
- Around line 192-195: The current normalized_import_type method silently maps
unknown params[:type] values to "TransactionImport"; change it to validate
against Import::TYPES and return nil (or a sentinel) when invalid instead of
defaulting, e.g., have normalized_import_type return nil unless
Import::TYPES.include?(type), and then update the preflight action (preflight)
to check the returned value and render a 422 Unprocessable Entity with an error
message when normalized_import_type is nil so API clients are informed of
invalid type parameters.

---

Nitpick comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 143-144: Replace the single multi-line backtrace log in
ImportsController#preflight with per-line logging: keep the existing error
message using the exception variable e (include e.class and e.message for
clarity), then iterate over e.backtrace and emit one Rails.logger.error call per
backtrace line (using the same logger) instead of using e.backtrace.join("\n")
so log aggregators can index each stack frame separately.
- Line 407: Replace the dynamic instance allocation used to read the import row
limit by exposing it as a class-level constant and accessor: add MAX_ROW_COUNT =
100_000 to the SureImport model, have the existing instance method max_row_count
read that constant (or add a class method self.max_row_count returning
MAX_ROW_COUNT), and update the controller usage from
SureImport.new.max_row_count to reference the constant/class method (e.g.,
SureImport::MAX_ROW_COUNT or SureImport.max_row_count) in the imports controller
where the row-count check occurs.
- Around line 248-252: The stats block is using the import-level errors (errors)
to set valid_rows_count and invalid_rows_count, conflating import-wide issues
with per-row parsing results; update the controller to compute per-row validity
instead: collect or use the per-row parsing results (e.g., a per_row_errors or
parsed_row_statuses array produced by the import/parsing logic), set
valid_rows_count to the count of rows without per-row errors and
invalid_rows_count to row_count - that count, and keep the import-level errors
(errors) separate in the response (or rename the fields to
`processable_rows_count` if you prefer the alternative). Ensure you update the
stats hash (the keys valid_rows_count and invalid_rows_count) to use these
per-row counts rather than errors.empty? and leave row_count and errors
untouched.
- Around line 206-260: The controller method preflight_csv_import contains
business logic (CSV parsing, validation, header normalization, row counting)
that should be extracted to a model/service; create a PORO (e.g.,
Import::Preflight) that accepts the Import instance and raw content and exposes
methods for parse_csv, headers, missing_required_headers, row_count, warnings,
and errors (replacing uses of preflight_csv_content,
preflight_missing_required_headers, preflight_required_header_labels and the
inline Import.parse_csv_str/validation logic), then refactor
preflight_csv_import to build the Import, delegate all parsing/validation calls
to Import::Preflight, and render the response from the service payload; ensure
the new service returns the payload keys used by the controller (content
payload, stats, headers, required_headers, missing_required_headers, errors,
warnings) so unit tests can target the PORO and the controller remains thin.
- Line 400: The current implementation performs two separate iterations over the
content variable: once in the content.each_line loop (lines 369-398) to build
line_counts, and again in the SureImport.dry_run_totals_from_ndjson(content)
call. Replace the SureImport.dry_run_totals_from_ndjson(content) call with a
direct mapping from the already-computed line_counts hash. You can either inline
the mapping to create entity_counts as a hash with keys like accounts,
categories, tags, merchants, transactions, trades, valuations, budgets,
budget_categories, and rules mapped from the corresponding type count values in
line_counts (using line_counts["Account"], line_counts["Category"], etc.), or
extract a class method SureImport.entity_counts_from_type_counts(line_counts) in
the model that encapsulates this mapping logic.
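The strict-type-validation item above can be sketched in plain Ruby. The allowlist and method shape here are illustrative assumptions, not the project's actual `Import::Preflight::IMPORT_TYPES`:

```ruby
# Sketch of strict import-type validation; the allowlist below is
# illustrative only — the real list belongs on the model.
VALID_IMPORT_TYPES = %w[
  TransactionImport TradeImport AccountImport MintImport SureImport
].freeze

# Returns the validated type, or nil so the caller can render a 422.
def normalized_import_type(raw_type)
  type = raw_type.to_s.strip
  VALID_IMPORT_TYPES.include?(type) ? type : nil
end

normalized_import_type("TransactionImport") # => "TransactionImport"
normalized_import_type("BogusImport")       # => nil
```

Returning `nil` for unknown types (instead of silently defaulting) lets the `preflight` action branch into an explicit `invalid_import_type` 422 response.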

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6f270826-8948-4e58-b01b-440b21baaa1a

📥 Commits

Reviewing files that changed from the base of the PR and between 849e83facbc3a4807de6bd66c82897edf1f4bb83 and 26e1f00a28c941f46d1cba3cf938b27b5bb1b3e0.

📒 Files selected for processing (7)
  • app/controllers/api/v1/imports_controller.rb
  • app/models/sure_import.rb
  • config/routes.rb
  • docs/api/openapi.yaml
  • spec/requests/api/v1/imports_spec.rb
  • spec/swagger_helper.rb
  • test/controllers/api/v1/imports_controller_test.rb
✅ Files skipped from review due to trivial changes (1)
  • test/controllers/api/v1/imports_controller_test.rb
🚧 Files skipped from review as they are similar to previous changes (4)
  • config/routes.rb
  • spec/requests/api/v1/imports_spec.rb
  • spec/swagger_helper.rb
  • docs/api/openapi.yaml


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
app/controllers/api/v1/imports_controller.rb (1)

147-150: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

500 response message doesn't match the established API v1 pattern.

All other controllers (TransactionsController, AccountsController, ValuationsController, etc.) use message: "Error: #{e.message}". The generic "An unexpected error occurred" string breaks that contract and hides the root cause from callers.

🛠️ Proposed fix
     render json: {
       error: "internal_server_error",
-      message: "An unexpected error occurred"
+      message: "Error: #{e.message}"
     }, status: :internal_server_error

Based on learnings: "API v1 controllers should consistently expose error messages in 500 internal_server_error responses using the pattern message: 'Error: #{e.message}' across all controllers."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 147 - 150, The 500
response in ImportsController currently returns a generic message; update the
rescue block that renders the JSON error (the render json: { error:
"internal_server_error", ... } call) to follow the API v1 pattern by returning
message: "Error: #{e.message}" instead of "An unexpected error occurred" and
ensure the rescue captures the exception as e (e.g., rescue => e) so the
exception message is available for the interpolation.
🧹 Nitpick comments (5)
app/controllers/api/v1/imports_controller.rb (5)

434-434: 💤 Low value

nonblank_rows_count.positive? is redundant in the valid expression.

When nonblank_rows_count == 0, the no_data_rows error is already pushed at line 417, so errors.empty? is already false. The second condition doesn't change the outcome but adds reader noise.

🛠️ Proposed fix
-        valid: errors.empty? && nonblank_rows_count.positive?,
+        valid: errors.empty?,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` at line 434, The boolean
expression setting the valid field includes a redundant check: remove the second
condition by changing the value for valid from "errors.empty? &&
nonblank_rows_count.positive?" to just "errors.empty?" in the response-building
code (the hash key named valid in ImportsController). This simplifies logic
because the no_data_rows error is already added to errors when
nonblank_rows_count == 0, so errors.empty? fully determines validity.

145-145: ⚡ Quick win

e.backtrace.join("\n") emits one giant log entry; prefer per-line logging.

Log aggregators (e.g., Datadog, Splunk) treat each Rails.logger.error call as one event. Joining the full backtrace produces a single multi-KB blob that truncates or fails to index properly.

🔧 Proposed fix
-    Rails.logger.error e.backtrace.join("\n")
+    e.backtrace.each { |line| Rails.logger.error line }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` at line 145, The current
single-call Rails.logger.error e.backtrace.join("\n") emits a giant blob; change
it to log the exception message and then log each entry of e.backtrace in its
own Rails.logger.error call to produce one event per line. Locate the error
handling in ImportsController (the block referencing variable e and calling
e.backtrace.join) and replace the join usage with: first log e.message (or
e.inspect) via Rails.logger.error, then iterate e.backtrace and call
Rails.logger.error for each backtrace line so downstream log collectors index
lines correctly.

122-151: 🏗️ Heavy lift

Preflight business logic belongs in a model/service, not the controller.

The preflight action adds ~17 private methods and ~290 lines of parsing, validation, and payload-building logic directly to the controller. Project convention (flagged by reviewer jjmata) requires business logic to live in app/models/. Methods like sure_import_preflight_payload, preflight_missing_required_headers, preflight_required_header_labels, preflight_normalized_header, and apply_preflight_import_defaults are all domain-logic concerns that should be extracted to a PORO or model service (e.g., Import::Preflight) so the controller action becomes a thin orchestrator:

def preflight
  result = Import::Preflight.new(params, current_resource_owner.family).run
  render json: { data: result.payload }, status: result.status
rescue ActiveRecord::RecordNotFound
  render json: { error: "record_not_found", ... }, status: :not_found
end
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 122 - 151, The
preflight action contains heavy business logic that should be extracted into a
model/service; create a PORO Import::Preflight (e.g., in app/models or
app/services) that accepts params and the family and exposes a run method
returning a result object with payload and status. Move all domain methods
(sure_import_preflight_payload, preflight_missing_required_headers,
preflight_required_header_labels, preflight_normalized_header,
apply_preflight_import_defaults and any private helpers used by them) into
Import::Preflight, implement the parsing/validation there, and update the
controller preflight to simply call Import::Preflight.new(params,
current_resource_owner.family).run and render result.payload with result.status
while keeping the existing rescue handlers unchanged.
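A model-side counterpart to that thin-controller sketch might look like the minimal PORO below. The class name, required headers, and payload keys are assumptions for illustration; the real `Import::Preflight` would reuse the existing import type configuration rather than hardcode them:

```ruby
require "csv"

# Minimal sketch of the suggested preflight PORO, assuming stdlib CSV
# parsing; REQUIRED_HEADERS is illustrative, not the real import config.
class ImportPreflight
  REQUIRED_HEADERS = %w[date amount].freeze

  def initialize(content)
    @content = content.to_s
  end

  def headers
    @headers ||= (CSV.parse(@content, headers: true).headers || [])
                   .map { |h| h.to_s.strip.downcase }
  end

  def missing_required_headers
    REQUIRED_HEADERS - headers
  end

  def row_count
    @row_count ||= CSV.parse(@content, headers: true).size
  end

  def payload
    errors = []
    errors << { code: "missing_headers", fields: missing_required_headers } if missing_required_headers.any?
    errors << { code: "no_data_rows" } if row_count.zero?
    { valid: errors.empty?, headers: headers, row_count: row_count, errors: errors }
  end
end
```

With this shape, the controller action reduces to building the PORO and rendering `payload`, and the parsing/validation rules become unit-testable without a request cycle.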

462-478: ⚡ Quick win

apply_preflight_import_defaults duplicates MintImport#set_mappings column labels verbatim.

The same eleven column-label defaults exist in both this controller method and MintImport#set_mappings (see app/models/mint_import.rb:84-98). When MintImport defaults change, this controller code becomes stale silently. Extract a non-persisting class method or constant on MintImport and call it here:

🛠️ Proposed fix

On MintImport:

def self.default_column_mappings
  {
    signage_convention: "inflows_positive",
    date_col_label: "Date",
    date_format: "%m/%d/%Y",
    name_col_label: "Description",
    amount_col_label: "Amount",
    currency_col_label: "Currency",
    account_col_label: "Account Name",
    category_col_label: "Category",
    tags_col_label: "Labels",
    notes_col_label: "Notes",
    entity_type_col_label: "Transaction Type"
  }
end

In controller:

 def apply_preflight_import_defaults(import)
   return unless import.is_a?(MintImport)
-  import.assign_attributes(
-    signage_convention: "inflows_positive",
-    date_col_label: "Date",
-    ...
-  )
+  import.assign_attributes(MintImport.default_column_mappings)
 end

MintImport#set_mappings can then call assign_attributes(self.class.default_column_mappings) before save! to keep a single source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 462 - 478, The
controller method apply_preflight_import_defaults duplicates the column-label
defaults already defined in MintImport#set_mappings; extract a single source of
truth by adding a non-persisting class method or constant on MintImport (e.g.
MintImport.default_column_mappings) that returns the hash of defaults
(signage_convention, date_col_label, date_format, name_col_label,
amount_col_label, currency_col_label, account_col_label, category_col_label,
tags_col_label, notes_col_label, entity_type_col_label) and then update
apply_preflight_import_defaults to call
import.assign_attributes(MintImport.default_column_mappings) and update
MintImport#set_mappings to use self.class.default_column_mappings (or the
constant) before saving.

423-430: NDJSON content is parsed twice, and SureImport.new is instantiated wastefully.

Two separate inefficiencies:

  1. Double parse — the loop above (lines 385–414) already builds line_counts with per-type counts. Calling SureImport.dry_run_totals_from_ndjson(content) at line 423 iterates the entire content again to produce the same information. For large NDJSON files this doubles parsing cost. entity_counts can be derived directly from line_counts:
🛠️ Proposed fix for double parse
-      entity_counts = SureImport.dry_run_totals_from_ndjson(content)
+      entity_counts = {
+        accounts: line_counts["Account"] || 0,
+        categories: line_counts["Category"] || 0,
+        tags: line_counts["Tag"] || 0,
+        merchants: line_counts["Merchant"] || 0,
+        transactions: line_counts["Transaction"] || 0,
+        trades: line_counts["Trade"] || 0,
+        valuations: line_counts["Valuation"] || 0,
+        budgets: line_counts["Budget"] || 0,
+        budget_categories: line_counts["BudgetCategory"] || 0,
+        rules: line_counts["Rule"] || 0
+      }
  2. Wasteful instantiation — SureImport.new.max_row_count allocates and initializes an AR object to read what is effectively a constant. Since max_row_count is polymorphic (subclasses override it), convert it to a class method and use:
-      warnings << "Row count exceeds this import type's publish limit." if nonblank_rows_count > SureImport.new.max_row_count
+      warnings << "Row count exceeds this import type's publish limit." if nonblank_rows_count > SureImport.max_row_count

Then in SureImport, change the instance method to a class method:

-  def max_row_count
-    100_000
-  end
+  def self.max_row_count
+    100_000
+  end
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 423 - 430, The
code currently re-parses the NDJSON by calling
SureImport.dry_run_totals_from_ndjson(content) and also instantiates SureImport
just to read max_row_count; replace these with cheaper alternatives: derive
entity_counts from the already-populated line_counts (e.g. map line_counts to
the same shape returned by dry_run_totals_from_ndjson) instead of calling
SureImport.dry_run_totals_from_ndjson(content), and change the max_row_count
access to a class method (call SureImport.max_row_count) rather than
instantiating SureImport.new; update any method definition in SureImport from
def max_row_count to def self.max_row_count so subclasses can override it as
before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 266-267: The current CSV branch sets
valid_rows_count/invalid_rows_count as a binary flip based on import.valid?
which misrepresents per-row validity; update the CSV path in imports_controller
(the code that currently assigns valid_rows_count, invalid_rows_count, and uses
row_count and import.valid?) to stop producing the misleading split: rename the
CSV response field to data_rows_count (set to row_count) and remove
valid_rows_count/invalid_rows_count for CSV imports so the NDJSON per-line
counts remain unchanged; ensure any client-facing documentation or response
schema keys referencing valid_rows_count/invalid_rows_count are adjusted so CSV
responses return data_rows_count instead.
- Around line 193-204: preflight_import_type currently validates against
Import::TYPES (which includes PdfImport/QifImport) causing a second rejection in
preflight_csv_import; change preflight_import_type to validate against the
narrower CSV-only allowlist used by the CSV import path (e.g. a new constant or
existing CSV-specific array) so only genuinely supported types are accepted,
update the error message to list that scoped allowlist, remove the redundant
errors array that duplicates message, and keep the method returning nil after
render; update any references in preflight_csv_import to match the same
allowlist if needed.
- Around line 248-256: When row_count.zero? the code currently appends a
no_data_rows entry to errors and also pushes "No data rows were found." into
warnings, causing duplicated messages and inconsistency with the NDJSON path;
update the imports_controller.rb logic so that only the errors array gets the
no_data_rows entry (leave the warnings array unchanged for this case) — remove
or guard the line that does warnings << "No data rows were found." when
row_count.zero? so only errors (the no_data_rows code) are emitted.

---

Duplicate comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 147-150: The 500 response in ImportsController currently returns a
generic message; update the rescue block that renders the JSON error (the render
json: { error: "internal_server_error", ... } call) to follow the API v1 pattern
by returning message: "Error: #{e.message}" instead of "An unexpected error
occurred" and ensure the rescue captures the exception as e (e.g., rescue => e)
so the exception message is available for the interpolation.

---

Nitpick comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Line 434: The boolean expression setting the valid field includes a redundant
check: remove the second condition by changing the value for valid from
"errors.empty? && nonblank_rows_count.positive?" to just "errors.empty?" in the
response-building code (the hash key named valid in ImportsController). This
simplifies logic because the no_data_rows error is already added to errors when
nonblank_rows_count == 0, so errors.empty? fully determines validity.
- Line 145: The current single-call Rails.logger.error e.backtrace.join("\n")
emits a giant blob; change it to log the exception message and then log each
entry of e.backtrace in its own Rails.logger.error call to produce one event per
line. Locate the error handling in ImportsController (the block referencing
variable e and calling e.backtrace.join) and replace the join usage with: first
log e.message (or e.inspect) via Rails.logger.error, then iterate e.backtrace
and call Rails.logger.error for each backtrace line so downstream log collectors
index lines correctly.
- Around line 122-151: The preflight action contains heavy business logic that
should be extracted into a model/service; create a PORO Import::Preflight (e.g.,
in app/models or app/services) that accepts params and the family and exposes a
run method returning a result object with payload and status. Move all domain
methods (sure_import_preflight_payload, preflight_missing_required_headers,
preflight_required_header_labels, preflight_normalized_header,
apply_preflight_import_defaults and any private helpers used by them) into
Import::Preflight, implement the parsing/validation there, and update the
controller preflight to simply call Import::Preflight.new(params,
current_resource_owner.family).run and render result.payload with result.status
while keeping the existing rescue handlers unchanged.
- Around line 462-478: The controller method apply_preflight_import_defaults
duplicates the column-label defaults already defined in MintImport#set_mappings;
extract a single source of truth by adding a non-persisting class method or
constant on MintImport (e.g. MintImport.default_column_mappings) that returns
the hash of defaults (signage_convention, date_col_label, date_format,
name_col_label, amount_col_label, currency_col_label, account_col_label,
category_col_label, tags_col_label, notes_col_label, entity_type_col_label) and
then update apply_preflight_import_defaults to call
import.assign_attributes(MintImport.default_column_mappings) and update
MintImport#set_mappings to use self.class.default_column_mappings (or the
constant) before saving.
- Around line 423-430: The code currently re-parses the NDJSON by calling
SureImport.dry_run_totals_from_ndjson(content) and also instantiates SureImport
just to read max_row_count; replace these with cheaper alternatives: derive
entity_counts from the already-populated line_counts (e.g. map line_counts to
the same shape returned by dry_run_totals_from_ndjson) instead of calling
SureImport.dry_run_totals_from_ndjson(content), and change the max_row_count
access to a class method (call SureImport.max_row_count) rather than
instantiating SureImport.new; update any method definition in SureImport from
def max_row_count to def self.max_row_count so subclasses can override it as
before.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7e5da8c0-f41d-4b39-b4db-d4643775319b

📥 Commits

Reviewing files that changed from the base of the PR and between 26e1f00a28c941f46d1cba3cf938b27b5bb1b3e0 and aee717fc29b1efc638b1318dc3f043a6489c1497.

📒 Files selected for processing (2)
  • app/controllers/api/v1/imports_controller.rb
  • test/controllers/api/v1/imports_controller_test.rb
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/controllers/api/v1/imports_controller_test.rb


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/controllers/api/v1/imports_controller.rb (1)

174-195: 🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win

Permit rows_to_skip or this preflight path will ignore it.

preflight_csv_content uses import.rows_to_skip, but import_config_params never allows :rows_to_skip through. A client can send the field and still get parsing from Line 1, which breaks header detection and row counts for files with leading preamble rows.

🛠️ Minimal fix
     def import_config_params
       params.permit(
+        :rows_to_skip,
         :date_col_label,
         :amount_col_label,
         :name_col_label,
         :category_col_label,

As per coding guidelines, app/controllers/**/*.rb: Implement strong parameters and CSRF protection throughout the application.

Also applies to: 460-464

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 174 - 195, The
preflight flow reads import.rows_to_skip but import_config_params does not
permit :rows_to_skip, so add :rows_to_skip to the strong params in
import_config_params so incoming requests can set rows_to_skip; update any other
identical params permit block that configures import settings (the same param
list used elsewhere around the controller) to include :rows_to_skip as well so
header detection and row counts honor the client-sent value (references:
import_config_params, preflight_csv_content, import.rows_to_skip).
🧹 Nitpick comments (1)
app/controllers/api/v1/imports_controller.rb (1)

427-435: ⚡ Quick win

Avoid reparsing the NDJSON payload in preflight.

Lines 389-418 already walk every nonblank line and build line_counts, but Line 427 parses the whole payload again via SureImport.dry_run_totals_from_ndjson(content). On the 10MB limit this doubles work in the hot path; it would be cheaper to derive entity_counts from the first pass or have one model/service method return both validation output and totals.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/controllers/api/v1/imports_controller.rb` around lines 427 - 435, The
code reparses the NDJSON by calling
SureImport.dry_run_totals_from_ndjson(content) to build entity_counts even
though the earlier pass already produced line_counts; replace that call by
deriving entity_counts from the existing line_counts (or change the earlier
parsing routine to return both validation output and totals) so you don't
reprocess content during preflight—update references around entity_counts,
line_counts, nonblank_rows_count and the code that uses
SureImport.dry_run_totals_from_ndjson to consume the already-computed totals
instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 127-156: The preflight logic in ImportsController (method
preflight and its helpers preflight_sure_import and preflight_csv_import) should
be extracted into a PORO or model-side service (e.g., ImportPreflight or
Preflight::Engine) that encapsulates file validation, CSV/NDJSON parsing, header
normalization, stats computation, and error construction; update
ImportsController#preflight to only instantiate and call this new service with
current_resource_owner.family and preflight_import_type and render the returned
result or map service-raised errors (ActiveRecord::RecordNotFound,
CSV::MalformedCSVError, StandardError) to HTTP responses; ensure complex
validations use ActiveModel/ActiveRecord validations on the new class and keep
controller limited to delegation and rendering, preserving existing behavior of
preflight_sure_import and preflight_csv_import by moving their logic into
methods on the new service.

---

Outside diff comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 174-195: The preflight flow reads import.rows_to_skip but
import_config_params does not permit :rows_to_skip, so add :rows_to_skip to the
strong params in import_config_params so incoming requests can set rows_to_skip;
update any other identical params permit block that configures import settings
(the same param list used elsewhere around the controller) to include
:rows_to_skip as well so header detection and row counts honor the client-sent
value (references: import_config_params, preflight_csv_content,
import.rows_to_skip).

---

Nitpick comments:
In `@app/controllers/api/v1/imports_controller.rb`:
- Around line 427-435: The code reparses the NDJSON by calling
SureImport.dry_run_totals_from_ndjson(content) to build entity_counts even
though the earlier pass already produced line_counts; replace that call by
deriving entity_counts from the existing line_counts (or change the earlier
parsing routine to return both validation output and totals) so you don't
reprocess content during preflight—update references around entity_counts,
line_counts, nonblank_rows_count and the code that uses
SureImport.dry_run_totals_from_ndjson to consume the already-computed totals
instead.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2ab62327-5e9e-42a9-be4b-1b46c0baad1c

📥 Commits

Reviewing files that changed from the base of the PR and between aee717fc29b1efc638b1318dc3f043a6489c1497 and 316c3e7fd045cfb32f2dea21a37bd29436b4825e.

📒 Files selected for processing (2)
  • app/controllers/api/v1/imports_controller.rb
  • test/controllers/api/v1/imports_controller_test.rb
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/controllers/api/v1/imports_controller_test.rb


jjmata commented May 3, 2026

Follow-up Review — addressing the previous round

Good iteration. The two biggest structural concerns are resolved and a number of test gaps have been filled. Here's a precise status on each item, plus new concerns that emerged from the refactor.


Previously raised — now resolved ✅

#1 Skinny controllers — ✅ Logic is now in app/models/import/preflight.rb and the action is ~4 lines
#2 Silent type fallback — ✅ Unknown types return an explicit 422 invalid_import_type
#3 PdfImport/QifImport leak — ✅ Import::Preflight::IMPORT_TYPES explicitly lists the 7 supported types
#5 valid inconsistency for empty files — ✅ Both CSV and NDJSON now emit a no_data_rows error and return valid: false

Previously raised — still open ❌

#6 — SureImport.new.max_row_count still allocates a model for a constant

# app/models/import/preflight.rb, inside sure_import_preflight_payload
warnings << "Row count exceeds this import type's publish limit." if nonblank_rows_count > SureImport.new.max_row_count

This is unchanged. A class constant or class method would avoid the allocation.
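A minimal sketch of the class-level alternative; the 10_000 limit is a placeholder, not the app's real value:

```ruby
# Sketch: expose the row limit without allocating a model instance.
class SureImport
  MAX_ROW_COUNT = 10_000 # placeholder value

  def self.max_row_count
    MAX_ROW_COUNT
  end

  # Instance method delegates to the class so existing callers keep working.
  def max_row_count
    self.class.max_row_count
  end
end
```

Callers then compare against SureImport.max_row_count directly, with no throwaway object.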


#7 — Backtrace still logged as one concatenated line

# app/controllers/api/v1/imports_controller.rb
Rails.logger.error e.backtrace.join("\n")

Also unchanged.
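A per-line logging sketch, using a stdlib Logger in place of Rails.logger:

```ruby
require "logger"
require "stringio"

# Sketch: log the exception message, then each backtrace frame as its own
# entry, so log aggregators don't receive one giant concatenated line.
# `logger` stands in for Rails.logger.
def log_exception(logger, error)
  logger.error("#{error.class}: #{error.message}")
  (error.backtrace || []).each { |frame| logger.error(frame) }
end
```

Compared with join("\n"), each frame gets its own severity prefix and timestamp.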


#4 (CSV stats) — partially resolved but still semantically odd

The stats block is now hardcoded for CSV:

stats: {
  rows_count: row_count,
  valid_rows_count: row_count,  # always row_count, even when valid: false
  invalid_rows_count: 0         # always 0, even when valid: false
}

A response where valid: false but valid_rows_count == rows_count and invalid_rows_count == 0 will be confusing to API clients. They'll reasonably ask: "if no rows are invalid, why is the import invalid?" The intent (CSV rows are always parseable; invalidity comes from config) is reasonable but should be reflected in the field names. Consider either renaming to parsed_rows_count / unparseable_rows_count, or documenting that for CSV valid_rows_count reflects parsing success, not config completeness.


#8 (test coverage) — partially resolved; gaps remain

Added: unknown type, QifImport, empty CSV, empty NDJSON file. Still missing:

  • rows_to_skip — the csv_content_for method has a branch that drops header rows; no test exercises it.
  • File size limit — no test sends a file/content that exceeds Import::MAX_CSV_SIZE.
  • CSV::MalformedCSVError rescue — the rescue clause in the controller exists but no test triggers it.
  • col_sep: ";" — no test confirms the separator is forwarded through Import.parse_csv_str.

New issues introduced by the refactor

N1 — Dual return type from upload helper methods is hard to follow

Every upload helper can return either a plain array [content, filename, content_type] or a Response object (on error). The callers detect this with:

return upload_attributes if upload_attributes.is_a?(Response)
content, filename, content_type = upload_attributes

This is an implicit protocol that is easy to break silently — a future contributor adding a new error case inside an upload helper must know to return a Response, not raise or render. The pattern also obscures intent. A cleaner approach: raise a domain exception class (e.g., PreflightError) from upload helpers and rescue it once in call, or have helpers return nil/Response consistently with a single distinguished type.
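A sketch of the exception-based protocol under these assumptions — PreflightError, the helper name, and the payload shape here are illustrative, not the PR's actual code:

```ruby
# Sketch: upload helpers raise a typed error instead of returning a
# Response object, and `call` rescues it once at the boundary.
class PreflightError < StandardError
  attr_reader :status, :payload

  def initialize(status:, payload:)
    @status = status
    @payload = payload
    super(payload[:message] || "preflight failed")
  end
end

# Hypothetical upload helper: always returns the attributes array,
# or raises — never a second return type.
def csv_file_upload_attributes(upload)
  raise PreflightError.new(status: 422, payload: { message: "file is required" }) if upload.nil?

  [upload[:content], upload[:filename], upload[:content_type]]
end

def call(upload)
  content, filename, content_type = csv_file_upload_attributes(upload)
  { status: 200, payload: { filename: filename, bytes: content.bytesize, content_type: content_type } }
rescue PreflightError => e
  { status: e.status, payload: e.payload }
end
```

Adding a new error case now means raising, which the single rescue already handles — there is no is_a?(Response) check to forget.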

N2 — Import::Preflight::IMPORT_TYPES must be manually kept in sync with Import::TYPES

Import::TYPES (in import.rb) and Import::Preflight::IMPORT_TYPES (in preflight.rb) are separate constants. Adding a new import subtype to the application requires updating both files or the new type is silently unsupported by preflight. Consider deriving the preflight list from the canonical one:

IMPORT_TYPES = (Import::TYPES - %w[PdfImport QifImport]).freeze

This makes the exclusion explicit and ensures new types are automatically considered.

N3 — params.permit(...) called inside a model PORO

import_config_params calls params.permit(...) inside Import::Preflight. Strong parameters are a controller-layer concern; ActionController::Parameters#permit only works because the raw params object is passed in. If the PORO is ever instantiated outside a request context (e.g. in a background job or test helper), this will fail or silently permit nothing. The controller should strip and pass a plain hash to the PORO, keeping the boundary clean.
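One way to keep that boundary clean, sketched with assumed param names (col_sep, rows_to_skip):

```ruby
# Sketch: the PORO only ever sees a plain Hash; strong parameters stay
# in the controller layer.
class ImportPreflight
  def initialize(family:, config:)
    @family = family
    @config = config # plain Hash with symbol keys, never ActionController::Parameters
  end

  def col_sep
    @config.fetch(:col_sep, ",")
  end
end

# In the controller (illustrative, not the PR's actual code):
#   config = params.require(:import).permit(:col_sep, :rows_to_skip).to_h.symbolize_keys
#   ImportPreflight.new(family: current_resource_owner.family, config: config)
```

Instantiating the PORO from a job or test helper then needs nothing but a literal hash.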

N4 — Test name is misleading for QifImport rejection

test "should reject preflight types that do not use CSV or Sure NDJSON content" do
  # sends type: "QifImport"
  assert_equal "invalid_import_type", response_data["error"]
end

QifImport is rejected because it is not in Import::Preflight::IMPORT_TYPES at all, not because it "does not use CSV or Sure NDJSON content" — a QifImport actually does use a text format. The test name implies the reason is content-type incompatibility, but the real reason is a missing allowlist entry. This matters if the allowlist changes in the future.


Summary

The structural concerns from the first round are fixed. The two minor nits (#6, #7) and the test gaps carry over. Three new issues from the refactor deserve attention before merge: the dual-return-type protocol in upload helpers (N1), the manually-maintained type list (N2), and the params.permit call inside the model (N3).


Generated by Claude Code

@jjmata jjmata self-requested a review May 3, 2026 09:32
@jjmata jjmata removed the contributor:flagged Contributor flagged for review by trust analysis. label May 3, 2026
@jjmata jjmata added this to the v0.7.1 milestone May 3, 2026
Collaborator

@jjmata jjmata left a comment


See review comments, @JSONbored.

@JSONbored JSONbored force-pushed the codex/feat-api-import-preflight branch from f0b39fa to 727d504 Compare May 3, 2026 16:52
@brin-security-scanner brin-security-scanner Bot added the contributor:flagged Contributor flagged for review by trust analysis. label May 3, 2026
@JSONbored JSONbored force-pushed the codex/feat-api-import-preflight branch from 727d504 to d41695e Compare May 3, 2026 19:05
@jjmata jjmata removed the contributor:flagged Contributor flagged for review by trust analysis. label May 3, 2026
Collaborator

jjmata commented May 3, 2026

Round 3 Re-review

The branch was rebased onto a newer main but the code in app/models/import/preflight.rb, app/controllers/api/v1/imports_controller.rb, and the test file is unchanged from the previous round. None of the open items from the last review have been addressed.

Carrying them forward for visibility:

Still open from round 2:

  1. SureImport.new.max_row_count — instantiating a model object to read what should be a class-level constant. Expose MAX_ROW_COUNT as a constant on SureImport or make max_row_count a class method.

  2. Backtrace logging — e.backtrace.join("\n") emits one giant log entry. Use e.backtrace.each { |l| Rails.logger.error l } instead.

  3. valid_rows_count always equals row_count for CSV — when valid: false due to missing headers, the response still shows valid_rows_count == rows_count and invalid_rows_count == 0. The naming implies per-row validity, but the field reflects parse success, not config completeness. Either rename the fields (parsed_rows_count/unparseable_rows_count) or add a note in the OpenAPI description clarifying the distinction.

  4. Missing test coverage — the rows_to_skip branch in csv_content_for, file-size rejection, the CSV::MalformedCSVError rescue path, and col_sep: ";" forwarding still have no tests.

  5. Dual return type in upload helpers — methods like csv_file_upload_attributes return either a [content, filename, content_type] array or a Response error object. Callers detect this with is_a?(Response). This implicit protocol is fragile. Prefer a single return type or raise a typed exception.

  6. Import::Preflight::IMPORT_TYPES must be kept in sync with Import::TYPES manually — derive it instead: IMPORT_TYPES = (Import::TYPES - %w[PdfImport QifImport]).freeze.

  7. params.permit(...) in the model PORO — strong parameters belong in the controller. Pass a plain hash to Import::Preflight, not an ActionController::Parameters object.

No new issues were introduced (no code changed). Please address the items above before requesting a merge.


Generated by Claude Code

@JSONbored JSONbored force-pushed the codex/feat-api-import-preflight branch from d41695e to 80564ab Compare May 4, 2026 04:55
@brin-security-scanner brin-security-scanner Bot added the contributor:flagged Contributor flagged for review by trust analysis. label May 4, 2026
@JSONbored
Contributor Author

Round 3 Re-review

The branch was rebased onto a newer main but the code in app/models/import/preflight.rb, app/controllers/api/v1/imports_controller.rb, and the test file is unchanged from the previous round. None of the open items from the last review have been addressed.

Carrying them forward for visibility:

Still open from round 2:

  1. SureImport.new.max_row_count — instantiating a model object to read what should be a class-level constant. Expose MAX_ROW_COUNT as a constant on SureImport or make max_row_count a class method.
  2. Backtrace logging — e.backtrace.join("\n") emits one giant log entry. Use e.backtrace.each { |l| Rails.logger.error l } instead.
  3. valid_rows_count always equals row_count for CSV — when valid: false due to missing headers, the response still shows valid_rows_count == rows_count and invalid_rows_count == 0. The naming implies per-row validity, but the field reflects parse success, not config completeness. Either rename the fields (parsed_rows_count/unparseable_rows_count) or add a note in the OpenAPI description clarifying the distinction.
  4. Missing test coverage — the rows_to_skip branch in csv_content_for, file-size rejection, the CSV::MalformedCSVError rescue path, and col_sep: ";" forwarding still have no tests.
  5. Dual return type in upload helpers — methods like csv_file_upload_attributes return either a [content, filename, content_type] array or a Response error object. Callers detect this with is_a?(Response). This implicit protocol is fragile. Prefer a single return type or raise a typed exception.
  6. Import::Preflight::IMPORT_TYPES must be kept in sync with Import::TYPES manually — derive it instead: IMPORT_TYPES = (Import::TYPES - %w[PdfImport QifImport]).freeze.
  7. params.permit(...) in the model PORO — strong parameters belong in the controller. Pass a plain hash to Import::Preflight, not an ActionController::Parameters object.

No new issues were introduced (no code changed). Please address the items above before requesting a merge.

Generated by Claude Code

Thanks again for the detailed review, and sorry for the extra churn here. I missed the carried-forward items from the earlier review pass when I first refreshed the branch, so this follow-up is meant to address those directly.

I pushed 80564ab with the remaining cleanup:

  • rebased onto current upstream/main
  • removed the remaining Sure NDJSON preflight double-parse by deriving entity_counts from the already-built line_counts
  • added SureImport.dry_run_totals_from_line_type_counts(...) so preflight and normal dry-run totals share the same mapping
  • kept SureImport.max_row_count class-level
  • kept the PreflightError upload-helper protocol and controller-bound strong params
  • extracted Mint CSV defaults into MintImport.default_column_mappings
  • added model coverage for SureImport line-count totals and Mint default mappings

The previous fixes are still in place: preflight logic lives in Import::Preflight, unsupported types return invalid_import_type, CSV stats are documented as parse-level counts, empty CSV/NDJSON both return no_data_rows, and the missing test cases called out earlier are covered.

Validation:

  • bin/rails test test/controllers/api/v1/imports_controller_test.rb test/models/sure_import_test.rb test/models/mint_import_test.rb test/models/import_encoding_test.rb
  • RAILS_ENV=test bundle exec rake rswag:specs:swaggerize
  • scoped bin/rubocop
  • git diff --check

@JSONbored JSONbored requested a review from jjmata May 4, 2026 05:51
@jjmata
Collaborator

jjmata commented May 4, 2026

Bunch of merge conflicts here @JSONbored.

@jjmata jjmata removed the contributor:flagged Contributor flagged for review by trust analysis. label May 4, 2026
Collaborator

@jjmata jjmata left a comment


Fix merge conflicts, please.

@brin-security-scanner brin-security-scanner Bot added the contributor:flagged Contributor flagged for review by trust analysis. label May 5, 2026
@JSONbored JSONbored requested a review from jjmata May 5, 2026 02:43
@jjmata jjmata removed the contributor:flagged Contributor flagged for review by trust analysis. label May 6, 2026

Round 4 Re-review

Good progress. Items 1, 2, 4, 5, 6, and 7 from Round 3 are correctly addressed. One carry-over remains open, and four new issues came up in this pass.


Still open from Round 3

#3 — valid_rows_count/invalid_rows_count naming is still misleading for CSV

In app/models/import/preflight.rb, the CSV stats block is:

stats: {
  rows_count: parsed_rows_count,
  valid_rows_count: parsed_rows_count,   # always == rows_count regardless of errors
  invalid_rows_count: 0                  # always 0 regardless of errors
},

The test confirms this — "should report missing required CSV headers during preflight" asserts valid_rows_count: 1 and invalid_rows_count: 0 even when valid: false and required headers are missing. A response where valid: false but valid_rows_count == rows_count && invalid_rows_count == 0 is internally contradictory from a client's perspective.

The OpenAPI description was updated to explain the asymmetry but that documents the confusion rather than fixing it. Please either:

  • Rename to parsed_rows_count / unparseable_rows_count in the CSV path, or
  • Remove valid_rows_count / invalid_rows_count from the CSV response entirely (they only have meaning in the NDJSON path where they reflect per-line parse outcomes)
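The first option could look like this; parsed_rows_count is an assumed local and the helper is illustrative, not the PR's actual code:

```ruby
# Sketch: CSV stats named for what they actually measure — parse success —
# so a valid: false response no longer contradicts its own row counts.
def csv_stats(parsed_rows_count)
  {
    rows_count: parsed_rows_count,
    parsed_rows_count: parsed_rows_count, # rows the CSV parser accepted
    unparseable_rows_count: 0             # CSV parsing here is all-or-nothing
  }
end
```

Clients can then read config-level validity from valid/errors and parse-level counts from stats without the two appearing to disagree.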

New issues

N1 — CSV_IMPORT_TYPES is defined but never used (dead code)

# app/models/import/preflight.rb
CSV_IMPORT_TYPES = (IMPORT_TYPES - [ "SureImport" ]).freeze

No method in Import::Preflight references this constant. The dispatch is type == "SureImport" ? sure_import_response : csv_import_response(type). Remove it.


N2 — response local variable shadows ActionController::Base#response

# app/controllers/api/v1/imports_controller.rb
def preflight
  response = Import::Preflight.new(family: current_resource_owner.family, params: preflight_params).call
  render json: response.payload, status: response.status

response is a built-in Rails controller method that returns the ActionDispatch::Response object. Shadowing it with a local variable is a Rails anti-pattern. Rename the local to preflight_result or result.


N3 — Constant mutation in the file-size test is unsafe for parallel test execution

# test/controllers/api/v1/imports_controller_test.rb
original_value = Import::MAX_CSV_SIZE
Import.send(:remove_const, :MAX_CSV_SIZE)
Import.const_set(:MAX_CSV_SIZE, test_limit)
# ...
ensure
  Import.send(:remove_const, :MAX_CSV_SIZE)
  Import.const_set(:MAX_CSV_SIZE, original_value)

Rails Minitest runs in parallel threads by default. A second test thread that reads Import::MAX_CSV_SIZE between the remove_const and const_set in the ensure block will either see the wrong value or raise a NameError. Per project conventions, stub with mocha instead — note that stubbing const_get would only work if the code actually read the limit through const_get, which it doesn't. Since the limit is already readable via Import.max_csv_size, stub that method — e.g. Import.stubs(:max_csv_size).returns(test_limit) — and have the size check call the method rather than the constant, so tests never touch the constant at all.
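A sketch of the injectable-limit approach; the 5 MB value is a placeholder, not the app's real limit:

```ruby
# Sketch: the size check reads the limit through a class method, so a
# test can stub Import.max_csv_size (e.g. with mocha) instead of
# mutating the constant under parallel test threads.
class Import
  MAX_CSV_SIZE = 5 * 1024 * 1024 # placeholder value

  def self.max_csv_size
    MAX_CSV_SIZE
  end

  def self.csv_too_large?(content)
    content.bytesize > max_csv_size
  end
end
```

A test then stubs the method for its own thread-local expectations and never races other threads on the constant.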


N4 — invalid_import_type_response has a redundant errors array that duplicates message

# app/models/import/preflight.rb
payload: {
  error: "invalid_import_type",
  message: "type must be one of: #{IMPORT_TYPES.join(', ')}",
  errors: [ "type must be one of: #{IMPORT_TYPES.join(', ')}" ]  # exact duplicate of message
}

No other 422 response in this controller includes an errors array alongside message. Drop the errors key here.


Generated by Claude Code



@brin-security-scanner brin-security-scanner Bot added the contributor:verified Contributor passed trust analysis. label May 6, 2026

Merge from main received (May 6). Good progress across four rounds — most items are resolved. One carry-over and four new items remain open from the Round 4 re-review (May 6):

Carry-over from Round 3:

  • valid_rows_count/invalid_rows_count for CSV — these always report rows_count/0 regardless of validation errors, making the API response misleading when valid: false. Row-level validity counts should reflect actual parse/header errors.

New items from Round 4 (per the May 6 re-review comment) — please address those before tagging for a Round 5 review.


Code review note from Claude Code


Generated by Claude Code

@JSONbored
Contributor Author

Merge from main received (May 6). Good progress across four rounds — most items are resolved. One carry-over and four new items remain open from the Round 4 re-review (May 6):

Carry-over from Round 3:

  • valid_rows_count/invalid_rows_count for CSV — these always report rows_count/0 regardless of validation errors, making the API response misleading when valid: false. Row-level validity counts should reflect actual parse/header errors.

New items from Round 4 (per the May 6 re-review comment) — please address those before tagging for a Round 5 review.

Code review note from Claude Code

Generated by Claude Code

This is done as well.


New commits pushed 2026-05-07. jjmata requested changes twice — first citing inline review comments, then "fix merge conflicts." New commits were pushed after each. Needs @jjmata to re-review the current state and confirm the earlier inline issues are fully resolved (notably: preflight using ensure_write_scope instead of ensure_read_scope, and any outstanding CRB actionable comments) before this can proceed.

Generated by Claude Code




class Import::Preflight
Response = Struct.new(:status, :payload, keyword_init: true)
UUID_PATTERN = /\A[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\z/i
Collaborator


We should use the one in base_controller.rb instead.

Comment thread app/models/import.rb
end

def self.max_csv_size
MAX_CSV_SIZE
Collaborator


What dictates this size? Should we document it somewhere more public?


Labels

contributor:verified Contributor passed trust analysis. pr:verified PR passed security analysis.

Development

Successfully merging this pull request may close these issues.

3 participants