Add FuseConcatPass to eliminate redundant concat ops (#18827)

Open
ryan-monroe wants to merge 1 commit into pytorch:main from ryan-monroe:export-D97667069

Conversation


@ryan-monroe ryan-monroe commented Apr 11, 2026

Summary:

Adds an FX-level pass that eliminates concat ops which can be proven structurally redundant before the TOSA backend / Vela compiler ever sees them. In the Gen2 Executorch ARM / Ethos-U stack, torch.cat lowers to TOSA CONCAT, which Vela converts to N MemoryCopy ops — real DMA on the NPU. Catching the obvious cases up front keeps the TOSA flatbuffer fed to Vela smaller, keeps debug graphs honest, and provides defensive coverage on TOSA targets where Vela's own scheduler doesn't run (e.g., the VGF backend).

Five rewrite patterns are handled (inspired by Espresso's bolt/nn/espresso/transforms/remove_nops.py):

  1. Single-input concat: cat([x], dim) ≡ x — replace cat with x.
  2. Concat-then-slice (exact): cat([a, b, ...], dim) feeding a slice_copy that extracts exactly one original input — replace the slice with the corresponding cat input directly.
  3. Slice-then-concat (full): cat([slice(x, d, s0, e0), slice(x, d, s1, e1), ...], dim) reconstructing x exactly (contiguous slices covering the full source dimension) — replace cat with x.
  4. Concat-then-sub-slice: a slice_copy whose range falls entirely within one cat input — replace with an adjusted slice on that input directly.
  5. Slice-then-concat (partial): contiguous slices of the same tensor concatenated back but covering only a sub-range of the source — replace with a single slice on the source.
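The decision logic behind these patterns can be sketched in plain Python. This is an illustrative model only, not the actual FuseConcatPass: the toy IR (a slice represented as a `(source, start, end)` triple along a fixed dim) and both helper names are invented here for exposition.

```python
def fold_single_input_cat(inputs):
    """Pattern 1: cat([x]) is the identity; return x itself, else no match."""
    return inputs[0] if len(inputs) == 1 else None

def fold_slice_then_cat(slices, source_len):
    """Patterns 3 and 5: fold cat-of-contiguous-slices of one source.

    slices: list of (source_name, start, end) triples, end-exclusive,
            all along the same dim. Returns a folded node or None.
    """
    if len({s[0] for s in slices}) != 1:
        return None                      # mixed sources: not foldable
    name = slices[0][0]
    start = slices[0][1]
    cursor = start
    for _, s, e in slices:
        if s != cursor:                  # gap or overlap: not contiguous
            return None
        cursor = e
    if start == 0 and cursor == source_len:
        return ("source", name)          # pattern 3: the cat IS the source
    return ("slice", name, start, cursor)  # pattern 5: one merged slice

# cat([x[0:4], x[4:10]]) over a length-10 source reconstructs x exactly
assert fold_slice_then_cat([("x", 0, 4), ("x", 4, 10)], 10) == ("source", "x")
# cat([x[2:4], x[4:7]]) covers only a sub-range: fold to the slice x[2:7]
assert fold_slice_then_cat([("x", 2, 4), ("x", 4, 7)], 10) == ("slice", "x", 2, 7)
# a gap (x[0:3], x[5:8]) is not foldable
assert fold_slice_then_cat([("x", 0, 3), ("x", 5, 8)], 10) is None
```

The real pass performs the same checks against `slice_copy` node arguments on the FX graph rather than on tuples.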

Empirical impact across the production EMG model fleet

Measured by running every frl/ctrl/torchstream/torchstream/pt2/tests/test_emg_lowering_* quantize+lower test with FuseConcatPass instrumented to log per-call counters, then comparing against the same target with the pass commented out in arm_pass_manager.py. All 8 model targets pass under both configurations.

| Model | Cats scanned | Eliminated | Pattern fired |
| --- | --- | --- | --- |
| cascade_classifier | 5 | 3 (60%) | single-input |
| mux_fusion | 8 | 3 (38%) | single-input |
| combined_control | 11 | 3 (27%) | single-input |
| cascade_detector | 14 | 0 | — |
| cascade_hw_classifier | 12 | 0 | — |
| handwriting | 106 | 0 | — |
| wake | 11 | 0 | — |
| auth | 6 | 0 | — |
| Total | 173 | 9 | all single-input |

Two findings worth highlighting

Patterns 2–5 (the slice-related rewrites) never matched on any production EMG model. PyTorch's ATen lowering on this fleet doesn't produce the cat↔slice algebra these patterns target. They remain useful for non-EMG TOSA workloads — and for the VGF backend where Vela's own optimizer doesn't run — but on the current EMG production set they are unexercised.

Vela already folds single-input cats during compilation. A before/after measurement on cascade_classifier (the model with the highest hit rate, 3/5 cats eliminated) shows Vela emits the same 9 MemoryCopy ops and consumes the same 481,339 NPU cycles either way. The eliminated cats reappear in the Vela operator table as Reshape → MemoryCopy instead of Concat → MemoryCopy. Total NPU runtime is unchanged. Pre-Vela artifacts do shrink (TOSA flatbuffer −16 KB / −0.68%, peak staging −1.5 KB / −0.45%), but post-Vela on-device performance is identical.

Net effect

This pass is value-additive even where it doesn't move NPU cycles:

  • Cleaner TOSA fed into Vela (~16 KB smaller per cascade_classifier instance).
  • Slightly tighter peak staging during Vela scheduling (~1.5 KB).
  • Defensive coverage for TOSA-only targets without a Vela-grade scheduler (notably the VGF / Vulkan path).
  • More truthful FX / EXIR debug graphs — concats that were genuinely no-ops no longer show up in model-explorer, delegation_metadata.json, or the lowered graph dumps.

It does not produce measurable NPU cycle savings on the current EMG production fleet. The patterns that would have produced real Vela savings (cat↔slice algebra) don't appear in these models.

Authored with Claude.

Differential Revision: D97667069


pytorch-bot Bot commented Apr 11, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18827

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures

As of commit 5cc8286 with merge base ada8e35:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 11, 2026

meta-codesync Bot commented Apr 11, 2026

@ryan-monroe has exported this pull request. If you are a Meta employee, you can view the originating Diff in D97667069.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@meta-codesync meta-codesync Bot changed the title Add FuseConcatPass to eliminate redundant concat ops Add FuseConcatPass to eliminate redundant concat ops (#18827) Apr 13, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 13, 2026
Summary:

Concat (torch.cat) in the Gen2 Executorch ARM/Ethos-U stack is lowered to
TOSA CONCAT, which Vela then converts to N x MemoryCopy operations — real
DMA data movement on the NPU. This pass eliminates concat operations that
can be proven unnecessary at the FX graph level, preventing Vela from
generating MemoryCopy ops entirely.

Inspired by Espresso's concat elimination techniques
(bolt/nn/espresso/transforms/remove_nops.py), three patterns are handled:

1. Single-input concat: cat([x]) is a no-op, replaced with x.
2. Concat-then-slice: if every consumer of cat([a, b, ...]) is a
   slice_copy that extracts exactly one original input, bypass both.
3. Slice-then-concat: if contiguous slices of the same tensor are
   concatenated back, the result is the original tensor.

Differential Revision: D97667069
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 14, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 14, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 14, 2026
@ryan-monroe
Author

@pytorchbot rerun -f


pytorch-bot Bot commented Apr 14, 2026

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: argument command: invalid choice: 'rerun' (choose from 'merge', 'revert', 'rebase', 'label', 'drci', 'lint', 'fix-lint', 'apply-lint', 'cherry-pick')

usage: @pytorchbot [-h]
                   
                   {merge,revert,rebase,label,drci,lint,fix-lint,apply-lint,cherry-pick}
                   ...

Try @pytorchbot --help for more info.

ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 14, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 16, 2026
@Ninja91 Ninja91 added partner: arm For backend delegation, kernels, demo, etc. from the 3rd-party partner, Arm ciflow/trunk labels Apr 17, 2026

pytorch-bot Bot commented Apr 17, 2026

Workflows were awaiting approval. CI has now been triggered for the ciflow labels on this PR.

@Ninja91 Ninja91 requested review from Ninja91 and gggekov April 17, 2026 21:13
@zingo zingo changed the title Add FuseConcatPass to eliminate redundant concat ops (#18827) Arm backend: Add FuseConcatPass to eliminate redundant concat ops (#18827) Apr 19, 2026
@meta-codesync meta-codesync Bot changed the title Arm backend: Add FuseConcatPass to eliminate redundant concat ops (#18827) Add FuseConcatPass to eliminate redundant concat ops (#18827) Apr 20, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 20, 2026
return first_start, expected_start


class FuseConcatPass(ArmPass):
Contributor

@gggekov Here's another optimization pass we plan to add. Please comment if this will impact downstream passes/regor transforms

Collaborator

I recommend testing the passes using EthosU85PipelineINT or EthosU55PipelineINT in order to know if the pass works with the current version of Vela.

Collaborator

Overall, for me it makes sense for the optimization of some CONCATs to be done in ExecuTorch rather than Vela.

ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 22, 2026
@github-actions github-actions Bot added the module: arm Issues related to arm backend label Apr 22, 2026
@gggekov
Collaborator

gggekov commented Apr 22, 2026

Hello @ryan-monroe,
Thanks for the patch, great initiative to reduce the number of MemoryCopy ops from CONCATs!

I see in the unit tests you only test with executorch.backends.arm.test.tester.test_pipeline.PassPipeline. Given that the end metric you care about is the number of CONCATs in the TOSA flatbuffer, wouldn't it make sense to test instead with EthosU85PipelineINT / EthosU55PipelineINT or TosaPipelineINT, as these pipelines generate a TOSA flatbuffer that is run on device? For counting the number of CONCAT ops in the TOSA flatbuffer, you can use count_tosa_ops in backends/arm/test/tester/test_pipeline.py. See an example of how to do that here: https://github.com/pytorch/executorch/blob/main/backends/arm/test/misc/test_transpose_counts.py#L537

rank = len(get_first_fake_tensor(node).shape)
dim = _int_arg(node, 1, 0)
dim = (dim + rank) % rank
start = _int_arg(node, 2, 0)
Collaborator

Don't you need to normalize both start and end as these can be negative?

Contributor

Would this be negative even when we export with static shapes? @oscarandersson8218

Collaborator

Yes, I would think so as you can do x[-3:3].
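The normalization the reviewers are discussing can be sketched as follows. This is a hypothetical helper, not code from the PR: slice start/end bounds can be negative (e.g. `x[-3:3]`) even under static shapes, so both must be mapped into `[0, size]` before any range comparison, mirroring Python's own slice semantics.

```python
def normalize_slice_bounds(start, end, size):
    """Map possibly-negative slice bounds to concrete offsets in [0, size]."""
    if start < 0:
        start += size
    if end < 0:
        end += size
    # Clamp out-of-range values, as Python slicing does
    start = max(0, min(start, size))
    end = max(0, min(end, size))
    return start, end

assert normalize_slice_bounds(-3, 3, 10) == (7, 3)     # x[-3:3]: empty range
assert normalize_slice_bounds(0, -1, 10) == (0, 9)     # drop the last element
assert normalize_slice_bounds(-100, 100, 10) == (0, 10)  # clamped full range
```

A start greater than the normalized end (as in the first case) denotes an empty slice, which the pass would need to treat as a non-match rather than a fold candidate.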

ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 23, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 24, 2026
@ryan-monroe ryan-monroe force-pushed the export-D97667069 branch 2 times, most recently from b3afe9f to b9418dd Compare April 29, 2026 15:22
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 29, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 29, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request Apr 30, 2026
@ryan-monroe ryan-monroe force-pushed the export-D97667069 branch 2 times, most recently from 4580303 to 0aaef0d Compare May 6, 2026 21:59
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request May 6, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request May 7, 2026
ryan-monroe added a commit to ryan-monroe/executorch that referenced this pull request May 7, 2026

Labels

ciflow/trunk · CLA Signed · fb-exported · meta-exported · module: arm · partner: arm

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants