Merged
58 commits
08b2987
Arm backend: Generate random op test inputs lazily
perheld Apr 24, 2026
acffcb0
CI: skip default-packages on every macos_job.yml callsite (#19297)
rascani May 5, 2026
83ac75c
feat(mlx): add handler for aten.roll (#19038)
ishangodawatta May 5, 2026
9915faf
Lora fix (#19304)
JacobSzwejbka May 5, 2026
ff25a2f
QNN SDK download: validate archive and retry on all errors (#19233)
rascani May 5, 2026
a0d6e9b
switch correctness checks to SNR-based assertion for cuda quant int4_…
Gasoonjia May 5, 2026
ca7d5cf
Add missing program_builder dep to arm test targets (#19266)
rascani May 5, 2026
15da1d1
change ffmpeg install path away from conda on linux (#19306)
JacobSzwejbka May 5, 2026
10a0c91
Improve Android Emulator Robustness (#19310)
JacobSzwejbka May 5, 2026
fe2ce06
Improve huggingface robustness (#19311)
JacobSzwejbka May 5, 2026
e0cc468
arm: validate archive and retry on all errors for FVP and toolchain d…
rascani May 5, 2026
9b95dd2
runner fix to mitigate the numerical issue (#19286)
billmguo May 5, 2026
165ac2e
Relax lora string test (#19312)
JacobSzwejbka May 5, 2026
5d07ce0
Cadence tests should retry (#19313)
JacobSzwejbka May 5, 2026
5dcf0ed
llama/rope: gate fp64 hf_precompute_freqs_cis on cos/sin scaling (#19…
rascani May 6, 2026
0f9de6a
Move CUDAGuard/CUDAStreamGuard static_assert tests out of CUDA fixtur…
psiddh May 6, 2026
5faf36e
Restrict XOR python export targets to fbcode (#19316)
psiddh May 6, 2026
6a8d341
Retry op numeric tests Arm (#19321)
JacobSzwejbka May 6, 2026
5b337e9
Declare pip as explicit dep (#19322)
JacobSzwejbka May 6, 2026
8ae05c2
Fix FuseMMWithAdd returning False after graph mutation
abeakkas May 6, 2026
1debeb6
Re-apply U55 reject split for bool permute test (#19320)
psiddh May 6, 2026
c7e8628
Fix retry variable on mac (#19333)
JacobSzwejbka May 6, 2026
3ffaf27
Revert torch-family pin centralization (#19334)
JacobSzwejbka May 6, 2026
cdcc915
Fix C++ -Werror regressions in llama runner (#19326)
psiddh May 6, 2026
3a4c3a1
Fix ExecutorTorch → ExecuTorch in comments only
WongJohnson May 6, 2026
3c4ec8f
Limit ARM retries to operator tests (#19343)
JacobSzwejbka May 6, 2026
851cffb
Fix missing check (#19340)
JacobSzwejbka May 6, 2026
af90130
route EthosU input/output memcpy through overridable hook (#19264)
3l1 May 6, 2026
1414bc1
Disable HF Xet storage to fix CI export timeouts (#19358)
digantdesai May 7, 2026
0ee31fc
Bump iOS XCTest timeout for ExecuTorchLLMTests (#19354)
psiddh May 7, 2026
dd4397f
Fix optimized grid sampler validation (#19373)
JacobSzwejbka May 7, 2026
a8ce9ce
Convert unconditional GTEST_SKIP tests to DISABLED_ prefix (#19355)
rascani May 7, 2026
563e237
Docathon: Add workflow to assign user on comment (#19294)
nil-is-all May 7, 2026
76d941e
More generic slice propagation before unary ops which works for non-c…
DrJessop May 7, 2026
74c7c91
Docathon automation: Add script to sync labels from issue to PR (#19374)
nil-is-all May 7, 2026
1643611
Disable HF Xet storage across all CI scripts (#19371)
digantdesai May 7, 2026
226c1c5
[DOC] Add redirects for moved ExecuTorch pages (#19338)
ymrohit May 7, 2026
bf8abb6
[DOC] Fix outdated version-pinned doc URLs (#19325)
XAheli May 7, 2026
d5ba603
Restore VGF skip guards and preload_deps shape (#19375)
psiddh May 7, 2026
d858cd9
Add optional offset arg to quantized_conv1d_nlc and precompute it AOT…
khazaei May 7, 2026
180edd3
Reorder slice before binary broadcast ops (#19346)
DrJessop May 7, 2026
fa857bd
ci: fix macOS PyTorch wheel cache key for branch-ref pins (#19350)
rascani May 7, 2026
91aef57
Update target.bzl to remove a comment (#19380)
psiddh May 7, 2026
ada8e35
Enable VGF tests and add Vulkan format compatibility shim (#19383)
psiddh May 8, 2026
3185f02
ci: install pinned torch before requirements-ci.txt on macOS (#19342)
rascani May 8, 2026
6df43e1
Skip test_mimi in internal CI
rascani May 8, 2026
1284d54
test_passes: wire QNN SDK into runtime env
rascani May 8, 2026
c564936
Use gpu_cpp_unittest for slim CUDA guard tests
rascani May 8, 2026
b57ac03
Re-land XNNPACK update (#19237)
GregoryComer May 8, 2026
e969a98
Fix torch.split fails in to_edge with alias annotations (#18700)
Lidang-Jiang May 8, 2026
7e16433
Add a16w8 reduce_sum FVP coverage for Ethos-U85 (#19319)
Ninja91 May 8, 2026
b3baac5
Replace external_deps with deps for prettytable (#19401)
psiddh May 8, 2026
9889c7c
Remove Vulkan shader DotSlash label
jiawei-lyu May 8, 2026
9e4e497
Make op_upsample_bilinear2d_aa_test deterministic (#19357)
psiddh May 8, 2026
4413a5c
[DOC] Add extension APIs to runtime API reference (#19385)
ymrohit May 9, 2026
93b764e
Hoist W4A8 activation quantization out of GEMM K-loop (#19209)
Gasoonjia May 9, 2026
dbbe9cb
[ET-VK] Make libtorch optional in custom op test binaries
May 8, 2026
0cafcb2
[ET-VK] Plumb subgroup property queries + VK_EXT_subgroup_size_control
May 8, 2026
17 changes: 0 additions & 17 deletions .ci/docker/build.sh
@@ -92,18 +92,6 @@ esac
TORCH_VERSION=$(cat ci_commit_pins/pytorch.txt)
BUILD_DOCS=1

# Pull channel + spec/url helpers out of torch_pin.py so install_pytorch.sh
# (which runs inside the docker build, where torch_pin.py isn't available)
# can decide between wheel install (test/release) and source build (nightly).
# Self-hosted runners often have python3 but not the unversioned python alias.
PYTHON_BIN=$(command -v python3 || command -v python)
TORCH_PIN_HELPERS=$(cd ../.. && "$PYTHON_BIN" -c "from torch_pin import CHANNEL, torch_spec, torchaudio_spec, torchvision_spec, torch_index_url_base; print(CHANNEL); print(torch_spec()); print(torchaudio_spec()); print(torchvision_spec()); print(torch_index_url_base())")
TORCH_CHANNEL=$(echo "${TORCH_PIN_HELPERS}" | sed -n '1p')
TORCH_SPEC=$(echo "${TORCH_PIN_HELPERS}" | sed -n '2p')
TORCHAUDIO_SPEC=$(echo "${TORCH_PIN_HELPERS}" | sed -n '3p')
TORCHVISION_SPEC=$(echo "${TORCH_PIN_HELPERS}" | sed -n '4p')
TORCH_INDEX_URL=$(echo "${TORCH_PIN_HELPERS}" | sed -n '5p')

# Copy requirements-lintrunner.txt from root to here
cp ../../requirements-lintrunner.txt ./

@@ -116,11 +104,6 @@ docker build \
--build-arg "PYTHON_VERSION=${PYTHON_VERSION}" \
--build-arg "MINICONDA_VERSION=${MINICONDA_VERSION}" \
--build-arg "TORCH_VERSION=${TORCH_VERSION}" \
--build-arg "TORCH_CHANNEL=${TORCH_CHANNEL}" \
--build-arg "TORCH_SPEC=${TORCH_SPEC}" \
--build-arg "TORCHAUDIO_SPEC=${TORCHAUDIO_SPEC}" \
--build-arg "TORCHVISION_SPEC=${TORCHVISION_SPEC}" \
--build-arg "TORCH_INDEX_URL=${TORCH_INDEX_URL}" \
--build-arg "BUCK2_VERSION=${BUCK2_VERSION}" \
--build-arg "LINTRUNNER=${LINTRUNNER:-}" \
--build-arg "BUILD_DOCS=${BUILD_DOCS}" \
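The removed helper pattern — printing several values from a Python module once, then splitting them into shell variables with `sed -n 'Np'` instead of invoking Python five times — can be sketched standalone. This is a hedged illustration: `pin_demo.py` is a hypothetical stand-in for `torch_pin.py`.

```shell
# Hypothetical module mimicking torch_pin.py's interface.
cat > /tmp/pin_demo.py <<'EOF'
CHANNEL = "test"

def torch_spec():
    return "torch==2.11.0"
EOF

# Self-hosted runners often have python3 but not the unversioned alias.
PYTHON_BIN=$(command -v python3 || command -v python)

# One Python invocation prints all values; sed picks them apart by line.
HELPERS=$(cd /tmp && "$PYTHON_BIN" -c "from pin_demo import CHANNEL, torch_spec; print(CHANNEL); print(torch_spec())")
DEMO_CHANNEL=$(echo "${HELPERS}" | sed -n '1p')
DEMO_SPEC=$(echo "${HELPERS}" | sed -n '2p')
```

The single-invocation form matters on CI images where each Python startup is slow and the helper module is only importable from the repo root.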
2 changes: 1 addition & 1 deletion .ci/docker/ci_commit_pins/pytorch.txt
@@ -1 +1 @@
release/2.11
release/2.11
29 changes: 3 additions & 26 deletions .ci/docker/common/install_pytorch.sh
@@ -17,24 +17,6 @@ install_domains() {
}

install_pytorch_and_domains() {
if [ "${TORCH_CHANNEL}" != "nightly" ]; then
# Test/release: install the published wheels directly. The specs and URL
# are passed in as docker build args (computed from torch_pin.py by
# .ci/docker/build.sh). RC wheels at /whl/test/ get re-uploaded under the
# same version, so use --no-cache-dir there to avoid stale cache hits.
local cache_flag=""
if [ "${TORCH_CHANNEL}" = "test" ]; then
cache_flag="--no-cache-dir"
fi
pip_install --force-reinstall ${cache_flag} \
"${TORCH_SPEC}" "${TORCHVISION_SPEC}" "${TORCHAUDIO_SPEC}" \
--index-url "${TORCH_INDEX_URL}/cpu"
return
fi

# Nightly: build pytorch from source against the pinned SHA in pytorch.txt
# so we catch upstream regressions, then install audio/vision from the
# commits that pytorch itself pins.
git clone https://github.com/pytorch/pytorch.git

# Fetch the target commit
@@ -45,19 +27,14 @@ install_pytorch_and_domains() {
chown -R ci-user .

export _GLIBCXX_USE_CXX11_ABI=1
# PyTorch's FindARM.cmake hard-fails when the SVE+BF16 compile probe
# doesn't pass — gcc-11 in this image is too old to accept the combined
# NEON/SVE/bfloat16 intrinsics the probe exercises. Executorch's aarch64
# runtime targets (phones, embedded) don't use SVE, so bypass the check.
export BUILD_IGNORE_SVE_UNAVAILABLE=1
# Then build and install PyTorch
conda_run python setup.py bdist_wheel
pip_install "$(echo dist/*.whl)"

# Defer to PyTorch's own pinned audio/vision commits.
TORCHAUDIO_VERSION=$(cat .github/ci_commit_pins/audio.txt)
# Grab the pinned audio and vision commits from PyTorch
TORCHAUDIO_VERSION=release/2.11
export TORCHAUDIO_VERSION
TORCHVISION_VERSION=$(cat .github/ci_commit_pins/vision.txt)
TORCHVISION_VERSION=release/0.26
export TORCHVISION_VERSION

install_domains
5 changes: 0 additions & 5 deletions .ci/docker/ubuntu/Dockerfile
@@ -64,11 +64,6 @@ ENV SCCACHE_S3_KEY_PREFIX executorch
ENV SCCACHE_REGION us-east-1

ARG TORCH_VERSION
ARG TORCH_CHANNEL
ARG TORCH_SPEC
ARG TORCHAUDIO_SPEC
ARG TORCHVISION_SPEC
ARG TORCH_INDEX_URL
ARG SKIP_PYTORCH
COPY ./common/install_pytorch.sh install_pytorch.sh
COPY ./common/utils.sh utils.sh
3 changes: 3 additions & 0 deletions .ci/scripts/download_hf_hub.sh
@@ -1,5 +1,8 @@
#!/bin/bash

# Disable HF Xet storage to avoid stalled downloads on CI runners
export HF_HUB_DISABLE_XET=1

# Function to download files from the Hugging Face Hub
# Arguments:
# 1. model_id: The Hugging Face repository ID (e.g., "organization/model_name")
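The shell scripts in this PR force the Xet opt-out with a plain `export`, while the Python script below uses `os.environ.setdefault`, which keeps any value the caller already exported. A minimal sketch of that difference:

```python
import os

# Start from a clean slate for the demo.
os.environ.pop("HF_HUB_DISABLE_XET", None)

# setdefault only writes when the variable is unset...
os.environ.setdefault("HF_HUB_DISABLE_XET", "1")
first = os.environ["HF_HUB_DISABLE_XET"]

# ...so a value exported by the caller wins over the script default.
os.environ["HF_HUB_DISABLE_XET"] = "0"
os.environ.setdefault("HF_HUB_DISABLE_XET", "1")
second = os.environ["HF_HUB_DISABLE_XET"]
```

This lets a developer re-enable Xet locally without editing the CI script, whereas `export HF_HUB_DISABLE_XET=1` in shell unconditionally overrides.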
3 changes: 3 additions & 0 deletions .ci/scripts/export_model_artifact.sh
@@ -67,6 +67,9 @@ if [ -z "${1:-}" ]; then
exit 1
fi

# Disable HF Xet storage to avoid stalled downloads on CI runners
export HF_HUB_DISABLE_XET=1

set -eux

DEVICE="$1"
12 changes: 9 additions & 3 deletions .ci/scripts/setup-macos.sh
@@ -116,7 +116,6 @@ setup_macos_env_variables
# buck2 atm
install_buck
brew install libomp
install_pip_dependencies

# TODO(huydhn): Unlike our self-hosted runner, GitHub runner doesn't have access
# to our infra, so compiler caching needs to be setup differently using GitHub
@@ -125,10 +124,17 @@ if [[ -z "${GITHUB_RUNNER:-}" ]]; then
install_sccache
fi

# Install pinned torch before requirements-ci.txt so torchsr's transitive
# torch dep is satisfied by the existing install and pip does not pull a
# separate copy from PyPI. sccache is initialized above so source-build
# cache misses still hit the cache.
print_cmake_info
install_pytorch_and_domains
# We build PyTorch from source here instead of using nightly. This allows CI to test against
# the pinned commit from PyTorch

install_pip_dependencies

# install_executorch's --use-pt-pinned-commit skips re-installing torch since
# install_pytorch_and_domains already installed the pinned build above.
if [[ "$EDITABLE" == "true" ]]; then
install_executorch --use-pt-pinned-commit --editable
else
11 changes: 10 additions & 1 deletion .ci/scripts/test_backend.sh
@@ -35,6 +35,7 @@ export PYTHON_EXECUTABLE=python

# CMake options to use, in addition to the defaults.
EXTRA_BUILD_ARGS=""
PYTEST_RETRY_ARGS=()

if [[ "$FLOW" == *qnn* ]]; then
# Setup QNN sdk and deps - note that this is a bit hacky due to the nature of the
@@ -57,6 +58,9 @@ if [[ "$FLOW" == *vulkan* ]]; then
fi

if [[ "$FLOW" == *arm* ]]; then
if [[ "$SUITE" == "operators" ]]; then
PYTEST_RETRY_ARGS=(--reruns 2 --reruns-delay 1)
fi

# Setup ARM deps.
if [[ "$FLOW" == *vgf* ]]; then
@@ -95,6 +99,11 @@ GOLDEN_DIR="${ARTIFACT_DIR}/golden-artifacts"
export GOLDEN_ARTIFACTS_DIR="${GOLDEN_DIR}"

EXIT_CODE=0
${CONDA_RUN_CMD} pytest -c /dev/null -n auto backends/test/suite/$SUITE/ -m flow_$FLOW --json-report --json-report-file="$REPORT_FILE" || EXIT_CODE=$?
PYTEST_ARGS=(-c /dev/null -n auto)
if [[ ${#PYTEST_RETRY_ARGS[@]} -gt 0 ]]; then
PYTEST_ARGS+=("${PYTEST_RETRY_ARGS[@]}")
fi
PYTEST_ARGS+=("backends/test/suite/$SUITE/" -m "flow_$FLOW" --json-report --json-report-file="$REPORT_FILE")
${CONDA_RUN_CMD} pytest "${PYTEST_ARGS[@]}" || EXIT_CODE=$?
# Generate markdown summary.
${CONDA_RUN_CMD} python -m executorch.backends.test.suite.generate_markdown_summary_json "$REPORT_FILE" > ${GITHUB_STEP_SUMMARY:-"step_summary.md"} --exit-code $EXIT_CODE
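The retry plumbing above composes pytest flags through bash arrays, and the `${#arr[@]} -gt 0` guard avoids expanding an empty array (which errors under `set -u` on older bash). A standalone sketch of the same composition, with illustrative values:

```shell
SUITE="operators"

# Only operator suites get retries, mirroring the gating above.
PYTEST_RETRY_ARGS=()
if [ "$SUITE" = "operators" ]; then
  PYTEST_RETRY_ARGS=(--reruns 2 --reruns-delay 1)
fi

# Base flags first, then splice in the optional retry flags.
PYTEST_ARGS=(-c /dev/null -n auto)
if [ "${#PYTEST_RETRY_ARGS[@]}" -gt 0 ]; then
  PYTEST_ARGS+=("${PYTEST_RETRY_ARGS[@]}")
fi
```

Quoting each element via `"${PYTEST_ARGS[@]}"` at the call site preserves arguments containing spaces, which a flat string concatenation would mangle.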
6 changes: 3 additions & 3 deletions .ci/scripts/test_coreml_bc.sh
@@ -23,7 +23,7 @@ source "${REPO_ROOT}/.ci/scripts/utils.sh"
# Create a conda environment with Python 3.10 for compatibility with old ET versions
# ET 1.0.0 only supports Python >=3.10,<3.13
CONDA_ENV_NAME="coreml_bc_test_env"
conda create -y -n "${CONDA_ENV_NAME}" python=3.10
conda create -y -n "${CONDA_ENV_NAME}" python=3.10 pip packaging

# Use conda run to execute commands in the new environment
CONDA_RUN="conda run --no-capture-output -n ${CONDA_ENV_NAME}"
@@ -69,7 +69,7 @@ git submodule sync --recursive
git submodule update --init --recursive

# Install executorch
${CONDA_RUN} pip install --upgrade pip
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python install_executorch.py

# Step 3: Export model
@@ -129,7 +129,7 @@ git submodule update --init --recursive

# Step 5: Install current version
echo "=== Step 5: Installing current ET version ==="
${CONDA_RUN} pip install --upgrade pip
${CONDA_RUN} python -m pip install --upgrade pip
${CONDA_RUN} python install_executorch.py

# Step 6: Run the old pte file
35 changes: 30 additions & 5 deletions .ci/scripts/test_huggingface_optimum_model.py
@@ -2,11 +2,17 @@
import gc
import logging
import math
import os
import shutil
import subprocess
import tempfile
import time
from pathlib import Path
from typing import List

# Disable HF Xet storage to avoid stalled downloads on CI runners
os.environ.setdefault("HF_HUB_DISABLE_XET", "1")

import torch
from datasets import load_dataset

@@ -25,6 +31,17 @@
)


EXPORT_RETRIES = 3


def _clear_export_dir(model_dir):
for path in Path(model_dir).iterdir():
if path.is_dir() and not path.is_symlink():
shutil.rmtree(path)
else:
path.unlink()


def cli_export(command, model_dir):
p = Path(model_dir)
if p.exists():
@@ -34,11 +51,19 @@ def cli_export(command, model_dir):
raise Exception(
f"Existing directory {model_dir} is non-empty. Please remove it first."
)
try:
subprocess.run(command, check=True)
print("Export completed successfully.")
except subprocess.CalledProcessError as e:
print(f"Export failed with error: {e}")

for attempt in range(1, EXPORT_RETRIES + 1):
try:
subprocess.run(command, check=True)
print("Export completed successfully.")
return
except subprocess.CalledProcessError as e:
print(f"Export attempt {attempt}/{EXPORT_RETRIES} failed with error: {e}")
if attempt == EXPORT_RETRIES:
raise
if p.exists():
_clear_export_dir(model_dir)
time.sleep(attempt * 10)


def check_causal_lm_output_quality(
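The export retry loop above (retry, clear the partial output directory, back off linearly) generalizes to any flaky subprocess. A sketch with a stand-in action in place of the real export command; `run_with_retries` and `flaky_export` are hypothetical names, not part of the script:

```python
import time

def run_with_retries(action, cleanup, retries=3, delay_step=0):
    """Run `action`; on failure, clean partial state and retry with a
    linearly growing delay, re-raising after the last attempt."""
    for attempt in range(1, retries + 1):
        try:
            return action()
        except Exception as e:
            print(f"Attempt {attempt}/{retries} failed: {e}")
            if attempt == retries:
                raise
            cleanup()  # e.g. _clear_export_dir(model_dir) in the script
            time.sleep(delay_step * attempt)

# Simulated flaky export: fails twice, then succeeds.
calls = {"n": 0}

def flaky_export():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "exported"

result = run_with_retries(flaky_export, cleanup=lambda: None, delay_step=0)
```

Clearing the output directory between attempts matters here because `cli_export` refuses to write into a non-empty directory, so a half-finished first attempt would otherwise poison every retry.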
35 changes: 32 additions & 3 deletions .ci/scripts/test_lora.sh
@@ -6,6 +6,8 @@
# LICENSE file in the root directory of this source tree.

set -exu
# Disable HF Xet storage to avoid stalled downloads on CI runners
export HF_HUB_DISABLE_XET=1
# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

@@ -33,6 +35,24 @@ cleanup_files() {
rm result*.txt
}

matches_base_response_prefix() {
local output_file="$1"
python - "$output_file" <<'PY'
import pathlib
import re
import sys

text = pathlib.Path(sys.argv[1]).read_text()
pattern = re.compile(
r"^<\|im_start\|>user Calculate 15% of 80\?<\|im_end\|><\|im_start\|>assistant:\n"
r"(?:<think>\n)+"
r"Okay, so I need to calculate 15% of 80\.",
re.MULTILINE,
)
sys.exit(0 if pattern.match(text) else 1)
PY
}

# Hosting lora adapter in personal repo for now.
python -m pip install -q huggingface_hub
HF_ADAPTER_REPO="lucylq/qwen3_06B_lora_math"
@@ -139,7 +159,15 @@ Okay, so I need to calculate 15% of 80."
EXPECTED_QUANT_LORA_PREFIX="
<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant
To calculate 15% of 80, we can multiply 80 by 15/100.
So, 15% of 80 is equal to (80 * 15) / 100 = 1200 / 100 = 12.
80 * 15/100 = 12.
So, 15% of 80 is 12.
#### 12
The answer is: 12<|im_end|>"
EXPECTED_QUANT_LORA_ALTERNATE_PREFIX="
<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant
To calculate 15% of 80, we can multiply 80 by 15/100.
80 * 15/100 = 12.
So, 15% of 80 is 12.
#### 12
The answer is: 12<|im_end|>"

@@ -186,7 +214,7 @@ cmake-out/examples/models/llama/llama_main --model_path=qwen_q.pte --data_paths=
NOW=$(date +"%H:%M:%S")
echo "Finished at ${NOW}"
RESULT=$(cat result.txt)
if [[ "${RESULT}" == "${EXPECTED_QUANT_PREFIX}"* ]]; then
if matches_base_response_prefix result.txt; then
echo "Expected result prefix: ${EXPECTED_QUANT_PREFIX}"
echo "Actual result: ${RESULT}"
echo "Test 3: Success"
@@ -207,12 +235,13 @@ NOW=$(date +"%H:%M:%S")
echo "Finished at ${NOW}"

RESULT=$(cat result.txt)
if [[ "${RESULT}" == "${EXPECTED_QUANT_LORA_PREFIX}"* ]]; then
if [[ "${RESULT}" == "${EXPECTED_QUANT_LORA_PREFIX}"* ]] || [[ "${RESULT}" == "${EXPECTED_QUANT_LORA_ALTERNATE_PREFIX}"* ]]; then
echo "Expected result prefix: ${EXPECTED_QUANT_LORA_PREFIX}"
echo "Actual result: ${RESULT}"
echo "Test 4: Success"
else
echo "Expected result prefix: ${EXPECTED_QUANT_LORA_PREFIX}"
echo "Alternate expected result prefix: ${EXPECTED_QUANT_LORA_ALTERNATE_PREFIX}"
echo "Actual result: ${RESULT}"
echo "Test 4: Failure; results not the same"
cleanup_files
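The heredoc check above replaces an exact prefix comparison because the model sometimes emits one `<think>` line and sometimes several; the `(?:<think>\n)+` group absorbs either. The same pattern can be exercised directly (the sample strings below are illustrative, built from the expected prefix in the script):

```python
import re

# Pattern mirroring matches_base_response_prefix from the script.
pattern = re.compile(
    r"^<\|im_start\|>user Calculate 15% of 80\?<\|im_end\|><\|im_start\|>assistant:\n"
    r"(?:<think>\n)+"
    r"Okay, so I need to calculate 15% of 80\.",
    re.MULTILINE,
)

one_think = (
    "<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant:\n"
    "<think>\n"
    "Okay, so I need to calculate 15% of 80. Hmm..."
)
two_thinks = (
    "<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant:\n"
    "<think>\n<think>\n"
    "Okay, so I need to calculate 15% of 80."
)
no_think = (
    "<|im_start|>user Calculate 15% of 80?<|im_end|><|im_start|>assistant:\n"
    "Okay, so I need to calculate 15% of 80."
)
```

Note the `+` quantifier requires at least one `<think>` line, so a response that skips the think block still fails the check, keeping the assertion strict about format while tolerant of repetition.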
22 changes: 21 additions & 1 deletion .ci/scripts/test_lora_multimethod.sh
@@ -6,6 +6,8 @@
# LICENSE file in the root directory of this source tree.

set -exu
# Disable HF Xet storage to avoid stalled downloads on CI runners
export HF_HUB_DISABLE_XET=1
# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

@@ -33,6 +35,24 @@ cleanup_files() {
rm -f result*.txt
}

matches_base_response_prefix() {
local output_file="$1"
python - "$output_file" <<'PY'
import pathlib
import re
import sys

text = pathlib.Path(sys.argv[1]).read_text()
pattern = re.compile(
r"^<\|im_start\|>user Calculate 15% of 80\?<\|im_end\|><\|im_start\|>assistant:\n"
r"(?:<think>\n)+"
r"Okay, so I need to calculate 15% of 80\.",
re.MULTILINE,
)
sys.exit(0 if pattern.match(text) else 1)
PY
}

# Download LoRA adapter.
python -m pip install -q huggingface_hub
HF_ADAPTER_REPO="lucylq/qwen3_06B_lora_math"
@@ -107,7 +127,7 @@ NOW=$(date +"%H:%M:%S")
echo "Finished at ${NOW}"

RESULT=$(cat result_base.txt)
if [[ "${RESULT}" == "${EXPECTED_BASE_PREFIX}"* ]]; then
if matches_base_response_prefix result_base.txt; then
echo "Test 2 (base_forward): Success"
else
echo "Test 2 (base_forward): Failure"