diff --git a/docs/source/api-life-cycle.md b/docs/source/api-life-cycle.md
index 0327f23a985..3ccaa4eddb1 100644
--- a/docs/source/api-life-cycle.md
+++ b/docs/source/api-life-cycle.md
@@ -104,7 +104,7 @@ decorator.
 Use .. warning:: in the docstrings of deprecated and experimental APIs. See
-example
+example usage.
@@ -115,7 +115,7 @@ usage.
-Use the ET_DEPRECATED annotation macro. See example usage.
+Use the ET_DEPRECATED annotation macro. See example usage.

@@ -125,7 +125,7 @@ Use the ET_EXPERIMENTAL annotation macro.
 Start Doxygen comments with DEPRECATED: See
-example
+example usage.

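The api-life-cycle hunks above reference a Python deprecation decorator plus `.. warning::` docstrings. As a minimal sketch of that pattern (the `deprecated` decorator below is hand-rolled for illustration and is not ExecuTorch's actual decorator):

```python
# Hand-rolled sketch of the deprecation pattern described in
# docs/source/api-life-cycle.md; ExecuTorch's real decorator may differ.
import functools
import warnings


def deprecated(message: str):
    """Wrap a function so each call emits a DeprecationWarning."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{fn.__name__} is deprecated: {message}",
                DeprecationWarning,
                stacklevel=2,
            )
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@deprecated("use new_add() instead")
def old_add(a: int, b: int) -> int:
    """Add two ints.

    .. warning::
        This API is deprecated; use ``new_add()`` instead.
    """
    return a + b
```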
diff --git a/docs/source/backends-qualcomm.md b/docs/source/backends-qualcomm.md
index 6feddcc803c..c4465c8290d 100644
--- a/docs/source/backends-qualcomm.md
+++ b/docs/source/backends-qualcomm.md
@@ -608,7 +608,7 @@ Supports:
 For details, see: backends/qualcomm/quantizer/quantizer.py
 
 ### Operator Support
-[The full operator support matrix](https://github.com/pytorch/executorch/tree/f32cdc3de6f7176d70a80228f1a60bcd45d93437/backends/qualcomm/builders#operator-support-status) is tracked and frequently updated in the ExecuTorch repository.
+[The full operator support matrix](https://github.com/pytorch/executorch/tree/main/backends/qualcomm/builders#operator-support-status) is tracked and frequently updated in the ExecuTorch repository.
 
 It lists:
 - Supported PyTorch ops (aten.*, custom ops)
@@ -633,4 +633,4 @@ If you encounter any issues while reproducing the tutorial, please file a github
 [issue](https://github.com/pytorch/executorch/issues) on ExecuTorch repo and tag use `#qcom_aisw` tag
 
 ### Debugging tips
- - Before trying any complicated models, try out [a simple model example](https://github.com/pytorch/executorch/tree/f32cdc3de6f7176d70a80228f1a60bcd45d93437/examples/qualcomm#simple-examples-to-verify-the-backend-is-working) and see it if works one device.
+ - Before trying any complicated models, try out [a simple model example](https://github.com/pytorch/executorch/tree/main/examples/qualcomm#simple-examples-to-verify-the-backend-is-working) and see if it works on your device.
diff --git a/docs/source/backends/arm-ethos-u/arm-ethos-u-troubleshooting.md b/docs/source/backends/arm-ethos-u/arm-ethos-u-troubleshooting.md
index e6d35c0646e..9fe485e9f04 100644
--- a/docs/source/backends/arm-ethos-u/arm-ethos-u-troubleshooting.md
+++ b/docs/source/backends/arm-ethos-u/arm-ethos-u-troubleshooting.md
@@ -24,7 +24,7 @@ You can see how this coupling between the memory mode and runtime application i
 
 ## Using Bundled.io and ETdump
 
-The arm_executor_runner supports [bundled-io](https://docs.pytorch.org/executorch/0.4/bundled-io.html) and [ETdump](https://docs.pytorch.org/executorch/stable/etdump.html) debugging tools.
+The arm_executor_runner supports [bundled-io](https://docs.pytorch.org/executorch/stable/bundled-io.html) and [ETdump](https://docs.pytorch.org/executorch/stable/etdump.html) debugging tools.
 
 To enable bundled-io, set `-DEXECUTORCH_BUILD_DEVTOOLS=ON` when building Executorch and `-DET_BUNDLE_IO=ON` when building the executor_runner.
 To enable ETdump, set `-DEXECUTORCH_BUILD_ARM_ETDUMP=ON` when building Executorch and `-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` when building the executor_runner.
diff --git a/docs/source/backends/nxp/nxp-overview.md b/docs/source/backends/nxp/nxp-overview.md
index 2bf66e28e5c..6070f86e458 100644
--- a/docs/source/backends/nxp/nxp-overview.md
+++ b/docs/source/backends/nxp/nxp-overview.md
@@ -39,7 +39,7 @@ $ ./examples/nxp/setup.sh
 
 To test the eIQ Neutron Backend, both AoT flow for model preparation and Runtime for execution, refer to the [Getting started with eIQ Neutron NPU ExecuTorch backend](tutorials/nxp-basic-tutorial.md)
 
-For a quick overview how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/release/1.0/examples/nxp/aot_neutron_compile.py).
+For a quick overview of how to convert a custom PyTorch model, take a look at our [example python script](https://github.com/pytorch/executorch/tree/main/examples/nxp/aot_neutron_compile.py).
 
 ## Runtime Integration
diff --git a/docs/source/backends/nxp/nxp-partitioner.rst b/docs/source/backends/nxp/nxp-partitioner.rst
index 9aa65b1d0d9..c568959883f 100644
--- a/docs/source/backends/nxp/nxp-partitioner.rst
+++ b/docs/source/backends/nxp/nxp-partitioner.rst
@@ -28,7 +28,7 @@ Following fields can be set:
 Custom Delegation Options
 -------------------------
 By default the Neutron backend is defensive, what means it does not delegate operators which cannot be decided statically during partitioning. But as the model author you typically have insight into the model and so you can allow opportunistic delegation for some cases. For list of options, see
-`CustomDelegationOptions `_
+`CustomDelegationOptions `_
 
 ================
 Operator Support
 ================
@@ -37,7 +37,7 @@ Operator Support
 Operators are the building blocks of the ML model. See `IRs `_ for more information on the PyTorch operator set.
 
 This section lists the Edge operators supported by the Neutron backend.
-For detailed constraints of the operators see the conditions in the ``is_supported_*`` functions in the `Node converters `_
+For detailed constraints of the operators see the ``is_supported`` / ``_is_supported_in_IR`` / ``_is_supported_on_target`` checks in the `Node converters `_
 
 .. csv-table:: Operator Support
diff --git a/docs/source/backends/xnnpack/xnnpack-partitioner.rst b/docs/source/backends/xnnpack/xnnpack-partitioner.rst
index a0881aa3a6a..85dc3bf9c61 100644
--- a/docs/source/backends/xnnpack/xnnpack-partitioner.rst
+++ b/docs/source/backends/xnnpack/xnnpack-partitioner.rst
@@ -2,10 +2,10 @@ Partitioner API
 ===============
 
-The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the `constructor `_:
+The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use case. For advanced use cases, the partitioner exposes the following options via the `constructor `_:
 
-- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators all delegated. See `../config/__init__.py `_ for an exhaustive list of available operator configs.
-- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType `_.
+- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators are delegated. See `../config/__init__.py `_ for an exhaustive list of available operator configs.
+- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType `_.
 - ``per_op_mode``: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false.
 - ``verbose``: If true, print additional information during lowering.
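For orientation alongside the partitioner doc fixes above, here is a minimal usage sketch of the options the XNNPACK doc describes. The import paths are assumptions based on the files linked in that doc and may drift between releases:

```python
# Hedged sketch of configuring XnnpackPartitioner; import paths are assumed
# from the files linked in the doc and may change.
import torch
from executorch.backends.xnnpack.partition.config.xnnpack_config import (
    ConfigPrecisionType,
)
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)
from executorch.exir import to_edge_transform_and_lower


class AddMul(torch.nn.Module):
    def forward(self, x, y):
        return x * (x + y)


exported = torch.export.export(AddMul(), (torch.randn(4), torch.randn(4)))

# Default: delegate as much of the model as possible.
default_partitioner = XnnpackPartitioner()

# Advanced: fp32 ops only, with one delegate call per operator.
fp32_per_op = XnnpackPartitioner(
    config_precisions=[ConfigPrecisionType.FP32],
    per_op_mode=True,
)

# Either partitioner instance can be passed here.
program = to_edge_transform_and_lower(
    exported, partitioner=[fp32_per_op]
).to_executorch()
```

Passing `default_partitioner` instead delegates everything XNNPACK supports; the `fp32_per_op` variant trades a small amount of runtime overhead for lower memory, per the `per_op_mode` note above.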
diff --git a/docs/source/bundled-io.md b/docs/source/bundled-io.md
index d901710bfb7..2597b991920 100644
--- a/docs/source/bundled-io.md
+++ b/docs/source/bundled-io.md
@@ -199,17 +199,17 @@ This stage mainly focuses on executing the model with the bundled inputs and com
 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
 We need the pointer to ExecuTorch program to do the execution. To unify the process of loading and executing `BundledProgram` and Program flatbuffer, we create an API for this
-`executorch::bundled_program::get_program_data`. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L128-L137) of this API.
+`executorch::bundled_program::get_program_data`. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L128-L137) of this API.
 
 ### Load Bundled Input to Method
-To execute the program on the bundled input, we need to load the bundled input into the method. Here we provided an API called `executorch::bundled_program::load_bundled_input`. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L253-L259) of this API.
+To execute the program on the bundled input, we need to load the bundled input into the method. Here we provide an API called `executorch::bundled_program::load_bundled_input`. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L253-L259) of this API.
 
 ### Verify the Method's Output.
-We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Check out an [example usage](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp#L301-L307) of this API.
+We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Check out an [example usage](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp#L301-L307) of this API.
 
 ### Runtime Example
-Please checkout our [example runner](https://github.com/pytorch/executorch/blob/release/0.6/examples/devtools/README.md#bundledprogram) for a bundled program. You could run these commands to test with the BundledProgram binary (`.bpte`) file you generated in the previous step:
+Please check out our [example runner](https://github.com/pytorch/executorch/blob/main/examples/devtools/README.md#bundledprogram) for a bundled program. You could run these commands to test with the BundledProgram binary (`.bpte`) file you generated in the previous step:
 
 ```bash
 cd executorch
@@ -218,7 +218,7 @@ cd executorch
 ```
 
 It is expected to see no output from running the above mentioned snippet.
-For a detailed example of how the runner should be like, please refer to our [example runner](https://github.com/pytorch/executorch/blob/release/1.0/examples/devtools/example_runner/example_runner.cpp).
+For a detailed example of what the runner should look like, please refer to our [example runner](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp).
 
 ### Try the Complete Workflow
diff --git a/docs/source/compiler-custom-compiler-passes.md b/docs/source/compiler-custom-compiler-passes.md
index ff0013a1929..d40440c0891 100644
--- a/docs/source/compiler-custom-compiler-passes.md
+++ b/docs/source/compiler-custom-compiler-passes.md
@@ -25,7 +25,7 @@ Our projection on the frequency of these use cases are:
 For level 1 uses cases (creating one-to-X mappings, performing forwards
 iterations, and looking at local node information), we can utilize a helper
 class called
-[`ExportPass`](https://github.com/pytorch/executorch/blob/d9eef24bb720804aa7b400b05241487510ae0dc2/exir/pass_base.py#L44).
+[`ExportPass`](https://github.com/pytorch/executorch/blob/main/exir/pass_base.py#L655).
 This is an
 [interpreter-based](https://pytorch.org/docs/stable/fx.html#the-interpreter-pattern)
 way where we execute each node and recreate the graph except with
@@ -35,7 +35,7 @@ metadata such as stack trace, FakeTensor values, and torch.nn.Module
 hierarchy are preserved and updated depending on the transformations made.
 
 To implement this pass, we can create a subclass of
-[`ExportPass`](https://github.com/pytorch/executorch/blob/d9eef24bb720804aa7b400b05241487510ae0dc2/exir/pass_base.py#L44)
+[`ExportPass`](https://github.com/pytorch/executorch/blob/main/exir/pass_base.py#L655)
 and implement the exposed functions. When called with a graph module, it will
 run the graph module and create a new graph containing the changes specified
 by the pass. This means that the graph module passed in must be runnable on CPU,
@@ -171,7 +171,7 @@ class ScalarToTensorPass(ExportPass):
 ### Level 2
 
 For creating many-to-one mappings, we can utilize FX's [subgraph
-rewriter](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/subgraph_rewriter.py#L77).
+rewriter](https://github.com/pytorch/pytorch/blob/main/torch/fx/subgraph_rewriter.py#L226).
 Given a `pattern`, it creates a subgraph of operators matching to the pattern,
 and then replaces each matched subgraph with the `replacement`.
@@ -229,7 +229,7 @@ class ReplacedPatterns:
 ### Level 3
 
 For the third way of creating a pass, we can utilize the most basic
-[`PassBase`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/infra/pass_base.py#L22).
+[`PassBase`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/pass_base.py#L28).
 To create a pass, we can subclass this and implement the function `call` with
 the pass contents. Additionally, we can implement the functions `requires` and
 `ensures` which will be called before and after the function `call`. Note that
@@ -315,7 +315,7 @@ with IR Spec, so be careful when using them.
 
 For finding subgraphs within a graph that match a specific pattern, we can
 utilize FX's
-[`SubgraphMatcher`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/utils/matcher_utils.py#L51).
+[`SubgraphMatcher`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/utils/matcher_utils.py#L63).
 
 Class Attributes:
@@ -382,7 +382,7 @@ class InternalMatch():
 
 To find the largest subgraphs of nodes that support a specific invariant, we
 can utilize FX's
-[`CapabilityBasedPartitioner`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/infra/partitioner.py#L34C1-L34C1).
+[`CapabilityBasedPartitioner`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/infra/partitioner.py#L65).
 
 Class Attributes
@@ -399,14 +399,14 @@ Class Attributes
 that are allowed to be in a single node partition.
 
 The
-[`OperatorSupportBase`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/operator_support.py#L28)
+[`OperatorSupportBase`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/operator_support.py#L37)
 class is used by the partitioner to determine if a specific node in the graph
 belongs in the partition. This is done by overriding the `is_node_supported`
 function. You can
-chain multiple `OperatorSuppportBase` by using
-[`chain`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/operator_support.py#L150)(which
+chain multiple `OperatorSupportBase` by using
+[`chain`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/operator_support.py#L159) (which
 returns False if any of the OperatorSupportBase return False) and
-[`any_chain`](https://github.com/pytorch/pytorch/blob/8597d37536ef11bdf6b0a539ab79af876e1c92f6/torch/fx/passes/operator_support.py#L164)
+[`any_chain`](https://github.com/pytorch/pytorch/blob/main/torch/fx/passes/operator_support.py#L172)
 (which returns True if any of the OperatorSupportBase returns True).
 
 Consider the following example:
@@ -440,7 +440,7 @@ not allow `call_module` nodes.
 ### Combined
 
 We also provide a combined helper function:
-[`generate_pattern_op_partitions`](https://github.com/pytorch/executorch/blob/d9eef24bb720804aa7b400b05241487510ae0dc2/exir/backend/canonical_partitioners/pattern_op_partitioner.py#L59)
+[`generate_pattern_op_partitions`](https://github.com/pytorch/executorch/blob/main/exir/backend/canonical_partitioners/pattern_op_partitioner.py#L107)
 
 Args:
 * `graph_module (fx.GraphModule)`: Module that we want to partition
diff --git a/docs/source/compiler-memory-planning.md b/docs/source/compiler-memory-planning.md
index 5c30defada7..5a34634beec 100644
--- a/docs/source/compiler-memory-planning.md
+++ b/docs/source/compiler-memory-planning.md
@@ -82,7 +82,7 @@ program = edge_program.to_executorch(
 )
 ```
 
-Users attempting to write a custom memory planning algorithm should start by looking at [the greedy algorithm's implementation](https://github.com/pytorch/executorch/blob/d62c41ca86435e5316e7ed292b6d68aff27a2fb7/exir/memory_planning.py#L459C1-L459C12).
+Users attempting to write a custom memory planning algorithm should start by looking at [the greedy algorithm's implementation](https://github.com/pytorch/executorch/blob/main/exir/memory_planning.py#L801).
 
 ## Debugging Tool
diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md
index 443015b47be..ef55ade68aa 100644
--- a/docs/source/using-executorch-android.md
+++ b/docs/source/using-executorch-android.md
@@ -82,7 +82,7 @@ Starting from 2025-04-12, you can download nightly `main` branch snapshots:
 
 * `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar`
 * `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar.sha256sums`
 * Replace `YYYYMMDD` with the actual date you want to use.
-* AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/c66b37d010c88a113560693b14dc6bd112593c11/.github/workflows/android-release-artifacts.yml#L14-L15).
+* AAR file is generated by [this workflow](https://github.com/pytorch/executorch/blob/main/.github/workflows/android-release-artifacts.yml).
 
 For example:
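To complement the bundled-io runtime APIs patched above, here is a minimal sketch of producing the `.bpte` file on the Python side. The API names are taken from docs/source/bundled-io.md; treat exact signatures as assumptions:

```python
# Sketch of building a BundledProgram (.bpte) in Python; names follow
# docs/source/bundled-io.md, exact signatures are assumptions.
import torch
from executorch.devtools import BundledProgram
from executorch.devtools.bundled_program.config import MethodTestCase, MethodTestSuite
from executorch.devtools.bundled_program.serialize import (
    serialize_from_bundled_program_to_flatbuffer,
)
from executorch.exir import to_edge


class Square(torch.nn.Module):
    def forward(self, x):
        return x * x


exported = torch.export.export(Square(), (torch.randn(2),))
et_program = to_edge(exported).to_executorch()

# One test case: a bundled input plus its expected (eager-mode) output.
x = torch.randn(2)
suite = MethodTestSuite(
    method_name="forward",
    test_cases=[MethodTestCase(inputs=[x], expected_outputs=[x * x])],
)

bundled = BundledProgram(et_program, [suite])
with open("square.bpte", "wb") as f:
    f.write(serialize_from_bundled_program_to_flatbuffer(bundled))
```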
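Similarly, for the compiler-passes doc: a Level 1 pass is an `ExportPass` subclass that overrides `call_operator`, along the lines of this sketch (the operator swap here is purely illustrative):

```python
# Sketch of a Level 1 pass via ExportPass, following the pattern shown in
# docs/source/compiler-custom-compiler-passes.md; the op swap is illustrative.
import torch
from executorch.exir.pass_base import ExportPass


class ReplaceAddWithSub(ExportPass):
    """Rewrite every aten.add.Tensor node into aten.sub.Tensor."""

    def call_operator(self, op, args, kwargs, meta):
        if op == torch.ops.aten.add.Tensor:
            # Re-emit the node with a different target; metadata is carried over.
            return super().call_operator(torch.ops.aten.sub.Tensor, args, kwargs, meta)
        return super().call_operator(op, args, kwargs, meta)
```

Calling `ReplaceAddWithSub()(graph_module)` returns a `PassResult` whose `graph_module` holds the rewritten graph.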
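And for the memory-planning doc: the pass is wired in through `ExecutorchBackendConfig`, roughly as the snippet surrounding the patched line suggests. A minimal sketch with the default (greedy) planner; to experiment with a custom algorithm, start from the greedy implementation linked above:

```python
# Rough sketch mirroring the to_executorch() snippet in
# docs/source/compiler-memory-planning.md; defaults are assumed here, and a
# custom algorithm would be passed to MemoryPlanningPass per that doc.
import torch
from executorch.exir import ExecutorchBackendConfig, to_edge
from executorch.exir.passes import MemoryPlanningPass


class Square(torch.nn.Module):
    def forward(self, x):
        return x * x


edge_program = to_edge(torch.export.export(Square(), (torch.randn(2),)))

program = edge_program.to_executorch(
    ExecutorchBackendConfig(
        # Default planner; replace with a configured MemoryPlanningPass to
        # plug in a custom placement strategy.
        memory_planning_pass=MemoryPlanningPass()
    )
)
```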