There was a recent Discord discussion where a user was hitting a compiler error when attempting to follow the llama_serving docs. The issue was root-caused to the use of `torch>=2.6.0`. Two improvements we could make out of this:

1. Reword the llama_serving doc to make it clear that you might run into issues with non-validated torch versions.
2. Warn the user when they are running a non-validated torch version (recommended by the user who hit the bug).

Llama_serving doc

We do specify `torch>=2.3.0,<2.6.0` in the Install Python Packages section, but that requirement is prefixed by: "so install a torch version that fulfills your needs by following either https://pytorch.org/get-started/locally/ or our recommendation:"

We may want to reword this to make it less optional which torch version they use. Maybe we should have a supported-torch table and say something along the lines of: "We recommend that you use one of the torch versions listed above. You can use your own version, but you may run into undefined behavior."

We may not even need a table, just somewhere they can easily view the validated versions.

Torch Warning

We should show users a warning message when they use a non-validated version of torch. Something along the lines of: "WARNING: You are using an unvalidated version of `torch` and may run into undefined behavior. For validated versions, see <link_to_validated_versions>"

There were warnings thrown in the export script after upgrading to `torch>=2.6.0`:

/home/stbaione/repos/temp/.venv/lib/python3.11/site-packages/torch/export/_unlift.py:75: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
  getattr_node = gm.graph.get_attr(lifted_node)
/home/stbaione/repos/temp/.venv/lib/python3.11/site-packages/torch/fx/graph.py:1801: UserWarning: Node root_token_embedding_lifted_tensor_0 target root.token_embedding.lifted_tensor_0 lifted_tensor_0 of root.token_embedding does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target

These warnings then led to errors in the compiler. It probably would have made the troubleshooting process faster (and potentially given the user enough info to fix on their own) if there was a clear warning.
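A minimal sketch of what such a check could look like. The validated range, function name, and message wording here are assumptions for illustration; none of this is existing project API, and a real implementation would likely read the validated range from packaging metadata rather than hard-coding it:

```python
# Hypothetical sketch of the proposed torch-version warning.
# The range below mirrors the documented pin torch>=2.3.0,<2.6.0;
# everything else (names, message) is illustrative, not project API.
import warnings

VALIDATED_MIN = (2, 3, 0)
VALIDATED_MAX = (2, 6, 0)  # exclusive upper bound


def _parse(version: str) -> tuple:
    # Naive parse: "2.5.1" -> (2, 5, 1). Ignores pre-release/local tags.
    return tuple(int(part) for part in version.split(".")[:3])


def check_torch_version(installed: str) -> bool:
    """Return True if `installed` is inside the validated range;
    otherwise emit the proposed UserWarning and return False."""
    if VALIDATED_MIN <= _parse(installed) < VALIDATED_MAX:
        return True
    warnings.warn(
        "You are using an unvalidated version of `torch` and may run "
        "into undefined behavior. For validated versions, see "
        "<link_to_validated_versions>",
        UserWarning,
    )
    return False
```

In practice this would presumably run once at import time against `torch.__version__`, so the warning appears before export or compilation starts rather than after a confusing compiler error.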
In this case I would rather we support the latest stable torch version. Pinning to old versions was an emergency fix, made after a torch update started blocking developer CI, and it isn't something we can carry forward forever.
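One way to reconcile the two positions (a validated list vs. tracking latest stable) would be to test a small matrix of torch versions in CI, including the latest stable, so the pin can move forward instead of freezing. A hypothetical GitHub Actions fragment; the versions, job name, and steps are illustrative assumptions, not the project's actual workflow:

```yaml
# Illustrative only: versions and step layout are assumptions.
jobs:
  test-torch-matrix:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        torch-version: ["2.3.0", "2.5.1", "2.6.0"]
    steps:
      - uses: actions/checkout@v4
      - run: pip install "torch==${{ matrix.torch-version }}"
      - run: pytest
```

Whichever versions pass in this matrix could then double as the published "validated versions" list the docs and warning message point to.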