diff --git a/docs/guide/export.rst b/docs/guide/export.rst
index 88a02fe8b..a37fb894f 100644
--- a/docs/guide/export.rst
+++ b/docs/guide/export.rst
@@ -100,6 +100,7 @@ If you are using PyTorch 2.0+ and ONNX Opset 14+, you can easily export SB3 poli
     with th.no_grad():
         print(model.policy(th.as_tensor(observation), deterministic=True))
 
+For exporting ``MultiInputPolicy``, please have a look at `GH#1873 `_.
 For SAC the procedure is similar. The example shown only exports the actor network, as the actor is sufficient to roll out the trained policies.
diff --git a/docs/misc/changelog.rst b/docs/misc/changelog.rst
index c02a185b3..bb61e2bfd 100644
--- a/docs/misc/changelog.rst
+++ b/docs/misc/changelog.rst
@@ -44,6 +44,7 @@ Documentation:
 - Clarify the use of Gym wrappers with ``make_vec_env`` in the section on Vectorized Environments (@pstahlhofen)
 - Updated callback doc for ``EveryNTimesteps``
 - Added doc on how to set env attributes via ``VecEnv`` calls
+- Added ONNX export example for ``MultiInputPolicy`` (@darkopetrovic)
 
 Release 2.5.0 (2025-01-27)