8 changes: 4 additions & 4 deletions .github/workflows/build-deploy-docs.yml
@@ -49,27 +49,27 @@ jobs:
        env:
          TORCHJD_VERSION: ${{ steps.deploy_folder.outputs.TORCHJD_VERSION }}

-      - name: Deploy to DEPLOY_DIR of TorchJD/documentation
+      - name: Deploy to DEPLOY_DIR of SimplexLab/documentation
        uses: peaceiris/actions-gh-pages@v4
        with:
          deploy_key: ${{ secrets.PROD_DOCUMENTATION_DEPLOY_KEY }}
          publish_dir: docs/build/dirhtml
          destination_dir: ${{ steps.deploy_folder.outputs.DEPLOY_DIR }}
-          external_repository: TorchJD/documentation
+          external_repository: SimplexLab/documentation
          publish_branch: main

      - name: Kill ssh-agent
        # See: https://github.com/peaceiris/actions-gh-pages/issues/909
        run: killall ssh-agent

-      - name: Deploy to stable of TorchJD/documentation
+      - name: Deploy to stable of SimplexLab/documentation
        if: startsWith(github.ref, 'refs/tags/')
        uses: peaceiris/actions-gh-pages@v4
        with:
          deploy_key: ${{ secrets.PROD_DOCUMENTATION_DEPLOY_KEY }}
          publish_dir: docs/build/dirhtml
          destination_dir: stable
-          external_repository: TorchJD/documentation
+          external_repository: SimplexLab/documentation
          publish_branch: main

      - name: Add documentation link to summary
8 changes: 4 additions & 4 deletions README.md
@@ -2,10 +2,10 @@

[![Doc](https://img.shields.io/badge/Doc-torchjd.org-blue?logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8%2BCjwhLS0gQ3JlYXRlZCB1c2luZyBLcml0YTogaHR0cDovL2tyaXRhLm9yZyAtLT4KCjxzdmcKICAgd2lkdGg9IjIwNDcuNzJwdCIKICAgaGVpZ2h0PSIyMDQ3LjcycHQiCiAgIHZpZXdCb3g9IjAgMCAyMDQ3LjcyIDIwNDcuNzIiCiAgIHZlcnNpb249IjEuMSIKICAgaWQ9InN2ZzEiCiAgIHNvZGlwb2RpOmRvY25hbWU9IlRvcmNoSkRfbG9nb19jaXJjdWxhci5zdmciCiAgIGlua3NjYXBlOnZlcnNpb249IjEuMy4yICgwOTFlMjBlZjBmLCAyMDIzLTExLTI1KSIKICAgeG1sbnM6aW5rc2NhcGU9Imh0dHA6Ly93d3cuaW5rc2NhcGUub3JnL25hbWVzcGFjZXMvaW5rc2NhcGUiCiAgIHhtbG5zOnNvZGlwb2RpPSJodHRwOi8vc29kaXBvZGkuc291cmNlZm9yZ2UubmV0L0RURC9zb2RpcG9kaS0wLmR0ZCIKICAgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIgogICB4bWxuczpzdmc9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KICA8c29kaXBvZGk6bmFtZWR2aWV3CiAgICAgaWQ9Im5hbWVkdmlldzEiCiAgICAgcGFnZWNvbG9yPSIjZmZmZmZmIgogICAgIGJvcmRlcmNvbG9yPSIjNjY2NjY2IgogICAgIGJvcmRlcm9wYWNpdHk9IjEuMCIKICAgICBpbmtzY2FwZTpzaG93cGFnZXNoYWRvdz0iMiIKICAgICBpbmtzY2FwZTpwYWdlb3BhY2l0eT0iMC4wIgogICAgIGlua3NjYXBlOnBhZ2VjaGVja2VyYm9hcmQ9IjAiCiAgICAgaW5rc2NhcGU6ZGVza2NvbG9yPSIjZDFkMWQxIgogICAgIGlua3NjYXBlOmRvY3VtZW50LXVuaXRzPSJwdCIKICAgICBpbmtzY2FwZTp6b29tPSIwLjE2Mjk4NjE1IgogICAgIGlua3NjYXBlOmN4PSIxMzk1LjgyNDEiCiAgICAgaW5rc2NhcGU6Y3k9Ijg3NC4zMDczOSIKICAgICBpbmtzY2FwZTp3aW5kb3ctd2lkdGg9IjI1NjAiCiAgICAgaW5rc2NhcGU6d2luZG93LWhlaWdodD0iMTM3MSIKICAgICBpbmtzY2FwZTp3aW5kb3cteD0iMCIKICAgICBpbmtzY2FwZTp3aW5kb3cteT0iMCIKICAgICBpbmtzY2FwZTp3aW5kb3ctbWF4aW1pemVkPSIxIgogICAgIGlua3NjYXBlOmN1cnJlbnQtbGF5ZXI9InN2ZzEiIC8%2BCiAgPGRlZnMKICAgICBpZD0iZGVmczEiIC8%2BCiAgPHBhdGgKICAgICBpZD0ic2hhcGUxIgogICAgIGZpbGw9IiMwMDAwMDAiCiAgICAgZmlsbC1ydWxlPSJldmVub2RkIgogICAgIGQ9Ik0yNTUuMjE1IDg5OS44NzVMMjU1Ljk2NCAyNTUuOTY0TDc2Ny44OTMgMjU1Ljk2NEw3NjcuODkzIDBMMCAwTDAuMDMxMjUzMyA4OTguODQ0QzAuMDMxNzMwNSA4OTguODE0IDg0LjU3MjYgODk5Ljg3NSAyNTUuMjE1IDg5OS44NzVaIgogICAgIHN0eWxlPSJmaWxsOiMxYTgxZWI7ZmlsbC1vcGFjaXR5OjEiCiAgICAgdHJhbnNmb3JtPSJtYXRyaXgoMS4wMDAwMDAwMTQzMDcwNyAwIDAgMS4wMDAwMDAwMTQzMDcwNyAxMjcuOTgyMjI2NTIyMDU2IDEyNy45ODIyMjY1MjIwNTYpIiAvPgogIDxwYXRoCiAgICAgaWQ9InNoYXBlMDEiCiAgICAgdHJhbnNmb3JtPSJtYXRyaXgoLTEuMDAwMDAwMDA5MjIxODUgMCAwIC0xLjAwMDAwMDAwOTIyMTg1IDE5MTkuOTEzNjE3Mzk4NzEgMTkxMC4zMzcxOTY5MzEyNSkiCiAgICAgZmlsbD0iIzAwMDAwMCIKICAgICBmaWxsLXJ1bGU9ImV2ZW5vZGQiCiAgICAgZD0iTTc2OC4wNzQgMTc3Mi42MUMtMjgyLjAwNCAxNTk4LjY1IC0yMjkuNzEyIDE1MS44MjEgNzY4LjA3NCAwQzc2Ny4wODMgMjkuOTMzNyA3NjguMDk2IDE0Mi43NiA3NjguMDc0IDI2MC44ODZDNDEuNDc0NiA0NTYuOTAzIDEzNy40MjMgMTM4MC4wNiA3NjguMDc0IDE1MTMuNjQiCiAgICAgc3R5bGU9ImZpbGw6IzFhODFlYjtmaWxsLW9wYWNpdHk6MSIgLz4KICA8cGF0aAogICAgIGlkPSJzaGFwZTAyIgogICAgIGZpbGw9IiMwMDAwMDAiCiAgICAgZmlsbC1ydWxlPSJldmVub2RkIgogICAgIGQ9Ik03NjcuOTA5IDg4Ny4zMzhDMjYzLjQwMiA4MDMuOTI2IDAuMDc1OTQyMSAzODcuOTY0IDAgMC4wODU2NDk3QzE0LjY4NjggLTAuMDI4NTQ5OSA5OS4wNTUxIC0wLjAyODU0OTkgMjU1LjAxMSAwLjA4NTY0OTdDMjU1LjMxMSAyODEuMTE0IDQ0OC43ODYgNTYyLjE2MyA3NjcuOTA5IDYyNi40OTkiCiAgICAgc3R5bGU9ImZpbGw6IzFhODFlYjtmaWxsLW9wYWNpdHk6MSIKICAgICB0cmFuc2Zvcm09Im1hdHJpeCgwLjk5OTk5OTk2MDczODQ0IDAgMCAwLjk5OTk5OTk2MDczODQ0IDEyNy45NjY1OTE0OTQzMjggMTAyMy43NzIxNDc4MzE0KSIgLz4KICA8ZWxsaXBzZQogICAgIHN0eWxlPSJmaWxsOiMxYTgxZWI7c3Ryb2tlLXdpZHRoOjEuMDY3OTtmaWxsLW9wYWNpdHk6MSIKICAgICBpZD0icGF0aDEiCiAgICAgY3g9IjEwMjYuMzYxIgogICAgIGN5PSIxMDE0LjIyMTEiCiAgICAgcng9IjE4My4yNTU0MyIKICAgICByeT0iMTgzLjUxNTU4IiAvPgo8L3N2Zz4K)](https://torchjd.org)
[![Static Badge](https://img.shields.io/badge/%F0%9F%92%AC_ChatBot-chat.torchjd.org-blue?logo=%F0%9F%92%AC)](https://chat.torchjd.org)
-[![Tests](https://github.com/TorchJD/torchjd/actions/workflows/tests.yml/badge.svg)](https://github.com/TorchJD/torchjd/actions/workflows/tests.yml)
-[![codecov](https://codecov.io/gh/TorchJD/torchjd/graph/badge.svg?token=8AUCZE76QH)](https://codecov.io/gh/TorchJD/torchjd)
-[![mypy](https://img.shields.io/github/actions/workflow/status/TorchJD/torchjd/tests.yml?label=mypy)](https://github.com/TorchJD/torchjd/actions/workflows/tests.yml)
-[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/TorchJD/torchjd/main.svg)](https://results.pre-commit.ci/latest/github/TorchJD/torchjd/main)
+[![Tests](https://github.com/SimplexLab/torchjd/actions/workflows/tests.yml/badge.svg)](https://github.com/SimplexLab/torchjd/actions/workflows/tests.yml)
+[![codecov](https://codecov.io/gh/SimplexLab/torchjd/graph/badge.svg?token=8AUCZE76QH)](https://codecov.io/gh/SimplexLab/torchjd)
+[![mypy](https://img.shields.io/github/actions/workflow/status/SimplexLab/torchjd/tests.yml?label=mypy)](https://github.com/SimplexLab/torchjd/actions/workflows/tests.yml)
+[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/SimplexLab/torchjd/main.svg)](https://results.pre-commit.ci/latest/github/SimplexLab/torchjd/main)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/torchjd)](https://pypi.org/project/torchjd/)
[![Static Badge](https://img.shields.io/badge/Discord%20-%20community%20-%20%235865F2?logo=discord&logoColor=%23FFFFFF&label=Discord)](https://discord.gg/76KkRnb3nk)

2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -96,7 +96,7 @@ def linkcode_resolve(domain: str, info: dict[str, str]) -> str | None:
    line_str = _get_line_str(obj)
    version_str = _get_version_str()

-    link = f"https://github.com/TorchJD/torchjd/blob/{version_str}/{file_name}{line_str}"
+    link = f"https://github.com/SimplexLab/torchjd/blob/{version_str}/{file_name}{line_str}"
    return link
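The changed line in ``linkcode_resolve`` builds a GitHub source link from a version string and a file location. A minimal sketch of that f-string, with the org made an explicit parameter purely to highlight the one segment this PR changes; the tag, path, and line-anchor values below are made up for illustration:

```python
def build_source_link(org: str, version_str: str, file_name: str, line_str: str) -> str:
    # Mirrors the f-string in the hunk above; `org` is a parameter here only
    # to make the TorchJD -> SimplexLab substitution visible.
    return f"https://github.com/{org}/torchjd/blob/{version_str}/{file_name}{line_str}"

# Hypothetical inputs: a release tag, a module path, and a "#L<n>" anchor.
old = build_source_link("TorchJD", "v1.0.0", "src/torchjd/autojac/_backward.py", "#L61")
new = build_source_link("SimplexLab", "v1.0.0", "src/torchjd/autojac/_backward.py", "#L61")
print(new)  # → https://github.com/SimplexLab/torchjd/blob/v1.0.0/src/torchjd/autojac/_backward.py#L61
```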
2 changes: 1 addition & 1 deletion docs/source/examples/rnn.rst
@@ -33,6 +33,6 @@ descent can be leveraged to enhance optimization.
.. note::
    At the time of writing, there seems to be an incompatibility between ``torch.vmap`` and
    ``torch.nn.RNN`` when running on CUDA (see `this issue
-    <https://github.com/TorchJD/torchjd/issues/220>`_ for more info), so we advise to set the
+    <https://github.com/SimplexLab/torchjd/issues/220>`_ for more info), so we advise to set the
    ``parallel_chunk_size`` to ``1`` to avoid using ``torch.vmap``. To improve performance, you can
    check whether ``parallel_chunk_size=None`` (maximal parallelization) works on your side.
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -54,7 +54,7 @@ of the batch and per task, in the context of multi-task learning. We call this
:doc:`Instance-Wise Risk Multi-Task Learning <examples/iwmtl>` (IWMTL).

TorchJD is open-source, under MIT License. The source code is available on
-`GitHub <https://github.com/TorchJD/torchjd>`_.
+`GitHub <https://github.com/SimplexLab/torchjd>`_.

.. toctree::
   :caption: Getting Started
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -61,8 +61,8 @@ SOFTWARE.
[project.urls]
Homepage = "https://torchjd.org/"
Documentation = "https://torchjd.org/"
-Source = "https://github.com/TorchJD/torchjd"
-Changelog = "https://github.com/TorchJD/torchjd/blob/main/CHANGELOG.md"
+Source = "https://github.com/SimplexLab/torchjd"
+Changelog = "https://github.com/SimplexLab/torchjd/blob/main/CHANGELOG.md"

[dependency-groups]
check = [
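Every hunk in this PR applies the same mechanical substitution of the GitHub org segment. As a hypothetical illustration (not part of the PR itself), the rewrite could be scripted with GNU sed; the sample file path here is a stand-in:

```shell
# Create a stand-in file containing an old-org URL, then rewrite it in place.
printf 'Source = "https://github.com/TorchJD/torchjd"\n' > /tmp/urls_demo.toml
# Using '#' as the sed delimiter avoids escaping the slashes in the URL.
sed -i 's#github.com/TorchJD/#github.com/SimplexLab/#g' /tmp/urls_demo.toml
cat /tmp/urls_demo.toml
```

Run repo-wide (e.g. `git grep -l 'TorchJD/' | xargs sed -i …`), one pass like this would cover every file touched by this diff.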
4 changes: 2 additions & 2 deletions src/torchjd/autojac/_backward.py
@@ -61,8 +61,8 @@ def backward(
        To differentiate in parallel, ``backward`` relies on ``torch.vmap``, which has some
        limitations: `it does not work on the output of compiled functions
        <https://github.com/pytorch/pytorch/issues/138422>`_, `when some tensors have
-        <https://github.com/TorchJD/torchjd/issues/184>`_ ``retains_grad=True`` or `when using an
-        RNN on CUDA <https://github.com/TorchJD/torchjd/issues/220>`_, for instance. If you
+        <https://github.com/SimplexLab/torchjd/issues/184>`_ ``retains_grad=True`` or `when using an
+        RNN on CUDA <https://github.com/SimplexLab/torchjd/issues/220>`_, for instance. If you
        experience issues with ``backward`` try to use ``parallel_chunk_size=1`` to avoid relying on
        ``torch.vmap``.
    """
4 changes: 2 additions & 2 deletions src/torchjd/autojac/_mtl_backward.py
@@ -61,8 +61,8 @@ def mtl_backward(
        To differentiate in parallel, ``mtl_backward`` relies on ``torch.vmap``, which has some
        limitations: `it does not work on the output of compiled functions
        <https://github.com/pytorch/pytorch/issues/138422>`_, `when some tensors have
-        <https://github.com/TorchJD/torchjd/issues/184>`_ ``retains_grad=True`` or `when using an
-        RNN on CUDA <https://github.com/TorchJD/torchjd/issues/220>`_, for instance. If you
+        <https://github.com/SimplexLab/torchjd/issues/184>`_ ``retains_grad=True`` or `when using an
+        RNN on CUDA <https://github.com/SimplexLab/torchjd/issues/220>`_, for instance. If you
        experience issues with ``backward`` try to use ``parallel_chunk_size=1`` to avoid relying on
        ``torch.vmap``.
    """
2 changes: 1 addition & 1 deletion src/torchjd/autojac/_utils.py
@@ -77,7 +77,7 @@ def _get_descendant_accumulate_grads(

    # This implementation more or less follows what is advised in
    # https://discuss.pytorch.org/t/autograd-graph-traversal/213658 and what was suggested in
-    # https://github.com/TorchJD/torchjd/issues/216.
+    # https://github.com/SimplexLab/torchjd/issues/216.
    while nodes_to_traverse:
        node = nodes_to_traverse.popleft()  # Breadth-first
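The context lines in this hunk show the breadth-first loop (``popleft`` on a deque) that ``_get_descendant_accumulate_grads`` uses to walk the autograd graph. A minimal, self-contained sketch of that traversal pattern on a toy graph; this is not TorchJD's actual implementation, and the ``get_children`` callback is a stand-in for reading a node's ``next_functions``:

```python
from collections import deque


def bfs(start, get_children):
    # Breadth-first traversal: popleft() makes the deque a FIFO queue,
    # mirroring the `nodes_to_traverse.popleft()` loop in the diff above.
    visited = set()
    order = []
    nodes_to_traverse = deque([start])
    while nodes_to_traverse:
        node = nodes_to_traverse.popleft()  # Breadth-first
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        nodes_to_traverse.extend(get_children(node))
    return order


# Toy DAG standing in for an autograd graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3}.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(0, graph.__getitem__))  # → [0, 1, 2, 3]
```

The `visited` set is what keeps a node shared by several paths (like node 3 here) from being processed twice, which matters in autograd graphs since they are DAGs rather than trees.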