
Conversation


@efylmzr efylmzr commented Sep 25, 2025

This PR adds activation functions that are helpful for neural network training, such as Sigmoid, Softplus, and Swish. AdamW is extended to handle per-parameter weight decays.
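For readers skimming the thread, the functions in question compute roughly the following (a plain Dr.Jit sketch for illustration, not the PR's actual implementation; numerical-stability tricks are omitted):

```python
import drjit as dr

def sigmoid(x):
    # Logistic sigmoid: 1 / (1 + exp(-x))
    return 1 / (1 + dr.exp(-x))

def softplus(x):
    # Smooth approximation of ReLU: log(1 + exp(x))
    return dr.log(1 + dr.exp(x))

def swish(x):
    # Swish / SiLU: x * sigmoid(x)
    return x * sigmoid(x)
```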

Member

@wjakob wjakob left a comment


Hi Ekrem -- thank you for the PR. Here are some comments from me.


```python
class SiLU(Module):
    r"""
    SiLU activation function. Also known as the "swish" function.
```
Member

@wjakob wjakob Sep 25, 2025


Could you document the expression here as well?
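Something along these lines could address this (a sketch only; it assumes the existing drjit.nn.Module base class from the PR context, and the exact RST formatting is up to the author):

```python
from drjit.nn import Module

class SiLU(Module):
    r"""
    SiLU (Sigmoid Linear Unit) activation function, also known as the
    "swish" function. It evaluates

    .. math::

        \mathrm{SiLU}(x) = x \cdot \sigma(x) = \frac{x}{1 + e^{-x}},

    where :math:`\sigma` denotes the logistic sigmoid.
    """
```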

drjit/opt.py Outdated
```diff
     beta_2: float = 0.999,
     epsilon: float = 1e-8,
-    weight_decay: float = 0.01,
+    weight_decay: Optional[float | Mapping[str, float]] = None,
```
Member


Weight decay is an AdamW-specific feature. I would prefer to keep this 100% in the AdamW subclass so that we don't have to pay any cost (like the unpacking above) in the general optimizer.

When optimizing simple calculations (e.g. neural nets) without any ray tracing, the Python code in the optimizer can actually inhibit saturating the GPU, hence this focus on keeping the code here as simple/efficient as possible. It was also the motivation for a larger rewrite when moving the optimizer code from Mitsuba to Dr.Jit.

AdamW can store an optional per-parameter weight decay override using the extra field (setting this value to None by default).
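To illustrate the idea (purely a sketch: the (m_t, v_t, t, override) layout of the extra tuple and the helper name are assumptions, not the actual drjit/opt.py internals):

```python
from typing import Any, Optional, Tuple

def effective_weight_decay(extra: Tuple[Any, Any, int, Optional[float]],
                           default_weight_decay: float) -> float:
    # Unpack the hypothetical AdamW per-parameter state: first/second moment
    # accumulators, step count, and an optional weight-decay override.
    _m_t, _v_t, _t, wd_override = extra
    # Fall back to the optimizer-wide default when no override was set.
    return wd_override if wd_override is not None else default_weight_decay
```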

Author


I did not get your comment here. The other optimizers do not have any access to the weight_decay parameter. I just have an additional dictionary in AdamW and increase the size of the extra tuple in the set_weight_decay function. Can you elaborate on your comment about keeping everything in AdamW?

Member


Sorry, I misread the diff and thought that you were modifying the base optimizer. I think it would be better if the set_weight_decay function worked like the set_learning_rate function; please check out the docstring of that function. Right now, all of the optimizers are written so that they don't use key-based dictionary lookups in opt.step(). They iterate over dictionaries but never explicitly search a dictionary for a string-based key (which involves string hashing, etc.).
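Concretely, the suggestion amounts to something like the following sketch, where the (value, lr, extra) state layout and attribute names are illustrative assumptions rather than the actual drjit/opt.py internals. The mapping is resolved once in the setter, so step() can keep iterating over self.state.items() without any string-keyed lookups:

```python
from collections import abc
from typing import Mapping, Optional, Union

class AdamWSketch:  # stands in for the real AdamW subclass
    def set_weight_decay(
        self, value: Union[float, Mapping[str, Optional[float]]]
    ) -> None:
        """Set the weight decay globally (float) or per parameter (mapping),
        mirroring the interface of set_learning_rate."""
        if isinstance(value, abc.Mapping):
            # Bake each override into the parameter's 'extra' state tuple
            # so that step() never has to look up keys by name.
            for name, wd in value.items():
                p, lr, (m_t, v_t, t, _old_wd) = self.state[name]
                self.state[name] = (p, lr, (m_t, v_t, t, wd))
        else:
            # A plain float updates the optimizer-wide default.
            self.weight_decay = float(value)
```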

@wjakob
Member

wjakob commented Sep 25, 2025

FYI: the | union syntax in type annotations is not supported by older Python versions. Please use Union. It would be great if you could also test the parameter-specific weight decay override so that we can ensure that it runs as expected (and will keep running even when Dr.Jit is refactored). In general, there is an important focus on testing newly added functionality in this project: a change/addition in code will usually have to be accompanied by a change/addition to the tests.
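For the annotation specifically, the older-Python-compatible spelling would be along these lines (a sketch; whether an alias is introduced is of course up to the author):

```python
from typing import Mapping, Optional, Union

# 'Optional[float | Mapping[str, float]]' requires Python >= 3.10 when the
# annotation is evaluated at runtime; Union works on all supported versions.
WeightDecay = Optional[Union[float, Mapping[str, float]]]
```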
