Add optimisers #320
base: main
Conversation
fortitude check src/ftorch.F90
fortitude check src/ftorch_optim.F90
At some point it might make sense to separate out modules for `tensor`, `model`, `optim`.
Yeah, since each optimiser (Adam, SGD, etc.) seems to be its own function, I thought I'd do this here, for the Fortran at least.
I had been hoping that there was a general optimiser function specified by an enum, but alas no.
Others feel OK for now, but definitely something I was thinking about, especially as module functions might grow soon.
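For context, here is a minimal C++ sketch of the libtorch API that FTorch binds to: each optimiser is a separate class with its own options type, so there is no single constructor selected by an enum, and the Fortran layer ends up mirroring that with one function per optimiser. The parameter setup below is purely illustrative and assumes nothing about the interface added in this PR.

```cpp
#include <torch/torch.h>
#include <vector>

int main() {
  // A throwaway parameter tensor to hand to the optimisers (illustrative only).
  std::vector<torch::Tensor> params = {
      torch::randn({3, 3}, torch::requires_grad())};

  // In libtorch each optimiser is its own class with its own options struct;
  // there is no generic "make optimiser" call parameterised by an enum.
  torch::optim::SGD sgd(params, torch::optim::SGDOptions(/*lr=*/0.01));
  torch::optim::Adam adam(params, torch::optim::AdamOptions(/*lr=*/1e-3));

  return 0;
}
```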
This PR seeks to add optimiser functionality to the code.

Notes whilst in progress:
- `.mean()` and `.sum()` (see Overload `sum` intrinsic for tensors #240) are needed to call `.backward()` by contracting a (loss) tensor into a scalar. Should this be added to the autograd example? (See the sketch after this list.)
- `ftorch.F90` is going to grow with this. Perhaps now is the time to break apart into sub-modules? `ftorch_optim` as a module to hold these.
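To illustrate the `.backward()` point above: libtorch only allows `.backward()` without an explicit gradient argument on a scalar output, so a loss tensor has to be contracted with `.sum()` or `.mean()` first. A minimal C++ sketch of that pattern follows; the tensors are illustrative only and do not reflect the eventual FTorch API.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Toy prediction and target tensors (illustrative only).
  auto pred   = torch::randn({4}, torch::requires_grad());
  auto target = torch::zeros({4});

  // The elementwise squared error is still a rank-1 tensor...
  auto err = (pred - target).pow(2);

  // ...so contract it to a scalar before calling backward().
  auto loss = err.mean();  // or err.sum()
  loss.backward();

  // pred.grad() now holds d(loss)/d(pred).
  std::cout << pred.grad() << std::endl;
  return 0;
}
```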