Question on relation to univariate Taylor series propagation for higher-order derivatives #1
Hi,

Thank you for your interesting work! I've been reading your paper and found the idea of stochastically evaluating higher-order derivatives quite intriguing.

I was wondering if you could comment on how your technique relates to the known method of evaluating higher-order derivative tensors by propagating a set of univariate Taylor series. This idea is discussed in this paper and in the Evaluating Derivatives book (Chapter 13).

Specifically, could your method be interpreted (or extended) as sampling a set of univariate Taylor series to propagate, with the aim of approximating the derivative tensor? I'd love to hear your thoughts on this connection or any fundamental differences.

Thanks in advance for your response!

Comments

Thanks for bringing up this highly relevant work! Upon reading the paper, I think the idea of evaluating arbitrary derivative tensor elements via forward propagation of univariate Taylor series is similar. I wasn't aware of this work, and based on my experience talking to people at NeurIPS, it is not well known within the ML community, even among people who work on AD in an ML context. The JAX team who wrote the Taylor-mode AD I used in the paper wasn't aware of this technique and was quite surprised that one could do this. So yes, your interpretation is right: STDE approximates the derivative tensor with randomized univariate Taylor series propagation.
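To make the connection concrete, here is a minimal sketch (my own illustration, not code from the paper or the STDE repository) of the randomized univariate Taylor propagation idea: estimate the Laplacian (the trace of the Hessian) of a scalar function by pushing the univariate Taylor polynomial x + t·v through JAX's Taylor-mode AD (`jax.experimental.jet`) for random probe directions v. The test function `f`, the Gaussian probes, and the helper names are assumptions chosen for illustration only.

```python
import jax
import jax.numpy as jnp
from jax.experimental import jet

def second_directional_derivative(f, x, v):
    # Push the univariate Taylor polynomial x + v*t through f with Taylor-mode AD.
    # jet returns the polynomial coefficients of f(x + v*t); the t^2 coefficient
    # equals (1/2) v^T H(x) v, so scale by 2 to get the second directional derivative.
    _, (_, y2) = jet.jet(f, (x,), ((v, jnp.zeros_like(v)),))
    return 2.0 * y2

def laplacian_estimate(f, x, key, num_samples=64):
    # Hutchinson-style Monte Carlo: E[v^T H v] = tr(H) for v ~ N(0, I).
    vs = jax.random.normal(key, (num_samples,) + x.shape, dtype=x.dtype)
    samples = jnp.stack([second_directional_derivative(f, x, v) for v in vs])
    return samples.mean()

# Example: f(x) = sum_i x_i^3 has Laplacian 6 * sum_i x_i.
f = lambda x: jnp.sum(x * x * x)
x = jnp.arange(1.0, 5.0)                                  # [1., 2., 3., 4.]
print(laplacian_estimate(f, x, jax.random.PRNGKey(0)))    # ≈ 60 (noisy Monte Carlo estimate)
```

Replacing the random probes with the coordinate directions e_i recovers the exact Laplacian in d deterministic passes, which is essentially the univariate-Taylor-series approach described in Evaluating Derivatives; sampling the directions instead gives an unbiased stochastic estimate in the spirit of the randomized propagation discussed above.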