In the pipelines, the `__call__` method and several other methods are wrapped with the `torch.no_grad()` decorator, which disables gradient tracking during inference. Could you examine the examples in these docs pages? They show how to run the overall pipeline by separating it into its components; if you run those components yourself and leave out the aforementioned `torch.no_grad()` decorator, gradients are preserved.
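
A minimal sketch of what "separating the components" can look like, assuming a Stable Diffusion checkpoint (the model ID below is only an example) and following the deconstructed denoising loop from those docs pages. Because nothing here runs under `torch.no_grad()`, gradients can flow from the decoded image back to the prompt embeddings and latents:

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

prompt = "a photo of an astronaut riding a horse"
num_inference_steps = 10   # kept small: the graph of every step is retained
guidance_scale = 7.5
generator = torch.Generator("cuda").manual_seed(0)

# 1. Encode the prompt (plus an empty prompt for classifier-free guidance).
text_input = pipe.tokenizer(
    [prompt], padding="max_length", max_length=pipe.tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
)
uncond_input = pipe.tokenizer(
    [""], padding="max_length", max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
)
text_embeddings = pipe.text_encoder(text_input.input_ids.to("cuda"))[0]
uncond_embeddings = pipe.text_encoder(uncond_input.input_ids.to("cuda"))[0]
text_embeddings = torch.cat([uncond_embeddings, text_embeddings])

# 2. Prepare the initial latents and the scheduler timesteps.
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 64, 64), generator=generator, device="cuda",
)
pipe.scheduler.set_timesteps(num_inference_steps, device="cuda")
latents = latents * pipe.scheduler.init_noise_sigma

# 3. Denoising loop -- note: no torch.no_grad() wrapper, so gradients are kept.
for t in pipe.scheduler.timesteps:
    latent_model_input = torch.cat([latents] * 2)
    latent_model_input = pipe.scheduler.scale_model_input(latent_model_input, t)

    noise_pred = pipe.unet(
        latent_model_input, t, encoder_hidden_states=text_embeddings
    ).sample
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

    latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample

# 4. Decode the latents; a loss computed on `image` can backpropagate
#    through the VAE, the UNet steps, and the text encoder.
image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
```

Keep in mind that retaining the computation graph across all denoising steps is very memory-intensive, so in practice you may need fewer inference steps, gradient checkpointing on the UNet, or backpropagation through only the last step(s), depending on what you are optimizing.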
