The current way of generating the graph in graphgen.lua is to rely on the input and `self.output` of each module. The underlying assumption is that those tensors will be the same for every forward call (i.e., no new tensor is created for either the input or the output; the same tensors are reused).
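For illustration, here is a minimal sketch of what such identity-based bookkeeping could look like (the `nodes` table and `nodeFor` helper are hypothetical, not the actual graphgen.lua code; only `torch.pointer` is real Torch API):

```lua
require 'torch'

-- Hypothetical sketch of identity-based graph bookkeeping: each tensor is
-- mapped to a graph node keyed by its pointer, so the scheme only works if
-- modules reuse the same tensor objects across forward calls.
local nodes = {}

local function nodeFor(tensor)
   local key = torch.pointer(tensor)  -- identity of the tensor object
   if not nodes[key] then
      nodes[key] = { tensor = tensor, children = {} }
   end
   return nodes[key]
end

-- Conceptually, a hook around each module's forward would then record
--   table.insert(nodeFor(input).children, nodeFor(module.output))
-- which silently creates a disconnected node whenever a module hands back
-- a freshly allocated output instead of reusing the old one.
```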
This is not the case for nn.Parallel, as a new tensor is created at every forward pass. The same applies to nn.ParallelTable when the input is a tensor, and to any module that allocates a new `self.output` tensor at every forward pass.
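A small standalone script makes the failure mode visible; it assumes only the nn.Parallel allocation behavior described above (as it stood at the time of this issue) and standard `torch.pointer` semantics:

```lua
require 'nn'

-- nn.Parallel slices the input along dimension 1, feeds each slice to a
-- child module, and concatenates the children's outputs along dimension 1.
local m = nn.Parallel(1, 1)
m:add(nn.Linear(4, 2))
m:add(nn.Linear(4, 2))

local x = torch.randn(2, 4)
local p1 = torch.pointer(m:forward(x))
local p2 = torch.pointer(m:forward(x))
print(p1 == p2)  -- false: self.output is not reused across calls,
                 -- so pointer-based node lookup sees two unrelated nodes
```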
I will try to figure out a way to make the graph generation more robust without having to define special cases for every module.