Set up compilation time benchmarks and log #29
mitchmindtree added a commit to mitchmindtree/gantz that referenced this issue on Jun 13, 2019:
This allows the full types of a `Node`'s inputs and outputs to be specified optionally during implementation by accepting a full, freestanding function rather than only an expression. The function's arguments and return type are parsed to determine the node's number of inputs and outputs: the number of arguments gives the number of inputs, and the number of elements in the output tuple gives the number of outputs (1 if the output type is not a tuple). Some of this may have to be re-written when addressing a few follow-up issues including nannou-org#29, nannou-org#19, nannou-org#21 and nannou-org#22, but I think it's helpful to break up progress into achievable steps! Closes nannou-org#27 and makes nannou-org#20 much more feasible.
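For illustration, here is a minimal sketch of what such a freestanding node function might look like. The function names, argument names and numeric types below are hypothetical and not taken from the gantz API; the point is only how the signature maps to inputs and outputs.

```rust
// Hypothetical node function: two arguments mean the node has two inputs,
// and the two-element tuple return type means it has two outputs.
fn add_offset(sample: f32, offset: f32) -> (f32, f32) {
    // First output: the offset sample; second output: the offset that was applied.
    (sample + offset, offset)
}

// A non-tuple return type counts as a single output.
fn gain(sample: f32) -> f32 {
    sample * 0.5
}

fn main() {
    let (out, applied) = add_offset(0.25, 0.1);
    println!("add_offset -> ({out}, {applied}), gain -> {}", gain(out));
}
```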
It would be nice to have a set of benchmarks that compile a suite of different graphs so that we can track how our changes to code generation affect compile times.

Perhaps we could set something up where, on merge, Travis also runs the benchmarks, writes the results to a log file, and appends them to a historical record for us.
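A minimal sketch of how such a benchmark runner might look, assuming a set of pre-generated crates (one per benchmark graph) checked in under a `benches/graphs/` directory; the crate paths and log file name are assumptions for illustration, not part of the repository:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::process::Command;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // Hypothetical suite of generated graph crates to compile.
    let graphs = ["benches/graphs/simple", "benches/graphs/wide", "benches/graphs/deep"];

    // Append results to a historical log that CI could archive after each merge.
    let mut log = OpenOptions::new()
        .create(true)
        .append(true)
        .open("compile_times.log")?;

    for graph in &graphs {
        // Start from a clean target directory so we measure a full build each time.
        Command::new("cargo").args(["clean"]).current_dir(graph).status()?;

        let start = Instant::now();
        let status = Command::new("cargo").args(["build"]).current_dir(graph).status()?;
        let elapsed = start.elapsed();

        writeln!(log, "{}\tsuccess={}\t{:?}", graph, status.success(), elapsed)?;
    }
    Ok(())
}
```

On CI, a job like this could run after the test stage and commit or upload `compile_times.log` so that compile-time regressions in the generated code are visible over time.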