Work on benchmark #62
base: main
Conversation
Generated by automated benchmark workflow. Results saved to docs/src/assets/benchmark-minimal/data.json. Ready for documentation generation.
- Add `solve_and_extract_data` and `benchmark_minimal_data` functions (see the sketch below)
- Support multiple models (JuMP, adnlp, exa) and solvers (Ipopt, MadNLP)
- Add comprehensive tests for the new functionality
- Update project dependencies
- Add support for multiple discretization methods
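As a rough illustration of the shape of these two functions, here is a minimal sketch; the `solve` call, its keyword handling, and the returned field names (`objective`, `time`, `iterations`) are assumptions, not the actual CTBenchmarks implementation:

```julia
# Hypothetical sketch: solve one (model, solver) pair and keep only the
# data needed for the benchmark table. Field names are assumptions.
function solve_and_extract_data(model::Symbol, solver::Symbol; kwargs...)
    sol = solve(model, solver; kwargs...)   # e.g. (:JuMP, :Ipopt) or (:exa, :MadNLP)
    return (objective = sol.objective, time = sol.time, iterations = sol.iterations)
end

function benchmark_minimal_data(; models = [:JuMP, :adnlp, :exa],
                                  solvers = [:Ipopt, :MadNLP])
    # Run every model with every solver and collect the extracted records.
    return [solve_and_extract_data(m, s) for m in models for s in solvers]
end
```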
- Moved all imports to CTBenchmarks.jl with consistent comments
- Added Tables.jl as a dependency
- Relaxed version constraints in Project.toml
- Removed mini.jl as it is no longer needed
- Renamed `benchmark_minimal` to `benchmark` and `benchmark_minimal_data` to `benchmark_data`
- Added `max_iter` and `max_wall_time` parameters to the `solve` function (sketched below)
- Removed default values from generic functions
- Updated tests to match the new function signatures
- Moved the benchmark script to the scripts/ directory
- Added a comprehensive test suite for the benchmark utilities
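A hedged sketch of how the new keywords might thread through `solve`; the mapping onto solver options below is an assumption (both Ipopt and MadNLP do expose `max_iter` and `max_wall_time` options). Note the keywords have no defaults, matching the "removed default values" item above:

```julia
# Sketch only: dispatch on the solver and forward the limits as options.
function solve(model, solver; max_iter::Int, max_wall_time::Float64)
    if solver == :Ipopt
        opts = ("max_iter" => max_iter, "max_wall_time" => max_wall_time)
    elseif solver == :MadNLP
        opts = (max_iter = max_iter, max_wall_time = max_wall_time)
    end
    # ... build the model and call the solver with `opts`
end
```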
- Add `benchmark_minimal` function in src/utils.jl
- Create GitHub Actions workflow for benchmark execution
- Add documentation page for benchmark results
- Update project dependencies and test suite
- Refactor API: replace separate solvers/models with solver=>models pairs (see the sketch below)
- Add :exa_gpu model for GPU benchmarking with MadNLP + CUDA backend
- Implement CUDA.@timed for GPU timing and memory tracking
- Add automatic CUDA detection and filtering of exa_gpu when unavailable
- Update format_benchmark_line to display both CPU and GPU metrics
- Store full benchmark objects (@btimed or CUDA.@timed) in DataFrame
- Add assertion: exa_gpu requires madnlp solver
- Update all tests to use the new API and verify the GPU assertions
- Update benchmark script with the new solver_models structure
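An illustrative sketch of the solver=>models structure, the :exa_gpu assertion, and the automatic CUDA filtering described above; the loop body and local names are hypothetical:

```julia
using CUDA

# Pairs of solver => models, replacing the old separate lists.
solver_models = [
    :ipopt  => [:JuMP, :adnlp, :exa],
    :madnlp => [:exa, :exa_gpu],        # :exa_gpu is only valid with MadNLP
]

for (solver, models) in solver_models
    for model in models
        @assert model != :exa_gpu || solver == :madnlp ":exa_gpu requires the madnlp solver"
        if model == :exa_gpu && !CUDA.functional()
            continue                    # auto-filter the GPU model when CUDA is unavailable
        end
        # ... run and record the benchmark for (solver, model)
    end
end
```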
- Update _print_results to pass the full benchmark object to format_benchmark_line
- Remove direct access to the time/allocs/memory fields, which now live inside the benchmark object
- Maintain the same display format, but with unified data access
- Add helper function getval() to access fields from both Dict and NamedTuple (sketched below)
- Support both String and Symbol keys when reading from JSON
- Handle nested structures (cpu_gcstats, gpu_memstats) from Dict
- Maintain compatibility with native benchmark objects (@btimed, CUDA.@timed)
- Fix documentation build error when displaying benchmark results from JSON
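A plausible shape for the `getval()` helper (the body is an assumption, not the committed code): read a field from a native benchmark NamedTuple, or from the Dict the same object becomes after a JSON round trip, where keys come back as Strings:

```julia
# Native benchmark objects (@btimed, CUDA.@timed results) are NamedTuples.
getval(nt::NamedTuple, key::Symbol) = getfield(nt, key)

# After JSON parsing the object is a Dict whose keys may be Strings.
getval(d::AbstractDict, key::Symbol) = haskey(d, key) ? d[key] : d[String(key)]

# Nested pieces such as cpu_gcstats or gpu_memstats also come back as Dicts,
# so lookups compose, e.g. getval(getval(bench, :cpu_gcstats), :allocd)
```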
- Add grid_size_max_cpu parameter to the benchmark_data and benchmark functions (see the sketch below)
- Filter CPU models (those not ending with _gpu) when N > grid_size_max_cpu
- GPU models run on all grid sizes regardless of grid_size_max_cpu
- Update display logic to skip empty grid sizes and handle spacing correctly
- Add a test case verifying the CPU/GPU filtering behavior
- Update documentation with the GPU benchmarking features and a usage example
- Update the benchmark-core.jl script with grid_size_max_cpu=200
- Rename the CI job from 'call' to 'cpu-tests' for clarity
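A sketch of the filtering rule (the helper names are hypothetical): GPU models always run, while CPU models are dropped once the grid size exceeds the cap:

```julia
# A model is a GPU model if its name ends with "_gpu".
is_gpu_model(model::Symbol) = endswith(String(model), "_gpu")

function models_for_grid(models, N, grid_size_max_cpu)
    # GPU models run on every grid size; CPU models stop past the cap.
    return N <= grid_size_max_cpu ? models : filter(is_gpu_model, models)
end

models_for_grid([:exa, :exa_gpu], 500, 200)   # -> [:exa_gpu]
```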
…enchmarks.jl into 48-minimal-benchmark
Generated by reusable benchmark workflow. Results saved to /scratch/github-actions/actions_runner_control_toolbox/_work/CTBenchmarks.jl/CTBenchmarks.jl/docs/src/assets/benchmark-core-moonshot/data.json. Includes environment TOMLs.
Generated by reusable benchmark workflow. Results saved to /home/runner/work/CTBenchmarks.jl/CTBenchmarks.jl/docs/src/assets/benchmark-core-ubuntu-cpu/data.json. Includes environment TOMLs.
✅ Benchmark and Documentation Complete
The automated workflow has completed successfully! 🎉
🤖 This notification was automatically generated
Generated by reusable benchmark workflow. Results saved to /home/runner/work/CTBenchmarks.jl/CTBenchmarks.jl/docs/src/assets/benchmark-core-ubuntu-latest/data.json. Includes environment TOMLs.
✅ Benchmark and Documentation Complete
The automated workflow has completed successfully! 🎉
🤖 This notification was automatically generated
```julia
end

# Use CUDA.@timed for GPU benchmarking
madnlp(nlp_model_oc; opt...) # run for warmup
```
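A hedged expansion of the snippet above, showing the intended pattern: warm up once so JIT compilation is excluded from the measurement, then time the real solve with CUDA.@timed, which returns the result together with CPU/GPU time and memory statistics (the fields used below match CUDA.jl's documented return value):

```julia
using CUDA, MadNLP

madnlp(nlp_model_oc; opt...)                      # warmup run (triggers compilation)
bench = CUDA.@timed madnlp(nlp_model_oc; opt...)  # timed run
bench.time                                        # elapsed seconds
```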
It looks like the run times I see with Ipopt on CPU are better than those with MadNLP, which contradicts the tests I have made with `tol = 1e-8` (not tested with `tol = 1e-6`). To be checked.
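One way to re-check this observation with matched tolerances, as a sketch (the option plumbing in CTBenchmarks may differ; both solvers accept a `tol` option):

```julia
using JuMP, Ipopt, MadNLP

for tol in (1e-6, 1e-8)
    m_ipopt  = Model(optimizer_with_attributes(Ipopt.Optimizer, "tol" => tol))
    m_madnlp = Model(() -> MadNLP.Optimizer(tol = tol))
    # ... build the same problem in both models, optimize!, compare run times
end
```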
Generated by reusable benchmark workflow. Results saved to /home/runner/work/CTBenchmarks.jl/CTBenchmarks.jl/docs/src/assets/benchmark-core-ubuntu-latest/data.json. Includes environment TOMLs.
…enchmarks.jl into 48-minimal-benchmark
Generated by reusable benchmark workflow. Results saved to /scratch/github-actions/actions_runner_control_toolbox/_work/CTBenchmarks.jl/CTBenchmarks.jl/docs/src/assets/benchmark-core-moonshot/data.json. Includes environment TOMLs.
@jbcaillau I was working on the benchmark too, taking your branch as a starting point. I have added storage of the needed info. If you want, based on what you are doing, I can update it to set up the pipeline from the bench to the doc.
For the stale dependencies: