Update pyproject.toml to use PEP-621 dependency specification #584
Conversation
Actionable comments posted: 0
🔭 Outside diff range comments (2)
pyproject.toml (1)

19-26: Upper bounds clash with newly-added `requirements.txt`

`[project.dependencies]` still caps NumPy and SciPy to `<2.0.0`, yet the freshly-added `requirements.txt` pins them to 2.x. Decide which source of truth wins and adjust one of them. At the moment a user installing with Poetry and a user installing from the flat file will end up with divergent ABI versions.
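A minimal sketch of the mismatch being described (illustrative shapes only, not the exact contents of either file):

```toml
# pyproject.toml side, as described above: runtime deps capped below 2.0
[project]
name = "example"
version = "0.0.0"
dependencies = [
    "numpy>=1.25,<2.0.0",
    "scipy>=1.0.0,<2.0.0",
]
# requirements.txt meanwhile pins numpy==2.3.1 and scipy==1.16.0,
# so the two install paths can resolve to incompatible versions.
```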
requirements-dev.txt (1)

1-145: Huge fully-pinned dev snapshot will rot quickly

A 400-line frozen list is excellent for archival reproduction but painful to maintain manually. Recommend:

- Generate it via `poetry export --with dev` (or `pip-compile`).
- Commit it as `requirements-dev.lock` plus a short, readable `requirements-dev.in` so humans review only meaningful changes.
- Refresh automatically in CI (e.g. Renovate).
This keeps dev envs in sync without trapping the project on months-old packages.
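For illustration, the export flow could look roughly like this (a sketch only, assuming Poetry ≥ 1.2 with the export plugin or pip-tools installed; the `.in`/`.lock` file names just follow the convention suggested above):

```sh
# Option A: derive the pinned snapshot from Poetry's lock file
poetry export --with dev --without-hashes -o requirements-dev.lock

# Option B: keep a short, hand-edited requirements-dev.in and compile it
pip-compile requirements-dev.in --output-file requirements-dev.lock
```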
🧹 Nitpick comments (4)
requirements.txt (1)

1-6: Add environment markers to model Python-specific constraints

The `pyproject.toml` correctly downgrades to NumPy 1.x / SciPy 1.x for Python < 3.13. Pinning unconditional 2.x here silently drops that safety net.

```diff
-numpy==2.3.1
-scipy==1.16.0
+numpy==2.3.1 ; python_version >= "3.13"
+numpy==1.26.4 ; python_version < "3.13"
+scipy==1.16.0 ; python_version >= "3.13"
+scipy==1.11.4 ; python_version < "3.13"
```

pyproject.toml (3)
28-35: Pinning one package with `==` but leaving the rest floating is inconsistent

In `dev` you pin `ruff==0.6.6` but allow `pytest`, `coverage`, etc. to float. Either:
- Adopt a freeze-everything philosophy (create/export a lock file), or
- Keep all optional-dependency ranges flexible and rely on the lock produced by the installer (Poetry, PDM, etc.).
Mixing approaches invites non-reproducible CI failures.
36-47: Consider extracting the notebook / docs stack into its own `docs` group

`nbtools` currently bundles runtime notebook dependencies with Sphinx extensions. Splitting those into `nb` and `docs` (or re-using the growing “docs” convention) keeps environments lean: users who only want to run notebooks do not need to pull in the entire Sphinx tool-chain.
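A rough sketch of such a split (the package names here are placeholders, not the project's actual notebook/docs dependency list):

```toml
[project.optional-dependencies]
# notebook runtime only
nb = [
    "jupyter",
    "matplotlib",
]
# documentation build tool-chain only
docs = [
    "sphinx",
    "nbsphinx",
]
```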
49-55: Redundant `[tool.poetry]` section can be dropped

Poetry ≥ 1.4 fully understands PEP-621 metadata. Keeping only `requires-poetry` under `[tool.poetry]` is fine, but `packages` can be moved to `[tool.poetry.dynamic]` or declared via the new `packages = [...]` key in `[tool.poetry.dependencies]` for clarity. Removing the legacy table eliminates ambiguity about which metadata Poetry ultimately reads.
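As a hedged sketch of what a trimmed layout could look like if a minimal `[tool.poetry]` table is kept at all (assuming Poetry 2.x, which reads PEP-621 `[project]` metadata; the metadata values shown are placeholders):

```toml
[project]
name = "bayesian-optimization"
version = "0.0.0"  # placeholder metadata for the sketch

[tool.poetry]
requires-poetry = ">=2.0"
packages = [{ include = "bayes_opt" }]  # keep only if the package layout requires it
```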
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- pyproject.toml (1 hunks)
- requirements-dev.txt (1 hunks)
- requirements.txt (1 hunks)
🔇 Additional comments (1)
requirements.txt (1)

1-6: Review Comment Update: Version Pins Don't Actually Conflict with `pyproject.toml` Constraints

The numpy and scipy pins in `requirements.txt` are within the bounds declared in `pyproject.toml` — numpy has no upper bound and scipy is capped at `<2.0.0`, so `scipy==1.16.0` is allowed. The main risk is that those exact versions may not yet exist on PyPI, leading to install failures.

- `pyproject.toml` declares:
  - `"numpy>=1.25; python_version<'3.13'"` and `"numpy>=2.1.3; python_version>='3.13'"`
  - `"scipy>=1.0.0,<2.0.0; python_version<'3.13'"` and `"scipy>=1.14.1,<2.0.0; python_version>='3.13'"`
- Neither package pin exceeds an upper bound, but pinning unreleased versions will break installs.

To ensure reproducible, working installs, generate your `requirements.txt` directly from Poetry's lock file rather than hand-pinning:

```sh
poetry export --without-hashes --only main -f requirements.txt > requirements.txt
```

Likely an incorrect or invalid review comment.
Hey @jameskrach,
thanks for taking charge here! As already mentioned elsewhere, I think cleaning up the project file is necessary!
Please have a look at the test failure -- this may already be fixed by merging the dependency groups.
FYI: for anyone who hasn't contributed yet (i.e. has no commit on master), I need to manually approve workflows, but I'll keep an eye on this PR :)
Actionable comments posted: 0
🧹 Nitpick comments (1)
pyproject.toml (1)

39-40: Avoid hard-pinning `ruff` to an exact patch version

Locking `ruff` to `0.6.6` prevents bug-fix & security updates from reaching consumers. A conservative upper bound is usually sufficient (e.g. `<0.7`) while still allowing patch-level upgrades.

```diff
- "ruff==0.6.6",
+ "ruff>=0.6.6,<0.7",
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- pyproject.toml (1 hunks)
🔇 Additional comments (2)
pyproject.toml (2)

28-45: Consolidation of `dev` extras matches prior feedback – good catch

Merging the former `dev` and `nbtools` groups into a single `[project.optional-dependencies].dev` extra keeps the file PEP-621-compliant and removes duplication, exactly as requested in the earlier review by @till-m.
19-26: Double-check future-proof markers for Python 3.13

`numpy>=2.1.3` and `scipy>=1.14.1` are pinned for `python_version>='3.13'`, but wheels for 3.13 may not exist the moment 3.13 is released. This can break `pip install` for early adopters. Please verify build availability or consider a laxer fallback until the ecosystem stabilises.
Also, one last comment I'd request you consider: the original dependency versions were specified using caret constraints, which cannot be expressed in the standard format from PEP-440. I've translated those to equivalent constraints in this PR, but I would request you consider removing the upper bounds on your dependency constraints entirely. The reasons for doing so are best articulated by this quite lengthy post, but in short, upper bound constraints could cause breakages that downstream users are unable to resolve themselves, whereas without upper bound constraints, users can always simply downgrade to the last known working version of a dependency.
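For concreteness, the translation being described looks roughly like this (the version number is illustrative, not one of the project's actual pins):

```toml
# Poetry caret constraint (not expressible in PEP 440):
#   numpy = "^1.25"   # means >=1.25, <2.0.0

# PEP 440 translation that keeps the implied upper bound:
dependencies = ["numpy>=1.25,<2.0.0"]

# PEP 440 translation with the upper bound dropped, as suggested above:
# dependencies = ["numpy>=1.25"]
```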
Feel free to drop the carets :) Moreover, I think the rabbit might be right and
@jameskrach to fix the problem, I think this line should change from
to
and similarly for the other workflows (or, alternatively, we switch to using `uv`).
Updated the upper bound constraints and the workflow. Didn't have a chance to change the poetry thing to uv, but that may also involve changing some other things. I can look at it either later or in a follow-up PR if you like.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/run_tests.yml (1 hunks)
- pyproject.toml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
Actionable comments posted: 2
🔭 Outside diff range comments (1)
.github/workflows/run_tests.yml (1)

21-25: Potential NumPy ≥ 2 incompatibility with Python 3.9

NumPy 2.0 is dropping support for some older Python versions; final wheels may require ≥ 3.10. You already exclude `3.13` + `<2`, but the reverse (`3.9` + `>=2`) can still appear and break the matrix.

```diff
       exclude:
+        - python-version: "3.9"
+          numpy-version: ">=2"
```
🧹 Nitpick comments (9)
.pre-commit-config.yaml (1)

8-9: Add a trailing newline to satisfy linters

`yamllint` is already flagging “no new line character at the end of file”. Appending a single `\n` keeps CI happy and avoids needless diffs later.

```diff
-  rev: v0.12.3
+  rev: v0.12.3
+
```

scripts/format.sh (1)
4-5: Order of commands leads to double work

`ruff format` writes changes, then `ruff check --fix` may re-write the same files. Running `check --fix` first (or dropping `--fix` here) avoids two formatting passes and speeds CI.

```diff
-uv run ruff format bayes_opt tests
-uv run ruff check bayes_opt --fix
+uv run ruff check bayes_opt --fix
+uv run ruff format bayes_opt tests
```

scripts/check_precommit.sh (1)
4-5: Consider skipping `pre-commit install` in CI runs

`pre-commit install` writes to `.git/hooks`, which is irrelevant in CI and costs ~1 s. `pre-commit run …` works fine without it.

```diff
-uv run pre-commit install
```

If the script is only for local dev, keep it; otherwise you can drop it when invoking in pipelines.
scripts/check.sh (1)

4-5: Pass `--config pyproject.toml` explicitly for forward-compat

`ruff` auto-detects config today, but the project just switched to PEP-621 layout; being explicit avoids surprises if the file is relocated.

```diff
-uv run ruff format --check bayes_opt tests
-uv run ruff check bayes_opt tests
+uv run ruff format --check --config pyproject.toml bayes_opt tests
+uv run ruff check --config pyproject.toml bayes_opt tests
```

.github/workflows/build_docs.yml (3)
21-25: Pin `setup-uv` major version to avoid breaking changes

`astral-sh/setup-uv@v6` is fine today, but GitHub-Action major bumps can introduce breaking changes. Sticking to the major tag and a minor pin (e.g. `v6.0.1`) keeps the workflow reproducible, aligning with the PR's reproducibility goal.

```diff
-      - name: Install uv
-        uses: astral-sh/setup-uv@v6
+      - name: Install uv
+        uses: astral-sh/[email protected]
```
30-30: `uv sync --extra dev` fails if the extra is renamed

The extra is hard-coded; if the optional-dependency key changes (e.g., `dev` ➜ `docs`) docs will stop building. Consider reusing a shared env var (defined in `pyproject.toml` or a matrix) to keep this DRY.
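One possible shape for that, as a sketch only (the env var name, trigger, and job layout are hypothetical, not taken from the actual workflow):

```yaml
on: push

# Name the extra once so a rename only touches a single line.
env:
  DOCS_EXTRA: dev

jobs:
  build-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v6
      - name: Install dependencies
        run: uv sync --extra "$DOCS_EXTRA"
```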
34-34: Parallel Sphinx build could speed up CI

`make github` defaults to a single process. Adding `SPHINXOPTS="-j auto"` leverages parallel cores at no extra cost:

```diff
-          uv run make github
+          uv run make github SPHINXOPTS="-j auto"
```

.github/workflows/run_tests.yml (1)
34-35: `uv add` after `uv sync` defeats reproducibility

`uv sync` installs the locked set, then `uv add` mutates the environment ad-hoc, bypassing the lock file and creating network churn for every CI run. Prefer a fully-locked path:

- Add a separate extra (e.g. `numpy2`) in `pyproject.toml` and lock it once (see the sketch below).
- In CI, call `uv sync --extra dev --extra numpy2`.

This keeps builds hermetic and cache-friendly.
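A minimal sketch of that extra (the constraint shown is illustrative, not the project's actual pin):

```toml
[project.optional-dependencies]
dev = [
    # ...existing dev tooling...
]
numpy2 = [
    "numpy>=2",
]
```

CI would then run `uv sync --extra dev --extra numpy2` against the committed lock file instead of mutating the environment with `uv add`.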
.github/workflows/format_and_lint.yml (1)

16-22: Cache opportunity for faster lint jobs

Because `uv sync` will fetch all packages on every run, you can shave ~30-40 s off the job by adding the built-in cache:

```diff
       - name: Install uv
         uses: astral-sh/setup-uv@v6
         with:
           python-version: "3.9"
+          cache: true  # enables automatic ~/.cache/uv caching
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .github/workflows/build_docs.yml (1 hunks)
- .github/workflows/format_and_lint.yml (1 hunks)
- .github/workflows/python-publish.yml (2 hunks)
- .github/workflows/run_tests.yml (2 hunks)
- .gitignore (1 hunks)
- .pre-commit-config.yaml (1 hunks)
- bayes_opt/__init__.py (1 hunks)
- bayes_opt/constraint.py (1 hunks)
- bayes_opt/exception.py (1 hunks)
- pyproject.toml (2 hunks)
- scripts/check.sh (1 hunks)
- scripts/check_precommit.sh (1 hunks)
- scripts/format.sh (1 hunks)
- tests/test_seq_domain_red.py (1 hunks)
- tests/test_target_space.py (2 hunks)
✅ Files skipped from review due to trivial changes (3)
- .gitignore
- bayes_opt/exception.py
- bayes_opt/__init__.py
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
🧠 Learnings (1)
bayes_opt/constraint.py (1)
Learnt from: till-m
PR: bayesian-optimization/BayesianOptimization#566
File: bayes_opt/target_space.py:114-116
Timestamp: 2025-05-30T08:08:06.867Z
Learning: The `.lb` property of the `ConstraintModel` class in bayes_opt is always an array, so `.size` calls on `constraint.lb` are always valid and will not raise AttributeError.
🧬 Code Graph Analysis (1)
tests/test_seq_domain_red.py (1)
bayes_opt/domain_reduction.py (1)
- `_trim` (158-241)
🪛 actionlint (1.7.7)
.github/workflows/python-publish.yml
23-23: shellcheck reported issue in this script: SC2086:info:1:20: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 YAMLlint (1.37.1)
.pre-commit-config.yaml
[error] 9-9: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (7)
bayes_opt/constraint.py (1)
243-243: LGTM: Clean syntax modernization.

The change from tuple concatenation to argument unpacking with the splat operator is a good improvement. Both approaches produce identical reshape results, but the new syntax is more readable and modern.
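For illustration, the two equivalent reshape styles being compared look roughly like this (a standalone NumPy sketch, not the actual `constraint.py` code):

```python
import numpy as np

values = np.arange(12)
batch_shape = (3, 2)

# Old style: build the target shape via tuple concatenation
old = values.reshape(batch_shape + (2,))

# New style: unpack the shape with the splat operator
new = values.reshape(*batch_shape, 2)

assert old.shape == new.shape == (3, 2, 2)  # identical results
```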
tests/test_seq_domain_red.py (2)
159-159: LGTM: Enhanced warning specificity

The addition of the `match` parameter ensures the test specifically catches warnings about parameter bounds exceeding global bounds, making the test more precise and robust.

166-166: LGTM: Consistent warning message validation

Good consistency with the previous test case - both scenarios (bounds in order and out of order) now validate the same specific warning message pattern.
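A generic sketch of the `match` pattern being praised here (the warning text and function are hypothetical, not the project's actual test code):

```python
import warnings

import pytest


def shrink_bounds():
    warnings.warn("parameter bounds exceed global bounds", UserWarning)


def test_warns_with_specific_message():
    # `match` is a regex checked against the warning message,
    # so unrelated UserWarnings no longer satisfy the test.
    with pytest.warns(UserWarning, match="exceed global bounds"):
        shrink_bounds()
```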
tests/test_target_space.py (2)
128-128: LGTM: Improved test precision

Adding the `match` parameter ensures this test specifically validates warnings about parameters being outside bounds, making the test more reliable and specific.

212-212: LGTM: Consistent warning validation pattern

The warning message matching aligns well with the test context and maintains consistency with the similar change on line 128.
.pre-commit-config.yaml (1)
3-7: Verify Ruff hook IDs in .pre-commit-config.yaml

Ruff-pre-commit v0.12.0 renamed the formatter hook from `ruff-format` → `ruff`. If the IDs don't match, the hooks will silently no-op.

- File: .pre-commit-config.yaml (lines 3–7)
- Current snippet:

```yaml
  - id: ruff
    name: ruff-lint
  - id: ruff-format
    name: ruff-format
    args: [--check]
```

Run:

```sh
pre-commit run --all-files --show-diff-on-failure
```

and confirm that both the formatting and lint hooks actually execute. If either no-ops, update the `id:` values to the correct hook names for v0.12.3 (currently `ruff` and `ruff-format`).
20-25
: Matrix includes unreleased Python 3.13 – confirm job stability
3.13
is still pre-release. A large share of dependencies (including NumPy) do not publish wheels for it yet, so the matrix cell may fail unexpectedly and mask real regressions in supported versions.
Unless you explicitly target the dev branch of CPython, consider dropping3.13
(or marking the job ascontinue-on-error
) until it reaches beta.
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master     #584      +/-   ##
==========================================
- Coverage   97.81%   97.80%   -0.01%
==========================================
  Files          10       10
  Lines        1188     1186       -2
==========================================
- Hits         1162     1160       -2
  Misses         26       26
```

☔ View full report in Codecov by Sentry.
Generally LGTM! Only slightly worried about the publishing workflow as it's hard to test 😬
Thanks for this great PR @jameskrach! I will merge it on friday :)

Great! Thanks for being open to the contribution!