[WIP] Add infrastructure for fixed random seeds in tests #1655
Proposal for fixing random seeds in tests to make them reproducible.
The basic idea is that if you need randomness, you call `check_random_state(seed)` and expose the `seed` argument to your callers. If they don't care, they can simply not pass anything in, in which case `seed=None` and we get random numbers that aren't reproducible. If they do care, they can pass in either a number or an existing random number generator. @standage this is what I had in mind. Shamelessly cribbed from http://scikit-learn.org/stable/modules/generated/sklearn.utils.check_random_state.html
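For reference, a minimal sketch of what this could look like (assuming numpy; the seed-handling logic is modeled on `sklearn.utils.check_random_state`, and `random_dna` is just a hypothetical test utility showing how the `seed` argument gets exposed):

```python
import numbers

import numpy as np


def check_random_state(seed):
    """Turn `seed` into a `numpy.random.RandomState` instance.

    - None (the default) returns numpy's global generator, so results
      are not reproducible unless the caller seeds it themselves.
    - An int returns a fresh generator seeded with that value.
    - An existing RandomState is passed through unchanged, so callers
      can share one generator across several functions.
    """
    if seed is None or seed is np.random:
        return np.random.mtrand._rand
    if isinstance(seed, numbers.Integral):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError('%r cannot be used to seed a RandomState' % seed)


def random_dna(length, seed=None):
    # Hypothetical test utility: callers that want reproducibility pass
    # a seed (or a RandomState); everyone else gets the old behaviour.
    rng = check_random_state(seed)
    return ''.join(rng.choice(list('ACGT'), size=length))
```

With this in place, `random_dna(20, seed=42)` always returns the same sequence, while `random_dna(20)` keeps the current non-reproducible behaviour.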
We also use this approach in scikit-optimize.
- [ ] `make test` Did it pass the tests?
- [ ] `make clean diff-cover` If it introduces new functionality in `scripts/` is it tested?
- [ ] `make format diff_pylint_report doc pydocstyle` Is it well formatted?
- [ ] Did it change the command-line interface? Only backwards-compatible additions are allowed without a major version increment. Changing file formats also requires a major version number increment.
- [ ] Is it documented in CHANGELOG.md? See keepachangelog for more details.
- [ ] Was a spellchecker run on the source code and documentation after changes were made?
- [ ] Do the changes respect streaming IO? (Are they tested for streaming IO?)