[ENH] Replace prts metrics #2400
base: main
Conversation
Thank you for contributing to aeon.
Some of these functions look like they should be private/protected. In the tests, please run both the new and current functions to ensure the output is the same (using pytest).
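The comparison test the reviewer asks for might look like the sketch below. `range_precision_old` and `range_precision_new` are toy point-wise stand-ins (not aeon's actual `_binary.py` or `range_metrics` functions) so the example is self-contained; in the real suite the cases would typically be fed through `@pytest.mark.parametrize` rather than a loop.

```python
# Toy stand-ins for the old (_binary.py) and new (range_metrics)
# implementations; both compute plain point-wise precision here.
def range_precision_old(y_true, y_pred):
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    return tp / (tp + fp) if tp + fp else 0.0


def range_precision_new(y_true, y_pred):
    # Same quantity computed a different way: mean label over predictions.
    predicted = [t for t, p in zip(y_true, y_pred) if p]
    return sum(predicted) / len(predicted) if predicted else 0.0


CASES = [
    ([0, 1, 1, 0], [0, 1, 0, 0]),  # partial overlap
    ([0, 0, 0, 0], [0, 1, 1, 0]),  # all false positives
    ([1, 1, 1, 1], [1, 1, 1, 1]),  # perfect prediction
    ([1, 1, 0, 0], [0, 0, 0, 0]),  # no predictions at all
]


def test_old_and_new_agree():
    # The new implementation must reproduce the old one on every case.
    for y_true, y_pred in CASES:
        assert range_precision_old(y_true, y_pred) == range_precision_new(
            y_true, y_pred
        )
```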
aeon/benchmarking/metrics/anomaly_detection/tests/test_metrics.py
Changes made from the previous commit:
@MatthewMiddlehurst Please let me know if the test cases are fine; I can separate them into different functions if necessary.
Thanks for this, it's really helpful. We will look again next week.
The updated test compares the results of the previous version (_binary.py) and the new version (range_metrics). Also, let me know if you want me to remove some test cases; these cover all the edge cases.
Do you know why the new functions have different parameters from the old ones? Don't know if this was explained previously.
I've just changed the names of the parameters.
@MatthewMiddlehurst @SebastianSchmidl Do you want me to convert all the test cases into a format that compares the output from prts and the current implementation?
If I understand you correctly, yes. The new implementation should produce the same results as the old implementation for all cases. Please excuse that I am slow to respond and cannot spend much time here; I will be busy with other obligations until the middle of summer this year.
@MatthewMiddlehurst @SebastianSchmidl I've updated the test cases so that they now compare the current implementation with the prts package for all parameter configurations.
Please add a check for the soft dependency "prts" to those tests where you call into the existing range-based metrics, to make the CI happy; similar to the existing tests: aeon/aeon/benchmarking/metrics/anomaly_detection/tests/test_ad_metrics.py, lines 228 to 232 in 8770d2f.
Done. Let me know if there's anything else to be changed.
Thanks a lot for addressing my concerns. I'm happy for this to go in with the testing done.
I would not close the issue yet; the next step is to remove the old functions and prts
as a dependency. Unfortunately, this will involve changing a lot of the testing you have done here again, but the main purpose of the output comparison is to smooth the transition anyway.
Just some small documentation changes
@MatthewMiddlehurst @SebastianSchmidl This should be everything?
The functions in
Could you add this information to the corresponding issue? This should be addressed in separate PRs.
Three different PRs for the three mentioned changes?
@MatthewMiddlehurst Please merge this PR so I can pull the changes and update the test cases to hard-coded ones. |
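The "hard-coded" follow-up mentioned here could look like the sketch below: once prts is dropped as a dependency, the expected values are pinned as literals instead of being recomputed from prts at test time. `range_recall` is a toy point-wise stand-in, not aeon's actual function, and the pinned numbers come from this toy, not from prts.

```python
# Toy point-wise recall used only to make the sketch self-contained.
def range_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if tp + fn else 0.0


def test_range_recall_hardcoded():
    # Expected values are literals, so this test no longer needs prts
    # installed; they were verified once against the reference output.
    assert range_recall([0, 1, 1, 0], [0, 1, 0, 0]) == 0.5
    assert range_recall([1, 1, 1, 1], [1, 1, 1, 1]) == 1.0
```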
@MatthewMiddlehurst A gentle reminder. |
Reference Issues/PRs
Addresses #2066
What does this implement/fix? Explain your changes.
Implementation of the range-based Precision, Recall, and F1-score metrics.
PR checklist
For all contributions