feat(metrics): add a classification metrics module #176
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main     #176      +/-   ##
==========================================
+ Coverage   90.35%   90.49%   +0.14%
==========================================
  Files          29       31       +2
  Lines        2083     2114      +31
==========================================
+ Hits         1882     1913      +31
  Misses        201      201
Sorry, I didn't see this earlier; will try to take a look soon!
One functional request to not force compute; rest are quick nits!
Force-pushed from 959b45d to e7e835c.
Thanks for your review and the suggestions; I think these were great, and I've added them all now. I'm not sure what's going on with the build-docs check here: the last time it succeeded, it was using scikit-learn==1.5.2, and now it's using scikit-learn==1.6.0.
Seems
Force-pushed from e7e835c to ff559bd.
- docs: apply proper spacing separation in docstrings (Co-authored-by: Deepyaman Datta <[email protected]>)
- feat: lazify classification metrics and clean tests
Force-pushed from ff559bd to d017b8b.
It's pinned now at <1.6.0 for the "doc" dependencies and the check is passing! ✅
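A pin like the one described could look like this in pyproject.toml; a sketch only, since the actual group name and surrounding config of this repo aren't shown here:

```toml
[project.optional-dependencies]
# "doc" extras: pin scikit-learn below 1.6.0 until the build-docs
# breakage with 1.6.0 is resolved
doc = [
    "scikit-learn<1.6.0",
]
```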
LGTM!
I wanted to start making some progress on what a new
ibis_ml.metrics
module might look like, with a first pass supporting classification metrics.
In this initial pass, the functions accept
y_pred
and y_true
as Ibis table expression columns.
A few thoughts: the plan is to start with classification only, as an "experimental" type of op.
What could go wrong?
I anticipate the inputs will come from the same table expression, but testing
this behavior (or at least documenting what happens) when the inputs come from two
different table expressions would be necessary.
Resolves (partially) #174
Edit: As far as testing goes, I wonder if more precision might be necessary.