@@ -772,13 +772,16 @@ class RandomForestQuantileRegressor(BaseForestQuantileRegressor):
         predict. Each quantile must be strictly between 0 and 1. If "mean",
         the model predicts the mean.
 
-    criterion : {"squared_error", "absolute_error", "poisson"}, \
+    criterion : {"squared_error", "absolute_error", "friedman_mse", "poisson"}, \
             default="squared_error"
         The function to measure the quality of a split. Supported criteria
         are "squared_error" for the mean squared error, which is equal to
-        variance reduction as feature selection criterion, "absolute_error"
-        for the mean absolute error, and "poisson" which uses reduction in
-        Poisson deviance to find splits.
+        variance reduction as feature selection criterion and minimizes the L2
+        loss using the mean of each terminal node, "friedman_mse", which uses
+        mean squared error with Friedman's improvement score for potential
+        splits, "absolute_error" for the mean absolute error, which minimizes
+        the L1 loss using the median of each terminal node, and "poisson" which
+        uses reduction in Poisson deviance to find splits.
         Training using "absolute_error" is significantly slower
         than when using "squared_error".
 
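The new wording states that "squared_error" minimizes the L2 loss using the mean of each terminal node, while "absolute_error" minimizes the L1 loss using the median. This follows from a standard fact about loss minimizers, which a small pure-Python check can illustrate (illustrative sketch only, not library code):

```python
# Targets that would fall in a single terminal node (hypothetical example).
y = [1.0, 2.0, 2.0, 3.0, 10.0]

def l2_loss(c, ys):
    """Sum of squared errors when predicting the constant c."""
    return sum((yi - c) ** 2 for yi in ys)

def l1_loss(c, ys):
    """Sum of absolute errors when predicting the constant c."""
    return sum(abs(yi - c) for yi in ys)

node_mean = sum(y) / len(y)            # 3.6
node_median = sorted(y)[len(y) // 2]   # 2.0

# Scan candidate constant predictions on a 0.1 grid: nothing beats the
# mean on L2 loss, and nothing beats the median on L1 loss.
candidates = [i / 10 for i in range(0, 120)]
best_l2 = min(candidates, key=lambda c: l2_loss(c, y))
best_l1 = min(candidates, key=lambda c: l1_loss(c, y))
print(best_l2, best_l1)  # 3.6 2.0
```

Note how the outlier 10.0 pulls the L2-optimal prediction toward itself but leaves the L1-optimal prediction at the median, which is also why "absolute_error" yields more outlier-robust node predictions.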
@@ -866,9 +869,11 @@ class RandomForestQuantileRegressor(BaseForestQuantileRegressor):
         Whether bootstrap samples are used when building trees. If False, the
         whole dataset is used to build each tree.
 
-    oob_score : bool, default=False
+    oob_score : bool or callable, default=False
         Whether to use out-of-bag samples to estimate the generalization score.
-        Only available if bootstrap=True.
+        By default, :func:`~sklearn.metrics.r2_score` is used.
+        Provide a callable with signature `metric(y_true, y_pred)` to use a
+        custom metric. Only available if `bootstrap=True`.
 
     n_jobs : int, default=None
         The number of jobs to run in parallel. :meth:`fit`, :meth:`predict`,
@@ -901,12 +906,12 @@ class RandomForestQuantileRegressor(BaseForestQuantileRegressor):
 
         - If None (default), then draw `X.shape[0]` samples.
         - If int, then draw `max_samples` samples.
-        - If float, then draw `max_samples * X.shape[0]` samples. Thus,
-          `max_samples` should be in the interval `(0.0, 1.0]`.
+        - If float, then draw `max(round(n_samples * max_samples), 1)` samples.
+          Thus, `max_samples` should be in the interval `(0.0, 1.0]`.
 
     Attributes
     ----------
-    base_estimator_ : DecisionTreeRegressor
+    estimator_ : DecisionTreeRegressor
         The child estimator template used to create the collection of fitted
         sub-estimators.
 
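The corrected `max_samples` wording spells out the exact sample count for each input type. A tiny helper (hypothetical, not part of the package) mirrors the documented rule:

```python
def n_draw(n_samples, max_samples):
    """Number of bootstrap samples drawn per tree, per the docstring:
    None draws all n_samples, an int draws that many, and a float f in
    (0.0, 1.0] draws max(round(n_samples * f), 1)."""
    if max_samples is None:
        return n_samples
    if isinstance(max_samples, float):
        return max(round(n_samples * max_samples), 1)
    return max_samples

print(n_draw(100, None))   # 100
print(n_draw(100, 25))     # 25
print(n_draw(100, 0.333))  # 33
print(n_draw(3, 0.1))      # 1 -- the max(..., 1) guard: never zero samples
```

The `max(..., 1)` guard is the point of the new wording: a small float on a small dataset still draws at least one sample, which the old `max_samples * X.shape[0]` phrasing did not make explicit.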
@@ -1054,11 +1059,18 @@ class ExtraTreesQuantileRegressor(BaseForestQuantileRegressor):
         predict. Each quantile must be strictly between 0 and 1. If "mean",
         the model predicts the mean.
 
-    criterion : {"squared_error", "absolute_error"}, default="squared_error"
+    criterion : {"squared_error", "absolute_error", "friedman_mse", "poisson"}, \
+            default="squared_error"
         The function to measure the quality of a split. Supported criteria
         are "squared_error" for the mean squared error, which is equal to
-        variance reduction as feature selection criterion, and "absolute_error"
-        for the mean absolute error.
+        variance reduction as feature selection criterion and minimizes the L2
+        loss using the mean of each terminal node, "friedman_mse", which uses
+        mean squared error with Friedman's improvement score for potential
+        splits, "absolute_error" for the mean absolute error, which minimizes
+        the L1 loss using the median of each terminal node, and "poisson" which
+        uses reduction in Poisson deviance to find splits.
+        Training using "absolute_error" is significantly slower
+        than when using "squared_error".
 
     max_depth : int, default=None
         The maximum depth of the tree. If None, then nodes are expanded until
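The newly documented "poisson" criterion chooses splits by reduction in Poisson deviance. A rough pure-Python sketch of that quantity (not the library's actual split-search implementation) shows how a candidate split is scored:

```python
import math

def poisson_deviance(ys):
    """Mean Poisson deviance of predicting the node mean for targets ys,
    D = 2 * mean(y * log(y / mu) - y + mu), with the y * log(y) term
    taken as 0 when y == 0. Zero for a perfectly homogeneous node."""
    mu = sum(ys) / len(ys)
    return 2.0 * sum(
        (y * math.log(y / mu) if y > 0 else 0.0) - y + mu for y in ys
    ) / len(ys)

# Hypothetical count targets in a parent node and one candidate split.
parent = [0, 1, 1, 2, 8, 9, 10]
left, right = [0, 1, 1, 2], [8, 9, 10]

# Deviance reduction: parent deviance minus the size-weighted deviance
# of the children. The split search prefers the largest reduction.
n = len(parent)
reduction = poisson_deviance(parent) - (
    len(left) / n * poisson_deviance(left)
    + len(right) / n * poisson_deviance(right)
)
print(reduction > 0)  # True: separating low from high counts helps
```

Because the deviance involves `log(y / mu)`, this criterion requires non-negative targets and a strictly positive node mean, which is why it suits count-like data.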
@@ -1144,9 +1156,11 @@ class ExtraTreesQuantileRegressor(BaseForestQuantileRegressor):
         Whether bootstrap samples are used when building trees. If False, the
         whole dataset is used to build each tree.
 
-    oob_score : bool, default=False
+    oob_score : bool or callable, default=False
         Whether to use out-of-bag samples to estimate the generalization score.
-        Only available if bootstrap=True.
+        By default, :func:`~sklearn.metrics.r2_score` is used.
+        Provide a callable with signature `metric(y_true, y_pred)` to use a
+        custom metric. Only available if `bootstrap=True`.
 
     n_jobs : int, default=None
         The number of jobs to run in parallel. :meth:`fit`, :meth:`predict`,
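Per the updated `oob_score` docstring, any callable with signature `metric(y_true, y_pred)` can replace the default score. A minimal hand-rolled example of such a metric (hypothetical, not shipped with the package):

```python
def neg_mean_absolute_error(y_true, y_pred):
    """Custom OOB metric: negated MAE, so that greater is better,
    matching the convention of the default r2_score."""
    return -sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

score = neg_mean_absolute_error([3.0, 5.0, 2.0], [2.5, 5.0, 4.0])
print(score)  # -(0.5 + 0.0 + 2.0) / 3 = -0.8333...

# Usage sketch, assuming the estimator accepts the callable as documented:
#     ExtraTreesQuantileRegressor(oob_score=neg_mean_absolute_error,
#                                 bootstrap=True)
```

Keeping the greater-is-better convention matters if the resulting `oob_score_` is later compared across models or used for selection.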
@@ -1187,18 +1201,18 @@ class ExtraTreesQuantileRegressor(BaseForestQuantileRegressor):
 
     Attributes
     ----------
-    base_estimator_ : ExtraTreeQuantileRegressor
+    estimator_ : ExtraTreeRegressor
         The child estimator template used to create the collection of fitted
         sub-estimators.
 
-    estimators_ : list of ForestRegressor
+    estimators_ : list of ExtraTreeRegressor
         The collection of fitted sub-estimators.
 
     feature_importances_ : ndarray of shape (n_features,)
         The impurity-based feature importances.
         The higher, the more important the feature.
         The importance of a feature is computed as the (normalized)
-        total reduction of the criterion brought by that feature.  It is also
+        total reduction of the criterion brought by that feature. It is also
         known as the Gini importance.
 
     Warning: impurity-based feature importances can be misleading for