Issue
The docs say that the meta-classifier is trained through cross_val_predict. As I understand it, that means the data is split into folds, and each base estimator, trained on all the other folds, predicts values on the held-out fold; this repeats for every fold. The meta-classifier is then trained on those out-of-fold predictions of the base estimators. Is that correct? If so, doesn't it contradict the note that "estimators_ are fitted on the full X", given that the base estimators are trained on subsets of folds, not on the full X?
Solution
There is no contradiction, because estimators_ is not used when training the meta-classifier. After the cross-val predictions are made, you don't actually have fitted base estimators to keep (or rather, you have multiple copies of each, one per split of your cv parameter). For predicting on new data, you need a single fitted copy of each base estimator; those are obtained by fitting on the full X, and are stored in the attribute estimators_.
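To make the two separate fitting steps concrete, here is a minimal pure-Python sketch of the stacking fit procedure described above. The toy MeanThresholdClassifier and the helper names are illustrative stand-ins, not scikit-learn code; the point is only the control flow: per-fold copies produce the meta-training data and are discarded, while a single final copy fitted on the full X is what would populate estimators_.

```python
class MeanThresholdClassifier:
    """Toy base estimator: predicts 1 if x >= mean of the training features."""
    def fit(self, X, y):
        self.mean_ = sum(X) / len(X)
        return self
    def predict(self, X):
        return [1 if x >= self.mean_ else 0 for x in X]

def kfold_indices(n, n_splits):
    """Split range(n) into n_splits contiguous folds of (nearly) equal size."""
    sizes = [n // n_splits + (1 if i < n % n_splits else 0) for i in range(n_splits)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def stacking_fit(X, y, n_splits=3):
    # Step 1: out-of-fold predictions. For each fold, fit a FRESH copy of the
    # base estimator on the other folds and predict on the held-out fold.
    oof = [None] * len(X)
    for test_idx in kfold_indices(len(X), n_splits):
        train_idx = [i for i in range(len(X)) if i not in test_idx]
        est = MeanThresholdClassifier().fit(
            [X[i] for i in train_idx], [y[i] for i in train_idx])
        for i, p in zip(test_idx, est.predict([X[i] for i in test_idx])):
            oof[i] = p
        # `est` (this per-fold copy) is now discarded.

    # Step 2: the meta-classifier would be trained here on `oof` versus y.

    # Step 3: fit a SINGLE final copy on the full X; this is the one kept
    # (in scikit-learn, it ends up in estimators_) and used at predict time.
    final_est = MeanThresholdClassifier().fit(X, y)
    return oof, final_est

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [0, 0, 0, 1, 1, 1]
oof, final_est = stacking_fit(X, y)
```

Note that the per-fold copies in step 1 and the final copy in step 3 never interact: the out-of-fold predictions feed the meta-classifier only, and the full-X fit exists only for predicting on new data.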
Answered By - Ben Reiniger