Issue
I tried the solution from "sklearn logistic regression loss value during training", with both verbose=0 and verbose=1: loss_history ends up empty, and so does loss_list, although the epoch number and the change in loss are still printed to the terminal:
Epoch 1, change: 1.00000000
Epoch 2, change: 0.32949890
Epoch 3, change: 0.19452967
Epoch 4, change: 0.14287635
Epoch 5, change: 0.11357212
I also tried the solution proposed in "how to plot correctly loss curves for training and validation sets?", i.e., training one epoch at a time in a loop. In that case the model doesn't train at all; the loss change is always 1:
Epoch 1, change: 1.00000000
max_iter reached after 2 seconds
Epoch 1, change: 1.00000000
max_iter reached after 1 seconds
Epoch 1, change: 1.00000000
max_iter reached after 1 seconds
Epoch 1, change: 1.00000000
max_iter reached after 2 seconds
Is there really no straightforward way to do this? I'm totally open to hacky workarounds as well.
My simple code whose training loss I want to plot:
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression(
    random_state=42,
    C=.001,
    penalty="l1",
    max_iter=500,
    solver="saga",
    n_jobs=8,
    warm_start=True,
    class_weight='balanced',
    verbose=1)
logreg.fit(X_train, y_train)
Solution
I only managed to solve this in a hacky way, for two reasons: training iteratively gives different results than a single call to model.fit, and the workaround mentioned in other answers, capturing sys.stdout, unfortunately does not work for the LogisticRegression() class, although it does for SGDClassifier().
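For reference, a minimal sketch of the stdout-capture approach that the other answers describe for SGDClassifier (hedged: this assumes the estimator's verbose output goes through Python's stdout, which holds for SGDClassifier but not for LogisticRegression's saga solver):

```python
import io
from contextlib import redirect_stdout

def capture_fit_output(model, X, y):
    """Fit the model while capturing everything it prints to Python's stdout.

    Returns the fitted model and the captured text, which can then be
    parsed for per-epoch loss lines.
    """
    buf = io.StringIO()
    with redirect_stdout(buf):
        model.fit(X, y)
    return model, buf.getvalue()
```

Usage would be e.g. `model, log = capture_fit_output(SGDClassifier(verbose=1), X_train, y_train)`; for LogisticRegression the captured text stays empty, which is why the file redirection below is needed instead.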
I run the Python script from the terminal and redirect its output straight to a file:
python3 logreg_train.py > terminal_output.txt
Then one can parse the text file to extract the change in training loss.
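As a minimal sketch of that parsing step (assuming the output format shown above, with lines like `Epoch 2, change: 0.32949890`; the function name is my own), a regular expression extracts the (epoch, change) pairs:

```python
import re

def parse_loss_changes(text):
    """Extract (epoch, change) pairs from the saga solver's verbose lines."""
    pattern = re.compile(r"Epoch (\d+), change: ([\d.]+)")
    return [(int(epoch), float(change)) for epoch, change in pattern.findall(text)]

# In practice: text = open("terminal_output.txt").read()
sample = """Epoch 1, change: 1.00000000
Epoch 2, change: 0.32949890
Epoch 3, change: 0.19452967
max_iter reached after 2 seconds"""

changes = parse_loss_changes(sample)
print(changes)  # [(1, 1.0), (2, 0.3294989), (3, 0.19452967)]
```

The resulting pairs can then be plotted with matplotlib or any other plotting library.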
Hope this helps someone!
Answered By - neverreally