Commit cf71668

Try fixing the Tensorflow example
1 parent 8e448a0 commit cf71668

2 files changed (+2, -3)

integrations/workflow-orchestration/metaflow/metaflow-model-evaluation/README.md (+1, -1)
@@ -32,7 +32,7 @@ export COMET_WORKSPACE=<Your Comet Workspace>
 In this guide, we will demonstrate how to use Comet's Metaflow integration to build a simple model evaluation flow.
 
 ```shell
-python metaflow_model_evaluation.py run --max-workers 1 --n_samples 100
+python metaflow-model-evaluation.py run --max-workers 1 --n_samples 100
 ```
 
 Our flow consists of two steps.

integrations/workflow-orchestration/metaflow/metaflow-model-evaluation/metaflow-model-evaluation.py (+1, -2)
@@ -215,8 +215,7 @@ def evaluate_classification_metrics(self):
         )
         accuracy = accuracy_score(labels, torch.argmax(predictions, dim=1))
 
-        self.comet_experiment.log_metrics(clf_metrics["micro avg"], prefix="micro_avg")
-        self.comet_experiment.log_metrics(clf_metrics["macro avg"], prefix="macro_avg")
+        self.comet_experiment.log_metrics(clf_metrics)
         self.comet_experiment.log_metrics({"accuracy": accuracy})
 
         log_model(self.comet_experiment, model, self.input)
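
For context, here is a minimal, hypothetical sketch of what this logging change amounts to, assuming `clf_metrics` is the dict returned by scikit-learn's `classification_report(..., output_dict=True)` (which matches the `"micro avg"` / `"macro avg"` keys in the removed lines). The sample data and variable names below are illustrative, not the flow's actual code.

```python
# Hypothetical sketch of the before/after logging behavior in this commit.
# Assumes clf_metrics is produced by sklearn's classification_report with
# output_dict=True, as the "micro avg" / "macro avg" keys suggest.
from comet_ml import Experiment
from sklearn.metrics import classification_report

experiment = Experiment()  # reads COMET_API_KEY / COMET_WORKSPACE from the environment

labels = [0, 1, 2, 1, 0, 2]        # illustrative ground-truth labels
predictions = [0, 1, 1, 1, 0, 2]   # illustrative model predictions
clf_metrics = classification_report(labels, predictions, output_dict=True)

# Before this commit: only the averaged rows were logged, each under a
# prefixed metric name.
experiment.log_metrics(clf_metrics["macro avg"], prefix="macro_avg")

# After this commit: the whole report dict is logged in a single call.
experiment.log_metrics(clf_metrics)
```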
