
Commit cabe522

jsondai authored and copybara-github committed
docs: fix comment typo in eval_task.py
PiperOrigin-RevId: 714826124
1 parent 26a08c7 commit cabe522

1 file changed (+1, −1)

vertexai/evaluation/eval_task.py

+1 −1
@@ -58,7 +58,7 @@ class EvalTask:
     An Evaluation Tasks is defined to measure the model's ability to perform a
     certain task in response to specific prompts or inputs. Evaluation tasks must
     contain an evaluation dataset, and a list of metrics to evaluate. Evaluation
-    tasks help developers compare propmpt templates, track experiments, compare
+    tasks help developers compare prompt templates, track experiments, compare
     models and their settings, and assess the quality of the model's generated
     text.
 
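
For context, the docstring being edited describes the EvalTask API in the Vertex AI SDK: an evaluation task bundles an evaluation dataset with a list of metrics, and repeated runs are used to compare prompt templates, models, and settings. Below is a minimal usage sketch, not part of this commit; it assumes the public vertexai.evaluation API (EvalTask may live under vertexai.preview.evaluation in older SDK versions), and the project ID, metric names, experiment name, and dataset contents are illustrative placeholders.

import pandas as pd
import vertexai
from vertexai.evaluation import EvalTask
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # placeholder project

# Evaluation dataset: prompts paired with reference answers.
eval_dataset = pd.DataFrame(
    {
        "prompt": ["What is the capital of France?", "What is 2 + 2?"],
        "reference": ["Paris", "4"],
    }
)

# The EvalTask bundles the dataset, the metrics to compute, and an optional
# experiment name used to track and compare runs.
eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=["exact_match", "bleu"],   # example metric names
    experiment="my-eval-experiment",   # hypothetical experiment name
)

# Each evaluate() call can use a different model or prompt template, which is
# how prompt templates and model settings get compared across runs.
result = eval_task.evaluate(
    model=GenerativeModel("gemini-1.5-flash"),
    prompt_template="{prompt}",        # fills in the dataset's "prompt" column
)
print(result.summary_metrics)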
