diff --git a/openseek/competition/LongContext-ICL-Annotation/data/README.md b/openseek/competition/LongContext-ICL-Annotation/data/README.md
index f30f56b..566d8d2 100644
--- a/openseek/competition/LongContext-ICL-Annotation/data/README.md
+++ b/openseek/competition/LongContext-ICL-Annotation/data/README.md
@@ -21,7 +21,7 @@ The datasets are specifically designed to evaluate the capability of Large Langu
 | openseek-5 | semeval_2018_task1_tweet_sadness_detection | 30K | 500 |
 | openseek-6 | mnli_same_genre_classification | 30K | 500 |
 | openseek-7 | jeopardy_answer_generation_all | 30K | 500 |
-| openseek-8 | kernel_genernation | 15K | 166 |
+| openseek-8 | kernel_genernation | 16K | 166 |
 
 ## Data Structure
 
@@ -39,4 +39,4 @@ The datasets are organized in JSON format, with each task having its own json fi
 
 - Participants must use the **official datasets as provided**, without altering test splits or labels, for leaderboard evaluation.
 - Any preprocessing steps, context construction strategies, or example selection mechanisms should be clearly described in the accompanying technical report.
-- All experimental results must be **fully reproducible** using the datasets in this repository.
\ No newline at end of file
+- All experimental results must be **fully reproducible** using the datasets in this repository.