
Commit 05a385a

Adding alpaca_dataset as the default dataset
Signed-off-by: Swati Allabadi <[email protected]>
1 parent bae75d2 commit 05a385a

2 files changed: +6 −4 lines changed


QEfficient/finetune/configs/training.py

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ class train_config:
     use_fp16: bool = True
     use_autocast: bool = True
     val_batch_size: int = 1
-    dataset = "samsum_dataset"
+    dataset = "alpaca_dataset"
     task_type = "generation"  # "generation" / "seq_classification"
     peft_method: str = "lora"
     use_peft: bool = True  # use parameter efficient fine tuning
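With this change, fine-tuning runs that do not pass a dataset explicitly use Alpaca instead of Samsum. A minimal sketch of the effect, assuming train_config can be instantiated with its defaults (the import path follows the file above):

from QEfficient.finetune.configs.training import train_config

# After this commit, the default dataset is Alpaca rather than Samsum.
cfg = train_config()
assert cfg.dataset == "alpaca_dataset"

# The previous default is still available by overriding the field, or by
# passing --dataset samsum_dataset on the finetune command line.
cfg.dataset = "samsum_dataset"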

docs/source/finetune.md

Lines changed: 5 additions & 3 deletions
@@ -69,13 +69,13 @@ tensorboard --logdir runs/<file> --bind_all
 
 To run fine tuning for any user-specific dataset, prepare the dataset using the following steps:
 
-1) Create a directory named 'dataset' inside efficient-transformers.
+1) Create a directory named 'dataset' inside efficient-transformers.
 2) Inside this directory, create a file named 'custom_dataset.py'. This is different from the custom_dataset.py present at efficient-transformers/QEfficient/finetune/dataset.
 3) Inside the newly created efficient-transformers/dataset/custom_dataset.py, define a function named 'get_custom_dataset'.
 4) get_custom_dataset() should have the following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through QEfficient/cloud/finetune.py with the name get_preprocessed_dataset.
 5) Inside get_custom_dataset(), the dataset needs to be prepared for fine tuning: the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
 6) For examples, please refer to the Python files present in efficient-transformers/QEfficient/finetune/dataset. In the case of the Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
-7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for the custom_dataset class, pass the appropriate values for train_split and test_split according to the dataset keys corresponding to the train and test data points.
+7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for the custom_dataset class, pass the appropriate values for train_split and test_split according to the dataset keys corresponding to the train and test data points. Alternatively, these values can be passed as command line arguments with the finetune command, for example "--train_split train".
 8) While running fine tuning, pass the argument "--dataset custom_dataset" to fine tune on the custom dataset.
 
 Template for get_custom_dataset() to be defined inside efficient-transformers/dataset/custom_dataset.py is as follows:
@@ -87,10 +87,12 @@ def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):
     # based on split, retrieve only the specific portion of the dataset (train or eval), either here or at the end
 
     def apply_prompt_template():
+        # transform the passed datapoint by applying the prompt on it
 
     def tokenize():
+        # tokenize the passed datapoint
 
-    # define prompt
+    # define the prompt
     # call apply_prompt_template() for each data point:
     # dataset = dataset.map(apply_prompt_template, <other args>)
     # call tokenize() for each data point:
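For reference, a minimal runnable version of this template might look as follows. This is a sketch, not repository code: the Samsum dataset, its column names ('dialogue' and 'summary'), the prompt string, and the label masking are illustrative assumptions (step 6 points to get_preprocessed_samsum() for the repository's own version):

# dataset/custom_dataset.py -- illustrative sketch of the template above
import datasets


def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):
    # Based on split, retrieve only the specific portion of the dataset.
    dataset = datasets.load_dataset("samsum", split=split)

    # Define the prompt.
    prompt = "Summarize this dialog:\n{dialog}\n---\nSummary:\n"

    def apply_prompt_template(sample):
        # Transform the passed datapoint by applying the prompt on it.
        return {
            "prompt": prompt.format(dialog=sample["dialogue"]),
            "summary": sample["summary"],
        }

    def tokenize(sample):
        # Tokenize the passed datapoint; prompt tokens are masked out of the
        # labels so the loss is computed on the summary only.
        prompt_ids = tokenizer.encode(
            tokenizer.bos_token + sample["prompt"], add_special_tokens=False
        )
        summary_ids = tokenizer.encode(
            sample["summary"] + tokenizer.eos_token, add_special_tokens=False
        )
        input_ids = (prompt_ids + summary_ids)[:context_length]
        labels = ([-100] * len(prompt_ids) + summary_ids)[:context_length]
        return {
            "input_ids": input_ids,
            "attention_mask": [1] * len(input_ids),
            "labels": labels,
        }

    # Call apply_prompt_template() for each data point, then tokenize().
    dataset = dataset.map(apply_prompt_template, remove_columns=list(dataset.features))
    dataset = dataset.map(tokenize, remove_columns=["prompt", "summary"])
    return dataset

With this file saved as efficient-transformers/dataset/custom_dataset.py and the splits configured (or passed as "--train_split train"), fine tuning on it is launched by passing "--dataset custom_dataset" to the finetune command, as in step 8.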
