[QEff Finetune]: Adding steps about how to fine tune on any custom dataset. #381


Open
wants to merge 5 commits into main

Conversation

quic-swatia (Contributor) commented Apr 28, 2025

  1. Added steps on how to create custom_dataset.py to run fine-tuning on any custom dataset through the QEfficient pipeline. Also added a detailed template for the user covering how to create custom_dataset.py (see the sketch below).
  2. Added the argument 'context_length' to the existing APIs, which helps run fine-tuning with padding for a custom dataset.
  3. Made alpaca_dataset the default dataset.
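
For reference, a minimal sketch of what such a custom_dataset.py might look like. Only the get_custom_dataset() signature is fixed by the steps in this PR; the function bodies, the Samsum-style fields, and the use of Hugging Face datasets are illustrative assumptions:

```python
# dataset/custom_dataset.py -- illustrative sketch, not the PR's exact template
import datasets


def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):
    # Load the raw dataset and, based on `split`, retrieve only the
    # train or eval portion (done here at load time).
    dataset = datasets.load_dataset("samsum", split=split)

    # Define a prompt template for the task.
    prompt = "Summarize this dialog:\n{dialogue}\n---\nSummary:\n"

    def apply_prompt_template(sample):
        # Convert the raw input into the format defined by the template above.
        return {
            "prompt": prompt.format(dialogue=sample["dialogue"]),
            "summary": sample["summary"],
        }

    def tokenize(sample):
        # Implement tokenization and prepare inputs for the training,
        # padding/truncating to context_length when it is provided.
        return tokenizer(
            sample["prompt"] + sample["summary"],
            max_length=context_length,
            padding="max_length" if context_length else False,
            truncation=context_length is not None,
        )

    dataset = dataset.map(apply_prompt_template,
                          remove_columns=list(dataset.features))
    return dataset.map(tokenize, remove_columns=["prompt", "summary"])
```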

@quic-swatia quic-swatia self-assigned this Apr 28, 2025
@quic-swatia quic-swatia requested a review from vbaddi April 28, 2025 11:16
@quic-rishinr quic-rishinr marked this pull request as draft April 28, 2025 16:51
@quic-mamta quic-mamta changed the title Adding steps about how to fine tune on any custom dataset. [QEff Finetune]: Adding steps about how to fine tune on any custom dataset. May 8, 2025
@quic-swatia quic-swatia marked this pull request as ready for review May 9, 2025 10:39
@quic-swatia quic-swatia requested a review from quic-amitraj as a code owner May 9, 2025 10:39
Signed-off-by: Swati Allabadi <[email protected]>
quic-meetkuma (Contributor) left a comment:

Good work listing out the detailed steps for the custom dataset, Swati! Please check the comments. :)

To run fine-tuning for any user-specific dataset, prepare the dataset using the following steps:

1) Create a directory named 'dataset' inside efficient-transformers.
Contributor:

double space between "a" and "directory"

Contributor (Author):

Removed.


def tokenize():

Contributor:

Add a comment such as "Implement tokenization and prepare inputs for the training."

Contributor (Author):

Added.
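
For instance, expanding that stub (assuming tokenize() is nested inside get_custom_dataset() so the tokenizer and the prompt/summary fields are in scope, and that prompt tokens should not contribute to the loss):

```python
def tokenize(sample):
    # Implement tokenization and prepare inputs for the training: encode the
    # prompt and the answer separately so prompt tokens can be masked out.
    prompt_ids = tokenizer.encode(tokenizer.bos_token + sample["prompt"],
                                  add_special_tokens=False)
    answer_ids = tokenizer.encode(sample["summary"] + tokenizer.eos_token,
                                  add_special_tokens=False)
    input_ids = prompt_ids + answer_ids
    return {
        "input_ids": input_ids,
        "attention_mask": [1] * len(input_ids),
        # -100 is the ignore index of PyTorch's cross-entropy loss.
        "labels": [-100] * len(prompt_ids) + answer_ids,
    }
```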

# load dataset
# based on split, retrieve only the specific portion of the dataset (train or eval) either here or at the end

Contributor:

Add one more comment, such as "Define a prompt template".

quic-swatia (Contributor, Author) commented May 14, 2025:

It's already there. ( # define prompt)
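
To illustrate the two options from that comment (assuming Hugging Face datasets and a `split` value of "train" or "test"):

```python
import datasets

# Option 1: retrieve only the requested portion of the dataset at load time.
dataset = datasets.load_dataset("samsum", split=split)

# Option 2: load everything, preprocess, and index the split at the end.
dataset_dict = datasets.load_dataset("samsum")
dataset = dataset_dict[split]
```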


def apply_prompt_template():

Contributor:

Add a comment such as "Convert the raw input into the format defined by the template earlier."

Contributor (Author):

Added.
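
As an illustration, an alpaca-style version of that function (the field names are assumptions about the raw data; the instruction text is the standard Alpaca prompt):

```python
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


def apply_prompt_template(sample):
    # Convert the raw input into the format defined by the template above.
    return {
        "prompt": PROMPT.format(instruction=sample["instruction"]),
        "response": sample["output"],
    }
```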

5) Inside get_custom_dataset(), the dataset needs to be prepared for fine-tuning. So, the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
6) For examples, please refer to the Python files present in efficient-transformers/QEfficient/finetune/dataset. In the case of the Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for the custom_dataset class, pass the appropriate values for train_split and test_split according to the dataset keys corresponding to the train and test data points.
Contributor:

I think this is no longer needed after PR#289. We can directly pass --train_split and --test_split from the CLI.

Contributor (Author):

Yes, correct! Thanks for pointing this out. After PR #289, the user can do this both ways. Updated step #7 accordingly.
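
For example, a run overriding the splits from the CLI might look like this; the two flags are from PR #289, while the entry-point spelling is an assumption based on the path QEfficient/cloud/finetune.py:

```
python -m QEfficient.cloud.finetune --train_split train --test_split validation
```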

@@ -28,7 +28,7 @@ class train_config:
     use_fp16: bool = True
     use_autocast: bool = True
     val_batch_size: int = 1
-    dataset = "samsum_dataset"
+    dataset = "alpaca_dataset"
Contributor:

Good that you have added this change in this PR.

quic-meetkuma (Contributor) left a comment:

Looks good to me.

To run fine-tuning for any user-specific dataset, prepare the dataset using the following steps:

1) Create a directory named 'dataset' inside efficient-transformers.
Contributor:

Add the location: "at the root of the repo."

Contributor (Author):

Done.

2) Inside this directory, create a file named 'custom_dataset.py'. This is different from the custom_dataset.py present at efficient-transformers/QEfficient/finetune/dataset.
3) Inside the newly created efficient-transformers/dataset/custom_dataset.py, define a function named 'get_custom_dataset'.
4) get_custom_dataset() should have the following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through Qefficient/cloud/finetune.py with the name get_preprocessed_dataset.
Contributor:

QEfficient not Qefficient

Contributor (Author):

Done.
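
A sketch of those two calls, assuming finetune.py forwards the documented four parameters unchanged and reads the split names from the dataset config:

```python
# Inside QEfficient/cloud/finetune.py (illustrative, not the actual code):
train_dataset = get_preprocessed_dataset(
    dataset_config, tokenizer, split=dataset_config.train_split,
    context_length=context_length,
)
eval_dataset = get_preprocessed_dataset(
    dataset_config, tokenizer, split=dataset_config.test_split,
    context_length=context_length,
)
```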

4) get_custom_dataset() should have the following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through Qefficient/cloud/finetune.py with the name get_preprocessed_dataset.
5) Inside get_custom_dataset(), the dataset needs to be prepared for fine-tuning. So, the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
6) For examples, please refer to the Python files present in efficient-transformers/QEfficient/finetune/dataset. In the case of the Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
Contributor:

Since the default dataset has changed, we should mention alpaca here.

Contributor (Author):

The steps I have mentioned match the format of samsum_dataset.py; they don't match alpaca_dataset.py. Hence, I didn't change it.

Contributor:

Too verbose. Make these simple, pointed steps.

Signed-off-by: Swati Allabadi <[email protected]>
4) get_custom_dataset() should have the following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through Qefficient/cloud/finetune.py with the name get_preprocessed_dataset.
5) Inside get_custom_dataset(), the dataset needs to be prepared for fine-tuning. So, the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
6) For examples, please refer to the Python files present in efficient-transformers/QEfficient/finetune/dataset. In the case of the Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
Contributor:

Too verbose. Make these simple, pointed steps.

5) Inside get_custom_dataset(), the dataset needs to be prepared for fine-tuning. So, the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
6) For examples, please refer to the Python files present in efficient-transformers/QEfficient/finetune/dataset. In the case of the Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for the custom_dataset class, pass the appropriate values for train_split and test_split according to the dataset keys corresponding to the train and test data points. As an alternative, these values can also be passed as command-line arguments with the finetune command, for example "--train_split train".
Contributor:

Add hyperlinks to the relative paths annotated in the steps below

## Fine-Tuning on custom dataset
Contributor:

You should include details on how we use gradient accumulation, how the dataset is shuffled, and how activation checkpointing is enabled, in separate sections.
In the custom-dataset section, add a point that if any user wants to use these, they should refer to the xyz section.
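
For the suggested gradient-accumulation section, a generic sketch of the idea in plain PyTorch (not QEfficient's actual training loop; model, optimizer, and train_loader are assumed to exist):

```python
accum_steps = 4  # effective batch size = per-step batch size * accum_steps

optimizer.zero_grad()
for step, batch in enumerate(train_loader):
    # Scale the loss so the accumulated gradients average rather than sum.
    loss = model(**batch).loss / accum_steps
    loss.backward()  # gradients add up in .grad across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()       # one weight update per accumulation window
        optimizer.zero_grad()
```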
