Conversation

@mengniwang95 (Contributor) commented Sep 25, 2025:

```bash
# captions_source.tsv can be downloaded from
# https://github.com/mlcommons/inference/raw/refs/heads/master/text_to_image/coco2014/captions/captions_source.tsv
for model_name in "/mengni/flux/"
do
  # pass the looped model path explicitly (the original snippet left the loop variable unused)
  CUDA_VISIBLE_DEVICES="7" python3 -m auto_round \
    --model "$model_name" \
    --scheme MXFP4 \
    --format "llm_compressor" \
    --output_dir "/mengni/tmp_flux" \
    --prompt_file "captions_source.tsv" \
    --metrics "clip,clip-iqa,imagereward"
done
```

```diff
@@ -0,0 +1,3 @@
+diffusers
```

@wenhuach21 (Contributor) commented Sep 25, 2025:


Don't we need to add these requirements to our repo's requirements file? If not, we should provide an `auto-round[diffusion]` installation extra instead, right? Please check with suyue/xuhao.
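For reference, a minimal packaging sketch of what such an extra could look like (assuming a setuptools-based layout, which may not match the repo's actual packaging):

```python
# setup.py sketch: declares "diffusers" as an optional dependency so that
# `pip install auto-round[diffusion]` pulls it in, while a plain install does not.
from setuptools import setup, find_packages

setup(
    name="auto-round",
    packages=find_packages(),
    extras_require={
        "diffusion": ["diffusers"],  # installed only with the [diffusion] extra
    },
)
```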

A contributor replied:

If those deps are not strictly required by auto_round, we can raise an error when the user hits that code path and ask them to install the missing packages.
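In code, that suggestion maps to the common lazy-import pattern; a minimal sketch (`load_diffusion_pipeline` is a hypothetical helper, not auto_round's actual API):

```python
def load_diffusion_pipeline(model_name: str):
    """Import diffusers only when the diffusion code path is actually used."""
    try:
        from diffusers import DiffusionPipeline  # optional dependency
    except ImportError as e:
        raise ImportError(
            "This code path requires the 'diffusers' package. "
            "Install it with `pip install diffusers`."
        ) from e
    return DiffusionPipeline.from_pretrained(model_name)
```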

@chensuyue added this to the 0.8.0 milestone on Sep 26, 2025.
@wenhuach21 (Contributor) commented:
Please remind users that this is an experimental feature and has only been validated on a limited set of models, e.g. xxx.

@wenhuach21 (Contributor) commented Sep 26, 2025:

> --prompt_file "captions_source.tsv"
> --metrics "clip,clip-iqa,imagereward"

1. Is this for evaluation? It's hard for users to understand; please wrap it as tasks.
2. Please follow the original behavior: if tasks is not set, no evaluation is conducted.
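A minimal argparse sketch of the requested behavior (flag names and the `run_evaluation` helper are illustrative, not the repo's actual code):

```python
import argparse

parser = argparse.ArgumentParser(prog="auto-round")
parser.add_argument("--tasks", default=None,
                    help="comma-separated evaluation tasks, e.g. 'clip,clip-iqa,imagereward'")
parser.add_argument("--prompt_file", default=None,
                    help="TSV file of prompts consumed by the evaluation tasks")
args = parser.parse_args()

# original behavior preserved: no --tasks means no evaluation at all
if args.tasks is not None:
    metrics = [t.strip() for t in args.tasks.split(",")]
    # run_evaluation(metrics, args.prompt_file)  # hypothetical evaluation entry point
```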

@n1ck-guo @yiliu30 Since there are so many args in main.py, we need a better way to organize `auto-round -h`.
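One possible direction (a sketch, not tied to the current main.py): argparse argument groups cluster related options under headings in the `-h` output.

```python
import argparse

parser = argparse.ArgumentParser(prog="auto-round")

quant = parser.add_argument_group("quantization options")
quant.add_argument("--scheme", default="W4A16")
quant.add_argument("--format", default="auto_round")

evaluation = parser.add_argument_group("evaluation options")
evaluation.add_argument("--tasks", default=None)
evaluation.add_argument("--prompt_file", default=None)

parser.print_help()  # options now appear under their group headings
```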

@mengniwang95 (Contributor, Author) replied:
> 1. Is this for evaluation? It's hard for users to understand; please wrap it as tasks.
> 2. Please follow the original behavior: if tasks is not set, no evaluation is conducted.
>
> @n1ck-guo @yiliu30 Since there are so many args in main.py, we need a better way to organize `auto-round -h`.

--prompt_file and --metrics are for evaluation. It is hard to wrap prompt_file into a task because there is no standard dataset on Hugging Face; other repos usually ask users to prepare the dataset themselves, e.g. https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/examples/diffusers/README.md#data-format

@wenhuach21 self-requested a review on Sep 30, 2025.