
Conversation

@keehyuna (Collaborator) commented Aug 1, 2025

Description

Support pre-quantized HF models and a post-training quantization (PTQ) option for run_llm.py.

Fixes # (issue)

Type of change

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@meta-cla meta-cla bot added the cla signed label Aug 1, 2025
@keehyuna keehyuna self-assigned this Aug 6, 2025
@keehyuna keehyuna changed the title fp8 pre-quantized model support Pre-quantized model support Aug 7, 2025
@keehyuna keehyuna changed the title Pre-quantized model support Feat: Pre-quantized LLM model support Aug 7, 2025
@keehyuna keehyuna marked this pull request as ready for review August 7, 2025 12:39
@keehyuna keehyuna requested review from narendasan and peri044 and removed request for narendasan August 8, 2025 06:44
return model


class TensorRTQuantizedLinear(torch.nn.Module):
Collaborator

@peri044 Is this something we might want to upstream to ModelOpt in the future?

Collaborator

Or pull into main torch-tensorrt as a pass?

Collaborator

I guess it's somewhat HF-specific, so remaining in this tool would make sense, but are there some parts we could make generic for any sort of quantization workflow (e.g. torchao)?
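For context on what "generic" could look like, here is a minimal sketch of a framework-agnostic FP8 fake-quant linear wrapper in plain PyTorch. The class name, scaling scheme, and amax handling are illustrative assumptions, not the PR's actual TensorRTQuantizedLinear implementation.

```python
from typing import Optional

import torch


class FakeQuantLinear(torch.nn.Module):
    """Illustrative wrapper: fake-quantizes the weight (and optionally the input)
    to FP8 E4M3. Sketch only; ignores per-block scaling, NVFP4, etc."""

    FP8_E4M3_MAX = 448.0

    def __init__(self, linear: torch.nn.Linear, weight_amax: float, input_amax: Optional[float] = None):
        super().__init__()
        self.linear = linear
        self.register_buffer("weight_scale", torch.tensor(weight_amax / self.FP8_E4M3_MAX))
        input_scale = None if input_amax is None else torch.tensor(input_amax / self.FP8_E4M3_MAX)
        self.register_buffer("input_scale", input_scale)

    @staticmethod
    def _fake_quant_fp8(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        # Quantize to FP8 E4M3 and immediately dequantize back to the input dtype.
        q = (x / scale).to(torch.float8_e4m3fn)
        return q.to(x.dtype) * scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self._fake_quant_fp8(self.linear.weight, self.weight_scale.to(x.dtype))
        if self.input_scale is not None:
            x = self._fake_quant_fp8(x, self.input_scale.to(x.dtype))
        return torch.nn.functional.linear(x, w, self.linear.bias)
```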

Collaborator Author

Thanks. I think quantize_model() can be moved to a function like torch_tensorrt.dynamo.quantize(). I'm currently investigating how to separate the calibration data path from the quantization logic.
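A rough sketch of what such a factored-out entry point could look like, assuming ModelOpt's `mtq.quantize(model, config, forward_loop)` API; the function name `quantize` and the calibration-dataloader argument are hypothetical, not part of the PR:

```python
import modelopt.torch.quantization as mtq
import torch


def quantize(model: torch.nn.Module, quant_cfg: dict, calib_dataloader=None) -> torch.nn.Module:
    """Hypothetical standalone PTQ helper: the calibration data path is passed in,
    keeping the quantization logic independent of any specific dataset."""

    def forward_loop(m: torch.nn.Module) -> None:
        # Run calibration batches through the model so ModelOpt can collect amax stats.
        if calib_dataloader is None:
            return
        with torch.no_grad():
            for batch in calib_dataloader:
                m(batch)

    return mtq.quantize(model, quant_cfg, forward_loop)


# Usage (illustrative): quantized = quantize(model, mtq.FP8_DEFAULT_CFG, calib_loader)
```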


hf_quant_algo = hf_quant_config.pop("quant_algo", None)
if hf_quant_algo != "FP8" and hf_quant_algo != "NVFP4":
raise RuntimeError("Only FP8 or NVFP4 quantization is supported")
Collaborator

How would it be different for MXFP4?

Collaborator Author

I looked at the quantization configs in ModelOpt:

NVFP4_DEFAULT_CFG: NVFP4 has E4M3 scales and a block size of 16.

MXFP4_DEFAULT_CFG: MXFP4 has E8M0 scales and a block size of 32.
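To make the difference concrete, one could inspect the two default configs directly. This is illustrative only; the exact dict keys and the availability of MXFP4_DEFAULT_CFG depend on the installed ModelOpt version:

```python
import modelopt.torch.quantization as mtq

# Compare the weight-quantizer settings of the two configs; the key names are
# assumed from ModelOpt's config dict layout and may differ across versions.
for name in ("NVFP4_DEFAULT_CFG", "MXFP4_DEFAULT_CFG"):
    cfg = getattr(mtq, name, None)
    if cfg is None:
        print(f"{name}: not available in this ModelOpt version")
        continue
    weight_quantizer = cfg["quant_cfg"]["*weight_quantizer"]
    print(name, weight_quantizer.get("num_bits"), weight_quantizer.get("block_sizes"))
```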

@github-actions github-actions bot added component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Sep 4, 2025
@lanluo-nvidia (Collaborator)

ModelOpt has changed its code structure in 0.35.0;
please make the same changes as in 9c520f8.

input_amax = tensors.pop(input_scale_name) * 448.0  # FP8 E4M3 max is 448.0, so amax = scale * 448.0

# Dequantize the weight using the scale factor
dequantized_weight_data = module.weight.to(torch.float32) * weight_scale
Collaborator

Should we check if the precision is fp16 and use .to(torch.float16), otherwise float32?

Collaborator Author

Thanks, that makes sense. I've updated it to use the same model precision.
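A minimal sketch of the dtype-aware version discussed here (not the PR's exact code); the model's compute dtype is assumed to be supplied by the caller rather than read from the quantized weight itself:

```python
import torch


def dequantize_weight(
    quantized_weight: torch.Tensor,
    weight_scale: torch.Tensor,
    model_dtype: torch.dtype = torch.float32,
) -> torch.Tensor:
    # Cast the quantized weight to the model's compute dtype (e.g. torch.float16)
    # before rescaling, instead of hard-coding float32.
    return quantized_weight.to(model_dtype) * weight_scale.to(model_dtype)
```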

@peri044 (Collaborator) left a comment

Functionality looks good to me. Posted some comments on code restructuring

hf_quant_config = load_quantization_config(args.model)
if hf_quant_config:
model = convert_linear_to_tensorrt_quantized(model, hf_quant_config).cuda()
print(f"Model converted to TensorRT quantized")
Collaborator

Consider changing this to a more informative message
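For example (illustrative only, assuming the quantization algorithm name is still available in `hf_quant_config` at this point):

```python
print(
    f"Converted linear layers to TensorRT-quantized modules "
    f"({hf_quant_config.get('quant_algo', 'unknown')} quantization) for {args.model}"
)
```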

@github-actions github-actions bot removed component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Sep 20, 2025
@peri044 (Collaborator) left a comment

LGTM pending CI failures
