With the default LoRA config, only the base model's weights are saved to the weights directory. The config implies that with `save_adapter_separately=False` the adapter weights are merged into the full model before saving, but that merge never happens: the adapter parameters are simply stripped from the state dict in `clean_lora_state_dict`, so the LoRA update is lost.
save_adapter_separately: Annotated[
    bool,
    Field(
        description="Whether to save LoRA adapters separately before merging into full model weights.",
    ),
] = False
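To illustrate the difference between the two behaviors, here is a minimal sketch (using NumPy arrays as stand-ins for tensors; the key names `lora_A`/`lora_B` and the `scaling` parameter are assumptions, not the project's actual naming): the current code drops adapter keys, while the documented behavior would fold `W' = W + scaling * (B @ A)` into the base weight before dropping them.

```python
import numpy as np


def clean_lora_state_dict(state_dict):
    # Current behavior: adapter keys are simply dropped, so the
    # learned LoRA update is discarded entirely.
    return {k: v for k, v in state_dict.items() if "lora_" not in k}


def merge_lora_state_dict(state_dict, scaling=1.0):
    # Behavior implied by save_adapter_separately=False: fold the
    # low-rank update W' = W + scaling * (B @ A) into the base weight,
    # then remove the adapter keys. Key names here are hypothetical.
    merged = dict(state_dict)
    for key in list(merged):
        if key.endswith("lora_A"):
            base_key = key.replace(".lora_A", ".weight")
            b_key = key.replace("lora_A", "lora_B")
            merged[base_key] = merged[base_key] + scaling * (merged[b_key] @ merged[key])
            del merged[key], merged[b_key]
    return merged
```

With a toy state dict containing `l.weight`, `l.lora_A`, and `l.lora_B`, `clean_lora_state_dict` returns an unchanged base weight, whereas `merge_lora_state_dict` returns a base weight that includes the adapter contribution.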