
Commit 4f98b14

Docs / PEFT: Add PEFT API documentation (#31078)
* add peft references
* add peft references
* Update docs/source/en/peft.md
* Update docs/source/en/peft.md
1 parent 779bc36 commit 4f98b14

File tree

1 file changed: +15 −0 lines changed


docs/source/en/peft.md

Lines changed: 15 additions & 0 deletions
@@ -81,6 +81,8 @@ model = AutoModelForCausalLM.from_pretrained(model_id)
 model.load_adapter(peft_model_id)
 ```
 
+Check out the [API documentation](#transformers.integrations.PeftAdapterMixin) section below for more details.
+
 ## Load in 8bit or 4bit
 
 The `bitsandbytes` integration supports 8bit and 4bit precision data types, which are useful for loading large models because it saves memory (see the `bitsandbytes` integration [guide](./quantization#bitsandbytes-integration) to learn more). Add the `load_in_8bit` or `load_in_4bit` parameters to [`~PreTrainedModel.from_pretrained`] and set `device_map="auto"` to effectively distribute the model to your hardware:
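For reference, the quantized-loading pattern described in the context paragraph above looks roughly like the sketch below; the base model and adapter ids are placeholders rather than values taken from this commit, and `bitsandbytes` must be installed:

```python
from transformers import AutoModelForCausalLM

# Placeholder ids -- substitute your own base model and PEFT adapter.
model_id = "facebook/opt-350m"
peft_model_id = "ybelkada/opt-350m-lora"

# load_in_8bit quantizes the weights at load time; device_map="auto" spreads
# the layers across the available devices so large models fit in memory.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",
)

# Attach the PEFT adapter on top of the quantized base model.
model.load_adapter(peft_model_id)
```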
@@ -227,6 +229,19 @@ lora_config = LoraConfig(
 model.add_adapter(lora_config)
 ```
 
+## API docs
+
+[[autodoc]] integrations.PeftAdapterMixin
+    - load_adapter
+    - add_adapter
+    - set_adapter
+    - disable_adapters
+    - enable_adapters
+    - active_adapters
+    - get_adapter_state_dict
+
+
+
 
 <!--
 TODO: (@younesbelkada @stevhliu)
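To illustrate how the `PeftAdapterMixin` methods listed in the new `[[autodoc]]` block fit together, here is a rough sketch; the model id and LoRA settings are illustrative placeholders, not values from this commit:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder id

# Attach a freshly initialized LoRA adapter to the base model.
lora_config = LoraConfig(target_modules=["q_proj", "k_proj"], init_lora_weights=False)
model.add_adapter(lora_config, adapter_name="adapter_1")

model.set_adapter("adapter_1")   # make it the active adapter
print(model.active_adapters())   # ["adapter_1"]

model.disable_adapters()         # run the plain base model
model.enable_adapters()          # re-enable the attached adapter(s)

# Adapter-only weights, e.g. for saving or inspection.
adapter_state = model.get_adapter_state_dict()
```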
