fix(deps): update dependency accelerate to v1 #80


Open · wants to merge 1 commit into main

Conversation

dreadnode-renovate-bot[bot]
Contributor

@dreadnode-renovate-bot dreadnode-renovate-bot bot commented Apr 24, 2025

This PR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| accelerate | extras | major | `^0.30.1` -> `^1.0.0` |

Generated Summary

  • Updated accelerate dependency version from ^0.30.1 to ^1.0.0.
  • This change potentially improves compatibility and access to newer features.
  • Ensures alignment with the latest updates in the accelerate library.

This summary was generated with ❤️ by rigging



Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

huggingface/accelerate (accelerate)

v1.6.0: FSDPv2, DeepSpeed TP and XCCL backend support

Compare Source

FSDPv2 support

This release introduces support for FSDPv2, thanks to @S1ro1.

If you are using Python code, set fsdp_version=2 in FullyShardedDataParallelPlugin:

from accelerate import FullyShardedDataParallelPlugin, Accelerator

fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    # other options...
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

If you want to convert a YAML config that contains an FSDPv1 config to an FSDPv2 one, use our conversion tool:

accelerate to-fsdp2 --config_file config.yaml --output_file new_config.yaml

To learn more about the difference between FSDPv1 and FSDPv2, read the following documentation.

DeepSpeed TP support

We have added initial support for DeepSpeed + TP. Not many changes were required, as the DeepSpeed APIs were already compatible; we only needed to make sure that the dataloader was compatible with TP and that we were able to save the TP weights. Thanks @inkcherry for the work! https://github.com/huggingface/accelerate/pull/3390

To use TP with DeepSpeed, update the DeepSpeed config file to include a tensor_parallel key:

    ...
    "tensor_parallel": {
      "autotp_size": ${autotp_size}
    },
    ...

More details in this deepspeed PR.
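Purely as an illustration (not taken from this release note), the same fragment could be supplied from Python by passing a DeepSpeed config dict to DeepSpeedPlugin via hf_ds_config. The concrete autotp_size value and the other config fields below are placeholder assumptions; this presumes deepspeed is installed and the script is started with accelerate launch:

from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# Hypothetical DeepSpeed config: the tensor_parallel/autotp_size key mirrors the
# fragment above; the remaining fields are placeholders to make the sketch complete.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "zero_optimization": {"stage": 1},
    "tensor_parallel": {"autotp_size": 4},
}

deepspeed_plugin = DeepSpeedPlugin(hf_ds_config=ds_config)
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)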

Support for XCCL distributed backend

We've added support for XCCL, an Intel distributed backend that can be used with XPU devices. More details in this torch PR. Thanks @dvrogozh for the integration!
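As a rough, hedged sketch only (Accelerate normally sets this up for you when launched with accelerate launch), low-level initialization of the XCCL backend might look like the following; the "xccl" backend string and the torch.xpu calls assume a recent PyTorch build with XPU support:

import os
import torch
import torch.distributed as dist

# Sketch: assumes a torchrun/accelerate launch so LOCAL_RANK is set, and a
# PyTorch build where torch.xpu and the "xccl" backend are available.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.xpu.set_device(local_rank)
dist.init_process_group(backend="xccl")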

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v1.5.2...v1.6.0

v1.5.2: Patch release

Compare Source

Bug Fixes:

  • Fixed an issue where torch.get_default_device() required a newer torch version than we support
  • Fixed a broken pytest import in prod

Full Changelog: huggingface/accelerate@v1.5.0...v1.5.2

v1.5.1

Compare Source

v1.5.0: HPU support

Compare Source

HPU Support

  • Adds HPU accelerator support for 🤗 Accelerate

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v1.4.0...v1.5.0

v1.4.0: torchao FP8, TP & DataLoader support, memory leak fix

Compare Source

torchao FP8, initial Tensor Parallel support, and memory leak fixes

torchao FP8

This release introduces a new FP8 API and brings in a new backend: torchao. To use it, pass AORecipeKwargs to the Accelerator while setting mixed_precision="fp8". This is initial support; as it matures, we will incorporate more into it (such as accelerate config/YAML support) in future releases. See our benchmark examples here.
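A minimal sketch of that call pattern, assuming AORecipeKwargs is importable from accelerate.utils in the installed version and that torchao plus FP8-capable hardware are available; the model, optimizer, and dataloader below are placeholders:

import torch
from accelerate import Accelerator
from accelerate.utils import AORecipeKwargs

# Placeholder model, optimizer, and dataloader purely for illustration.
model = torch.nn.Linear(128, 128)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataloader = torch.utils.data.DataLoader(torch.randn(32, 128), batch_size=8)

# torchao FP8: select the fp8 mixed-precision mode and hand in the AO recipe kwargs.
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[AORecipeKwargs()])
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)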

TensorParallel

We have initial support for an in-house solution to TP when working with accelerate dataloaders. Check out the PR here.

Bug fixes

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v1.3.0...v1.4.0

v1.3.0: Bug fixes + Require torch 2.0

Compare Source

Torch 2.0

As it's been ~2 years since torch 2.0 was first released, we are now requiring it as the minimum version for Accelerate, mirroring what transformers did as of its most recent release.

Core

Big Modeling

Examples

Full Changelog

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v1.2.1...v1.3.0

v1.2.1: Patchfix

Compare Source

Full Changelog: huggingface/accelerate@v1.2.0...v1.2.1

v1.2.0: Bug Squashing & Fixes across the board

Compare Source

Core

Big Modeling

DeepSpeed

Documentation

New Contributors

Full Changelog

Code Diff

Release diff: huggingface/accelerate@v1.1.1...v1.2.0

v1.1.1

Compare Source

v1.1.0: Python 3.9 minimum, torch dynamo deepspeed support, and bug fixes

Compare Source

Internals:

DeepSpeed

Megatron

Big Model Inference

Examples

Full Changelog

New Contributors

Full Changelog: huggingface/accelerate@v1.0.1...v1.1.0

v1.0.1: Bugfix

Compare Source

Bugfixes

  • Fixes an issue where the auto values were no longer being parsed when using deepspeed
  • Fixes a broken test in the deepspeed tests related to the auto values

Full Changelog: huggingface/accelerate@v1.0.0...v1.0.1

v1.0.0: Accelerate 1.0.0 is here!

Compare Source

🚀 Accelerate 1.0 🚀

With accelerate 1.0, we are officially declaring the core parts of the API "stable" and ready for the future of distributed training and PyTorch. These release notes focus first on the major breaking changes, to get your code fixed, followed by what is new specifically between 0.34.0 and 1.0.

To read more, check out our official blog here

Migration assistance

  • Passing dispatch_batches, split_batches, even_batches, and use_seedable_sampler to the Accelerator() should now be handled by creating an accelerate.utils.DataLoaderConfiguration() and passing it to the Accelerator() instead (Accelerator(dataloader_config=DataLoaderConfiguration(...))); see the sketch after this list
  • Accelerator().use_fp16 and AcceleratorState().use_fp16 have been removed; this should be replaced by checking accelerator.mixed_precision == "fp16"
  • Accelerator().autocast() no longer accepts a cache_enabled argument. Instead, an AutocastKwargs() instance should be used, which handles this flag (among others) and is passed to the Accelerator (Accelerator(kwargs_handlers=[AutocastKwargs(cache_enabled=True)]))
  • accelerate.utils.is_tpu_available should be replaced with accelerate.utils.is_torch_xla_available
  • accelerate.utils.modeling.shard_checkpoint should be replaced with split_torch_state_dict_into_shards from the huggingface_hub library
  • accelerate.tqdm.tqdm() no longer accepts True/False as the first argument, and instead, main_process_only should be passed in as a named argument
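As referenced in the first item above, here is a minimal migration sketch for the dataloader flags and the use_fp16 check; the flag values shown are just the library defaults, so adjust them to match whatever you previously passed:

from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

# Before: Accelerator(dispatch_batches=None, split_batches=False, ...)
# After: bundle the same flags into a DataLoaderConfiguration.
dataloader_config = DataLoaderConfiguration(
    dispatch_batches=None,
    split_batches=False,
    even_batches=True,
    use_seedable_sampler=False,
)
accelerator = Accelerator(dataloader_config=dataloader_config)

# Before: accelerator.use_fp16
# After: check the mixed-precision mode directly.
is_fp16 = accelerator.mixed_precision == "fp16"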

Multiple Model DeepSpeed Support

After many requests, we finally have multiple-model DeepSpeed support in Accelerate (though it is still quite early). Read the full tutorial here; in essence:

When using multiple models, a DeepSpeed plugin should be created for each model (and, as a result, a separate config). A few examples are below:

Knowledge distillation

(Where we train only one model, the student, under ZeRO-2, and use the other, the teacher, for inference under ZeRO-3)

from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

zero2_plugin = DeepSpeedPlugin(hf_ds_config="zero2_config.json")
zero3_plugin = DeepSpeedPlugin(hf_ds_config="zero3_config.json")

deepspeed_plugins = {"student": zero2_plugin, "teacher": zero3_plugin}

accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)

To then select which plugin should be used at a given time (i.e., when calling prepare), we call accelerator.state.select_deepspeed_plugin("name"); the first plugin is active by default:

accelerator.state.select_deepspeed_plugin("student")
student_model, optimizer, scheduler = ...
student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader)

accelerator.state.select_deepspeed_plugin("teacher") # This will automatically enable zero init
teacher_model = AutoModel.from_pretrained(...)
teacher_model = accelerator.prepare(teacher_model)

Multiple disjoint models

For disjoint models, a separate Accelerator should be used for each model, and each model's .backward() should be called through its own accelerator:

for batch in dl:
    outputs1 = first_model(**batch)
    first_accelerator.backward(outputs1.loss)
    first_optimizer.step()
    first_scheduler.step()
    first_optimizer.zero_grad()
    
    outputs2 = second_model(**batch)
    second_accelerator.backward(outputs2.loss)
    second_optimizer.step()
    second_scheduler.step()
    second_optimizer.zero_grad()
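For completeness, a minimal sketch of the setup the loop above assumes; it presumes first_model, second_model, their optimizers and schedulers, and the dataloader dl already exist, and that any DeepSpeed wiring follows the plugin pattern shown earlier:

from accelerate import Accelerator

# One Accelerator per disjoint model (sketch; assumes first_model, second_model,
# their optimizers/schedulers, and dl are defined elsewhere in your script).
first_accelerator = Accelerator()
second_accelerator = Accelerator()

first_model, first_optimizer, first_scheduler, dl = first_accelerator.prepare(
    first_model, first_optimizer, first_scheduler, dl
)
second_model, second_optimizer, second_scheduler = second_accelerator.prepare(
    second_model, second_optimizer, second_scheduler
)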

FP8

We've enable


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

| datasource | package    | from   | to    |
| ---------- | ---------- | ------ | ----- |
| pypi       | accelerate | 0.30.1 | 1.6.0 |
@dreadnode-renovate-bot dreadnode-renovate-bot bot requested a review from a team as a code owner April 24, 2025 17:09
@dreadnode-renovate-bot dreadnode-renovate-bot bot added the type/digest (Dependency digest updates) and area/python (Changes to Python package configuration and dependencies) labels Apr 24, 2025
@dreadnode-renovate-bot
Contributor Author

Edited/Blocked Notification

Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.

You can manually request rebase by checking the rebase/retry box above.

⚠️ Warning: custom changes will be lost.
