v0.17.0: PyTorch 2.0 support, Process Control Enhancements, TPU pod support and FP8 mixed precision training

@sgugger released this 09 Mar 18:22 · 887 commits to main since this release · 1a63f7d

PyTorch 2.0 support

This release fully supports the upcoming PyTorch 2.0 release. You can choose whether or not to use `torch.compile`, and customize its options either through `accelerate config` or via a `TorchDynamoPlugin`.

Process Control Enhancements

This release adds a new `PartialState`, which contains most of the capabilities of the `AcceleratorState` but is designed to be used directly by the user for process control. With it, users also no longer need to guard code with `if accelerator.state.is_main_process` when utilizing classes such as the Tracking API, as these now run on the main process only by default.

  • Refactor process executors to be in AcceleratorState by @muellerzr in #1039

TPU Pod Support (Experimental)

Launching from TPU pods is now supported; please see this issue for more information.

FP8 mixed precision training (Experimental)

This release adds experimental support for FP8 mixed precision training, which requires the transformer-engine library as well as a Hopper GPU (or higher).
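For illustration, enabling FP8 might look like the following fragment of an `accelerate config`-generated file (the surrounding field names are assumptions drawn from a typical multi-GPU config; only `mixed_precision: fp8` is the new option described above):

```yaml
# ~/.cache/huggingface/accelerate/default_config.yaml (illustrative fragment)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
mixed_precision: fp8   # experimental; requires transformer-engine and a Hopper-class GPU
num_processes: 8
```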

What's new?

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @Yard1
    • Refactor launch for greater extensibility (#1123)