
fairscale

fairscale is a PyTorch extension library for high-performance, large-scale training.

fairscale supports:

  • pipeline parallelism (fairscale.nn.Pipe)
  • optimizer state sharding (fairscale.optim.oss)

Examples

Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

import torch
import fairscale

# Four example layers; any sequence of nn.Modules works.
a, b, c, d = [torch.nn.Linear(10, 10) for _ in range(4)]

model = torch.nn.Sequential(a, b, c, d)
# balance=[2, 2] places two layers on each device; chunks=8 splits each
# mini-batch into 8 micro-batches that flow through the pipeline.
model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)
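
A forward pass then works like any other module call. A minimal sketch, assuming the model above: the input must live on the first device, and the output is produced on the last.

x = torch.randn(32, 10, device="cuda:0")
y = model(x)  # pipelined across both GPUs; y lives on cuda:1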
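
Optimizer state sharding can be used much like a regular PyTorch optimizer. A minimal sketch, assuming a torch.distributed process group has already been initialized (OSS shards state across its ranks) and using a hypothetical one-layer model for illustration:

import torch
from fairscale.optim.oss import OSS

model = torch.nn.Linear(10, 10)
# Wrap a standard optimizer class; each rank keeps only its shard of the
# optimizer state instead of a full replica.
optimizer = OSS(params=model.parameters(), optim=torch.optim.SGD, lr=0.01)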

Requirements

  • PyTorch >= 1.4

Installation

Normal installation:

pip install .

Development mode:

pip install -e .

Contributors

See the CONTRIBUTING file for how to help out.

License

fairscale is licensed under the BSD-3-Clause License.