
[RFC][TorchElastic] topology info in training apps/ranks #57

Open. kurman wants to merge 3 commits into master.
Conversation

@kurman commented on Sep 20, 2023

Proposal to provide topology information to training apps/ranks that can be implemented as part of TorchElastic.

@d4l3k (Member) left a comment:

Do we have any details on what will consume WORLD_TOPOLOGY_DETAILS? Also, can you share any details on how we'll compute the graph? Integrating with things like IB/NVLink seems pretty cluster-specific -- can we autodetect that topology in all cases?

Also wondering about things like AWS with spine topologies etc., which impact distributed performance pretty significantly.

Profiling bandwidth/latency also seems tricky when there are many nodes, so it would be nice to see some details on that.

RFC-0033-TorchElastic-TopologyInfo.md:

>     "measurement": "GB/s"
>   },
>   "channels": {
>     "value": "4"

Member comment:

Is this the number of connections or lanes? Multi-NIC vs the number of NVLink lanes?

RFC-0033-TorchElastic-TopologyInfo.md:

> - `RANK` - unique rank of a worker (0…WORLD_SIZE-1)
> - `LOCAL_RANK` - unique rank of a worker on a node, typically used to exclusively assign accelerators on the host.
>
> The newly proposed `WORLD_TOPOLOGY_FILE` environment variable will reference a local filesystem file that provides information about the underlying topology.

Comment:

Would this be required? If not, what would the default topology be? If so, what is the proposal for backwards compatibility?
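For orientation only, a minimal sketch of how a trainer process might consume the proposed variable alongside the existing ones. It assumes the file contains JSON and treats the variable as optional; both points are assumptions, and the second is exactly the open question above.

```python
# Illustrative sketch, not part of the proposal. Assumes WORLD_TOPOLOGY_FILE
# points at a JSON file and may be unset; RANK/LOCAL_RANK are set by
# TorchElastic/torchrun as they are today.
import json
import os


def load_topology():
    """Return the parsed topology file, or None if the variable is unset."""
    path = os.environ.get("WORLD_TOPOLOGY_FILE")
    if path is None:
        return None  # no topology info: fall back to current behavior
    with open(path) as f:
        return json.load(f)


rank = int(os.environ["RANK"])              # global rank, 0..WORLD_SIZE-1
local_rank = int(os.environ["LOCAL_RANK"])  # per-node rank, used to pin accelerators
topology = load_topology()
```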

RFC-0033-TorchElastic-TopologyInfo.md:

> - Most of this can be easily detected at runtime by the trainer code
> - More fine-grained details based on communication pattern (p2p vs collectives)
>
> ### **Format of the topology information file**

Comment:

Could we define a TopologyInfo schema (e.g. dataclass or protobuf) and have a Reader API that can be extended to read from various sources? The default implementation could read from a simple json/yaml file, but I can imagine folks running on the cloud wanting to read from a database or directly from cloud storage like S3 or (to your point below) auto-discovered and dynamically generated.
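A hedged sketch of the schema-plus-reader split suggested here; all class and field names are hypothetical, not anything the RFC or TorchElastic currently defines.

```python
# Hypothetical names throughout: a sketch of a TopologyInfo schema plus a
# pluggable reader API, with a local JSON file as the default backend.
import json
from dataclasses import dataclass, field
from typing import Dict, List, Protocol


@dataclass
class Link:
    src: str
    dst: str
    bandwidth_gbps: float
    channels: int


@dataclass
class TopologyInfo:
    nodes: List[str]
    links: List[Link]
    attributes: Dict[str, str] = field(default_factory=dict)


class TopologyReader(Protocol):
    def read(self) -> TopologyInfo: ...


class JsonFileTopologyReader:
    """Default implementation: parses a local JSON file into the schema."""

    def __init__(self, path: str) -> None:
        self._path = path

    def read(self) -> TopologyInfo:
        with open(self._path) as f:
            raw = json.load(f)
        return TopologyInfo(
            nodes=raw["nodes"],
            links=[Link(**link) for link in raw["links"]],
            attributes=raw.get("attributes", {}),
        )
```

An S3-, database-, or auto-discovery-backed reader would then just be another implementation of the same protocol.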

@kurman (Author) replied:

+1 on the API, adding that to the proposal.

On additional datasources: we need a single point of discovery for this data at the application level, so it has to be controlled via the underlying infra setup. We can extend to other datasources, but I believe the factory mechanism should be encapsulated.
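Continuing the sketch above, one possible shape for that encapsulated factory: the application asks a single entry point for a reader, and the infra setup decides which backend is registered and selected. The registry keys and the selection variable are assumptions.

```python
# Hypothetical factory/registry built on the TopologyReader sketch above.
# The application calls get_topology_reader(); backend choice stays with infra.
import os
from typing import Callable, Dict

_READERS: Dict[str, Callable[[], "TopologyReader"]] = {}


def register_reader(name: str, factory: Callable[[], "TopologyReader"]) -> None:
    _READERS[name] = factory


def get_topology_reader() -> "TopologyReader":
    # Backend selection via an (assumed) env var; "file" is the JSON-file default.
    name = os.environ.get("TOPOLOGY_READER", "file")
    return _READERS[name]()


register_reader(
    "file", lambda: JsonFileTopologyReader(os.environ["WORLD_TOPOLOGY_FILE"])
)
```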

@facebook-github-bot (Contributor) commented:
Hi @kurman!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
