[AMD][Build] Porting dockerfiles from the ROCm/vllm fork #11777
nice
This pull request has merge conflicts that must be resolved before it can be merged.
@SageMoore - could you take a quick look through this?
Looks reasonable to me. I assume the configs are copied over from the rocm/vllm-dev container and are expected to be faster than what we have on main? Do you have any performance results that you can share?
ARG FA_BRANCH
ARG FA_REPO
RUN git clone ${PYTORCH_REPO} pytorch
RUN cd pytorch && git checkout ${PYTORCH_BRANCH} && \
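For context, a minimal sketch of how the FA_REPO/FA_BRANCH args might be consumed further down the base-image Dockerfile, mirroring the PyTorch clone/checkout pattern in the excerpt above; the build flags, GPU_ARCHS value, and wheel output directory below are assumptions for illustration, not the literal contents of this PR:

# Build flash-attention from the requested repo/branch (assumed commands, not taken from the PR)
RUN git clone ${FA_REPO} flash-attention
RUN cd flash-attention && git checkout ${FA_BRANCH} && \
    git submodule update --init && \
    GPU_ARCHS="gfx90a;gfx942" python setup.py bdist_wheel --dist-dir=/install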
Does this version of pytorch support ROCm 6.3? If not, would it make more sense to pull in the 6.2 wheel from PyPI instead of building from source?
This is the same base image, and therefore the same combination of versions, used in the AMD weekly dockers - rocm/vllm-dev:main (built out of ROCm/vllm.git).
The 6.2 wheel is an all-in-one whl that brings with it an entire ROCm's worth of .so's, including the older, less performant hipblaslt.
Not sure which configs you mean. The change to the triton tuning configs in this PR is due to the triton version change: Triton 3.2+ deprecated num_stages:0 in favor of num_stages:2.
It looks like the CI isn't able to find the container?
All GPU nodes of the AMD CI are currently down due to a network issue in the compute cluster used there.
To unbreak the CI we need vllm-project/buildkite-ci#57
An attempt to unify the build process with how it is done in ROCm/vllm.
The build process is split into a base image containing the prebuilt libraries and the vLLM build on top of it.
In addition to not rebuilding the libraries each time, the new base image is much smaller (7GB on Docker Hub vs. 18GB previously), which makes the build process much faster (~10 minutes).
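As a hedged sketch of the intended workflow (the file names Dockerfile.rocm_base / Dockerfile.rocm and the BASE_IMAGE build arg reflect my reading of the PR and are assumptions here, as is the registry/tag naming): the heavy base image is built infrequently and published, while the day-to-day vLLM image is built on top of it.

# Build the base image with the prebuilt ROCm PyTorch / flash-attention stack (slow, done rarely)
docker build -f Dockerfile.rocm_base -t my-registry/vllm-rocm-base:latest .
# Build vLLM itself on top of the prebuilt base (fast path, ~10 minutes per the description above)
docker build -f Dockerfile.rocm --build-arg BASE_IMAGE=my-registry/vllm-rocm-base:latest -t vllm-rocm .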