Update headers and remove unnecessary code directives
Signed-off-by: DarkLight1337 <[email protected]>
DarkLight1337 committed Jan 6, 2025
1 parent 8866836 commit 299961f
Showing 6 changed files with 20 additions and 20 deletions.
4 changes: 2 additions & 2 deletions docs/source/deployment/frameworks/skypilot.md
@@ -12,9 +12,9 @@ vLLM can be **run and scaled to multiple service replicas on clouds and Kubernet

## Prerequisites

-- Go to the [HuggingFace model page](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and request access to the model {code}`meta-llama/Meta-Llama-3-8B-Instruct`.
+- Go to the [HuggingFace model page](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and request access to the model `meta-llama/Meta-Llama-3-8B-Instruct`.
- Check that you have installed SkyPilot ([docs](https://skypilot.readthedocs.io/en/latest/getting-started/installation.html)).
-- Check that {code}`sky check` shows clouds or Kubernetes are enabled.
+- Check that `sky check` shows clouds or Kubernetes are enabled.

```console
pip install skypilot-nightly
2 changes: 1 addition & 1 deletion docs/source/getting_started/installation/gpu-rocm.md
@@ -148,7 +148,7 @@ $ export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
$ python3 setup.py develop
```

-This may take 5-10 minutes. Currently, {code}`pip install .` does not work for ROCm installation.
+This may take 5-10 minutes. Currently, `pip install .` does not work for ROCm installation.

```{tip}
- Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm up step before collecting perf numbers.
14 changes: 7 additions & 7 deletions docs/source/serving/distributed_serving.md
@@ -1,6 +1,6 @@
(distributed-serving)=

-# Distributed Inference and Serving
+# Distributed inference and serving

## How to decide the distributed inference strategy?

@@ -18,36 +18,36 @@ After adding enough GPUs and nodes to hold the model, you can run vLLM first, wh
There is one edge case: if the model fits in a single node with multiple GPUs, but the number of GPUs cannot divide the model size evenly, you can use pipeline parallelism, which splits the model along layers and supports uneven splits. In this case, the tensor parallel size should be 1 and the pipeline parallel size should be the number of GPUs.
```

-## Details for Distributed Inference and Serving
+## Running vLLM on a single node

vLLM supports distributed tensor-parallel and pipeline-parallel inference and serving. Currently, we support [Megatron-LM's tensor parallel algorithm](https://arxiv.org/pdf/1909.08053.pdf). We manage the distributed runtime with either [Ray](https://github.com/ray-project/ray) or Python's native multiprocessing. Multiprocessing can be used when deploying on a single node; multi-node inference currently requires Ray.

-Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured {code}`tensor_parallel_size`, otherwise Ray will be used. This default can be overridden via the {code}`LLM` class {code}`distributed_executor_backend` argument or {code}`--distributed-executor-backend` API server argument. Set it to {code}`mp` for multiprocessing or {code}`ray` for Ray. It's not required for Ray to be installed for the multiprocessing case.
+Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured `tensor_parallel_size`, otherwise Ray will be used. This default can be overridden via the `LLM` class `distributed_executor_backend` argument or `--distributed-executor-backend` API server argument. Set it to `mp` for multiprocessing or `ray` for Ray. It's not required for Ray to be installed for the multiprocessing case.
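
For example, a minimal sketch of overriding the default backend (the model name and parallel size here are placeholders):

```python
from vllm import LLM

# Explicitly select the multiprocessing backend instead of relying on the default.
llm = LLM(
    "facebook/opt-13b",                  # placeholder model
    tensor_parallel_size=2,              # placeholder TP size
    distributed_executor_backend="mp",   # or "ray"
)
```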

-To run multi-GPU inference with the {code}`LLM` class, set the {code}`tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs:
+To run multi-GPU inference with the `LLM` class, set the `tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs:

```python
from vllm import LLM
llm = LLM("facebook/opt-13b", tensor_parallel_size=4)
output = llm.generate("San Francisco is a")
```

-To run multi-GPU serving, pass in the {code}`--tensor-parallel-size` argument when starting the server. For example, to run API server on 4 GPUs:
+To run multi-GPU serving, pass in the `--tensor-parallel-size` argument when starting the server. For example, to run API server on 4 GPUs:

```console
$ vllm serve facebook/opt-13b \
$ --tensor-parallel-size 4
```

-You can also additionally specify {code}`--pipeline-parallel-size` to enable pipeline parallelism. For example, to run API server on 8 GPUs with pipeline parallelism and tensor parallelism:
+You can also additionally specify `--pipeline-parallel-size` to enable pipeline parallelism. For example, to run API server on 8 GPUs with pipeline parallelism and tensor parallelism:

```console
$ vllm serve gpt2 \
$ --tensor-parallel-size 4 \
$ --pipeline-parallel-size 2
```

-## Multi-Node Inference and Serving
+## Running vLLM on multiple nodes

If a single node does not have enough GPUs to hold the model, you can run the model using multiple nodes. It is important to make sure the execution environment is the same on all nodes, including the model path and the Python environment. The recommended way is to use Docker images to ensure a consistent environment and to hide the heterogeneity of the host machines by mapping them into the same Docker configuration.

14 changes: 7 additions & 7 deletions docs/source/serving/multimodal_inputs.md
@@ -1,6 +1,6 @@
(multimodal-inputs)=

-# Multimodal Inputs
+# Multimodal inputs

This page teaches you how to pass multi-modal inputs to [multi-modal models](#supported-mm-models) in vLLM.

@@ -18,7 +18,7 @@ To input multi-modal data, follow this schema in {class}`vllm.inputs.PromptType`

### Image

-You can pass a single image to the {code}`'image'` field of the multi-modal dictionary, as shown in the following examples:
+You can pass a single image to the `'image'` field of the multi-modal dictionary, as shown in the following examples:

```python
llm = LLM(model="llava-hf/llava-1.5-7b-hf")
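# A minimal sketch completing this truncated example; the image path and prompt
# template below are assumptions rather than the upstream snippet.
from PIL import Image

prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"
image = Image.open("example.jpg")

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image},
})

for o in outputs:
    print(o.outputs[0].text)
```
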
@@ -122,21 +122,21 @@ for o in outputs:

### Video

-You can pass a list of NumPy arrays directly to the {code}`'video'` field of the multi-modal dictionary
+You can pass a list of NumPy arrays directly to the `'video'` field of the multi-modal dictionary
instead of using multi-image input.

Full example: <gh-file:examples/offline_inference_vision_language.py>
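
For instance, a minimal offline sketch (the model, prompt template, and dummy frames below are assumptions; see the linked example for the upstream version):

```python
import numpy as np
from vllm import LLM

# Assumed video-capable model; any model listed as supporting video input works.
llm = LLM(model="llava-hf/LLaVA-NeXT-Video-7B-hf")

# 16 dummy RGB frames (num_frames, height, width, channels) in place of real decoded video.
video = np.zeros((16, 224, 224, 3), dtype=np.uint8)

# Illustrative prompt; check the model card for its exact video placeholder format.
prompt = "USER: <video>\nDescribe this video. ASSISTANT:"

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"video": video},
})
print(outputs[0].outputs[0].text)
```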

### Audio

-You can pass a tuple {code}`(array, sampling_rate)` to the {code}`'audio'` field of the multi-modal dictionary.
+You can pass a tuple `(array, sampling_rate)` to the `'audio'` field of the multi-modal dictionary.

Full example: <gh-file:examples/offline_inference_audio_language.py>
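
Similarly, a minimal offline sketch (the model, prompt, and synthetic audio below are assumptions; see the linked example for the upstream version):

```python
import numpy as np
from vllm import LLM

# Assumed audio-capable model; the linked example shows the upstream choice.
llm = LLM(model="fixie-ai/ultravox-v0_3")

# One second of silence at 16 kHz in place of a real recording.
sampling_rate = 16000
audio = np.zeros(sampling_rate, dtype=np.float32)

# Illustrative prompt; in practice, build it from the model's chat template and
# its audio placeholder token.
prompt = "<|user|>\n<|audio|>\nWhat can you hear in this clip?\n<|assistant|>\n"

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"audio": (audio, sampling_rate)},
})
print(outputs[0].outputs[0].text)
```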

### Embedding

To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model,
-pass a tensor of shape {code}`(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary.
+pass a tensor of shape `(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary.

```python
# Inference with image embeddings as input
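# A minimal sketch completing this truncated example; the model, prompt, feature
# size, and random placeholder embeddings are assumptions.
import torch
from vllm import LLM

llm = LLM(model="llava-hf/llava-1.5-7b-hf")
prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"

# Shape (num_items, feature_size, hidden_size of LM), as described above.
image_embeds = torch.randn(1, 576, 4096)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image_embeds},
})
print(outputs[0].outputs[0].text)
```
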
@@ -294,7 +294,7 @@ $ export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>

### Video

-Instead of {code}`image_url`, you can pass a video file via {code}`video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf).
+Instead of `image_url`, you can pass a video file via `video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf).

First, launch the OpenAI-compatible server:

@@ -418,7 +418,7 @@ result = chat_completion_from_base64.choices[0].message.content
print("Chat completion output from input audio:", result)
```

-Alternatively, you can pass {code}`audio_url`, which is the audio counterpart of {code}`image_url` for image input:
+Alternatively, you can pass `audio_url`, which is the audio counterpart of `image_url` for image input:

```python
chat_completion_from_url = client.chat.completions.create(
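    # A hedged completion of this truncated call; the question, audio URL, model
    # name, and token limit below are placeholders.
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this audio?"},
            {"type": "audio_url", "audio_url": {"url": "https://example.com/sample.wav"}},
        ],
    }],
    model="fixie-ai/ultravox-v0_3",
    max_tokens=64,
)
print("Chat completion output from audio url:", chat_completion_from_url.choices[0].message.content)
```
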
2 changes: 1 addition & 1 deletion docs/source/serving/openai_compatible_server.md
@@ -219,7 +219,7 @@ you can use the [official OpenAI Python client](https://github.com/openai/openai

We support both [Vision](https://platform.openai.com/docs/guides/vision)- and
[Audio](https://platform.openai.com/docs/guides/audio?audio-generation-quickstart-example=audio-in)-related parameters;
-see our [Multimodal Inputs](#multimodal-inputs) guide for more information.
+see our [multimodal inputs](#multimodal-inputs) guide for more information.
- *Note: `image_url.detail` parameter is not supported.*

Code example: <gh-file:examples/openai_chat_completion_client.py>
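
For instance, a minimal client-side sketch (the server address, model name, and image URL below are assumptions):

```python
from openai import OpenAI

# Assumes a vLLM server with a vision-capable model is already running locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

chat = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",  # placeholder; use the model the server was started with
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
)
print(chat.choices[0].message.content)
```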
4 changes: 2 additions & 2 deletions docs/source/serving/usage_stats.md
@@ -1,4 +1,4 @@
-# Usage Stats Collection
+# Usage stats collection

vLLM collects anonymous usage data by default to help the engineering team better understand which hardware and model configurations are widely used. This data allows them to prioritize their efforts on the most common workloads. The collected data is transparent, does not contain any sensitive information, and will be publicly released for the community's benefit.

@@ -45,7 +45,7 @@ You can preview the collected data by running the following command:
```
tail ~/.config/vllm/usage_stats.json
```

-## Opt-out of Usage Stats Collection
+## Opting out

You can opt out of usage stats collection by setting the `VLLM_NO_USAGE_STATS` or `DO_NOT_TRACK` environment variable, or by creating a `~/.config/vllm/do_not_track` file:
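
As a hedged Python sketch of those two options, done before vLLM is imported or launched:

```python
import os
from pathlib import Path

# Option 1: set either environment variable before starting vLLM.
os.environ["VLLM_NO_USAGE_STATS"] = "1"
# os.environ["DO_NOT_TRACK"] = "1"

# Option 2: create the opt-out marker file.
marker = Path("~/.config/vllm/do_not_track").expanduser()
marker.parent.mkdir(parents=True, exist_ok=True)
marker.touch()
```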

