[Doc][2/N] Reorganize Models and Usage sections #11755

Merged
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/600-new-model.yml
@@ -9,7 +9,7 @@ body:
value: >
#### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).

#### We also highly recommend you read https://docs.vllm.ai/en/latest/models/adding_model.html first to understand how to add a new model.
#### We also highly recommend you read https://docs.vllm.ai/en/latest/contributing/model/adding_model.html first to understand how to add a new model.
- type: textarea
attributes:
label: The model to consider.
102 changes: 102 additions & 0 deletions docs/source/contributing/model/basic.md
@@ -0,0 +1,102 @@
(new-model-basic)=

# Basic Implementation

This guide walks you through the steps to implement a basic vLLM model.

## 1. Bring your model code

First, clone the PyTorch model code from the source repository.
For instance, vLLM's [OPT model](gh-file:vllm/model_executor/models/opt.py) was adapted from
HuggingFace's [modeling_opt.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_opt.py) file.

```{warning}
Make sure to review and adhere to the original code's copyright and licensing terms!
```

## 2. Make your code compatible with vLLM

To ensure compatibility with vLLM, your model must meet the following requirements:

### Initialization Code

All vLLM modules within the model must include a `prefix` argument in their constructor. This `prefix` is typically the full name of the module in the model's state dictionary and is crucial for:

- Runtime support: vLLM's attention operators are registered in a model's state by their full names. Each attention operator must have a unique prefix as its layer name to avoid conflicts.
- Non-uniform quantization support: A quantized checkpoint can selectively quantize certain layers while keeping others in full precision. By providing the `prefix` during initialization, vLLM can match the current layer's `prefix` with the quantization configuration to determine if the layer should be initialized in quantized mode.

The initialization code should look like this:

```python
from torch import nn
from vllm.config import VllmConfig
from vllm.attention import Attention

class MyAttention(nn.Module):
    def __init__(self, vllm_config: VllmConfig, prefix: str):
        super().__init__()
        # Other Attention arguments (num_heads, head_size, scale, ...) are
        # omitted here for brevity.
        self.attn = Attention(prefix=f"{prefix}.attn")

class MyDecoderLayer(nn.Module):
    def __init__(self, vllm_config: VllmConfig, prefix: str):
        super().__init__()
        self.self_attn = MyAttention(vllm_config, prefix=f"{prefix}.self_attn")

class MyModel(nn.Module):
    def __init__(self, vllm_config: VllmConfig, prefix: str):
        super().__init__()
        self.layers = nn.ModuleList(
            [MyDecoderLayer(vllm_config, prefix=f"{prefix}.layers.{i}")
             for i in range(vllm_config.model_config.hf_config.num_hidden_layers)]
        )

class MyModelForCausalLM(nn.Module):
    def __init__(self, vllm_config: VllmConfig, prefix: str = ""):
        super().__init__()
        self.model = MyModel(vllm_config, prefix=f"{prefix}.model")
```

### Computation Code

Rewrite the {meth}`~torch.nn.Module.forward` method of your model to remove any unnecessary code, such as training-specific code. Modify the input parameters so that `input_ids` and `positions` are flattened tensors with a single dimension covering all tokens in the batch, rather than separate batch and max-sequence-length dimensions.

```python
def forward(
    self,
    input_ids: torch.Tensor,
    positions: torch.Tensor,
    kv_caches: List[torch.Tensor],
    attn_metadata: AttentionMetadata,
) -> torch.Tensor:
    ...
```

```{note}
Currently, vLLM supports the basic multi-head attention mechanism and its variant with rotary positional embeddings.
If your model employs a different attention mechanism, you will need to implement a new attention layer in vLLM.
```

For reference, check out our [Llama implementation](gh-file:vllm/model_executor/models/llama.py). vLLM already supports a large number of models. It is recommended to find a model similar to yours and adapt it to your model's architecture. Check out <gh-dir:vllm/model_executor/models> for more examples.

## 3. (Optional) Implement tensor parallelism and quantization support

If your model is too large to fit into a single GPU, you can use tensor parallelism to manage it.
To do this, substitute your model's linear and embedding layers with their tensor-parallel versions.
For the embedding layer, you can simply replace {class}`torch.nn.Embedding` with `VocabParallelEmbedding`. For the output LM head, you can use `ParallelLMHead`.
When it comes to the linear layers, we provide the following options to parallelize them:

- `ReplicatedLinear`: Replicates the inputs and weights across multiple GPUs. No memory saving.
- `RowParallelLinear`: The input tensor is partitioned along the hidden dimension. The weight matrix is partitioned along the rows (input dimension). An *all-reduce* operation is performed after the matrix multiplication to reduce the results. Typically used for the second FFN layer and the output linear transformation of the attention layer.
- `ColumnParallelLinear`: The input tensor is replicated. The weight matrix is partitioned along the columns (output dimension). The result is partitioned along the column dimension. Typically used for the first FFN layer and the separated QKV transformation of the attention layer in the original Transformer.
- `MergedColumnParallelLinear`: Column-parallel linear that merges multiple `ColumnParallelLinear` operators. Typically used for the first FFN layer with gated activation functions (e.g., the SiLU-gated MLP in Llama). This class handles the sharded weight loading logic of multiple weight matrices.
- `QKVParallelLinear`: Parallel linear layer for the query, key, and value projections of the multi-head and grouped-query attention mechanisms. When the number of key/value heads is less than the world size, this class replicates the key/value heads appropriately. This class handles the weight loading and replication of the weight matrices.

Note that all the linear layers above take `quant_config` (named `linear_method` in older vLLM versions) as an input. vLLM sets this parameter according to the quantization scheme in use to support weight quantization.
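
For illustration, here is a minimal sketch of a Llama-style gated MLP built from these tensor-parallel layers. `MyMLP` and its constructor arguments are placeholder names, and exact constructor signatures may differ between vLLM versions:

```python
from torch import nn
from vllm.model_executor.layers.activation import SiluAndMul
from vllm.model_executor.layers.linear import (MergedColumnParallelLinear,
                                               RowParallelLinear)

class MyMLP(nn.Module):
    """Sketch of a gated FFN block using vLLM's tensor-parallel layers."""

    def __init__(self, hidden_size: int, intermediate_size: int, prefix: str = ""):
        super().__init__()
        # Gate and up projections merged into one column-parallel layer;
        # each GPU holds a slice of the output dimension.
        self.gate_up_proj = MergedColumnParallelLinear(
            hidden_size, [intermediate_size] * 2, bias=False,
            prefix=f"{prefix}.gate_up_proj")
        # Row-parallel projection back to the hidden size; an all-reduce
        # combines the partial results across GPUs.
        self.down_proj = RowParallelLinear(
            intermediate_size, hidden_size, bias=False,
            prefix=f"{prefix}.down_proj")
        self.act_fn = SiluAndMul()

    def forward(self, x):
        # vLLM's parallel linear layers return (output, optional bias).
        gate_up, _ = self.gate_up_proj(x)
        x = self.act_fn(gate_up)
        x, _ = self.down_proj(x)
        return x
```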

## 4. Implement the weight loading logic

You now need to implement the `load_weights` method in your `*ForCausalLM` class.
This method should load the weights from the HuggingFace checkpoint file and assign them to the corresponding layers in your model. Specifically, for `MergedColumnParallelLinear` and `QKVParallelLinear` layers, if the original model has separate weight matrices, you need to load each part separately.
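
As a rough sketch of the pattern (modeled loosely on the Llama implementation; the `stacked_params_mapping` entries below are hypothetical and depend on your checkpoint's parameter names):

```python
from typing import Iterable, Tuple

import torch
from torch import nn

from vllm.model_executor.model_loader.weight_utils import default_weight_loader


class MyModelForCausalLM(nn.Module):
    # ... __init__ as shown earlier ...

    def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]):
        # (vLLM param name, HF checkpoint name, shard id) -- illustrative only.
        stacked_params_mapping = [
            ("qkv_proj", "q_proj", "q"),
            ("qkv_proj", "k_proj", "k"),
            ("qkv_proj", "v_proj", "v"),
            ("gate_up_proj", "gate_proj", 0),
            ("gate_up_proj", "up_proj", 1),
        ]
        params_dict = dict(self.named_parameters())
        for name, loaded_weight in weights:
            for param_name, ckpt_name, shard_id in stacked_params_mapping:
                if ckpt_name not in name:
                    continue
                # Fused layers (e.g. QKVParallelLinear) load each checkpoint
                # matrix into the corresponding shard of the merged weight.
                name = name.replace(ckpt_name, param_name)
                param = params_dict[name]
                param.weight_loader(param, loaded_weight, shard_id)
                break
            else:
                # Remaining parameters map one-to-one onto the checkpoint.
                param = params_dict[name]
                weight_loader = getattr(param, "weight_loader",
                                        default_weight_loader)
                weight_loader(param, loaded_weight)
```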

## 5. Register your model

See [this page](#new-model-registration) for instructions on how to register your new model to be used by vLLM.
26 changes: 26 additions & 0 deletions docs/source/contributing/model/index.md
@@ -0,0 +1,26 @@
(new-model)=

# Adding a New Model

This section provides more information on how to integrate a [HuggingFace Transformers](https://github.com/huggingface/transformers) model into vLLM.

```{toctree}
:caption: Contents
:maxdepth: 1

basic
registration
multimodal
```

```{note}
The complexity of adding a new model depends heavily on the model's architecture.
The process is considerably more straightforward if the model shares a similar architecture with an existing model in vLLM.
However, for models that include new operators (e.g., a new attention mechanism), the process can be a bit more complex.
```

```{tip}
If you are encountering issues while integrating your model into vLLM, feel free to open a [GitHub issue](https://github.com/vllm-project/vllm/issues)
or ask on our [developer slack](https://slack.vllm.ai).
We will be happy to help you out!
```
@@ -2,15 +2,11 @@

# Enabling Multimodal Inputs

This document walks you through the steps to extend a vLLM model so that it accepts [multi-modal inputs](#multimodal-inputs).

```{seealso}
[Adding a New Model](adding-a-new-model)
```
This document walks you through the steps to extend a basic model so that it accepts [multi-modal inputs](#multimodal-inputs).

## 1. Update the base vLLM model

It is assumed that you have already implemented the model in vLLM according to [these steps](#adding-a-new-model).
It is assumed that you have already implemented the model in vLLM according to [these steps](#new-model-basic).
Further update the model as follows:

- Implement the {class}`~vllm.model_executor.models.interfaces.SupportsMultiModal` interface.
56 changes: 56 additions & 0 deletions docs/source/contributing/model/registration.md
@@ -0,0 +1,56 @@
(new-model-registration)=

# Model Registration

vLLM relies on a model registry to determine how to run each model.
A list of pre-registered architectures can be found on the [Supported Models](#supported-models) page.

If your model is not on this list, you must register it with vLLM.
This page provides detailed instructions on how to do so.

## Built-in models

To add a model directly to the vLLM library, start by forking our [GitHub repository](https://github.com/vllm-project/vllm) and then [build it from source](#build-from-source).
This gives you the ability to modify the codebase and test your model.

After you have implemented your model (see [tutorial](#new-model-basic)), put it into the <gh-dir:vllm/model_executor/models> directory.
Then, add your model class to `_VLLM_MODELS` in <gh-file:vllm/model_executor/models/registry.py> so that it is automatically registered upon importing vLLM.
You should also include an example HuggingFace repository for this model in <gh-file:tests/models/registry.py> to run the unit tests.
Finally, update the [Supported Models](#supported-models) documentation page to promote your model!

```{important}
The list of models in each section should be maintained in alphabetical order.
```
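
For illustration, a registry entry maps the architecture name (as it appears in the model's HuggingFace `config.json`) to the module and class that implement it. The entry below is hypothetical, and the grouping dictionary it belongs to depends on the model category:

```python
# In vllm/model_executor/models/registry.py; the grouping dict varies by
# model category (text generation, embedding, multimodal, ...).
_TEXT_GENERATION_MODELS = {
    # ... existing entries, kept in alphabetical order ...
    # architecture name (from the HF config) -> (module name, class name)
    "MyModelForCausalLM": ("my_model", "MyModelForCausalLM"),
}
```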

## Out-of-tree models

You can load an external model using a plugin without modifying the vLLM codebase.

```{seealso}
[vLLM's Plugin System](#plugin-system)
```

To register the model, use the following code:

```python
from vllm import ModelRegistry
from your_code import YourModelForCausalLM
ModelRegistry.register_model("YourModelForCausalLM", YourModelForCausalLM)
```

If your model imports modules that initialize CUDA, consider lazy-importing it to avoid errors like `RuntimeError: Cannot re-initialize CUDA in forked subprocess`:

```python
from vllm import ModelRegistry

ModelRegistry.register_model("YourModelForCausalLM", "your_code:YourModelForCausalLM")
```

```{important}
If your model is a multimodal model, ensure the model class implements the {class}`~vllm.model_executor.models.interfaces.SupportsMultiModal` interface.
Read more about that [here](#enabling-multimodal-inputs).
```

```{note}
Although you can put these code snippets directly in the script that uses `vllm.LLM`, the recommended approach is to place them in a vLLM plugin. This ensures compatibility with vLLM features such as distributed inference and the API server.
```
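
As a rough sketch (package, module, and function names here are illustrative), a plugin simply exposes a function through vLLM's general plugin entry point that performs the registration:

```python
# your_plugin/__init__.py -- exposed via the "vllm.general_plugins"
# setuptools entry point so that vLLM calls it in every worker process.
def register():
    from vllm import ModelRegistry

    # Guard against double registration if the plugin is loaded twice.
    if "YourModelForCausalLM" not in ModelRegistry.get_supported_archs():
        ModelRegistry.register_model(
            "YourModelForCausalLM", "your_code:YourModelForCausalLM")
```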
@@ -1,6 +1,8 @@
# Implementation
(design-automatic-prefix-caching)=

The core idea of PagedAttention is to partition the KV cache of each request into KV Blocks. Each block contains the attention keys and values for a fixed number of tokens. The PagedAttention algorithm allows these blocks to be stored in non-contiguous physical memory so that we can eliminate memory fragmentation by allocating the memory on demand.
# Automatic Prefix Caching

The core idea of [PagedAttention](#design-paged-attention) is to partition the KV cache of each request into KV Blocks. Each block contains the attention keys and values for a fixed number of tokens. The PagedAttention algorithm allows these blocks to be stored in non-contiguous physical memory so that we can eliminate memory fragmentation by allocating the memory on demand.

To automatically cache the KV cache, we utilize the following key observation: Each KV block can be uniquely identified by the tokens within the block and the tokens in the prefix before the block.
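
As a toy illustration of this observation (not vLLM's actual implementation), each block's identity can be computed by chaining a hash of the preceding blocks with the tokens in the current block:

```python
from typing import List

BLOCK_SIZE = 16  # tokens per KV block; vLLM's block size is configurable

def block_hashes(token_ids: List[int]) -> List[int]:
    """Hash each full block by its prefix and its own tokens."""
    hashes = []
    prev_hash = None
    for start in range(0, len(token_ids) - BLOCK_SIZE + 1, BLOCK_SIZE):
        block = tuple(token_ids[start:start + BLOCK_SIZE])
        # Chaining the previous block's hash captures "prefix + block tokens".
        prev_hash = hash((prev_hash, block))
        hashes.append(prev_hash)
    return hashes
```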

2 changes: 2 additions & 0 deletions docs/source/design/kernel/paged_attention.md
@@ -1,3 +1,5 @@
(design-paged-attention)=

# vLLM Paged Attention

- Currently, vLLM utilizes its own implementation of a multi-head query
1 change: 1 addition & 0 deletions docs/source/dev/offline_inference/offline_index.md
@@ -1,6 +1,7 @@
# Offline Inference

```{toctree}
:caption: Contents
:maxdepth: 1

llm
@@ -1,13 +1,13 @@
(apc)=
(automatic-prefix-caching)=

# Introduction
# Automatic Prefix Caching

## What is Automatic Prefix Caching
## Introduction

Automatic Prefix Caching (APC in short) caches the KV cache of existing queries, so that a new query can directly reuse the KV cache if it shares the same prefix with one of the existing queries, allowing the new query to skip the computation of the shared part.

```{note}
Technical details on how vLLM implements APC are in the next page.
Technical details on how vLLM implements APC can be found [here](#design-automatic-prefix-caching).
```

## Enabling APC in vLLM
@@ -32,7 +32,7 @@ Check the '✗' with links to see tracking issue for unsupported feature/hardwar

* - Feature
- [CP](#chunked-prefill)
- [APC](#apc)
- [APC](#automatic-prefix-caching)
- [LoRA](#lora-adapter)
- <abbr title="Prompt Adapter">prmpt adptr</abbr>
- [SD](#spec_decode)
@@ -64,7 +64,7 @@ Check the '✗' with links to see tracking issue for unsupported feature/hardwar
-
-
-
* - [APC](#apc)
* - [APC](#automatic-prefix-caching)
- ✅
-
-
@@ -345,7 +345,7 @@ Check the '✗' with links to see tracking issue for unsupported feature/hardwar
- ✅
- ✅
- ✅
* - [APC](#apc)
* - [APC](#automatic-prefix-caching)
- [✗](gh-issue:3687)
- ✅
- ✅
@@ -41,13 +41,13 @@ Key abstractions for disaggregated prefilling:

Here is a figure illustrating how the above 3 abstractions are organized:

```{image} /assets/usage/disagg_prefill/abstraction.jpg
```{image} /assets/features/disagg_prefill/abstraction.jpg
:alt: Disaggregated prefilling abstractions
```

The workflow of disaggregated prefilling is as follows:

```{image} /assets/usage/disagg_prefill/overview.jpg
```{image} /assets/features/disagg_prefill/overview.jpg
:alt: Disaggregated prefilling workflow
```

File renamed without changes.
19 changes: 19 additions & 0 deletions docs/source/features/quantization/index.md
@@ -0,0 +1,19 @@
(quantization-index)=

# Quantization

Quantization trades off model precision for smaller memory footprint, allowing large models to be run on a wider range of devices.

```{toctree}
:caption: Contents
:maxdepth: 1

supported_hardware
auto_awq
bnb
gguf
int8
fp8
fp8_e5m2_kvcache
fp8_e4m3_kvcache
```
@@ -1,6 +1,6 @@
(supported-hardware-for-quantization)=
(quantization-supported-hardware)=

# Supported Hardware for Quantization Kernels
# Supported Hardware

The table below shows the compatibility of various quantization implementations with different hardware platforms in vLLM:

@@ -120,12 +120,12 @@ The table below shows the compatibility of various quantization implementations
- ✗
```

## Notes:

- Volta refers to SM 7.0, Turing to SM 7.5, Ampere to SM 8.0/8.6, Ada to SM 8.9, and Hopper to SM 9.0.
- "✅︎" indicates that the quantization method is supported on the specified hardware.
- "✗" indicates that the quantization method is not supported on the specified hardware.

Please note that this compatibility chart may be subject to change as vLLM continues to evolve and expand its support for different hardware platforms and quantization methods.
```{note}
This compatibility chart is subject to change as vLLM continues to evolve and expand its support for different hardware platforms and quantization methods.

For the most up-to-date information on hardware support and quantization methods, please refer to <gh-dir:vllm/model_executor/layers/quantization> or consult with the vLLM development team.
```
File renamed without changes.
File renamed without changes.