diff --git a/docs/source/serving/architecture_helm_deployment.png b/docs/source/assets/deployment/architecture_helm_deployment.png
similarity index 100%
rename from docs/source/serving/architecture_helm_deployment.png
rename to docs/source/assets/deployment/architecture_helm_deployment.png
diff --git a/docs/source/contributing/dockerfile/dockerfile.md b/docs/source/contributing/dockerfile/dockerfile.md
index 7ffec83333d7d..38ea956ba8dfb 100644
--- a/docs/source/contributing/dockerfile/dockerfile.md
+++ b/docs/source/contributing/dockerfile/dockerfile.md
@@ -1,7 +1,7 @@
 # Dockerfile

 We provide a Dockerfile to construct the image for running an OpenAI compatible server with vLLM.
-More information about deploying with Docker can be found [here](../../serving/deploying_with_docker.md).
+More information about deploying with Docker can be found [here](#deployment-docker).

 Below is a visual representation of the multi-stage Dockerfile. The build graph contains the following nodes:
diff --git a/docs/source/serving/deploying_with_docker.md b/docs/source/deployment/docker.md
similarity index 98%
rename from docs/source/serving/deploying_with_docker.md
rename to docs/source/deployment/docker.md
index 844bd27800c7a..2df1aca27f1e6 100644
--- a/docs/source/serving/deploying_with_docker.md
+++ b/docs/source/deployment/docker.md
@@ -1,6 +1,6 @@
-(deploying-with-docker)=
+(deployment-docker)=

-# Deploying with Docker
+# Using Docker

 ## Use vLLM's Official Docker Image
diff --git a/docs/source/serving/deploying_with_bentoml.md b/docs/source/deployment/frameworks/bentoml.md
similarity index 89%
rename from docs/source/serving/deploying_with_bentoml.md
rename to docs/source/deployment/frameworks/bentoml.md
index dfa0de4f0f6d7..ea0b5d1d4c93b 100644
--- a/docs/source/serving/deploying_with_bentoml.md
+++ b/docs/source/deployment/frameworks/bentoml.md
@@ -1,6 +1,6 @@
-(deploying-with-bentoml)=
+(deployment-bentoml)=

-# Deploying with BentoML
+# BentoML

 [BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.
diff --git a/docs/source/serving/deploying_with_cerebrium.md b/docs/source/deployment/frameworks/cerebrium.md
similarity index 98%
rename from docs/source/serving/deploying_with_cerebrium.md
rename to docs/source/deployment/frameworks/cerebrium.md
index 950064c8c1b10..be018dfb75d7a 100644
--- a/docs/source/serving/deploying_with_cerebrium.md
+++ b/docs/source/deployment/frameworks/cerebrium.md
@@ -1,6 +1,6 @@
-(deploying-with-cerebrium)=
+(deployment-cerebrium)=

-# Deploying with Cerebrium
+# Cerebrium

 ```{raw} html

diff --git a/docs/source/serving/deploying_with_dstack.md b/docs/source/deployment/frameworks/dstack.md
similarity index 98%
rename from docs/source/serving/deploying_with_dstack.md
rename to docs/source/deployment/frameworks/dstack.md
index 381f5f786ca2c..4142c1d9f1f60 100644
--- a/docs/source/serving/deploying_with_dstack.md
+++ b/docs/source/deployment/frameworks/dstack.md
@@ -1,6 +1,6 @@
-(deploying-with-dstack)=
+(deployment-dstack)=

-# Deploying with dstack
+# dstack

 ```{raw} html

diff --git a/docs/source/serving/deploying_with_helm.md b/docs/source/deployment/frameworks/helm.md
similarity index 98%
rename from docs/source/serving/deploying_with_helm.md
rename to docs/source/deployment/frameworks/helm.md
index 7286a0a88968f..18ed293191468 100644
--- a/docs/source/serving/deploying_with_helm.md
+++ b/docs/source/deployment/frameworks/helm.md
@@ -1,6 +1,6 @@
-(deploying-with-helm)=
+(deployment-helm)=

-# Deploying with Helm
+# Helm

 A Helm chart to deploy vLLM for Kubernetes
@@ -38,7 +38,7 @@ chart **including persistent volumes** and deletes the release.

 ## Architecture

-```{image} architecture_helm_deployment.png
+```{image} /assets/deployment/architecture_helm_deployment.png
 ```

 ## Values
diff --git a/docs/source/deployment/frameworks/index.md b/docs/source/deployment/frameworks/index.md
new file mode 100644
index 0000000000000..6a59131d36618
--- /dev/null
+++ b/docs/source/deployment/frameworks/index.md
@@ -0,0 +1,13 @@
+# Using other frameworks
+
+```{toctree}
+:maxdepth: 1
+
+bentoml
+cerebrium
+dstack
+helm
+lws
+skypilot
+triton
+```
diff --git a/docs/source/serving/deploying_with_lws.md b/docs/source/deployment/frameworks/lws.md
similarity index 91%
rename from docs/source/serving/deploying_with_lws.md
rename to docs/source/deployment/frameworks/lws.md
index 22bab419eaca3..349fa83fbcb9d 100644
--- a/docs/source/serving/deploying_with_lws.md
+++ b/docs/source/deployment/frameworks/lws.md
@@ -1,6 +1,6 @@
-(deploying-with-lws)=
+(deployment-lws)=

-# Deploying with LWS
+# LWS

 LeaderWorkerSet (LWS) is a Kubernetes API that aims to address common deployment patterns of AI/ML inference workloads. A major use case is for multi-host/multi-node distributed inference.
diff --git a/docs/source/serving/run_on_sky.md b/docs/source/deployment/frameworks/skypilot.md
similarity index 99%
rename from docs/source/serving/run_on_sky.md
rename to docs/source/deployment/frameworks/skypilot.md
index 115873ae49292..ad93534775d36 100644
--- a/docs/source/serving/run_on_sky.md
+++ b/docs/source/deployment/frameworks/skypilot.md
@@ -1,6 +1,6 @@
-(on-cloud)=
+(deployment-skypilot)=

-# Deploying and scaling up with SkyPilot
+# SkyPilot

 ```{raw} html

diff --git a/docs/source/serving/deploying_with_triton.md b/docs/source/deployment/frameworks/triton.md
similarity index 87%
rename from docs/source/serving/deploying_with_triton.md
rename to docs/source/deployment/frameworks/triton.md
index 9b0a6f1d54ae8..94d87120159c6 100644
--- a/docs/source/serving/deploying_with_triton.md
+++ b/docs/source/deployment/frameworks/triton.md
@@ -1,5 +1,5 @@
-(deploying-with-triton)=
+(deployment-triton)=

-# Deploying with NVIDIA Triton
+# NVIDIA Triton

 The [Triton Inference Server](https://github.com/triton-inference-server) hosts a tutorial demonstrating how to quickly deploy a simple [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) model using vLLM. Please see [Deploying a vLLM model in Triton](https://github.com/triton-inference-server/tutorials/blob/main/Quick_Deploy/vLLM/README.md#deploying-a-vllm-model-in-triton) for more details.
diff --git a/docs/source/deployment/integrations/index.md b/docs/source/deployment/integrations/index.md
new file mode 100644
index 0000000000000..65f17997afe26
--- /dev/null
+++ b/docs/source/deployment/integrations/index.md
@@ -0,0 +1,9 @@
+# External integrations
+
+```{toctree}
+:maxdepth: 1
+
+kserve
+kubeai
+llamastack
+```
diff --git a/docs/source/serving/deploying_with_kserve.md b/docs/source/deployment/integrations/kserve.md
similarity index 85%
rename from docs/source/serving/deploying_with_kserve.md
rename to docs/source/deployment/integrations/kserve.md
index feaeb5d0ec8a2..c780fd74e8f55 100644
--- a/docs/source/serving/deploying_with_kserve.md
+++ b/docs/source/deployment/integrations/kserve.md
@@ -1,6 +1,6 @@
-(deploying-with-kserve)=
+(deployment-kserve)=

-# Deploying with KServe
+# KServe

 vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.
diff --git a/docs/source/serving/deploying_with_kubeai.md b/docs/source/deployment/integrations/kubeai.md
similarity index 93%
rename from docs/source/serving/deploying_with_kubeai.md
rename to docs/source/deployment/integrations/kubeai.md
index 3609d7e05acd3..2f5772e075d87 100644
--- a/docs/source/serving/deploying_with_kubeai.md
+++ b/docs/source/deployment/integrations/kubeai.md
@@ -1,6 +1,6 @@
-(deploying-with-kubeai)=
+(deployment-kubeai)=

-# Deploying with KubeAI
+# KubeAI

 [KubeAI](https://github.com/substratusai/kubeai) is a Kubernetes operator that enables you to deploy and manage AI models on Kubernetes. It provides a simple and scalable way to deploy vLLM in production. Functionality such as scale-from-zero, load based autoscaling, model caching, and much more is provided out of the box with zero external dependencies.
diff --git a/docs/source/serving/serving_with_llamastack.md b/docs/source/deployment/integrations/llamastack.md
similarity index 95%
rename from docs/source/serving/serving_with_llamastack.md
rename to docs/source/deployment/integrations/llamastack.md
index 71dadca7ad47c..474d2bdfa9580 100644
--- a/docs/source/serving/serving_with_llamastack.md
+++ b/docs/source/deployment/integrations/llamastack.md
@@ -1,6 +1,6 @@
-(run-on-llamastack)=
+(deployment-llamastack)=

-# Serving with Llama Stack
+# Llama Stack

 vLLM is also available via [Llama Stack](https://github.com/meta-llama/llama-stack) .
diff --git a/docs/source/serving/deploying_with_k8s.md b/docs/source/deployment/k8s.md
similarity index 99%
rename from docs/source/serving/deploying_with_k8s.md
rename to docs/source/deployment/k8s.md
index 5f9b0e4f55ecc..a7d796091b06c 100644
--- a/docs/source/serving/deploying_with_k8s.md
+++ b/docs/source/deployment/k8s.md
@@ -1,6 +1,6 @@
-(deploying-with-k8s)=
+(deployment-k8s)=

-# Deploying with Kubernetes
+# Using Kubernetes

 Using Kubernetes to deploy vLLM is a scalable and efficient way to serve machine learning models. This guide will walk you through the process of deploying vLLM with Kubernetes, including the necessary prerequisites, steps for deployment, and testing.
@@ -43,7 +43,7 @@ metadata:
   name: hf-token-secret
   namespace: default
 type: Opaque
-stringData:
+data:
   token: "REPLACE_WITH_TOKEN"
 ```
diff --git a/docs/source/serving/deploying_with_nginx.md b/docs/source/deployment/nginx.md
similarity index 99%
rename from docs/source/serving/deploying_with_nginx.md
rename to docs/source/deployment/nginx.md
index a1f00d8536465..a58f791c2997b 100644
--- a/docs/source/serving/deploying_with_nginx.md
+++ b/docs/source/deployment/nginx.md
@@ -1,6 +1,6 @@
 (nginxloadbalancer)=

-# Deploying with Nginx Loadbalancer
+# Using Nginx

 This document shows how to launch multiple vLLM serving containers and use Nginx to act as a load balancer between the servers.
diff --git a/docs/source/getting_started/installation/hpu-gaudi.md b/docs/source/getting_started/installation/hpu-gaudi.md
index 94de169f51a73..1d50cef3bdc83 100644
--- a/docs/source/getting_started/installation/hpu-gaudi.md
+++ b/docs/source/getting_started/installation/hpu-gaudi.md
@@ -82,7 +82,7 @@ $ python setup.py develop

 ## Supported Features

-- [Offline batched inference](#offline-batched-inference)
+- [Offline inference](#offline-inference)
 - Online inference via [OpenAI-Compatible Server](#openai-compatible-server)
 - HPU autodetection - no need to manually select device within vLLM
 - Paged KV cache with algorithms enabled for Intel Gaudi accelerators
diff --git a/docs/source/getting_started/quickstart.md b/docs/source/getting_started/quickstart.md
index ff216f8af30f9..a69f77d9a831d 100644
--- a/docs/source/getting_started/quickstart.md
+++ b/docs/source/getting_started/quickstart.md
@@ -2,20 +2,20 @@

 # Quickstart

-This guide will help you quickly get started with vLLM to:
+This guide will help you quickly get started with vLLM to perform:

-- [Run offline batched inference](#offline-batched-inference)
-- [Run OpenAI-compatible inference](#openai-compatible-server)
+- [Offline batched inference](#quickstart-offline)
+- [Online inference using OpenAI-compatible server](#quickstart-online)

 ## Prerequisites

 - OS: Linux
 - Python: 3.9 -- 3.12
-- GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)

 ## Installation

-You can install vLLM using pip. It's recommended to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage Python environments.
+If you are using NVIDIA GPUs, you can install vLLM using [pip](https://pypi.org/project/vllm/) directly.
+It's recommended to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage Python environments.

 ```console
 $ conda create -n myenv python=3.10 -y
@@ -23,11 +23,13 @@ $ conda activate myenv
 $ pip install vllm
 ```

-Please refer to the [installation documentation](#installation-index) for more details on installing vLLM.
+```{note}
+For non-CUDA platforms, please see [here](#installation-index) for specific instructions on how to install vLLM.
+```

-(offline-batched-inference)=
+(quickstart-offline)=

-## Offline Batched Inference
+## Offline batched inference

 With vLLM installed, you can start generating texts for a list of input prompts (i.e. offline batch inference). See the example script:
@@ -73,9 +75,9 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```

-(openai-compatible-server)=
+(quickstart-online)=

-## OpenAI-Compatible Server
+## OpenAI-compatible server

 vLLM can be deployed as a server that implements the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. By default, it starts the server at `http://localhost:8000`. You can specify the address with `--host` and `--port` arguments. The server currently hosts one model at a time and implements endpoints such as [list models](https://platform.openai.com/docs/api-reference/models/list), [create chat completion](https://platform.openai.com/docs/api-reference/chat/completions/create), and [create completion](https://platform.openai.com/docs/api-reference/completions/create) endpoints.
diff --git a/docs/source/index.md b/docs/source/index.md
index f390474978790..2ce5135174d89 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -66,19 +66,26 @@ getting_started/faq
 ```

 ```{toctree}
-:caption: Serving
+:caption: Inference and Serving
 :maxdepth: 1

+serving/offline_inference
 serving/openai_compatible_server
-serving/deploying_with_docker
-serving/deploying_with_k8s
-serving/deploying_with_helm
-serving/deploying_with_nginx
 serving/distributed_serving
 serving/metrics
-serving/integrations
-serving/tensorizer
-serving/runai_model_streamer
+serving/integrations/index
+serving/multimodal_inputs
+```
+
+```{toctree}
+:caption: Deployment
+:maxdepth: 1
+
+deployment/docker
+deployment/k8s
+deployment/nginx
+deployment/frameworks/index
+deployment/integrations/index
 ```

 ```{toctree}
@@ -90,6 +97,7 @@ models/generative_models
 models/pooling_models
 models/adding_model
 models/enabling_multimodal_inputs
+models/loaders/index
 ```

 ```{toctree}
@@ -97,7 +105,6 @@
 :maxdepth: 1

 usage/lora
-usage/multimodal_inputs
 usage/tool_calling
 usage/structured_outputs
 usage/spec_decode
diff --git a/docs/source/models/loaders/index.md b/docs/source/models/loaders/index.md
new file mode 100644
index 0000000000000..46d6ca9c0978d
--- /dev/null
+++ b/docs/source/models/loaders/index.md
@@ -0,0 +1,8 @@
+# Alternative model loaders
+
+```{toctree}
+:maxdepth: 1
+
+runai_model_streamer
+tensorizer
+```
diff --git a/docs/source/serving/runai_model_streamer.md b/docs/source/models/loaders/runai_model_streamer.md
similarity index 98%
rename from docs/source/serving/runai_model_streamer.md
rename to docs/source/models/loaders/runai_model_streamer.md
index d4269050ff574..74e18a6645587 100644
--- a/docs/source/serving/runai_model_streamer.md
+++ b/docs/source/models/loaders/runai_model_streamer.md
@@ -1,6 +1,6 @@
 (runai-model-streamer)=

-# Loading Models with Run:ai Model Streamer
+# Run:ai Model Streamer

 Run:ai Model Streamer is a library to read tensors concurrently, while streaming them to GPU memory. Further reading can be found in [Run:ai Model Streamer Documentation](https://github.com/run-ai/runai-model-streamer/blob/master/docs/README.md).
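
The Run:ai Model Streamer page renamed above is grouped under the new "Alternative model loaders" index and is selected through vLLM's `load_format` option. The snippet below is only an illustrative sketch of how that loader is used from the offline API; the `"runai_streamer"` value and the example model are assumptions based on the loader's documented option name and are not part of this diff.

```python
from vllm import LLM

# Stream weights into GPU memory with the Run:ai Model Streamer instead of
# the default loader. The "runai_streamer" value is assumed from the
# loader's documented `load_format` option; the streamer reads safetensors
# weights, so the model below is only a placeholder.
llm = LLM(
    model="facebook/opt-125m",
    load_format="runai_streamer",
)

outputs = llm.generate("Hello, my name is")
print(outputs[0].outputs[0].text)
```
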
diff --git a/docs/source/serving/tensorizer.md b/docs/source/models/loaders/tensorizer.md
similarity index 95%
rename from docs/source/serving/tensorizer.md
rename to docs/source/models/loaders/tensorizer.md
index d3dd29d48f730..7168237cff222 100644
--- a/docs/source/serving/tensorizer.md
+++ b/docs/source/models/loaders/tensorizer.md
@@ -1,6 +1,6 @@
 (tensorizer)=

-# Loading Models with CoreWeave's Tensorizer
+# Tensorizer

 vLLM supports loading models with [CoreWeave's Tensorizer](https://docs.coreweave.com/coreweave-machine-learning-and-ai/inference/tensorizer).
 vLLM model tensors that have been serialized to disk, an HTTP/HTTPS endpoint, or S3 endpoint can be deserialized
diff --git a/docs/source/serving/integrations.md b/docs/source/serving/integrations.md
deleted file mode 100644
index d214c77254257..0000000000000
--- a/docs/source/serving/integrations.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# Integrations
-
-```{toctree}
-:maxdepth: 1
-
-run_on_sky
-deploying_with_kserve
-deploying_with_kubeai
-deploying_with_triton
-deploying_with_bentoml
-deploying_with_cerebrium
-deploying_with_lws
-deploying_with_dstack
-serving_with_langchain
-serving_with_llamaindex
-serving_with_llamastack
-```
diff --git a/docs/source/serving/integrations/index.md b/docs/source/serving/integrations/index.md
new file mode 100644
index 0000000000000..257cf9c5081a8
--- /dev/null
+++ b/docs/source/serving/integrations/index.md
@@ -0,0 +1,8 @@
+# External integrations
+
+```{toctree}
+:maxdepth: 1
+
+langchain
+llamaindex
+```
diff --git a/docs/source/serving/serving_with_langchain.md b/docs/source/serving/integrations/langchain.md
similarity index 82%
rename from docs/source/serving/serving_with_langchain.md
rename to docs/source/serving/integrations/langchain.md
index 96bd5943f3d64..49ff6e0c32a72 100644
--- a/docs/source/serving/serving_with_langchain.md
+++ b/docs/source/serving/integrations/langchain.md
@@ -1,10 +1,10 @@
-(run-on-langchain)=
+(serving-langchain)=

-# Serving with Langchain
+# LangChain

-vLLM is also available via [Langchain](https://github.com/langchain-ai/langchain) .
+vLLM is also available via [LangChain](https://github.com/langchain-ai/langchain) .

-To install langchain, run
+To install LangChain, run

 ```console
 $ pip install langchain langchain_community -q
diff --git a/docs/source/serving/serving_with_llamaindex.md b/docs/source/serving/integrations/llamaindex.md
similarity index 74%
rename from docs/source/serving/serving_with_llamaindex.md
rename to docs/source/serving/integrations/llamaindex.md
index 98859d8e3f828..9961c181d7e1c 100644
--- a/docs/source/serving/serving_with_llamaindex.md
+++ b/docs/source/serving/integrations/llamaindex.md
@@ -1,10 +1,10 @@
-(run-on-llamaindex)=
+(serving-llamaindex)=

-# Serving with llama_index
+# LlamaIndex

-vLLM is also available via [llama_index](https://github.com/run-llama/llama_index) .
+vLLM is also available via [LlamaIndex](https://github.com/run-llama/llama_index) .

-To install llamaindex, run
+To install LlamaIndex, run

 ```console
 $ pip install llama-index-llms-vllm -q
diff --git a/docs/source/serving/metrics.md b/docs/source/serving/metrics.md
index 2dc78643f6d8f..e6ded2e6dd465 100644
--- a/docs/source/serving/metrics.md
+++ b/docs/source/serving/metrics.md
@@ -4,7 +4,7 @@ vLLM exposes a number of metrics that can be used to monitor the health of the
 system. These metrics are exposed via the `/metrics` endpoint on the vLLM
 OpenAI compatible API server.
-You can start the server using Python, or using [Docker](deploying_with_docker.md):
+You can start the server using Python, or using [Docker](#deployment-docker):

 ```console
 $ vllm serve unsloth/Llama-3.2-1B-Instruct
diff --git a/docs/source/usage/multimodal_inputs.md b/docs/source/serving/multimodal_inputs.md
similarity index 100%
rename from docs/source/usage/multimodal_inputs.md
rename to docs/source/serving/multimodal_inputs.md
diff --git a/docs/source/serving/offline_inference.md b/docs/source/serving/offline_inference.md
new file mode 100644
index 0000000000000..0c8f90ac9cc9f
--- /dev/null
+++ b/docs/source/serving/offline_inference.md
@@ -0,0 +1,79 @@
+(offline-inference)=
+
+# Offline inference
+
+You can run vLLM in your own code on a list of prompts.
+
+The offline API is based on the {class}`~vllm.LLM` class.
+To initialize the vLLM engine, create a new instance of `LLM` and specify the model to run.
+
+For example, the following code downloads the [`facebook/opt-125m`](https://huggingface.co/facebook/opt-125m) model from HuggingFace
+and runs it in vLLM using the default configuration.
+
+```python
+llm = LLM(model="facebook/opt-125m")
+```
+
+After initializing the `LLM` instance, you can perform model inference using various APIs.
+The available APIs depend on the type of model that is being run:
+
+- [Generative models](#generative-models) output logprobs which are sampled from to obtain the final output text.
+- [Pooling models](#pooling-models) output their hidden states directly.
+
+Please refer to the above pages for more details about each API.
+
+```{seealso}
+[API Reference](/dev/offline_inference/offline_index)
+```
+
+## Configuration options
+
+This section lists the most common options for running the vLLM engine.
+For a full list, refer to the [Engine Arguments](#engine-args) page.
+
+### Reducing memory usage
+
+Large models might cause your machine to run out of memory (OOM). Here are some options that help alleviate this problem.
+
+#### Tensor Parallelism (TP)
+
+Tensor parallelism (`tensor_parallel_size` option) can be used to split the model across multiple GPUs.
+
+The following code splits the model across 2 GPUs.
+
+```python
+llm = LLM(model="ibm-granite/granite-3.1-8b-instruct",
+          tensor_parallel_size=2)
+```
+
+```{important}
+To ensure that vLLM initializes CUDA correctly, you should avoid calling related functions (e.g. {func}`torch.cuda.set_device`)
+before initializing vLLM. Otherwise, you may run into an error like `RuntimeError: Cannot re-initialize CUDA in forked subprocess`.
+
+To control which devices are used, please instead set the `CUDA_VISIBLE_DEVICES` environment variable.
+```
+
+#### Quantization
+
+Quantized models take less memory at the cost of lower precision.
+
+Statically quantized models can be downloaded from HF Hub (some popular ones are available at [Neural Magic](https://huggingface.co/neuralmagic))
+and used directly without extra configuration.
+
+Dynamic quantization is also supported via the `quantization` option -- see [here](#quantization-index) for more details.
+
+#### Context length and batch size
+
+You can further reduce memory usage by limiting the context length of the model (`max_model_len` option)
+and the maximum batch size (`max_num_seqs` option).
+
+```python
+llm = LLM(model="adept/fuyu-8b",
+          max_model_len=2048,
+          max_num_seqs=2)
+```
+
+### Performance optimization and tuning
+
+You can potentially improve the performance of vLLM by tuning various options.
+Please refer to [this guide](#optimization-and-tuning) for more details.
diff --git a/docs/source/serving/openai_compatible_server.md b/docs/source/serving/openai_compatible_server.md
index caf5e8cafd9aa..9ac4c031c46ec 100644
--- a/docs/source/serving/openai_compatible_server.md
+++ b/docs/source/serving/openai_compatible_server.md
@@ -1,8 +1,10 @@
-# OpenAI Compatible Server
+(openai-compatible-server)=

-vLLM provides an HTTP server that implements OpenAI's [Completions](https://platform.openai.com/docs/api-reference/completions) and [Chat](https://platform.openai.com/docs/api-reference/chat) API, and more!
+# OpenAI-compatible server

-You can start the server via the [`vllm serve`](#vllm-serve) command, or through [Docker](deploying_with_docker.md):
+vLLM provides an HTTP server that implements OpenAI's [Completions API](https://platform.openai.com/docs/api-reference/completions), [Chat API](https://platform.openai.com/docs/api-reference/chat), and more!
+
+You can start the server via the [`vllm serve`](#vllm-serve) command, or through [Docker](#deployment-docker):
 ```bash
 vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123
 ```
@@ -217,7 +219,7 @@ you can use the [official OpenAI Python client](https://github.com/openai/openai

 We support both [Vision](https://platform.openai.com/docs/guides/vision)- and [Audio](https://platform.openai.com/docs/guides/audio?audio-generation-quickstart-example=audio-in)-related parameters;
-see our [Multimodal Inputs](../usage/multimodal_inputs.md) guide for more information.
+see our [Multimodal Inputs](#multimodal-inputs) guide for more information.
 - *Note: `image_url.detail` parameter is not supported.*

 Code example:
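
The hunk above ends at the page's pointer to a full multimodal code example, which is not included in this diff. As a rough sketch only (the model name and image URL below are placeholders, and the server would need to be launched with a vision-capable model rather than the text-only Llama model shown earlier), such a request uses the standard OpenAI chat format with an `image_url` content part:

```python
from openai import OpenAI

# Point the official OpenAI client at the local vLLM server started with
# `--api-key token-abc123`, as in the hunk above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="token-abc123")

response = client.chat.completions.create(
    # Placeholder: any vision-capable model served by vLLM.
    model="llava-hf/llava-1.5-7b-hf",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            # Placeholder image URL; replace with a real, reachable image.
            {"type": "image_url", "image_url": {"url": "https://example.com/example.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```
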