Development Roadmap (2024 Q3) #634

Closed · 15 of 29 tasks
Ying1123 opened this issue Jul 17, 2024 · 19 comments

Comments

@Ying1123 (Member) commented Jul 17, 2024

Here is the development roadmap for 2024 Q3. Contributions and feedback are welcome.

Server API

Performance

Parallelism

Quantization

Observability

Model Coverage

Hardware Coverage

Language API

LoRA Support

Usage examples

Others

Ying1123 pinned this issue Jul 17, 2024
@zhyncs (Member) commented Jul 17, 2024

Support W8A4 quantization with fp8 activation and int4 weight.

typo: W8A4 -> W4A8

@Ying1123 (Member, Author)

Support W8A4 quantization with fp8 activation and int4 weight.

typo: W8A4 -> W4A8

Thanks! Changed.

@LinqingZhong

May I ask if there is an example of using llava-next-interleave with multiple images?

@anatoli26

I guess ROCm support is under Hardware Coverage - AMD support. Any ETA for this?

@usaxena-asapp

Hey @Ying1123 - are you okay with open source contributions from developers outside the core team? Looking to find more places I can contribute and I'm excited about SGLang. Just wondering.

@Ying1123 (Member, Author)

Hey @Ying1123 - are you okay with open source contributions from developers outside the core team? Looking to find more places I can contribute and I'm excited about SGLang. Just wondering.

Hi @usaxena-asapp, definitely! There is no strict definition of a "core team"; I'm just a volunteer who coordinates. If you contribute a lot, you are a core member! Let me know if you need any help from people with experience. My suggestion is to start with small issues and PRs and to join discussions. If you want to take on something big, begin with a simple proposal to invite collaboration from the community.

@Ying1123 (Member, Author)

I guess ROCm support is under Hardware Coverage - AMD support. Any ETA for this?

Hi @anatoli26, thanks for the question. It is listed in the roadmap, but we might start with just some basic tests; optimizations will depend on how many people and resources we can get.

@anatoli26

we list it in the roadmap, but we might just start with some basic tests. Optimizations will depend on how many people and resources we can get.

Have you tried talking to AMD about hardware samples (e.g. a pair of W7900s) and software collaboration? They are trying hard to be on par with NVIDIA in the software stack: "AMD is Becoming a Software Company. Here's the Plan." The author of that article has great connections with AMD people; maybe you could write to him (W1zzard, under the title) to ask for contacts at AMD responsible for relations with FOSS projects.

@ghchris2021

I don't know if there's any interest in broadening what "Hardware Coverage" covers, but in case it raises ideas to consider in the future:

You mention CPU support and AMD support, but there are higher-level frameworks that may considerably help with supporting different hardware backends (CPU, GPU), so you don't necessarily have to put as much work into any one specific backend -- they can ease, or largely solve, running on more than one target for the same effort. For instance, OpenCL, SYCL, Vulkan compute, and perhaps OpenACC are reasonably portable parallel-computing frameworks, and each supports at least a couple of CPUs and GPUs, if not several.

IIRC OpenCL can run on NVIDIA, AMD, and Intel GPUs, as well as Intel, AMD, and I think some ARM CPUs.

IIRC SYCL runs on Intel GPUs, Intel/AMD CPUs, and I believe also NVIDIA GPUs. It may run on AMD GPUs, but I'm not so sure about that.

There are also higher-level frameworks and implementations that encapsulate these open standards and provide tooling for them, e.g.

https://github.com/AdaptiveCpp/AdaptiveCpp

which targets SYCL but also provides C++ std:: parallelism programming models. POCL, RustiCL, and several other development packages (from Intel, AMD, NVIDIA, ...) provide functional, compatible OpenCL support on particular platforms.

Besides the NVIDIA and AMD GPUs, Intel has generations of data-center, enterprise, business, and consumer-grade GPUs that are strong in their capabilities, and they offer the same tooling and documentation across the product line for things like SYCL, oneAPI, OpenVINO, DPC++, and libraries like oneDNN, covering both the GPU families and CPUs.

There are also Vulkan wrappers and higher-level middleware that encapsulate the details of Vulkan compute programming and expose easier-to-use developer interfaces for general parallel compute: math, matrix/vector, and NN-related work.

IIRC all major GPUs (NVIDIA, AMD, Intel) have Vulkan-compatible runtimes and development options available, as do several ARM SoC GPUs. So Vulkan as a middleware layer could support numerous platforms for a single quantum of effort, by targeting Vulkan-based operations for the main memory, NN, and linear-algebra calculations that can be accelerated.

So I'm just suggesting reaching for tools that target multiple standards-based platforms, if that eases your work and also broadens and accelerates support for more platforms.
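
To make the suggestion above concrete, here is a minimal sketch of the kind of single-source, vendor-neutral kernel these frameworks enable. It uses pyopencl purely as an illustration (pyopencl and an installed OpenCL runtime are assumptions, not something SGLang uses today); the same kernel source runs unchanged on NVIDIA, AMD, or Intel GPUs, or on CPU OpenCL drivers such as POCL.

```python
# Minimal portable-kernel sketch, assuming pyopencl and any OpenCL runtime.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # picks whatever OpenCL device is available
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The same kernel source compiles for any conformant OpenCL device.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```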

@CSEEduanyu

I noticed that speculative decoding has been implemented in the branch at https://github.com/sgl-project/sglang/pull/270/commits. Why was that PR closed? How long will it take to support speculative decoding? Thank you for your reply.

@TimDettmers

This is an awesome project! Thank you for this. @Ying1123 I am interested in using SGLang for multi-LoRA deployments for a project. The alternative is currently vLLM, but I like SGLang better. I am curious about the current state and timeline for supporting S-LoRA-like deployment.

@Ying1123 (Member, Author)

This is an awesome project! Thank you for this. @Ying1123 I am interested in using SGLang for multi-LoRA deployments for a project. The alternative is currently vLLM, but I like SGLang better. I am curious about the current state and timeline for supporting S-LoRA-like deployment.

Hi @TimDettmers, happy to hear from you! I got the same request from another SGLang user, so I am actively working on the multi-LoRA module, which is expected to have a first runnable version in a week. You are welcome to join our Slack and send me your sample script!
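
Since several people asked about this, here is a hypothetical sketch of what multi-LoRA serving could look like from the client side. Nothing here is taken from the (not yet released) module discussed above: the --lora-paths launch flag, the adapter names, and the lora_path request field are assumptions, and the /generate call mirrors SGLang's native HTTP API as I understand it.

```python
# Hypothetical multi-LoRA serving sketch. The launch flag, adapter names, and
# the "lora_path" request field are assumptions, not a documented API.
#
# Assumed server launch with two adapters registered under short names:
#   python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-hf \
#       --lora-paths sql_adapter=/ckpts/lora_sql chat_adapter=/ckpts/lora_chat

import requests

def generate(prompt: str, adapter: str) -> str:
    """Send one request, asking the server to apply a specific LoRA adapter."""
    resp = requests.post(
        "http://127.0.0.1:30000/generate",
        json={
            "text": prompt,
            "sampling_params": {"max_new_tokens": 64, "temperature": 0},
            "lora_path": adapter,  # assumed per-request adapter selector
        },
    )
    resp.raise_for_status()
    return resp.json()["text"]

# Different requests in the same batch can target different adapters,
# which is the S-LoRA-style behavior being discussed.
print(generate("Translate to SQL: list all users.", "sql_adapter"))
print(generate("Write a friendly greeting.", "chat_adapter"))
```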

@hxer7963 (Contributor) commented Sep 7, 2024

Is there any plan to support W8A8 quantization?

@brotherchen

Can the current excellent performance (compared to vLLM) be attributed to strong engineering (such as using multiple processes to reduce CPU overhead) and more efficient scheduling strategies? I would also like to know whether support for pipeline parallelism is being considered.

@merrymercy (Contributor)

@hxer7963 fp8 W8A8 quantization is supported.

@brotherchen Yes. Pipeline parallelism is planned for Q4.
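
For anyone reading this later, a minimal sketch of trying the fp8 path is below. The launch flag, the OpenAI-compatible endpoint, and the "default" model alias are assumptions based on the public SGLang server interface rather than anything stated in this thread, so check the current docs before relying on them.

```python
# Minimal sketch, assuming the server was started with fp8 quantization, e.g.:
#   python -m sglang.launch_server --model-path <your-model> --quantization fp8 --port 30000
# The OpenAI-compatible endpoint and the "default" model alias are assumptions.

import openai

client = openai.OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="default",  # assumed alias for the single served model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(completion.choices[0].message.content)
```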

@CedricHwong

I'm planning to use SGLang on Intel Gaudi 2 but have not tried it yet. What is the current level of support?

@mingfeima

I'm planning to use SGLang on Intel Gaudi 2 but have not tried it yet. What is the current level of support?

@xinyu-intel we don't have bindings for Gaudi 2 yet, right?

@xinyu-intel

@CedricHwong @mingfeima Hi, glad to see the requests for sglang on Intel Gaudi. Currently, it's not implemented and we are evaluating the feasibility.

zhyncs unpinned this issue Oct 11, 2024
@zhyncs (Member) commented Nov 1, 2024

Most of the list in this 2024 Q3 roadmap has been completed, and the unfinished parts have been migrated to the 2024 Q4 roadmap. This issue is now closed. For those interested in the latest roadmap, please follow #1487

zhyncs closed this as completed Nov 1, 2024
ispobock pushed a commit to ispobock/sglang that referenced this issue Nov 25, 2024
ispobock pushed a commit to ispobock/sglang that referenced this issue Nov 25, 2024