Remove cuda graph batch size adjustment for dp attention (#2484)
ispobock authored Dec 14, 2024
1 parent fccbfa3 commit 0ba2c58
Showing 1 changed file with 0 additions and 2 deletions.
2 changes: 0 additions & 2 deletions in python/sglang/srt/server_args.py

@@ -221,12 +221,10 @@ def __post_init__(self):
         if self.enable_dp_attention:
             self.dp_size = self.tp_size
             self.chunked_prefill_size = self.chunked_prefill_size // 2
-            self.cuda_graph_max_bs = min(self.cuda_graph_max_bs, 96)
             self.schedule_conservativeness = self.schedule_conservativeness * 0.3
             self.disable_overlap_schedule = True
             logger.warning(
                 f"DP attention is enabled. The chunked prefill size is adjusted to {self.chunked_prefill_size} to avoid MoE kernel issues. "
-                f"The CUDA graph max batch size is adjusted to {self.cuda_graph_max_bs}. "
                 f"The schedule conservativeness is adjusted to {self.schedule_conservativeness}. "
                 "Data parallel size is adjusted to be the same as tensor parallel size. "
                 "Overlap scheduler is disabled."
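The change leaves the rest of the DP-attention adjustments in `__post_init__` intact: `dp_size` mirrors `tp_size`, the chunked prefill size is halved, schedule conservativeness is scaled by 0.3, and the overlap scheduler is disabled; only the cap on `cuda_graph_max_bs` is gone. A minimal sketch of the post-change behavior, using a simplified stand-in dataclass with hypothetical default values (not sglang's actual `ServerArgs`, which has many more fields):

```python
from dataclasses import dataclass


@dataclass
class ServerArgs:
    """Simplified stand-in for sglang's ServerArgs; defaults are illustrative."""

    tp_size: int = 8
    dp_size: int = 1
    chunked_prefill_size: int = 8192
    schedule_conservativeness: float = 1.0
    cuda_graph_max_bs: int = 160  # no longer capped to 96 when DP attention is on
    enable_dp_attention: bool = False
    disable_overlap_schedule: bool = False

    def __post_init__(self):
        if self.enable_dp_attention:
            # DP attention requires dp_size == tp_size.
            self.dp_size = self.tp_size
            # Halve chunked prefill to avoid MoE kernel issues.
            self.chunked_prefill_size = self.chunked_prefill_size // 2
            # Schedule more aggressively under DP attention.
            self.schedule_conservativeness = self.schedule_conservativeness * 0.3
            # Overlap scheduler is not supported with DP attention.
            self.disable_overlap_schedule = True


args = ServerArgs(enable_dp_attention=True)
print(args.dp_size, args.chunked_prefill_size, args.cuda_graph_max_bs)
```

Note that `cuda_graph_max_bs` passes through unchanged, which is exactly what this commit accomplishes: users keep whatever CUDA graph batch-size limit they configured instead of having it silently clamped to 96.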
