Update torchbench pin on PyTorch CI (#2584)
Summary:
I'm adding sam2 to TorchBench in #2566, so I'm updating PyTorch CI to use the latest TorchBench commit.

This goes together with pytorch/pytorch#145455. The fixes include:

* Add `decoder_start_token_id` to the HF `GenerationConfig`. This requirement was introduced by transformers 4.41.0 (huggingface/transformers#30892); see the sketch after this list.
* Run `sam_fast` on A10G, as it seems to work correctly now.
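
A minimal sketch of the `GenerationConfig` fix, assuming a generic encoder-decoder model (the model choice and input are illustrative stand-ins, not the TorchBench harness code; the config fields mirror the diff below):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

# Illustrative model and input; any encoder-decoder HF model works here.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
input_ids = tokenizer("hello world", return_tensors="pt").input_ids

generation_config = GenerationConfig(
    do_sample=False,
    num_beams=1,
    use_cache=True,
    # Per huggingface/transformers#30892, transformers 4.41.0 started requiring
    # an explicit decoder_start_token_id for some encoder-decoder models.
    decoder_start_token_id=0,
)
output = model.generate(input_ids, generation_config=generation_config)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```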

Pull Request resolved: #2584

Reviewed By: xuzhao9

Differential Revision: D68837682

Pulled By: huydhn

fbshipit-source-id: d9bda87384bac680b08fd2ea200ca7691c24a7e9
huydhn authored and facebook-github-bot committed Jan 29, 2025
1 parent 5018404 commit 0e370a0
Showing 2 changed files with 5 additions and 7 deletions.
torchbenchmark/models/sam_fast/metadata.yaml (3 additions & 7 deletions):

```diff
@@ -7,11 +7,7 @@ eval_nograd: true
 train_benchmark: false
 train_deterministic: false
 not_implemented:
-- device: cpu
-- device: cuda
-  test: example
-# eval test exceeds 300s time limit on A10G
-# https://github.com/pytorch/benchmark/actions/runs/8972426901/job/24640395990
-- device: NVIDIA A10G
-  test: eval
+- device: cpu
+- device: cuda
+  test: example
 skip_cuda_memory_leak: true
```
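
For context, here is a hypothetical sketch of how a harness might interpret these `not_implemented` entries; the function and its signature are illustrative, not TorchBench's actual API. An entry with only `device` skips that device for every test, while an entry that also carries a `test` key narrows the skip to that one test:

```python
# Hypothetical matcher for not_implemented entries; illustrative only.
def is_not_implemented(entries, device: str, test: str) -> bool:
    for entry in entries:
        device_matches = entry.get("device") in (None, device)
        test_matches = entry.get("test") in (None, test)  # no "test" key: all tests
        if device_matches and test_matches:
            return True
    return False

entries = [{"device": "cpu"}, {"device": "cuda", "test": "example"}]
assert is_not_implemented(entries, "cpu", "eval")       # cpu skipped everywhere
assert is_not_implemented(entries, "cuda", "example")   # cuda skipped for example only
assert not is_not_implemented(entries, "cuda", "eval")  # cuda eval runs
```

With the `NVIDIA A10G` entry removed, the `eval` test is no longer skipped on A10G, matching the second fix in the summary.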
torchbenchmark/util/framework/huggingface/model_factory.py (2 additions & 0 deletions):

```diff
@@ -165,6 +165,8 @@ def __init__(self, name, test, device, batch_size=None, extra_args=[]):
             do_sample=False,
             num_beams=1,
             use_cache=True,
+            # Introduced by transformers 4.41.0 https://github.com/huggingface/transformers/issues/30892
+            decoder_start_token_id=0,
         )
         self.model = GenerationWrapper(self.model, generation_config)
         self.example_inputs = (self.example_inputs["input_ids"],)
```
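
`GenerationWrapper` itself is defined elsewhere in `model_factory.py` and is not part of this diff. A plausible minimal shape, for illustration only, wraps the model so that a plain forward call drives `generate` with the stored config, which would also explain why `example_inputs` is narrowed to a single `input_ids` tensor:

```python
import torch

# Hypothetical minimal wrapper, for illustration only; the real
# GenerationWrapper in model_factory.py may differ.
class GenerationWrapper(torch.nn.Module):
    def __init__(self, model, generation_config):
        super().__init__()
        self.model = model
        self.generation_config = generation_config

    def forward(self, input_ids):
        # The benchmark calls the wrapper like a regular model; the call is
        # routed to generate() with the shared GenerationConfig.
        return self.model.generate(
            input_ids, generation_config=self.generation_config
        )
```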
