Issues: huggingface/optimum
Pinned issues:
- #555 Community contribution - optimum.exporters.onnx support fo... (Open; opened Dec 7, 2022 by michaelbenayoun; 41 comments)
- #488 Community contribution - BetterTransformer integration for ... (Open; opened Nov 18, 2022 by younesbelkada; 25 comments)
- #568 [Quick poll] Give your opinion on the future of the Hugging F... (Open; opened Dec 9, 2022 by LysandreJik)
Issues list:
- #2083 Please don't kill BetterTransformer — 1.88x faster inference than SDPA (opened Oct 28, 2024 by umarbutler)
- #2082 Support onnx conversion for wav2vec2-bert [bug] (opened Oct 27, 2024 by fawazahmed0; 2 of 4 tasks)
- #2080 "ValueError: Trying to export a codesage model" while trying to export codesage/codesage-large [bug] (opened Oct 25, 2024 by TurboEncabulator9000; 1 of 4 tasks)
- #2079 Llama 3.2 Vision - unable to convert [bug] (opened Oct 24, 2024 by pdufour; 4 tasks)
- #2076 Problem converting tinyllama to onnx model with optimum-cli [bug] (opened Oct 22, 2024 by hayyaw; 2 of 4 tasks)
- #2075 Problem converting DeBERTaV3 to ONNX using optimum-cli [bug] (opened Oct 21, 2024 by marcovzla; 2 of 4 tasks)
- #2069 High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release [question] (opened Oct 19, 2024 by niyathimariya; 2 of 4 tasks)
- #2068 Conversion inaccuracy in specific Opus-MT model [bug] (opened Oct 18, 2024 by FricoRico; 2 of 4 tasks)
- #2060 Support int8 tinyllama tflite export [feature-request] (opened Oct 15, 2024 by hayyaw)
- #2035 Vision model's input size specified on the command line is overridden by the pretrained model config [exporters, onnx] (opened Sep 29, 2024 by waterdropw)
- #2034 Add BetterTransformer support for ESM Huggingface model for protein folding [bettertransformer] (opened Sep 28, 2024 by rakeshr10)
- #2032 ONNX support for decision transformers [onnx] (opened Sep 20, 2024 by ra9hur)
- #2026 [ONNX] Use the dynamo=True option from PyTorch 2.5 [onnx] (opened Sep 17, 2024 by justinchuby)
- #2018 Adding Support for DETA Model [onnx] (opened Sep 8, 2024 by TheMattBin)
- #2009 [Feature request] Add kwargs or additional options for torch.onnx.export [onnx] (opened Sep 3, 2024 by martinkorelic)
- #2008 Optional subfolder if model repository contains one ONNX model behind a subfolder [onnx] (opened Sep 3, 2024 by tomaarsen)
- #2006 Support for gemma2-2b-it (gemma 2nd version) Model Export in Optimum for OpenVINO [onnx] (opened Sep 3, 2024 by chakka12345677)
- #2004 Support for jinaai/jina-reranker-v2-base-multilingual model [bug, onnxruntime] (opened Aug 30, 2024 by bash99; 2 of 4 tasks)
- #2002 Is it possible to infer the model separately through encoder.onnx and decoder.onnx [onnx] (opened Aug 29, 2024 by pengpengtao)
- #1997 NameError: name '_SENTENCE_TRANSFORMERS_TASKS_TO_MODEL_LOADERS' is not defined [bug, onnx] (opened Aug 25, 2024 by purejomo; 2 of 4 tasks)