This project provides a TensorRT implementation for fast image upscaling inside ComfyUI (3-4x faster)
This project is licensed under CC BY-NC-SA; everyone is free to access, use, modify, and redistribute it under the same license.
For commercial purposes, please contact me directly at [email protected]
If you like the project, please give me a star! ⭐
Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 100 frames
Device | Model | Input Resolution (WxH) | Output Resolution (WxH) | FPS |
---|---|---|---|---|
L40s | RealESRGAN_x4 | 512 x 512 | 2048 x 2048 | 5 |
L40s | RealESRGAN_x4 | 960 x 540 | 3840 x 2160 | 2 |
L40s | RealESRGAN_x4 | 1280 x 1280 | 5120 x 5120 | 0.7 |
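For context, FPS here is simply the number of frames divided by the total wall-clock inference time. The snippet below is a minimal, generic timing sketch, not the benchmark harness used for the table above; `upscale` and `my_upscaler` are hypothetical placeholders for whatever callable wraps the TensorRT engine.

```python
import time
import torch

def benchmark_fps(upscale, frames, warmup=5):
    """Return frames per second for `upscale` over `frames`.

    `upscale` is a hypothetical placeholder for the callable that wraps the
    TensorRT engine; it is not part of this repository's API.
    """
    for f in frames[:warmup]:        # warm-up passes, excluded from timing
        upscale(f)
    torch.cuda.synchronize()         # flush pending GPU work before starting the clock
    start = time.perf_counter()
    for f in frames:
        upscale(f)
    torch.cuda.synchronize()         # wait for the last kernel before stopping the clock
    return len(frames) / (time.perf_counter() - start)

# e.g. 100 random FP16 512x512 frames, matching the first row of the table (requires a CUDA GPU)
frames = [torch.rand(1, 3, 512, 512, device="cuda", dtype=torch.float16) for _ in range(100)]
# fps = benchmark_fps(my_upscaler, frames)   # `my_upscaler` is hypothetical
```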
Navigate to the ComfyUI `/custom_nodes` directory and run:

    git clone https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt.git
    cd ./ComfyUI-Upscaler-Tensorrt
    pip install -r requirements.txt
- Download one of the available onnx models. These onnx models support dynamic image resolutions from 256x256 to 1280x1280 px (e.g. 960x540, 512x512, 1280x720 etc). Here are the original models:
- Run `python export_trt.py` and set the onnx/engine paths accordingly (a rough, hedged sketch of what this conversion involves is shown after this list).
- Place the exported engine inside the ComfyUI `/models/tensorrt/upscaler` directory.
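For orientation, the following is a minimal sketch of an ONNX-to-TensorRT conversion using the TensorRT Python API, with FP16 enabled and a dynamic-shape profile matching the 256x256 to 1280x1280 range above. It is a generic illustration, not the actual contents of `export_trt.py`; the onnx/engine file names are placeholders.

```python
import tensorrt as trt

ONNX_PATH = "realesrgan_x4.onnx"        # placeholder: the onnx model you downloaded
ENGINE_PATH = "realesrgan_x4.engine"    # placeholder: goes into ComfyUI/models/tensorrt/upscaler

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)     # flags=0: explicit batch is the default in TensorRT 10
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # FP16 engine, as used for the benchmarks above

# Dynamic spatial shapes: 256x256 (min) to 1280x1280 (max), with 512x512 as the optimum
profile = builder.create_optimization_profile()
input_name = network.get_input(0).name  # read the tensor name instead of hard-coding it
profile.set_shape(input_name, (1, 3, 256, 256), (1, 3, 512, 512), (1, 3, 1280, 1280))
config.add_optimization_profile(profile)

engine = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(engine)
```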
- Insert the node via `Right Click -> tensorrt -> Upscaler Tensorrt`
- Choose the appropriate engine from the dropdown
- Only models with the ESRGAN architecture currently work
- High RAM usage when exporting `.pth` to `.onnx` (a hedged sketch of such an export is shown further below)
- Ubuntu 22.04 LTS, CUDA 12.3, TensorRT 10.0.1, Python 3.10, L40s GPU
- Windows 11
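For reference, a `.pth` to `.onnx` export for a RealESRGAN-style (RRDBNet) x4 model typically looks like the sketch below, with height and width marked dynamic so the resulting onnx model accepts the 256x256 to 1280x1280 range used above; the high RAM usage noted above occurs during this step. The checkpoint name, RRDBNet configuration, and tensor names are illustrative assumptions, not this project's actual export code.

```python
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet  # RRDBNet is the ESRGAN/RealESRGAN backbone

# Assumed x4 RRDBNet configuration and checkpoint name -- adjust to the model you actually use
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
state = torch.load("RealESRGAN_x4plus.pth", map_location="cpu")
model.load_state_dict(state.get("params_ema", state))  # RealESRGAN checkpoints often store EMA weights
model.eval()

# Dummy input; spatial axes are dynamic so one onnx file covers all supported resolutions
dummy = torch.rand(1, 3, 512, 512)
torch.onnx.export(
    model,
    dummy,
    "realesrgan_x4.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {2: "height", 3: "width"},
                  "output": {2: "height", 3: "width"}},
    opset_version=17,
)
```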
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)