Thanks for your awesome work on Otter-HD! I have a question about the resizing operation in Fuyu. Since fuyu-8b claims the model can accept images of any resolution as input, why does the code first resize all images to [1080, 1920]? Why not just keep the original size?
One possible explanation is to keep the input length constant, but appending pad tokens after the text tokens would also achieve that. Could you help answer this question? Thanks in advance :)
Correct me if I am wrong, but I believe the image is not resized unless it is larger than 1080x1920; instead, it is padded to those dimensions. So an image of size (w, h) is padded out to 1080x1920. Then, to handle variable-sized images, the processor finds the smallest integers w1 and h1 that are at least w and h respectively and divisible by the patch size (30).
Since the image has been padded to 1080x1920, it is safe to take the first w1 columns and h1 rows from the padded tensor.
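To make the logic concrete, here is a minimal sketch of the pad-then-crop behavior described above. The function name, pad value, and constants are my own for illustration; this is not the actual processor code:

```python
import math
import torch
import torch.nn.functional as F

PATCH_SIZE = 30            # Fuyu's patch size
MAX_H, MAX_W = 1080, 1920  # the fixed canvas used by the processor

def pad_and_crop(image: torch.Tensor) -> torch.Tensor:
    """image: (C, h, w) tensor with h <= 1080 and w <= 1920.

    Pads the image up to the 1080x1920 canvas, then crops back down to
    the smallest patch-aligned size (h1, w1), so only the original
    content plus minimal padding gets turned into patches.
    """
    _, h, w = image.shape
    # Pad on the right/bottom up to the canvas size (the pad value here
    # is a placeholder; the real processor may use a different constant).
    padded = F.pad(image, (0, MAX_W - w, 0, MAX_H - h), value=1.0)
    # Smallest multiples of the patch size that cover the original dims.
    h1 = math.ceil(h / PATCH_SIZE) * PATCH_SIZE
    w1 = math.ceil(w / PATCH_SIZE) * PATCH_SIZE
    # Safe because the canvas is at least (h1, w1) in both dimensions.
    return padded[:, :h1, :w1]
```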
However, this means that for any benchmark whose images are all no larger than 1080x1920, the model's performance should be exactly the same in the fixed 1080x1920 and the "variable size" scenarios? 🤷