Add Robust Video Matting #223
base: master
Conversation
✔️ Deploy Preview for pytorch-hub-preview ready!
🔨 Explore the source changes: 7dd9984
🔍 Inspect the deploy log: https://app.netlify.com/sites/pytorch-hub-preview/deploys/6144be1e21202d00075b3e5e
😎 Browse the preview: https://deploy-preview-223--pytorch-hub-preview.netlify.app
DO NOT MERGE this one yet. We are temporarily making some changes in the RVM repo. It will take some time to be available again.
Sounds good, I marked the PR as draft so we don't merge it accidentally. Please ping me when it's ready. BTW, as a follow-up to your previous issue #224, the CI should be reliable now :)
This pull request is ready to merge again.
@NicolasHug Can you approve the merge?
@NicolasHug Hi Nicolas. Any updates?
Thanks a lot @PeterL1n and sorry for the lag. This looks great! I'm just wondering if we could slightly modify the code snippets so that we can actually run the tests?
* `downsample_ratio` is an important hyperparameter. See [inference documentation](https://github.com/PeterL1n/RobustVideoMatting/blob/master/documentation/inference.md) for more detail.

### References
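The `downsample_ratio` hint quoted above could be made concrete with a small sketch. This is an illustrative heuristic only (the exact recommended values live in the RVM inference documentation): scale the longer side of the frame toward roughly 512 pixels, and never upsample.

```python
def suggest_downsample_ratio(height, width, target=512):
    """Illustrative heuristic (not necessarily the official formula):
    downsample so the longer side is about `target` pixels, capped at 1
    so small inputs are never upsampled."""
    return min(target / max(height, width), 1.0)

print(suggest_downsample_ratio(1080, 1920))  # Full-HD clip -> ratio below 1
print(suggest_downsample_ratio(270, 480))    # small clip -> 1.0, no resizing
```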
On top of the BibTeX, would you mind adding a link to the arXiv page of the paper as well?
```python
convert_video(
    model,                     # The loaded model, can be on any device (cpu or cuda).
    input_source='input.mp4',  # A video file or an image sequence directory.
```
Would it be possible to just download the same demo video as in https://colab.research.google.com/drive/10z-pNKRnVNsp0Lq9tH1J_XPZ7CBC_uHm?usp=sharing#scrollTo=-iJwFwqUI9Az?
This way, if running the model doesn't take too long, we'd be able to properly test the file instead of skipping it.
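For reference, here is a rough sketch of what a runnable version of the doc snippet could look like, wrapped in a function so nothing heavy executes at import time. The hub entry-point names follow the RobustVideoMatting README; the input path is a hypothetical demo clip, and actually running this needs `torch`, network access, and ideally a GPU.

```python
def run_rvm_demo(input_path="input.mp4"):
    """Sketch of the documented workflow (assumption: hub entry points
    `mobilenetv3` and `converter` as in the RVM README; `input_path`
    is a hypothetical demo clip that CI would download beforehand)."""
    import torch

    model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3")
    convert_video = torch.hub.load("PeterL1n/RobustVideoMatting", "converter")

    device = "cuda" if torch.cuda.is_available() else "cpu"
    convert_video(
        model.to(device),
        input_source=input_path,       # a video file or an image sequence dir
        output_composition="com.mp4",  # composited output video
    )
```

Keeping the heavy work inside a function also makes it easy for a doc test to import the snippet without running inference.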
Do we have a GPU in the test environment? Otherwise the inference will be too slow.
Yes, there's a GPU in the CI. I agree we should try to keep the CI test time within a reasonable limit though.
I don't think I'll have time to make the changes soon. If we can just merge this, I would really appreciate it.
There's no rush on our side @PeterL1n. Without tests, we have no guarantees that the models we are showcasing on the website run properly, which in the long run can erode the trust our users put in torchhub. It seems that enabling the test would only involve a simple change as suggested above, so I would prefer waiting until then if you don't mind.
Adding https://github.com/PeterL1n/RobustVideoMatting
Disabled CI check because the documentation shows how to run our model against videos, but the videos don't exist in the CI environment, so the test will fail.
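One way to handle the missing-video situation described above, rather than disabling the check outright, is to skip the snippet when the clip is absent. This is only a sketch under the assumption that the docs snippet reads from a fixed path (`input.mp4` here is hypothetical):

```python
import os

DEMO_VIDEO = "input.mp4"  # hypothetical path the docs snippet reads from

def video_available(path=DEMO_VIDEO):
    """Exercise the video snippet only when the clip actually exists in
    the test environment; otherwise skip instead of failing the CI."""
    return os.path.exists(path)

if video_available():
    print("demo clip found: run the snippet")
else:
    print("demo clip missing: skip the snippet")
```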