Releases · volotat/SD-CN-Animation
v0.9
- Fixed issues #69, #76, #91, #92.
- Fixed an issue in vid2vid mode where the occlusion mask computed from the optical flow could include unnecessary regions (anywhere the flow is non-zero).
- Added 'Extra params' in vid2vid mode for more fine-grained control of the processing pipeline.
- Better default parameter set for the vid2vid pipeline.
- In txt2vid mode, after the first frame is generated, the seed is now automatically set to -1 to prevent blurring issues.
- Added an option to save resulting frames into a folder alongside the video.
- Added the ability to export current parameters in a human-readable form as JSON.
- Interpolation mode in the flow-applying stage is set to 'nearest' to reduce image blurring over time.
- Added ControlNet to txt2vid mode and fixed issue #86, thanks to @mariaWitch.
- Fixed a major issue where ControlNet used the wrong input images; because of this, vid2vid results were far worse than they should have been.
- Text-to-video mode now supports using a video as guidance for ControlNet, which allows creating much stronger video stylizations.
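The occlusion-mask fix above touches a standard technique: a forward-backward consistency check, where a pixel is marked occluded if following the forward flow and then the backward flow does not return near the starting point. The extension's actual implementation is not shown here; this is a minimal numpy sketch, with the function name and threshold chosen for illustration:

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, threshold=1.0):
    """Estimate occluded pixels via a forward-backward consistency check.

    flow_fwd, flow_bwd: (H, W, 2) arrays of (dx, dy) displacements.
    A pixel is flagged when the forward flow followed by the backward
    flow does not return close to the starting point.
    """
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel lands under the forward flow (rounded, clipped).
    tx = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # Sample the backward flow at the landing points.
    back = flow_bwd[ty, tx]
    # Round-trip error: forward + backward should roughly cancel out.
    err = np.linalg.norm(flow_fwd + back, axis=-1)
    return err > threshold
```

A consistent flow pair yields an empty mask; pixels whose round trip drifts by more than the threshold are treated as occluded.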
v0.8
- Better error handling. Fixes an issue where errors might not appear in the console.
- Fixed an issue with deprecated variables; this should resolve problems with running the extension on other webui forks.
- Slight improvements in vid2vid processing pipeline.
- Video preview added to the UI. It becomes available at the end of processing.
- Time elapsed/left indication added.
- Fixed an issue with color drifting on some models.
- Sampler type and sampling steps settings added to text2video mode.
- Added automatic resizing before processing with RAFT and FloweR models.
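On the automatic resizing before RAFT: RAFT-family models typically expect input dimensions divisible by 8. The release note says the extension resizes inputs; the sketch below shows the closely related padding approach instead, with a hypothetical helper name, purely as an illustration of the constraint:

```python
import numpy as np

def pad_to_multiple(frame, multiple=8):
    """Pad H and W up to the next multiple of `multiple` (RAFT-style
    models typically require dimensions divisible by 8). Returns the
    padded frame and the original size so the result can be cropped back.
    """
    h, w = frame.shape[:2]
    ph = (multiple - h % multiple) % multiple
    pw = (multiple - w % multiple) % multiple
    # Edge padding avoids introducing hard borders into the flow estimate.
    padded = np.pad(frame, ((0, ph), (0, pw), (0, 0)), mode="edge")
    return padded, (h, w)
```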
v0.7
- Text-to-video mode added to the extension.
- The 'Generate' button is now automatically disabled while the video is being generated.
- Added an 'Interrupt' button that allows stopping the video generation process.
- All necessary models are now downloaded automatically; no manual preparation is needed.
v0.6
- Complete rewrite of the project to make it possible to install as an Automatic1111/Web-ui extension.
- Added flow normalization before resizing, so the flow magnitude is computed correctly at different resolutions.
- Less ghosting and color drift in vid2vid mode.
- Added a "warped styled frame fix" in vid2vid mode that removes duplicates in parts of the image that cannot be relocated by the optical flow.
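The flow-normalization bullet reflects a common pitfall: a flow field stores displacements in pixels, so resizing the field without rescaling its vectors leaves the magnitudes wrong at the new resolution. A minimal numpy sketch of the idea (not the extension's actual code; nearest-neighbour sampling is used only to keep the example dependency-free):

```python
import numpy as np

def resize_flow(flow, new_h, new_w):
    """Resize an optical-flow field and rescale its vectors so the
    displacement magnitudes stay correct at the new resolution."""
    h, w = flow.shape[:2]
    # Nearest-neighbour resample of the flow grid.
    ys = (np.arange(new_h) * h / new_h).astype(int)
    xs = (np.arange(new_w) * w / new_w).astype(int)
    out = flow[ys][:, xs].astype(np.float32)
    # Displacements are in pixels, so they scale with each axis.
    out[..., 0] *= new_w / w  # x displacements scale with width
    out[..., 1] *= new_h / h  # y displacements scale with height
    return out
```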
v0.5
- Fixed an issue where the optical flow was applied to an image in the wrong direction.
- Added text-to-video mode via the txt2vid.py script. Make sure to update the dependencies for this script to work!
- Added a threshold for the optical flow before processing a frame, removing white noise that might appear, as suggested by @alexfredo.
- Background removal at the flow computation stage, implemented by @CaptnSeraph; it should reduce the ghosting effect in most videos processed with the vid2vid script.
v0.4
- Fixed issue with extreme blur accumulating at the static parts of the video.
- The order of processing was changed to achieve the best quality across different domains.
- Optical flow computation was isolated into a separate script for better GPU memory management. Check out the instructions for the new processing pipeline.
v0.3
- Flow estimation algorithm updated to the RAFT method.
- The difference map is now computed as the per-pixel maximum of the difference between the warped first frame and the second frame of the original video, and the occlusion map computed from forward and backward flow estimation.
- Added keyframe detection that eliminates ghosting artifacts between scenes.
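One plausible reading of the difference-map bullet, sketched in numpy — the names, the channel reduction, and the [0, 1] scaling are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def difference_map(warped_prev, frame, occlusion):
    """Per-pixel maximum of (a) the colour difference between the warped
    previous frame and the current frame, scaled to [0, 1], and (b) the
    occlusion map from forward/backward flow, assumed already in [0, 1]."""
    diff = np.abs(warped_prev.astype(np.float32) - frame.astype(np.float32))
    diff = diff.max(axis=-1) / 255.0  # collapse colour channels
    return np.maximum(diff, occlusion.astype(np.float32))
```

Taking the maximum means a pixel is flagged for regeneration if either the warp disagrees with the new frame or the flow marks it as occluded.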