How to improve result? #131
Comments
Hi @hardikdava, there are a lot of parameters that can affect image sharpness. Since this is your private dataset, I can only offer some ideas:
```yaml
adaptive-controller-config:
  # Threshold on the gradient of point positions in the camera plane for all affected pixels.
  # Decreasing it -> more points, slower training/rendering, better quality.
  densification-view-space-position-gradients-threshold: 6e-6
  # How often points are split/cloned; adjust this based on dataset size.
  num-iterations-densify: 100
  # How often all point opacities are reset. If you have more images, you may want to
  # increase this value, otherwise points may still be transparent. Also note that
  # rendering quality drops dramatically for a few iterations after each reset.
  num-iterations-reset-alpha: 4000
```

About autotuning tools: sure, you can try to search for the best parameters in the YAML config.
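If you want to automate that search, a very rough sketch of a parameter sweep could look like the following. The training script name (`gaussian_point_train.py`) and its `--train_config` flag are assumptions about the local setup, not something confirmed above; adapt them to however you launch training, and compare the resulting PSNR/loss per run.

```python
import copy
import subprocess
import yaml

# Hypothetical paths/flags -- adjust to the actual training entry point.
BASE_CONFIG = "config/my_dataset.yaml"
TRAIN_SCRIPT = "gaussian_point_train.py"  # assumed script name

with open(BASE_CONFIG) as f:
    base_cfg = yaml.safe_load(f)

# Sweep the densification threshold around the suggested 6e-6.
for threshold in (4e-6, 6e-6, 8e-6):
    cfg = copy.deepcopy(base_cfg)
    cfg["adaptive-controller-config"][
        "densification-view-space-position-gradients-threshold"] = threshold

    out_path = f"config/sweep_{threshold:.0e}.yaml"
    with open(out_path, "w") as f:
        yaml.safe_dump(cfg, f)

    # Launch one training run per candidate value; compare metrics afterwards.
    subprocess.run(["python", TRAIN_SCRIPT, "--train_config", out_path], check=True)
```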
@wanmeihuali Thanks for your reply. I will get back to you on this with the result.
@wanmeihuali I got better results after tuning the parameters as you suggested.
@hardikdava could you try using a white background for the segmented images? Not sure if those wash-out effects still exist with that.
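For reference, a minimal sketch of compositing a segmented RGBA image onto a white background might look like this; it assumes the segmentation mask is stored in the alpha channel, and the file names are placeholders.

```python
import cv2
import numpy as np

def composite_on_white(rgba_path: str, out_path: str) -> None:
    """Blend an RGBA segmented image onto a white background."""
    img = cv2.imread(rgba_path, cv2.IMREAD_UNCHANGED)  # H x W x 4 (BGRA)
    if img is None or img.ndim != 3 or img.shape[2] != 4:
        raise ValueError(f"{rgba_path} is not a 4-channel image")

    bgr = img[:, :, :3].astype(np.float32)
    alpha = img[:, :, 3:4].astype(np.float32) / 255.0

    # out = alpha * foreground + (1 - alpha) * white
    white = np.full_like(bgr, 255.0)
    out = alpha * bgr + (1.0 - alpha) * white
    cv2.imwrite(out_path, out.astype(np.uint8))

composite_on_white("masked_view.png", "masked_view_white.png")
```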
Hello @wanmeihuali, I have started running some experiments with the code. The implementation is really good and clean, I like it. I have a couple of questions regarding result quality.
[Image: left = rendered view, right = ground-truth view]
Questions:
P.S. I am happy to provide a PR if something improves the result.