Speeding up training stage #130
Comments
I think so. It's on the previous TODO list, but I don't have that much time to implement it. It shouldn't be difficult: we just need to add a new dimension to the sort key (tile uv to duv) and also add that dimension in rasterization/backward. As we currently use less than 4 GB of GPU memory, I think it should help.
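For illustration, here is a minimal sketch of what extending the sort key with a batch dimension could look like. This is plain NumPy, not the repo's actual rasterizer code; the bit widths and the names `make_sort_key`, `batch_idx`, `tile_id`, and `depth_key` are assumptions that would need to match the real key layout. The idea is to pack a batch/camera index into the high bits so points sort first by image in the batch, then by tile, then by depth.

```python
import numpy as np

# Assumed bit layout of the 64-bit sort key (hypothetical, for illustration only):
DEPTH_BITS = 32   # low bits: depth within a tile
TILE_BITS = 20    # middle bits: tile id
BATCH_BITS = 8    # new high bits: batch/camera index

def make_sort_key(batch_idx: int, tile_id: int, depth_key: int) -> int:
    """Pack (batch, tile, depth) into a single 64-bit integer sort key."""
    assert batch_idx < (1 << BATCH_BITS)
    assert tile_id < (1 << TILE_BITS)
    assert depth_key < (1 << DEPTH_BITS)
    return (batch_idx << (TILE_BITS + DEPTH_BITS)) | (tile_id << DEPTH_BITS) | depth_key

# Sorting keys built this way groups points first by image in the batch,
# then by tile, then by depth, which is what a tile-based rasterizer needs.
keys = np.array([make_sort_key(b, t, d)
                 for (b, t, d) in [(0, 5, 100), (1, 5, 50), (0, 2, 10)]],
                dtype=np.uint64)
order = np.argsort(keys)
# order == [2, 0, 1]: (batch 0, tile 2) first, then (batch 0, tile 5),
# then (batch 1, tile 5).
```

The forward and backward rasterization kernels would then also need to index per-image camera parameters by the same batch dimension.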
@wanmeihuali do you still have plans for batch processing?
@Alexma3312 @yanzhoupan Do you have time to take a look?
Will start working on batch training. |
I am new to gaussian splatting and Taichi as well. Is there a quick way to speed up the training process without losing quality? Is it possible to do batch processing?
P.S. The training process takes more than 1 hour for 20k iterations on a Tesla T4.