
I have some questions about your paper #69

Open
soohyoen opened this issue Sep 26, 2024 · 1 comment
Comments


soohyoen commented Sep 26, 2024

Thank you for your awesome work.

I've read your paper, but I'd like to understand in more detail how this method works.

  1. Diffusion models are inherently stochastic; however, your model remains consistent across arbitrary viewpoints.
    How do you achieve this multi-view consistency?

  2. Your model is the state of the art among 3D editing models.
    I would like to know why it achieves a higher CLIP directional score than other models.


2hiTee commented Oct 8, 2024

For question one, I don't think this work focuses on multi-view consistency itself. Consistency is largely obtained by editing the multi-view input images, as in InstructNeRF2NeRF, although multi-view consistency still remains an open problem.
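
The InstructNeRF2NeRF-style mechanism mentioned above can be sketched roughly as follows. This is a hedged, simplified illustration, not the authors' actual pipeline: the functions `edit_view`, `train_step`, and `iterative_dataset_update` are hypothetical stand-ins (a real system would call a 2D diffusion editor such as InstructPix2Pix and a NeRF trainer). The point it shows is that per-view edits are inconsistent on their own, but repeatedly folding edited views back into the training set lets the 3D fit average those inconsistencies into one coherent scene.

```python
# Minimal sketch of an "iterative dataset update" loop in the spirit of
# InstructNeRF2NeRF. Images and the scene are flat lists of floats so the
# example stays self-contained; all function names are illustrative only.

def edit_view(image, strength=0.7):
    """Stand-in for a 2D diffusion edit: blends the view toward a target
    appearance (here the constant 1.0). In reality each edit is stochastic
    and differs per view, which is the source of the inconsistency."""
    target = 1.0
    return [(1 - strength) * px + strength * target for px in image]

def train_step(scene, dataset, lr=0.5):
    """Stand-in for a NeRF optimization step: pulls the scene toward the
    mean of all training views, averaging per-view inconsistencies."""
    mean_px = [sum(view[i] for view in dataset) / len(dataset)
               for i in range(len(scene))]
    return [s + lr * (m - s) for s, m in zip(scene, mean_px)]

def iterative_dataset_update(views, rounds=30):
    """Alternate between (a) replacing one training view with its edited
    version and (b) refitting the 3D scene to the updated dataset."""
    dataset = [list(v) for v in views]
    scene = [0.0] * len(views[0])
    for step in range(rounds):
        i = step % len(dataset)            # cycle through viewpoints
        dataset[i] = edit_view(dataset[i])  # fold the 2D edit back in
        scene = train_step(scene, dataset)  # fit 3D scene to current views
    return scene, dataset
```

In this toy version the edited views all drift toward the same target, so the fitted scene converges to a single consistent appearance even though each individual edit changes the image independently, which is the intuition behind why editing the multi-view dataset, rather than the 3D model directly, yields approximate consistency.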
