
The results with the training code are worse than the results in the paper. #6

Open
sjtuljw520 opened this issue Jul 19, 2024 · 2 comments


@sjtuljw520 commented Jul 19, 2024

Hi, thank you for sharing the nice code.

I tried to train the model with the CenterPoint detections as input and got an AMOTA of 0.513 (validation set), which is significantly lower than the result in the paper (AMOTA 0.712). Maybe the config file (default.json) in this repo is not the same as the one used in the paper? Can you share how to modify the config file? Thank you!

@dsx0511 (Owner) commented Aug 1, 2024

Hi, the config file in the repository is the same one that I used in the paper. It is hard to find the reason for the performance gap without your reproduction details. I would suggest going over the data preprocessing again to ensure everything ran correctly, and also making sure that you used the correct detection results.

@sol924 commented Sep 20, 2024

Hi, can you help me solve a problem? Which PyG version is required? In this code, `out = self.propagate(edge_index, query=query, key=key, value=value, edge_attr=edge_attr, edge_gate=edge_gate, size=None)` is called, but there is no `edge_gate` parameter in that function's signature.
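For context on why this error can depend on the PyG version: in PyTorch Geometric, `MessagePassing.propagate()` accepts arbitrary extra keyword arguments and forwards them to the subclass's `message()` based on signature inspection, so `edge_gate` only works if `message()` declares a parameter with that exact name (and the installed PyG version performs this inspection the same way). Below is a minimal, hypothetical pure-Python sketch of that forwarding mechanism, not PyG's actual implementation; the class names `MiniMessagePassing` and `GateConv` are invented for illustration.

```python
import inspect

class MiniMessagePassing:
    """Simplified stand-in for PyG's MessagePassing kwarg forwarding."""

    def propagate(self, edge_index, size=None, **kwargs):
        # Inspect the subclass's message() signature and forward only the
        # keyword arguments it declares, mirroring PyG's behaviour. If
        # message() lacks a parameter (e.g. edge_gate), it is silently
        # dropped here; real PyG versions may instead raise an error.
        params = inspect.signature(self.message).parameters
        msg_kwargs = {k: v for k, v in kwargs.items() if k in params}
        return self.message(**msg_kwargs)

class GateConv(MiniMessagePassing):
    def message(self, query, key, value, edge_attr, edge_gate):
        # Because edge_gate is declared here, propagate() can forward it.
        return value * edge_gate

conv = GateConv()
out = conv.propagate(edge_index=None, query=1.0, key=1.0,
                     value=2.0, edge_attr=0.0, edge_gate=3.0)
print(out)  # 6.0
```

So if your `message()` implementation (or the PyG version's inspector) does not know about `edge_gate`, the call fails; checking that the installed PyG version matches the one the repository was developed against is a reasonable first step.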
