
Problem occurs when running hypergraph.py #6

Open
aeroplanepaper opened this issue Mar 19, 2022 · 3 comments
@aeroplanepaper

I used the citeseer data provided in the data file and ran hypergraph.py. However, the ACC is unstable across multiple tests (by roughly 15 percentage points), and this effect is much larger than that of the alpha and beta hyperparameters. I am wondering whether I am using the wrong settings in the training process.
The settings are unchanged; I only added
`args = gen_data_cora(args, data_path=data_path, do_val=True)`
and
`train(args)`
Looking forward to your reply, thanks!

@twistedcubic
Owner

twistedcubic commented Mar 19, 2022

Hi @aeroplanepaper, that is interesting. I just pulled the codebase again and ran it out of the box, using `args = gen_data_cora(args, data_path=data_path, do_val=True)`, and consistently got ~6% standard deviation on the accuracy. It's reported by the code as `Mean VAL err 0.35+-0.06 for alpha -0.1 0.1 time 12.763674783706666` (this is for 1 layer on citeseer; more layers have a similar std, it just takes longer).

Is this not what you observe using the code?

@aeroplanepaper
Author

> Hi @aeroplanepaper, that is interesting, I just pulled the codebase again and ran it out of the box, using `args = gen_data_cora(args, data_path=data_path, do_val=True)`, and consistently got ~6% standard deviation on the accuracy. It's reported as `Mean VAL err 0.35+-0.06 for alpha -0.1 0.1 time 12.763674783706666` by the code (this is for 1 layer on citeseer, more layers have similar std, just takes longer).
>
> Is this not what you observe using the code?

Thanks for your reply! In my tests the results seemed even worse: the lowest error is around 0.35, and in the worst cases the error can reach 0.51. This makes me confused about the experiments on choosing the alpha and beta normalization parameters, where the fluctuation is only about 0.01 to 0.02.
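To compare runs on equal footing, it may help to aggregate the per-trial errors the same way the reported `Mean VAL err 0.35+-0.06` line does. A minimal sketch (the `trial_errors` values below are hypothetical stand-ins for the VAL err printed by each run of hypergraph.py):

```python
import statistics

def summarize_errors(trial_errors):
    """Summarize per-trial validation errors as (mean, sample std),
    mirroring a 'Mean VAL err <mean>+-<std>' style report."""
    mean = statistics.mean(trial_errors)
    std = statistics.stdev(trial_errors) if len(trial_errors) > 1 else 0.0
    return mean, std

# Hypothetical VAL errors collected from repeated runs
trial_errors = [0.35, 0.41, 0.38, 0.51, 0.36]
mean, std = summarize_errors(trial_errors)
print(f"Mean VAL err {mean:.2f}+-{std:.2f} over {len(trial_errors)} trials")
```

If the std computed this way over, say, 10+ runs is much larger than the ~0.06 reported above, that would point to a setup difference rather than normal trial-to-trial noise.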

@twistedcubic
Owner

Hi @aeroplanepaper, that is indeed surprising. Which parameter are you varying that leads to this variance? And on which dataset? Based on what I tested, both previously (by re-pulling this repo) and last week as described in the comment above, the results across trials are consistent.
