
data file example #1

Open
ghost opened this issue Jun 29, 2018 · 2 comments
Comments

@ghost

ghost commented Jun 29, 2018

Hello,

Thank you for the great work! I just read the paper and took a look at the code.
Would it be possible to have an example of a (score?) data file that we could feed to the source code as input, for testing?
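As a hypothetical illustration only (the repository's expected format is not documented in this thread, so the layout, file names, and values below are all guesses), a plausible score file would hold one per-example score per line for each system being compared:

```python
# Hypothetical sketch: guessing at a plausible input layout of one
# per-example continuous score per line, one file per system.
scores_a = [0.71, 0.64, 0.80, 0.55, 0.69]  # system A, one score per test example
scores_b = [0.68, 0.61, 0.83, 0.50, 0.66]  # system B, same examples, same order

# Write each score list to its own plain-text file.
for name, scores in [("scores_a.txt", scores_a), ("scores_b.txt", scores_b)]:
    with open(name, "w") as f:
        f.write("\n".join(str(s) for s in scores) + "\n")

# Reading such a file back is straightforward:
with open("scores_a.txt") as f:
    loaded = [float(line) for line in f]
```

The key property of such a format is that line i of both files refers to the same test example, which is what paired tests require.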

Thanks!

@trangham283

In a similar vein, it would be great to see some counterexamples (i.e. experiments that do significance testing but use the wrong test), like you mentioned in the paper's footnote 3:

We considered the significance test to be inappropriate in three cases: 1. Using the t-test when the evaluation measure is not an average measure; 2. Using the t-test for a classification task (i.e. when the observations are categorical rather than continuous), even if the evaluation measure is an average measure; and 3. Using a Bootstrap test with a small test set size.
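The three cases above hinge on matching the test to the structure of the observations. As a rough pure-Python sketch (my own illustration, not the repository's implementation), a paired bootstrap over continuous per-example scores, the setting where case 3 warns the test set must not be too small, could look like:

```python
import random

def paired_bootstrap_pvalue(scores_a, scores_b, n_samples=1000, seed=0):
    """One-sided paired bootstrap: fraction of resampled test sets on
    which system A's mean advantage over B disappears (diff <= 0).
    Only a sketch; meaningful only with a reasonably large test set."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(scores_a)
    count = 0
    for _ in range(n_samples):
        # Resample test-example indices with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        diff = sum(scores_a[i] - scores_b[i] for i in idx) / n
        if diff <= 0:  # resample contradicts A's observed advantage
            count += 1
    return count / n_samples
```

For case 2, where per-example observations are categorical (correct/incorrect predictions) rather than continuous, a sign test or McNemar's test would be the usual fit instead of the t-test.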

To me, a confusing issue is that depending on how you define an observation, the tests may or may not be valid. So no need to name names or papers, but I'd like to see examples of experiment setups that you decided were using the wrong test.

Thanks for the paper and the code!

@rtmdrr
Owner

rtmdrr commented Aug 19, 2018

Hello,
Thank you for your comments! I am working on exactly that now and will update with results soon.

Best!
