Hello,
Thank you for the great work! I just read the paper and took a look at the code.
Would it be possible to have an example (score?) data file that we could feed to the source code, for testing?
Thanks!
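For instance, something along these lines — this is purely a guess at the expected format, assuming one per-example score per line, with line i of each file referring to the same test example:

```python
# Hypothetical score files (scores_A.txt / scores_B.txt are made-up names,
# and the one-score-per-line layout is an assumption, not the repo's spec).
from pathlib import Path

Path("scores_A.txt").write_text("0.91\n0.85\n0.78\n0.88\n")
Path("scores_B.txt").write_text("0.89\n0.80\n0.81\n0.84\n")

# Load them back as paired lists of floats, aligned by test example.
scores_a = [float(x) for x in Path("scores_A.txt").read_text().split()]
scores_b = [float(x) for x in Path("scores_B.txt").read_text().split()]
print(scores_a)
print(scores_b)
```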
In a similar vein, it would be great to see some counterexamples (i.e. experiments that do significance testing but use the wrong test) -- like you mentioned in the paper's footnote 3:
We considered the significance test to be inappropriate in three cases: 1. Using the t-test when the evaluation measure is not an average measure; 2. Using the t-test for a classification task (i.e. when the observations are categorical rather than continuous), even if the evaluation measure is an average measure; and 3. Using a bootstrap test with a small test set size.
To me, the confusing issue is that depending on how you define an observation, the tests may or may not be valid. No need to name names or papers, but I'd like to see examples of experimental setups that you decided were using the wrong test.
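To make case 2 above concrete, here is a small sketch with synthetic data (the data and system names are made up; `ttest_rel` and `binomtest` are standard SciPy calls). Per-example correctness is 0/1, so even though accuracy is an average, the paired observations are categorical and a t-test is questionable; a sign test over the examples where the two systems disagree (equivalent to an exact McNemar test) is the usual alternative:

```python
import numpy as np
from scipy.stats import ttest_rel, binomtest

rng = np.random.default_rng(0)
n = 200

# Synthetic per-example correctness (0/1) for two classifiers on the same test set.
sys_a = rng.binomial(1, 0.80, size=n)
sys_b = rng.binomial(1, 0.75, size=n)

# Case 2: the observations are categorical, so a paired t-test over them is dubious.
t_stat, t_p = ttest_rel(sys_a, sys_b)

# Sign test over the disagreement cases (exact McNemar): under the null,
# A and B are each equally likely to win a disagreement.
wins_a = int(np.sum((sys_a == 1) & (sys_b == 0)))
wins_b = int(np.sum((sys_a == 0) & (sys_b == 1)))
sign_p = binomtest(wins_a, wins_a + wins_b, 0.5).pvalue

print(f"accuracy A={sys_a.mean():.3f}  B={sys_b.mean():.3f}")
print(f"paired t-test p={t_p:.4f}  sign test p={sign_p:.4f}")
```

So the "observation" here is the per-example 0/1 outcome, not the aggregate accuracy — and that choice is exactly what makes one test valid and the other not.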