First of all, great package! I am really happy that gamma exists as a measure at all, and that it has a well-documented Python implementation.
I had a brief question: say you are using this for an NER task. Your whole corpus might then consist of lots of individually annotated sentences. I am now wondering how best to compute a global gamma for the whole corpus.
Reading the documentation, it seems that with the CLI I could put each sentence in its own file, batch-analyse them to get an individual gamma per file, and then report the SD of gamma along with the lowest and highest values.
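For that first approach, I imagine something along these lines (just a rough sketch using the Python API rather than the CLI; the glob pattern and the "one sentence per CSV file" layout are my assumptions, as is a recent pygamma-agreement where `CombinedCategoricalDissimilarity` needs no explicit category list):

```python
# Sketch: one gamma per sentence file, then summary statistics over the corpus.
import glob
import statistics

import pygamma_agreement as pa

dissim = pa.CombinedCategoricalDissimilarity()  # default alpha/beta weights
gammas = []
for path in sorted(glob.glob("annotations/sentence_*.csv")):  # placeholder paths
    continuum = pa.Continuum.from_csv(path)  # one annotated sentence per file
    gammas.append(continuum.compute_gamma(dissim).gamma)

print(f"mean={statistics.mean(gammas):.3f}  "
      f"sd={statistics.stdev(gammas):.3f}  "
      f"min={min(gammas):.3f}  max={max(gammas):.3f}")
```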
Or I could append the sentences one after another, so that, say, token 3 of sentence 3 might end up at token position 12; I would treat the whole corpus as one giant annotation task and compute a single gamma for it.
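For that second approach I picture roughly the following (again only a sketch; the toy data layout, the hypothetical annotator names, and the idea of using token indices directly as segment bounds are all assumptions on my part):

```python
# Sketch: shift each sentence's token positions by a running offset so the
# whole corpus becomes a single continuum, then compute one global gamma.
import pygamma_agreement as pa
from pyannote.core import Segment

# Illustrative corpus: each sentence maps annotator -> [(start, end, label), ...]
# with token positions local to that sentence.
corpus = [
    {"alice": [(0, 2, "PER")], "bob": [(0, 2, "PER")]},
    {"alice": [(1, 3, "ORG")], "bob": [(1, 4, "ORG")]},
]
sentence_lengths = [5, 6]  # tokens per sentence, used to compute the offsets

continuum = pa.Continuum()
offset = 0
for sentence, length in zip(corpus, sentence_lengths):
    for annotator, spans in sentence.items():
        for start, end, label in spans:
            continuum.add(annotator, Segment(offset + start, offset + end), label)
    offset += length

gamma = continuum.compute_gamma(pa.CombinedCategoricalDissimilarity()).gamma
print(f"corpus-level gamma: {gamma:.3f}")
```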
I seem to see both approaches used in papers citing your work, though most do not share code; I was curious whether you have a recommendation on which approach makes more sense. Thanks a lot!