This dataset is used in our paper: "ARNOR: Attention Regularization based Noise Reduction for Distant Supervision Relation Classification". We release a new NYT test set for sentence-level evaluation of distant supervision relation extraction models. It contains almost 5 times as many positive instances as the previous one [2], and it is carefully annotated to ensure accuracy.
This dataset is based on Ren's [1] training set, which is generated by distant supervision, and a manually annotated test set of 395 sentences from Hoffmann [2]. They all come from New York Times news articles [3]. However, the number of positive instances in that test set is small (only 396), and the quality is insufficient. We revised it, annotated more test data, and release two versions of the dataset.
In a data file, each line is a JSON string with the following structure:
{
"sentText": "The source sentence text",
"relationMentions": [
{
"em1Text": "The first entity in relation",
"em2Text": "The second entity in relation",
"label": "Relation label",
"is_noise": false # only occur in test set
},
...
],
"entityMentions": [
{
"text": "Entity words",
"label": "Entity type",
...
},
...
]
...
}
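As a quick illustration, the following Python snippet reads a data file in this format and counts relation mentions per label. The field names and the "train.json" file name come from this README; everything else is just an assumed sketch, not part of the released code.

```python
import json

def load_examples(path):
    """Read an ARNOR data file where each line is one JSON sentence record."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                examples.append(json.loads(line))
    return examples

# Example: count relation mentions per label in the training set.
label_counts = {}
for example in load_examples("train.json"):
    for mention in example.get("relationMentions", []):
        # "is_noise" only occurs in the test set, so default to False here.
        if mention.get("is_noise", False):
            continue
        label_counts[mention["label"]] = label_counts.get(mention["label"], 0) + 1

print(label_counts)
```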
This version of the dataset is the original one used in our paper. It includes four files: train.json, test.json, dev_part.json, and test_part.json, where dev_part.json and test_part.json are split from test.json. This dataset can be downloaded here: https://baidu-nlp.bj.bcebos.com/arnor_dataset-1.0.0.tar.gz
We strongly recommend using this dataset in future relation classification studies. This version contains more annotated test data than version 1.0.0; we continued annotating data, as shown in the table below. In addition, we have removed the relation "/location/administrative_division/country" from the training set and changed "/location/country/administrative_divisions" into "/location/location/contains", because these two relation types are not labeled in the test set (a sketch of this relabeling is given after the download information below).
Test set | version 1.0.0 | version 2.0.0 |
---|---|---|
#Sentences | 1,024 | 3,192 |
#Instances | 4,543 | 9,051 |
#Positive instances | 671 | 2,224 |
The download address is: http://baidu-nlp.bj.bcebos.com/arnor_dataset-2.0.0.tar.gz
It contains four files: the training set, the dev set, the test set, and a "test_noise.json" file for noise reduction evaluation.
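For reference, here is a minimal Python sketch of the label changes described above, applied while copying a version 1.0.0 training file. It assumes that "removing" the relation means dropping those relation mentions; the output file name is illustrative and not part of the release.

```python
import json

DROPPED = {"/location/administrative_division/country"}
RENAMED = {"/location/country/administrative_divisions": "/location/location/contains"}

with open("train.json", encoding="utf-8") as fin, \
     open("train_relabeled.json", "w", encoding="utf-8") as fout:
    for line in fin:
        example = json.loads(line)
        kept = []
        for mention in example.get("relationMentions", []):
            if mention["label"] in DROPPED:
                continue  # this relation type is not labeled in the test set
            mention["label"] = RENAMED.get(mention["label"], mention["label"])
            kept.append(mention)
        example["relationMentions"] = kept
        fout.write(json.dumps(example) + "\n")
```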
We reproduced the experiments following our ARNOR paper. The results are listed below.
Main results:
Method | Dev Prec. | Dev Rec. | Dev F1 | Test Prec. | Test Rec. | Test F1 |
---|---|---|---|---|---|---|
CNN | 39.27 | 73.80 | 51.26 | 42.41 | 76.64 | 54.60 |
PCNN | 39.08 | 74.74 | 51.32 | 42.18 | 77.50 | 54.64 |
BiLSTM | 41.16 | 70.17 | 52.12 | 44.12 | 71.12 | 54.45 |
BiLSTM+ATT | 40.81 | 70.37 | 51.66 | 42.77 | 71.59 | 53.55 |
PCNN+SelATT | 82.41 | 34.10 | 48.24 | 81.00 | 35.50 | 49.37 |
CNN+RL1 | 42.50 | 71.62 | 53.34 | 43.70 | 72.34 | 54.49 |
CNN+RL2 | 42.69 | 72.56 | 53.75 | 44.54 | 73.40 | 55.44 |
ARNOR | 78.14 | 59.82 | 67.77 | 79.70 | 62.30 | 69.93 |
Components results:
Method | Test Prec. | Test Rec. | Test F1 |
---|---|---|---|
BiLSTM+ATT | 42.77 | 71.59 | 53.55 |
+IDR | 84.98 | 50.14 | 63.07 |
+ART | 80.03 | 60.53 | 68.93 |
+BLP | 79.70 | 62.30 | 69.93 |
Noise reduction results:
Noise Reduction | Prec. | Rec. | F1 |
---|---|---|---|
CNN+RL2 | 40.19 | 95.39 | 56.56 |
ARNOR | 73.40 | 73.04 | 73.22 |
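To evaluate noise reduction on "test_noise.json", one can compare predicted noise flags against the gold "is_noise" labels. The sketch below is only an assumed setup: it presumes that test_noise.json uses the same per-line JSON format with the is_noise flag, treats noisy instances as the positive class, and uses a placeholder predict_is_noise function standing in for a real model; the exact protocol follows the paper.

```python
import json

def predict_is_noise(sentence, mention):
    # Placeholder: a real system would run the trained noise-reduction model here.
    return False

def noise_prf(gold_flags, pred_flags):
    """Precision/recall/F1, treating 'is noise' as the positive class (an assumption)."""
    tp = sum(1 for g, p in zip(gold_flags, pred_flags) if g and p)
    fp = sum(1 for g, p in zip(gold_flags, pred_flags) if not g and p)
    fn = sum(1 for g, p in zip(gold_flags, pred_flags) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold, pred = [], []
with open("test_noise.json", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        for mention in example.get("relationMentions", []):
            gold.append(mention["is_noise"])
            pred.append(predict_is_noise(example["sentText"], mention))

print(noise_prf(gold, pred))
```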
@inproceedings{jia2019arnor,
title={ARNOR: Attention Regularization based Noise Reduction for Distant Supervision Relation Classification},
author={Jia, Wei and Dai, Dai and Xiao, Xinyan and Wu, Hua},
booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year={2019}
}
[1] Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, and Jiawei Han. 2017. CoType: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of the 26th International Conference on World Wide Web, pages 1015–1024. International World Wide Web Conferences Steering Committee.
[2] Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 541–550. Association for Computational Linguistics.
[3] Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer.