👉 DEMO 👈
ARElight is an application for a granular view of sentiments between named entities mentioned in texts. This repository is part of the ECIR-2024 demo paper: ARElight: Context Sampling of Large Texts for Deep Learning Relation Extraction.
```bash
pip install git+https://github.com/nicolay-r/[email protected]
```
Infer sentiment attitudes from a text file in English:
```bash
python3 -m arelight.run.infer \
    --sampling-framework "arekit" \
    --ner-framework "deeppavlov" \
    --ner-model-name "ner_ontonotes_bert" \
    --ner-types "ORG|PERSON|LOC|GPE" \
    --terms-per-context 50 \
    --sentence-parser "nltk:english" \
    --tokens-per-context 128 \
    --bert-framework "opennre" \
    --batch-size 10 \
    --pretrained-bert "bert-base-cased" \
    --bert-torch-checkpoint "ra4-rsr1_bert-base-cased_cls.pth.tar" \
    --backend "d3js_graphs" \
    --docs-limit 500 \
    -o "output" \
    --from-files "<PATH-TO-TEXT-FILE>"
```
The complete documentation is available via the `-h` flag:

```bash
python3 -m arelight.run.infer -h
```
Parameters:
- `sampling-framework` -- we consider only the `arekit` framework by default.
  - `from-files` -- list of filepaths to the related documents.
    - for `.csv` files, each line of the specified column is treated as a separate document (see the sketch after this parameter list):
      - `csv-sep` -- separator between columns.
      - `csv-column` -- name of the column in the CSV file.
  - `collection-name` -- name of the result files based on the sampled documents.
  - `terms-per-context` -- total amount of words for a single sample.
  - `sentence-parser` -- parser utilized to split documents into sentences; list of the [supported parsers].
  - `synonyms-filepath` -- text file with listed synonymous entries, grouped by lines [example].
  - `stemmer` -- for word lemmatization (optional); we support [PyMystem].
  - NER parameters:
    - `ner-framework` -- type of the framework:
      - `deeppavlov` -- [DeepPavlov] list of models.
      - `transformers` -- [Transformers] list of models.
    - `ner-model-name` -- model name within the utilized NER framework.
    - `ner-types` -- list of entity types to be considered for annotation, separated by `|`.
  - `docs-limit` -- the total limit of documents for sampling.
  - Translation-specific parameters:
    - `translate-framework` -- text translation backend (optional); we support [googletrans].
    - `translate-entity` -- (optional) source and target language supported by the backend, separated by `:`.
    - `translate-text` -- (optional) source and target language supported by the backend, separated by `:`.
- `bert-framework` -- samples classification framework; we support [OpenNRE].
  - `text-b-type` -- (optional) `NLI` or None [supported].
  - `pretrained-bert` -- pretrained state name.
  - `batch-size` -- amount of samples per single inference iteration.
  - `tokens-per-context` -- size of the input.
  - `bert-torch-checkpoint` -- fine-tuned state.
  - `device-type` -- `cpu` or `gpu`.
  - `labels-fmt` -- list of mappings from `label` to integer value; `p:1,n:2,u:0` by default, where:
    - `p` -- positive label, mapped to `1`.
    - `n` -- negative label, mapped to `2`.
    - `u` -- undefined label (optional), mapped to `0`.
- `backend` -- type of the backend (`d3js_graphs` by default).
  - `host` -- port on which we expect to launch the localhost server.
  - `label-names` -- default mapping is `p:pos,n:neg,u:neu`.
- `-o` -- output folder for result collections and demo.
Framework parameters mentioned above, as well as their related setups, might be omitted.
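For instance, here is a sketch of sampling documents from a CSV file: the file name `posts.csv`, the column name `text`, and the comma separator are hypothetical placeholders, the `--csv-sep` and `--csv-column` flags are assumed to follow the same `--` prefix convention as the other parameters, and the remaining arguments follow the full example above.

```bash
# Sketch: each row of the "text" column in posts.csv is sampled as a separate document.
python3 -m arelight.run.infer ...other arguments... \
    --from-files "posts.csv" \
    --csv-sep "," \
    --csv-column "text" \
    --collection-name "posts" \
    -o "output"
```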
For graph analysis, you can perform several graph operations using this script:
- Arguments mode:

```bash
python3 -m arelight.run.operations \
    --operation "<OPERATION-NAME>" \
    --graph_a_file output/force/boris.json \
    --graph_b_file output/force/rishi.json \
    --weights y \
    -o output \
    --description "[OPERATION] between Boris Johnson and Rishi Sunak on X/Twitter"
```
- Interactive mode:

```bash
python3 -m arelight.run.operations
```
`arelight.run.operations` allows you to operate on ARElight's outputs using graphs: you can merge graphs, or find their similarities or differences.
- `--graph_a_file` and `--graph_b_file` -- paths to the `.json` files of graphs A and B used in the operations. These files should be located in the `<your_output/force>` folder.
- `--name` -- name of the new graph.
- `--description` -- description of the new graph.
- `--host` -- determines the server port to host after the calculations.
- `-o` -- path to the folder where you want to store the output. You can either create a new output folder or use an existing one that has been created by ARElight.
Parameter `operation`

Preparation

Consider that you used the ARElight script for X/Twitter to infer relations from messages of the UK politicians Boris Johnson and Rishi Sunak:
```bash
python3 -m arelight.run.infer ...other arguments... \
    -o output --collection-name "boris" --from-files "twitter_boris.txt"

python3 -m arelight.run.infer ...other arguments... \
    -o output --collection-name "rishi" --from-files "twitter_rishi.txt"
```
According to the results section, you will have an `output` directory with two force-layout graph files:
```
output/
└── force/
    ├── rishi.json
    └── boris.json
```
List of Operations
You can perform the following operations to combine several outputs and better understand the similarities and differences between them; a small worked example of the edge weighting follows the list:
UNION
- The result graph contains all the vertices and edges that are in $G_1$ and $G_2$. The edge weight is given by $W_e = W_{e1} + W_{e2}$, and the vertex weight is its weighted degree centrality: $W_v = \sum_{e \in E_v} W_e(e)$.

```bash
python3 -m arelight.run.operations --operation UNION \
    --graph_a_file output/force/boris.json \
    --graph_b_file output/force/rishi.json \
    --weights y -o output --name boris_UNION_rishi \
    --description "UNION of Boris Johnson and Rishi Sunak Tweets"
```
INTERSECTION
- The result graph contains only the vertices and edges common to $G_1$ and $G_2$. The edge weight is given by $W_e = \min(W_{e1}, W_{e2})$, and the vertex weight is its weighted degree centrality: $W_v = \sum_{e \in E_v} W_e(e)$.

```bash
python3 -m arelight.run.operations --operation INTERSECTION \
    --graph_a_file output/force/boris.json \
    --graph_b_file output/force/rishi.json \
    --weights y -o output --name boris_INTERSECTION_rishi \
    --description "INTERSECTION between Tweets of Boris Johnson and Rishi Sunak"
```
DIFFERENCE
- NOTE: this operation is not commutative: $(G_1 - G_2) \neq (G_2 - G_1)$.
- The result graph contains all the vertices from $G_1$, but only includes edges from $E_1$ that either do not appear in $E_2$ or have larger weights in $G_1$ than in $G_2$. The edge weight is given by $W_e = W_{e1} - W_{e2}$ if $e \in E_1 \cap E_2$ and $W_{e1}(e) > W_{e2}(e)$.

```bash
python3 -m arelight.run.operations --operation DIFFERENCE \
    --graph_a_file output/force/boris.json \
    --graph_b_file output/force/rishi.json \
    --weights y -o output --name boris_DIFFERENCE_rishi \
    --description "Difference between Tweets of Boris Johnson and Rishi Sunak"
```
You have the option to specify whether to include edge weights in calculations or not. These weights represent the frequencies of discovered edges, indicating how often a relation between two instances was found in the text analyzed by ARElight.
- `--weights`
  - `y` -- the result will be based on the union, intersection, or difference of these frequencies.
  - `n` -- all weights of the input graphs will be set to 1. In this case, the result will reflect the union, intersection, or difference of the graph topologies, regardless of the frequencies. This can be useful when the existence of relations is more important to you, and the number of times they appear in the text is not a significant factor.
Note that using or not using the `weights` option may yield different topologies.
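For example, here is a sketch of re-running the DIFFERENCE above with `--weights n` to compare topologies only; the output name is a hypothetical placeholder:

```bash
# Sketch: same DIFFERENCE operation, but all edge weights are treated as 1.
python3 -m arelight.run.operations --operation DIFFERENCE \
    --graph_a_file output/force/boris.json \
    --graph_b_file output/force/rishi.json \
    --weights n -o output --name boris_DIFFERENCE_rishi_topology \
    --description "Topology-only difference between Tweets of Boris Johnson and Rishi Sunak"
```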
Powered by:
- AREkit [github]
Our aim, and my personal interest, is to help you better explore and analyze attitude and relation extraction tasks with ARElight. Great research is also accompanied by a faithful reference. If you use or extend our work, please cite it as follows:
```bibtex
@inproceedings{rusnachenko2024arelight,
  title={ARElight: Context Sampling of Large Texts for Deep Learning Relation Extraction},
  author={Rusnachenko, Nicolay and Liang, Huizhi and Kolomeets, Maxim and Shi, Lei},
  booktitle={European Conference on Information Retrieval},
  year={2024},
  organization={Springer}
}
```