Dealing with Negative Samples with Multi-Task Learning on Span-Based Joint Entity-Relation Extraction


Jiamin Lu and Chenguang Xue, Hohai University, China


Recent span-based joint extraction models have demonstrated significant advantages in both entity recognition and relation extraction. These models treat text spans as candidate entities and span pairs as candidate relation tuples, achieving state-of-the-art results on datasets such as ADE. However, these models encounter a large number of non-entity spans and irrelevant span pairs during both tasks, which substantially impairs performance. To address this issue, this paper introduces a span-based multitask joint entity-relation extraction model. The approach employs multitask learning to alleviate the impact of negative samples on the entity and relation classifiers. Additionally, we leverage the Intersection over Union (IoU) concept to introduce positional information into the entity classifier, enabling span boundary detection. Furthermore, by incorporating the entity logits predicted by the entity classifier into the embedded representation of entity pairs, the semantic input to the relation classifier is enriched. Experimental results demonstrate that the proposed SpERT.MT model effectively mitigates the adverse effects of excessive negative samples on model performance. The model achieves F1 scores of 73.61%, 53.72%, and 83.72% on three widely used public datasets, namely CoNLL04, SciERC, and ADE, respectively.
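The IoU signal mentioned above can be illustrated with a minimal sketch for one-dimensional token spans; the function name and the inclusive-end span convention here are illustrative assumptions, not details taken from the paper:

```python
def span_iou(a, b):
    """IoU of two token spans given as (start, end) pairs with inclusive ends.

    Hypothetical sketch: overlap length divided by the length of the union,
    yielding 1.0 for identical spans and 0.0 for disjoint ones.
    """
    # Length of the overlapping region (inclusive boundaries).
    inter = min(a[1], b[1]) - max(a[0], b[0]) + 1
    if inter <= 0:
        return 0.0
    # Union length = sum of span lengths minus the overlap.
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union
```

For example, spans (2, 5) and (4, 7) overlap in two tokens out of six covered in total, giving an IoU of 1/3; such scores can grade candidate spans by how closely their boundaries match a gold entity.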


Natural language processing, Joint entity and relation extraction, Span-based model, Multitask learning, Negative samples.

Volume 13, Number 16