The paper “A Novel Hierarchical Binary Tagging Framework for Relational Triple Extraction” by Wei Zhepei, a 2019 Master’s candidate under the supervision of Prof. Chang Yi, has been accepted by ACL 2020, a CCF-A conference. This is the second CCF-A conference paper published by the 2019 cohort of Master’s candidates since their enrollment.
Wei Zhepei received his Bachelor’s degree from the College of Computer Science and Technology at Jilin University. Since his senior year, he has been co-supervised by Prof. Chang Yi and the Huawei Research Institute. In 2019, he was admitted to the School as a postgraduate with an exemption from the entrance examination, and he has since engaged in research on natural language processing and knowledge graphs. The work in this paper was completed in cooperation with Su Jianlin, a researcher at ZHUIYI, and Wang Yue, a professor at the University of South Carolina.
The Annual Meeting of the Association for Computational Linguistics (ACL) is the top-tier academic conference in the field of natural language processing and computational linguistics, sponsored by the Association for Computational Linguistics.
First Author: Wei Zhepei
Title: A Novel Hierarchical Binary Tagging Framework for Relational Triple Extraction
Conference: The 58th Annual Conference of the Association for Computational Linguistics (ACL 2020)
Conference Category: CCF-A
Conference Time and Venue: July 5-10, 2020, Seattle, Washington, USA
Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction. However, few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities. In this work, we introduce a fresh perspective to revisit the relational triple extraction task and propose a novel cascade binary tagging framework (CasRel) derived from a principled problem formulation. Instead of treating relations as discrete labels as in previous works, our new framework models relations as functions that map subjects to objects in a sentence, which naturally handles the overlapping problem. Experiments show that the CasRel framework already outperforms state-of-the-art methods even when its encoder module uses a randomly initialized BERT encoder, showing the power of the new tagging framework. It enjoys further performance boost when employing a pre-trained BERT encoder, outperforming the strongest baseline by 17.5 and 30.2 absolute gain in F1-score on two public datasets NYT and WebNLG, respectively. In-depth analysis on different scenarios of overlapping triples shows that the method delivers consistent performance gain across all these scenarios.
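The abstract's key idea is that relations are not classified as discrete labels over entity pairs; instead, a subject tagger first marks candidate subjects with binary start/end tags, and then a separate binary object tagger per relation marks that relation's objects for each detected subject, so one subject can participate in several overlapping triples. The decoding step can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the sentence, the relation inventory, and the hand-crafted tagger scores (standing in for the learned BERT-based taggers) are all hypothetical.

```python
tokens = ["Jackie", "Chan", "was", "born", "in", "Hong", "Kong"]
RELATIONS = ["born_in", "lives_in"]  # hypothetical relation inventory

def decode_spans(start_probs, end_probs, threshold=0.5):
    """Pair each start position above threshold with the nearest end at or after it."""
    ends = [i for i, p in enumerate(end_probs) if p >= threshold]
    spans = []
    for s, p in enumerate(start_probs):
        if p < threshold:
            continue
        later = [e for e in ends if e >= s]
        if later:
            spans.append((s, later[0]))
    return spans

# Hand-crafted scores standing in for the learned subject tagger's outputs.
subj_start = [0.9, 0.1, 0.0, 0.0, 0.0, 0.1, 0.0]
subj_end   = [0.1, 0.8, 0.0, 0.0, 0.0, 0.0, 0.1]

# Object tagger scores conditioned on the detected subject, one (start, end)
# pair of score vectors per relation. An all-zero pair means the relation
# maps this subject to no object, i.e. it is not expressed in the sentence.
obj_scores = {
    "born_in":  ([0, 0, 0, 0, 0, 0.9, 0.1], [0, 0, 0, 0, 0, 0.1, 0.9]),
    "lives_in": ([0] * 7, [0] * 7),
}

# Cascade decoding: subjects first, then relation-specific objects per subject.
triples = []
for s, e in decode_spans(subj_start, subj_end):
    subject = " ".join(tokens[s:e + 1])
    for rel, (o_start, o_end) in obj_scores.items():
        for os_, oe in decode_spans(o_start, o_end):
            triples.append((subject, rel, " ".join(tokens[os_:oe + 1])))

print(triples)  # [('Jackie Chan', 'born_in', 'Hong Kong')]
```

Because each relation's object tagger runs independently for every detected subject, a sentence where two triples share a subject (or an object) simply produces hits under two different taggers, which is how the framework handles the overlapping-triple problem.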
School of Artificial Intelligence
April 7, 2020