Toward subgraph-guided knowledge graph question generation with graph neural networks
IEEE Transactions on Neural Networks and Learning Systems, 2023 • ieeexplore.ieee.org
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers. Previous work has mostly focused on a simple setting: generating questions from a single KG triple. In this work, we address a more realistic setting where we aim to generate questions from a KG subgraph and target answers. Moreover, most previous work relies on either RNN- or Transformer-based models to encode a linearized KG subgraph, which discards the explicit structural information of the subgraph. To address this issue, we propose a bidirectional Graph2Seq model to encode the KG subgraph. Furthermore, we enhance our RNN decoder with a node-level copying mechanism that allows node attributes to be copied directly from the KG subgraph into the output question. Both automatic and human evaluation results demonstrate that our model achieves new state-of-the-art scores, outperforming existing methods by a significant margin on two QG benchmarks. Experimental results also show that our QG model can consistently benefit the question-answering (QA) task as a means of data augmentation.
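The abstract names two ideas: a bidirectional graph encoder (each node aggregates over both its outgoing and incoming edges, so direction is not lost) and a node-level copy mechanism (the decoder mixes a vocabulary distribution with a pointer distribution over subgraph nodes). A minimal sketch of both, with illustrative function names (`bi_gnn_layer`, `copy_mix`) that are not taken from the paper and a toy averaging aggregator standing in for the model's learned one:

```python
import math

def bi_gnn_layer(node_vecs, edges):
    """One bidirectional message-passing step: each node averages its
    forward (outgoing-edge) and backward (incoming-edge) neighbours'
    vectors separately, then combines both views with its own vector.
    A real model would use learned weights; this is a toy aggregator."""
    n = len(node_vecs)
    fwd = [[] for _ in range(n)]  # messages along edge direction
    bwd = [[] for _ in range(n)]  # messages against edge direction
    for src, dst in edges:
        fwd[src].append(node_vecs[dst])
        bwd[dst].append(node_vecs[src])

    def mean(vectors, fallback):
        if not vectors:
            return fallback  # isolated in this direction: keep own vector
        return [sum(c) / len(vectors) for c in zip(*vectors)]

    return [
        [a + f + b for a, f, b in zip(v, mean(fwd[i], v), mean(bwd[i], v))]
        for i, v in enumerate(node_vecs)
    ]

def copy_mix(gen_probs, node_scores, p_copy):
    """Node-level copying: blend the generation distribution over the
    vocabulary with a softmax distribution over subgraph nodes, gated
    by a scalar copy probability p_copy in [0, 1]."""
    z = sum(math.exp(s) for s in node_scores)
    copy_probs = [math.exp(s) / z for s in node_scores]
    return ([(1 - p_copy) * p for p in gen_probs],    # generate from vocab
            [p_copy * p for p in copy_probs])         # copy a node attribute
```

For example, on a three-node chain `0 -> 1 -> 2`, node 1 receives a forward message from node 2 and a backward message from node 0, whereas a linearized encoding would have to rediscover this structure from token order.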