Hybrid summary generation method based on HRAGS model
Affiliation:

School of Information and Communication Engineering, North University of China, Taiyuan 030051, China

CLC Number: TP391.1

Abstract:

    Traditional extractive and abstractive methods lack readability and accuracy in automatic summary generation, so a hybrid summary generation method based on the HRAGS (Hybrid Guided Summarization with Redundancy-Aware) model was proposed. First, the method used the BERT pre-trained language model to obtain contextual representations and combined them with a redundancy-aware method to construct an extractive model. Then, two trained BERT encoders were combined with a randomly initialized Transformer decoder containing two encoder-decoder attention modules to construct an abstractive model. The abstractive model adopted a two-stage fine-tuning approach to resolve the training imbalance between the encoders and the decoder. Finally, an oracle greedy algorithm selected key sentences as external guidance, and the source document together with the guidance was fed into the abstractive model to produce a summary. The method was verified on the LCSTS evaluation dataset. Experimental results show that, compared with other benchmark models, the HRAGS model generates more readable and accurate summaries with higher ROUGE scores.
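    The oracle greedy step described in the abstract can be sketched as follows: source sentences are added to the guidance set one at a time, as long as each addition improves the overlap score against the reference summary. This is a minimal pure-Python sketch, not the paper's implementation: the unigram-F1 scorer `rouge1_f1` stands in for the full ROUGE metric, and the function names and the `max_sentences` cap are illustrative assumptions.

    ```python
    from collections import Counter

    def rouge1_f1(candidate, reference):
        """Unigram-overlap F1 between two token lists (a ROUGE-1 stand-in)."""
        if not candidate or not reference:
            return 0.0
        overlap = sum((Counter(candidate) & Counter(reference)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(candidate)
        recall = overlap / len(reference)
        return 2 * precision * recall / (precision + recall)

    def oracle_greedy(doc_sentences, reference, max_sentences=3):
        """Greedily pick sentence indices that most improve the score
        of the selected set against the reference summary.

        doc_sentences: list of token lists, one per source sentence
        reference: token list of the gold summary
        """
        selected, selected_tokens, best = [], [], 0.0
        while len(selected) < max_sentences:
            gains = [(rouge1_f1(selected_tokens + sent, reference), i)
                     for i, sent in enumerate(doc_sentences)
                     if i not in selected]
            if not gains:
                break
            # first maximum -> ties broken toward the earlier sentence
            score, i = max(gains, key=lambda g: g[0])
            if score <= best:  # stop when no sentence improves the score
                break
            best = score
            selected.append(i)
            selected_tokens += doc_sentences[i]
        return selected
    ```

    The selected sentences would then be concatenated and passed to the second encoder as the guidance signal, alongside the source document.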

History
  • Online: April 8, 2024