Abstractive Summarization of News Articles Using BART
DOI: https://doi.org/10.3126/pecj.v3i1.93538

Keywords: Abstractive Text Summarization, BART-Large, News Articles, Natural Language Processing

Abstract
Abstractive text summarization generates concise summaries that preserve key concepts through newly constructed sentences rather than extraction from the source. As online news content grows rapidly, effective summarization systems are essential for efficient information consumption. This study investigates abstractive summarization using BART-Large, a transformer-based encoder-decoder model, with a novel training approach: fine-tuning on a combined dataset merging XSum and CNN/DailyMail. While XSum emphasizes highly abstractive single-sentence summaries and CNN/DailyMail targets longer multi-sentence outputs, combining these datasets during training aims to produce a more versatile model capable of handling diverse summarization demands. Performance is evaluated using ROUGE metrics, which measure n-gram overlap between generated and reference summaries. The fine-tuned model achieves a ROUGE-2 score of 22.86, demonstrating competitive performance against existing approaches. Qualitative analysis reveals that the model produces fluent, coherent summaries while maintaining factual consistency with source documents. These results indicate that exposing models to varied summarization demands during fine-tuning can improve flexibility without sacrificing quality. This work offers a practical direction for building more generalizable summarization systems and highlights the potential of multi-dataset training strategies for domain adaptation in natural language generation tasks.
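The paper does not include code, but the training setup described above maps naturally onto the Hugging Face ecosystem. Below is a minimal sketch of merging XSum and CNN/DailyMail into a single corpus and fine-tuning BART-Large on it with the `datasets` and `transformers` libraries. The column handling, sequence-length caps, and hyperparameters are illustrative assumptions, not the authors' reported configuration.

```python
# Sketch: multi-dataset fine-tuning of BART-Large on XSum + CNN/DailyMail.
# Hyperparameters and length caps below are assumed, not taken from the paper.
from datasets import load_dataset, concatenate_datasets
from transformers import (
    BartForConditionalGeneration,
    BartTokenizerFast,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "facebook/bart-large"
tokenizer = BartTokenizerFast.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# XSum: highly abstractive one-sentence summaries ("document"/"summary").
# CNN/DailyMail: multi-sentence highlights ("article"/"highlights").
xsum = load_dataset("xsum", split="train")
cnndm = load_dataset("cnn_dailymail", "3.0.0", split="train")

# Normalize both corpora to a shared (document, summary) schema, then merge
# and shuffle so each batch mixes short and long summarization demands.
cnndm = cnndm.rename_columns({"article": "document", "highlights": "summary"})
cnndm = cnndm.remove_columns(
    [c for c in cnndm.column_names if c not in ("document", "summary")]
)
xsum = xsum.remove_columns(
    [c for c in xsum.column_names if c not in ("document", "summary")]
)
combined = concatenate_datasets([xsum, cnndm]).shuffle(seed=42)

def preprocess(batch):
    # Truncate articles to BART's 1024-token input limit; cap target
    # summaries at 128 tokens so both single- and multi-sentence
    # references fit (assumed lengths).
    inputs = tokenizer(batch["document"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = combined.map(preprocess, batched=True, remove_columns=combined.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-large-xsum-cnndm",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=3e-5,
    num_train_epochs=1,
    fp16=True,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

From there, scores such as the reported ROUGE-2 of 22.86 can be computed on a held-out split, for example with the `evaluate` library's "rouge" metric, by comparing generated summaries against the reference summaries of each test set.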
License
Copyright (c) 2026 Pokhara Engineering College. This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator.