Challenges in Domain-Specific Abstractive Summarization and How to Overcome Them.

Last modified by Anum Afzal Oct 14

Large Language Models work quite well with general-purpose data and many tasks in Natural Language Processing. However, they show several limitations when used for a task such as domain-specific abstractive text summarization. This paper identifies three of those limitations as research problems in the context of abstractive text summarization: 1) the quadratic complexity of transformer-based models with respect to the input text length; 2) Model Hallucination, a model's tendency to generate factually incorrect text; and 3) Domain Shift, which occurs when the distributions of the model's training and test corpora differ. Along with a discussion of the open research questions, this paper also provides an assessment of existing state-of-the-art techniques relevant to domain-specific text summarization that address these research gaps.
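The quadratic complexity in limitation 1) stems from self-attention scoring every token pair: the Q·Kᵀ product is an n×n matrix, so time and memory grow with the square of the input length. A minimal NumPy sketch of this effect (illustrative only; the function and dimensions are assumptions, not from the paper):

```python
import numpy as np

def attention_scores(n_tokens: int, d_model: int = 64) -> np.ndarray:
    """Compute a raw self-attention score matrix for a random sequence.

    Q @ K.T yields an (n_tokens x n_tokens) matrix, so the cost is
    quadratic in input length. Illustrative sketch, not any specific
    model's implementation.
    """
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((n_tokens, d_model))  # queries
    K = rng.standard_normal((n_tokens, d_model))  # keys
    return Q @ K.T / np.sqrt(d_model)             # scaled dot-product scores

# Doubling the input length quadruples the score matrix:
small = attention_scores(256)
large = attention_scores(512)
print(small.shape, large.shape)  # (256, 256) (512, 512)
print(large.size / small.size)   # 4.0
```

This is why long documents are problematic for standard transformer summarizers, and why efficient-attention variants are among the techniques the paper surveys.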
