Adapter-based fine-tuning for PRIMERA

Date

2025

Publisher

Department of Computer Science and Engineering

Abstract

Multi-document summarisation (MDS) is the task of generating a concise summary from a cluster of related documents. PRIMERA (Pyramid-based Masked Sentence Pre-training for Multi-document Summarisation) is a model pre-trained specifically for MDS; it builds on the Longformer-Encoder-Decoder (LED) architecture to handle long input sequences efficiently [1–4]. Despite its capabilities, fully fine-tuning PRIMERA for a downstream task remains resource-intensive. To mitigate this, we explore the integration of adapter modules: small, trainable components inserted within the transformer layers that let the model adapt to a new task by updating only a small fraction of its parameters, substantially reducing the computational cost of fine-tuning [5–8].
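To illustrate the idea, the following is a minimal PyTorch sketch of a Houlsby-style bottleneck adapter of the kind typically inserted after a transformer sub-layer, together with a helper that freezes all non-adapter parameters. The class name, bottleneck size, and the name-based freezing rule are illustrative assumptions, not PRIMERA's actual implementation; the hidden size of 1024 corresponds to the LED-large configuration PRIMERA is built on.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Houlsby-style bottleneck adapter: down-project, nonlinearity,
    up-project, then a residual connection back to the input."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Zero-initialise the up-projection so the adapter starts as an
        # identity mapping and training begins from the pre-trained
        # model's behaviour.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def freeze_except_adapters(model: nn.Module) -> None:
    """Freeze every parameter outside the adapter modules, so only the
    small adapters receive gradient updates. Assumes adapter submodules
    are registered under attribute names containing "adapter"."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name


# With a hidden size of 1024, a 64-dimensional bottleneck trains roughly
# 2 x 1024 x 64 weights per insertion point, a small fraction of a full
# transformer layer.
adapter = BottleneckAdapter(hidden_size=1024, bottleneck_size=64)
x = torch.randn(2, 16, 1024)   # (batch, sequence, hidden)
print(adapter(x).shape)        # torch.Size([2, 16, 1024])
```

Because the up-projection is zero-initialised, inserting the adapter leaves the pre-trained model's outputs unchanged at the start of fine-tuning, which is the usual rationale for this initialisation choice.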
