Adapter-based fine-tuning for PRIMERA
| dc.contributor.author | Hewapathirana, K | |
| dc.contributor.author | De Silva, N | |
| dc.contributor.author | Athuraliya, CD | |
| dc.contributor.editor | Gunawardena, S | |
| dc.date.accessioned | 2025-11-19T04:30:53Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Multi-document summarisation (MDS) involves generating concise summaries from clusters of related documents. PRIMERA (Pyramid-based Masked Sentence Pre-training for Multi-document Summarisation) is a pre-trained model specifically designed for MDS, utilising the LED architecture to handle long sequences effectively [1–4]. Despite its capabilities, fine-tuning PRIMERA for specific tasks remains resource-intensive. To mitigate this, we explore the integration of adapter modules—small, trainable components inserted within transformer layers—that allow models to adapt to new tasks by updating only a fraction of the parameters, thereby reducing computational requirements [5–8]. | |
| dc.identifier.conference | Applied Data Science & Artificial Intelligence (ADScAI) Symposium 2025 | |
| dc.identifier.department | Department of Computer Science & Engineering | |
| dc.identifier.doi | https://doi.org/10.31705/ADScAI.2025.57 | |
| dc.identifier.email | kushan.22@cse.mrt.ac.lk | |
| dc.identifier.email | nisansa@cse.mrt.ac.lk | |
| dc.identifier.email | cd@conscient.ai | |
| dc.identifier.faculty | Engineering | |
| dc.identifier.place | Moratuwa, Sri Lanka | |
| dc.identifier.proceeding | Proceedings of Applied Data Science & Artificial Intelligence Symposium 2025 | |
| dc.identifier.uri | https://dl.lib.uom.lk/handle/123/24392 | |
| dc.language.iso | en | |
| dc.publisher | Department of Computer Science and Engineering | |
| dc.subject | Multi-document Summarisation | |
| dc.subject | Natural Language Processing | |
| dc.subject | Pre-trained Models | |
| dc.subject | Adapters | |
| dc.title | Adapter-based fine-tuning for PRIMERA | |
| dc.type | Conference-Extended-Abstract | |
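
The abstract above describes adapter modules as small, trainable components inserted within transformer layers, with only those components updated during fine-tuning. The snippet below is a minimal sketch of that idea, assuming a PyTorch-style bottleneck adapter; the class name, bottleneck size, and the LED-style hidden width of 1024 are illustrative assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Minimal bottleneck adapter: down-projection, non-linearity, up-projection,
    wrapped in a residual connection. In adapter-based fine-tuning, only these
    parameters are updated while the pre-trained weights stay frozen."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Zero-init the up-projection so the adapter starts as an identity map
        # and the pre-trained model's behaviour is preserved at step 0.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


if __name__ == "__main__":
    # Toy check with an LED-sized hidden width (illustrative only).
    adapter = BottleneckAdapter(hidden_size=1024, bottleneck_size=64)
    states = torch.randn(2, 16, 1024)  # (batch, sequence length, hidden size)
    print(adapter(states).shape)       # torch.Size([2, 16, 1024])
    print(sum(p.numel() for p in adapter.parameters()))  # trainable params per adapter
```

In a setup like the one the abstract outlines, the pre-trained PRIMERA weights would typically be frozen (`requires_grad = False`) and adapters of this form inserted into each transformer layer, so that only a small fraction of the total parameters is trained [5–8].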
