Model shifting unlearning: a scalable approach to data removal

dc.contributor.authorPallewela, LCK
dc.contributor.editorAthuraliya, CD
dc.date.accessioned2025-11-25T05:25:12Z
dc.date.issued2025
dc.description.abstractIn the digital age, data has become a principal driver of progress across many areas of life. However, growing dependence on data has heightened concerns about individual privacy, prompting regulatory frameworks such as the European Union's General Data Protection Regulation, which enshrines the "Right to be Forgotten" [1]. This right allows individuals to demand the removal of their personal data from organisational databases and poses serious challenges for machine learning (ML) models that draw conclusions from large data sources [2]. Conventional compliance approaches require fully retraining a model to remove the influence of specific data, which is highly resource-intensive and generally infeasible for large models, underscoring the need for more efficient solutions. Existing techniques, such as influence functions and noise injection, attempt to address this problem but often sacrifice model performance or fail to scale [3]. Motivated by these challenges, this study introduces Model Shifting Unlearning, a novel technique designed to efficiently remove the influence of specific data from ML models without full retraining. The method identifies and suppresses neurons significantly impacted by the data to be forgotten, thereby preserving overall model integrity and performance. The primary objectives of this study are to develop a scalable unlearning framework that preserves model performance and to evaluate its efficiency against state-of-the-art techniques [4]. By addressing the challenge of knowledge removal in machine learning, the study contributes to more ethically accountable and legally compliant AI systems.
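For illustration only, the sketch below is not taken from the paper; it shows one plausible way the neuron-suppression idea described in the abstract could be realised in PyTorch. The function names, the gradient-based scoring rule, and the suppression fraction are all assumptions, not the authors' method.

# Hypothetical sketch of neuron-level suppression for unlearning (assumed design, not the paper's code).
# Assumes a PyTorch feed-forward classifier and a DataLoader over the forget set.
import torch
import torch.nn as nn

def neuron_influence_scores(model: nn.Module, forget_loader, loss_fn) -> dict:
    """Accumulate the absolute weight gradient per output unit of each Linear layer on the forget set."""
    scores = {name: torch.zeros(m.out_features)
              for name, m in model.named_modules() if isinstance(m, nn.Linear)}
    model.eval()
    for x, y in forget_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for name, m in model.named_modules():
            if isinstance(m, nn.Linear) and m.weight.grad is not None:
                # Sum of absolute weight gradients per output neuron of this layer.
                scores[name] += m.weight.grad.abs().sum(dim=1)
    return scores

def suppress_top_neurons(model: nn.Module, scores: dict, fraction: float = 0.05):
    """Zero the weights and biases of the most-influenced neurons in each Linear layer."""
    with torch.no_grad():
        for name, m in model.named_modules():
            if isinstance(m, nn.Linear) and name in scores:
                k = max(1, int(fraction * m.out_features))
                top = torch.topk(scores[name], k).indices
                m.weight[top] = 0.0
                if m.bias is not None:
                    m.bias[top] = 0.0

In this sketch, a caller would first compute scores on a loader built from the data to be forgotten and then apply the suppression step; whether the paper uses gradients, activations, or another influence measure is not specified in the abstract.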
dc.identifier.conferenceApplied Data Science & Artificial Intelligence (ADScAI) Symposium 2025
dc.identifier.departmentDepartment of Computer Science & Engineering
dc.identifier.doihttps://doi.org/10.31705/ADScAI.2025.01
dc.identifier.emailcb010521@students.apiit.lk
dc.identifier.facultyEngineering
dc.identifier.placeMoratuwa, Sri Lanka
dc.identifier.proceedingProceedings of Applied Data Science & Artificial Intelligence Symposium 2025
dc.identifier.urihttps://dl.lib.uom.lk/handle/123/24470
dc.language.isoen
dc.publisherDepartment of Computer Science & Engineering
dc.subjectMachine Unlearning
dc.subjectModel-Shifting
dc.subjectPrivacy-Preserving AI
dc.subjectNeural Networks
dc.subjectData Removal
dc.titleModel shifting unlearning: a scalable approach to data removal
dc.typeConference-Extended-Abstract

Files

Original bundle
Name: Paper 1 - ADScAI 2025.pdf
Size: 126.45 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon submission