Model Shifting Unlearning: a scalable approach to data removal
Date
2025
Publisher
Department of Computer Science & Engineering
Abstract
In the modern digital age, data has become a primary driver of progress across many areas of life. However, growing dependence on data has heightened concerns about individual privacy, prompting regulatory frameworks such as the European Union's General Data Protection Regulation, which enshrines the "Right to be Forgotten" [1]. The regulation grants individuals the right to demand the removal of their personal data from company databases, and it poses serious challenges for machine learning (ML) models that draw conclusions from large data sources [2]. Conventional compliance requires fully retraining a model to remove the influence of specific data, which is resource-intensive and generally impractical for large models, highlighting the critical need for more efficient solutions. Existing approaches, such as influence functions and noise injection, attempt to address the challenge, but they often sacrifice model performance or struggle to scale [3]. Motivated by these challenges, this study introduces Model Shifting Unlearning, a novel technique designed to efficiently remove the influence of specific data from ML models without full retraining. The method identifies and suppresses neurons significantly impacted by the data to be forgotten, thereby preserving overall model integrity and performance. The primary objectives of this study are to develop a scalable unlearning framework that preserves model performance and to evaluate its efficiency against state-of-the-art techniques [4]. By addressing the challenge of knowledge removal in machine learning, the study helps advance more ethically accountable and legally compliant AI systems.
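To make the neuron-suppression idea concrete, the sketch below shows one plausible realization in PyTorch. The abstract does not specify how influenced neurons are identified, so the selection criterion (mean gradient magnitude over the forget set), the parameters top_frac and damp, and the helper name suppress_influenced_neurons are illustrative assumptions, not the study's exact procedure.

```python
# Minimal sketch, assuming influence is measured by gradient magnitude
# on the forget set; this is an illustration, not the thesis's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def suppress_influenced_neurons(model, forget_loader, top_frac=0.05, damp=0.0):
    """Dampen the neurons whose incoming weights receive the largest
    gradients when the model is evaluated on the data to be forgotten."""
    model.zero_grad()
    for x, y in forget_loader:
        # Accumulate gradients of the loss over the entire forget set.
        loss = F.cross_entropy(model(x), y)
        loss.backward()

    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear) and module.weight.grad is not None:
                # Per-neuron influence score: mean |gradient| across the
                # neuron's incoming weights (one score per output unit).
                influence = module.weight.grad.abs().mean(dim=1)
                k = max(1, int(top_frac * influence.numel()))
                _, idx = influence.topk(k)
                # Suppress the most-influenced neurons by scaling their
                # weights and biases toward zero (damp=0.0 zeroes them).
                module.weight[idx] *= damp
                if module.bias is not None:
                    module.bias[idx] *= damp
    model.zero_grad()
```

In practice, any such procedure would also be validated on held-out retain data to confirm that suppressing the selected neurons removes the targeted influence without degrading overall accuracy.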
