Poster
LMEraser: Large Model Unlearning via Adaptive Prompt Tuning
Zihan Wu · Neil Chada
To address the growing demand for privacy protection in machine learning, we propose an efficient and exact machine unlearning method for large models, called LMEraser. LMEraser adopts a divide-and-conquer strategy with an adaptive prompt tuning mechanism to isolate data influence effectively. The training dataset is partitioned into public and private datasets. Public data are used to train the model's backbone. Private data are clustered based on their diversity, and a tailored prompt is tuned independently for each cluster. This design enables targeted unlearning by updating only the affected prompts, which significantly reduces unlearning costs while maintaining high model performance. Evaluations show that LMEraser reduces unlearning costs by a factor of 100 compared to prior work without compromising model utility.
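A minimal sketch of the workflow described above, not the authors' implementation: it assumes a frozen backbone trained on public data, clusters private data, tunes one prompt per cluster, and "unlearns" a private example by re-tuning only the affected cluster's prompt. The helpers `extract_features` and `tune_prompt`, and the cluster count `k`, are hypothetical placeholders.

```python
# Illustrative sketch of cluster-isolated prompt tuning and targeted unlearning.
# Assumptions: the backbone is frozen after public-data training; a "prompt"
# here is a toy per-class mean vector, standing in for real prompt tuning.

import numpy as np
from sklearn.cluster import KMeans


def extract_features(x: np.ndarray) -> np.ndarray:
    """Stand-in for features from the frozen, publicly trained backbone."""
    return x  # assume inputs are already feature vectors in this sketch


def tune_prompt(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Stand-in for tuning one cluster's prompt (purely illustrative)."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in np.unique(labels)])


# --- setup: cluster private data and tune one prompt per cluster ------------
rng = np.random.default_rng(0)
private_x = rng.normal(size=(600, 32))      # private examples (as features)
private_y = rng.integers(0, 5, size=600)    # their labels

k = 8                                       # number of clusters (assumed)
clusterer = KMeans(n_clusters=k, n_init=10, random_state=0)
cluster_ids = clusterer.fit_predict(extract_features(private_x))

prompts = {
    c: tune_prompt(private_x[cluster_ids == c], private_y[cluster_ids == c])
    for c in range(k)
}

# --- exact unlearning of one private example ---------------------------------
forget_idx = 42
affected = cluster_ids[forget_idx]          # only this cluster is touched

keep = np.ones(len(private_x), dtype=bool)
keep[forget_idx] = False
mask = keep & (cluster_ids == affected)

# Re-tune only the affected cluster's prompt from scratch; the backbone and
# the other k-1 prompts are untouched, so the cost is one prompt-tuning run.
prompts[affected] = tune_prompt(private_x[mask], private_y[mask])
```

Because the deleted example only ever influenced its own cluster's prompt, retraining that single prompt removes its influence exactly, which is what keeps the unlearning cost small relative to retraining the whole model.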