A Survey of Poisoning Attacks and Countermeasures in Recommender Systems
A repository of poisoning attacks against recommender systems, as well as their countermeasures. This repository accompanies our systematic review, entitled Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures.
📌 We are actively tracking the latest research and welcome contributions to our repository and survey paper. If your studies are relevant, please feel free to create an issue or a pull request.
📰 2024-10-09: Our paper Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures has been published in ACM Computing Surveys (CSUR) (IF 16.6, Top 2%, CORE A*) and is available here.
📰 2024-06-20: Our paper Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures has been accepted by ACM Computing Surveys (CSUR) (IF 16.6, Top 2%, CORE A*).
Citation
If you find this work helpful in your research, please consider citing the paper and giving the repository a ⭐.
Nguyen, T.T., Quoc Viet Hung, N., Nguyen, T.T., Huynh, T.T., Nguyen, T.T., Weidlich, M. and Yin, H., 2024. Manipulating recommender systems: A survey of poisoning attacks and countermeasures. ACM Computing Surveys.
@article{nguyen2024manipulating,
title={Manipulating recommender systems: A survey of poisoning attacks and countermeasures},
author={Nguyen, Thanh Toan and Quoc Viet Hung, Nguyen and Nguyen, Thanh Tam and Huynh, Thanh Trung and Nguyen, Thanh Thi and Weidlich, Matthias and Yin, Hongzhi},
journal={ACM Computing Surveys},
year={2024},
publisher={ACM New York, NY}
}
Poisoning attacks tamper with the training data of a machine learning (ML) model in order to compromise its availability and integrity. The figure below contrasts the typical process of a poisoning attack with the normal learning process. In the latter case, an ML model is trained on data and subsequently used to derive recommendations; the quality of the ML model thus depends on the quality of the training data. In a poisoning attack, malicious data is injected into the training process, and hence into the model, to produce unintended or harmful outcomes.
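To make this concrete, below is a minimal sketch (in Python, using only NumPy) of a classic random push attack on a rating matrix: injected fake profiles give a target item the maximum rating and pad a few random "filler" items to resemble genuine users. All sizes, names, and data here are illustrative assumptions, not an implementation from any surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items = 50, 20
target_item = 7        # hypothetical item the attacker wants to promote
n_fake_users = 10      # attack size

# Genuine training data: ratings on a 1-5 scale, 0 meaning "unrated".
ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
ratings[rng.random((n_users, n_items)) < 0.7] = 0.0   # keep ~30% of entries

# Random push attack: each fake profile gives the target item the maximum
# rating and pads a few randomly chosen filler items to look genuine.
fake = np.zeros((n_fake_users, n_items))
for u in range(n_fake_users):
    fillers = rng.choice(n_items, size=5, replace=False)
    fake[u, fillers] = rng.integers(1, 6, size=5)
    fake[u, target_item] = 5.0

poisoned = np.vstack([ratings, fake])

def mean_item_scores(r):
    """Average observed rating per item (a stand-in for a trained recommender)."""
    counts = (r > 0).sum(axis=0)
    return r.sum(axis=0) / np.maximum(counts, 1)

print("target item before attack:", round(mean_item_scores(ratings)[target_item], 2))
print("target item after attack: ", round(mean_item_scores(poisoned)[target_item], 2))
```

Even this crude attack visibly inflates the target item's average score; the attacks surveyed in the paper are far more sophisticated, e.g., optimizing fake profiles against a specific model or training objective.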
Below, we review detection methods in more detail, starting with supervised methods before turning to semi-supervised and unsupervised methods.
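To give a flavor of the unsupervised side, the sketch below computes the Rating Deviation from Mean Agreement (RDMA) statistic, a classic signal from the shilling-attack detection literature: attack profiles tend to disagree with the per-item consensus, especially on sparsely rated items. The toy data and all parameters are illustrative assumptions, not code from the surveyed papers.

```python
import numpy as np

def rdma_scores(ratings):
    """RDMA per user: mean over the user's rated items of
    |r_ui - item_mean_i| / n_i, where n_i is item i's number of ratings."""
    rated = ratings > 0
    n_i = rated.sum(axis=0)
    item_mean = ratings.sum(axis=0) / np.maximum(n_i, 1)
    scores = np.zeros(ratings.shape[0])
    for u in range(ratings.shape[0]):
        items = np.flatnonzero(rated[u])
        if items.size:
            scores[u] = np.mean(np.abs(ratings[u, items] - item_mean[items]) / n_i[items])
    return scores

# Toy demo: genuine users rate close to a per-item consensus, while three
# fake profiles push item 4 and rate filler items against the consensus.
rng = np.random.default_rng(1)
quality = rng.integers(1, 6, size=15)                            # consensus per item
r = np.clip(quality + rng.integers(-1, 2, size=(40, 15)), 1, 5).astype(float)
r[rng.random(r.shape) < 0.6] = 0.0                               # sparsify
r[-3:, :] = 0.0
r[-3:, 4] = 5.0                                                  # pushed target item
r[-3:, [0, 1, 2]] = np.where(quality[[0, 1, 2]] >= 3, 1.0, 5.0)  # anti-consensus fillers

s = rdma_scores(r)
print("mean RDMA, genuine users:", round(s[:-3].mean(), 3))
print("mean RDMA, fake profiles:", round(s[-3:].mean(), 3))
```

In practice, users would be ranked by such a statistic and the highest-scoring profiles inspected or filtered; the supervised and semi-supervised detectors surveyed in the paper build richer features and classifiers on top of signals like this one.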
Feel free to contact us if you have any queries or exciting news. In addition, we welcome all researchers to contribute to this repository and further contribute to the knowledge of this field.
If you have other related references, please feel free to create a GitHub issue with the paper information. We will gladly update the repository according to your suggestions. (You can also create pull requests, but it might take some time for us to merge them.)