We investigate whether model extraction can be used to “steal” the weights of
sequential recommender systems, and the potential threats posed to victims of
such attacks. This type of risk has attracted attention in image and text
classification, but to our knowledge not in recommender systems. We argue that
sequential recommender systems are subject to unique vulnerabilities due to the
specific autoregressive regimes used to train them. Unlike many existing
attacks on recommender systems, which assume that the dataset used to train
the victim model is exposed to the attacker, we consider a data-free setting
in which training data are not accessible. Under this setting, we propose an API-based model
extraction method via limited-budget synthetic data generation and knowledge
distillation. We investigate state-of-the-art models for sequential
recommendation and show their vulnerability under model extraction and
downstream attacks. We perform attacks in two stages. (1) Model extraction:
given different types of synthetic data and their labels retrieved from a
black-box recommender, we distill the black-box model into a white-box
model. (2) Downstream attacks: we attack the black-box model with
adversarial samples generated by the white-box recommender. Experiments show
the effectiveness of our data-free model extraction and downstream attacks on
sequential recommenders in both profile pollution and data poisoning settings.
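As a rough illustration of stage (1), the sketch below shows how a black-box sequential recommender could be queried on synthetic sequences and distilled into a white-box surrogate under a limited query budget. This is a minimal, hypothetical sketch, not the paper's implementation: the names `black_box_api.recommend`, `white_box`, `budget`, and the top-1 cross-entropy loss are illustrative assumptions.

```python
# Minimal sketch of stage (1): data-free model extraction via synthetic
# sequences and knowledge distillation. All interfaces here are hypothetical.
import torch
import torch.nn.functional as F

def extract(black_box_api, white_box, optimizer, num_items, seq_len, budget):
    """Distill a black-box sequential recommender into a white-box surrogate."""
    white_box.train()
    for _ in range(budget):  # limited query budget
        # Generate a synthetic interaction sequence (uniform random items here;
        # more informed generation strategies are possible).
        seq = torch.randint(0, num_items, (1, seq_len))

        # Query the black-box recommender for its ranked recommendations
        # (assumed to return a list of item ids).
        ranked_items = black_box_api.recommend(seq)

        # White-box surrogate scores every item in the catalog.
        logits = white_box(seq)  # shape: (1, num_items)

        # Distillation-style objective: push the surrogate to rank the
        # black box's top recommendation highest (top-1 cross-entropy proxy).
        target = torch.tensor([ranked_items[0]], dtype=torch.long)
        loss = F.cross_entropy(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return white_box
```

The extracted white-box surrogate can then be used in stage (2) to craft adversarial sequences, which are replayed against the black-box recommender for profile pollution or data poisoning.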
