Live Session
Hall 406 CX
Paper
20 Sep
 
11:15
SGT
Session 1: Collaborative filtering 1
Research

Adversarial Collaborative Filtering for Free


Huiyuan Chen (Visa Research), Xiaoting Li (Visa Research), Vivian Lai (Visa Research), Chin-Chia Michael Yeh (Visa Research), Yujie Fan (Visa Research), Yan Zheng (Visa Research), Mahashweta Das (Visa Research) and Hao Yang (Visa Research).

Abstract

Collaborative Filtering (CF) has been successfully applied to help users discover items of interest. Nevertheless, existing CF methods suffer from noisy data, which degrades the quality of personalized recommendation. To tackle this problem, many prior studies apply the adversarial learning principle to regularize the representations of users and items, which has proven effective at improving both generalizability and robustness. Generally, these methods learn the adversarial perturbations and the model parameters under a min-max optimization framework. However, two major limitations remain: 1) existing methods lack theoretical guarantees for why adding perturbations improves generalizability and robustness, since noisy data is fundamentally different from adversarial attacks; 2) solving the min-max optimization is time-consuming. Beyond updating the model parameters, each iteration requires additional computation to update the perturbations, making these methods unscalable to industry-scale datasets.
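The per-iteration overhead described above can be seen in a minimal sketch of min-max adversarial training for matrix factorization. The toy squared loss, the perturbation radius `eps`, and all variable names are illustrative assumptions, not the paper's exact formulation; the point is that each step needs one extra gradient pass to build the worst-case perturbation.

```python
import numpy as np

# Toy setup: small random embeddings and a binary feedback matrix.
rng = np.random.default_rng(0)
n_users, n_items, d = 4, 5, 3
U = rng.normal(scale=0.1, size=(n_users, d))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, d))   # item embeddings
R = rng.integers(0, 2, size=(n_users, n_items)).astype(float)  # toy feedback

eps, lr = 0.05, 0.01  # perturbation radius, learning rate (assumed values)

def loss_and_grads(U, V):
    err = U @ V.T - R                # prediction error
    loss = 0.5 * (err ** 2).sum()
    return loss, err @ V, err.T @ U  # loss, dL/dU, dL/dV

init_loss, _, _ = loss_and_grads(U, V)
for _ in range(100):
    # Inner max: an EXTRA gradient pass to construct the perturbation
    # that (locally) most increases the loss, scaled to radius eps.
    _, gU, gV = loss_and_grads(U, V)
    dU = eps * gU / (np.linalg.norm(gU) + 1e-12)
    dV = eps * gV / (np.linalg.norm(gV) + 1e-12)
    # Outer min: update parameters against the perturbed loss.
    _, gU_adv, gV_adv = loss_and_grads(U + dU, V + dV)
    U -= lr * gU_adv
    V -= lr * gV_adv
final_loss, _, _ = loss_and_grads(U, V)
```

Each iteration costs two gradient evaluations instead of one — the scalability limitation the paper targets.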

In this paper, we present Sharpness-aware Matrix Factorization (SharpMF), a simple yet effective method that conducts adversarial training with no extra computational cost over the base optimizer. To achieve this goal, we first revisit existing adversarial collaborative filtering and discuss its connection to the recent Sharpness-aware Minimization. This analysis shows that adversarial training in fact seeks model parameters that lie in neighborhoods with uniformly low loss values, which yields better generalizability. To eliminate the computational overhead, SharpMF introduces a novel trajectory loss that measures sharpness between the current weights and past weights. Experimental results on real-world datasets demonstrate that SharpMF achieves superior performance with almost zero additional computational cost compared to adversarial training.
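The trajectory idea above can be sketched as follows: rather than computing a fresh adversarial perturbation (the extra gradient pass in the min-max scheme), the displacement between past and current weights — a free byproduct of optimization — serves as the perturbation direction. The scale `rho`, the squared loss, and the update rule are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

# Toy setup mirroring a matrix-factorization model.
rng = np.random.default_rng(1)
n_users, n_items, d = 4, 5, 3
U = rng.normal(scale=0.1, size=(n_users, d))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, d))   # item embeddings
R = rng.integers(0, 2, size=(n_users, n_items)).astype(float)  # toy feedback

lr, rho = 0.01, 0.5  # learning rate, trajectory-perturbation scale (assumed)
U_past, V_past = U.copy(), V.copy()  # past weights along the trajectory

def mse_loss(U, V):
    return 0.5 * ((U @ V.T - R) ** 2).sum()

def grads(U, V):
    err = U @ V.T - R
    return err @ V, err.T @ U  # dL/dU, dL/dV

init_loss = mse_loss(U, V)
for _ in range(100):
    # "Free" perturbation: the displacement from past to current weights.
    # No extra gradient pass is needed to construct it.
    dU, dV = U - U_past, V - V_past
    U_past, V_past = U.copy(), V.copy()
    # Single gradient evaluation per step, at the trajectory-perturbed point.
    gU, gV = grads(U + rho * dU, V + rho * dV)
    U -= lr * gU
    V -= lr * gV
final_loss = mse_loss(U, V)
```

One gradient evaluation per iteration, the same as the base optimizer — consistent with the "almost zero additional computational cost" claim.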
