Live Session
Session 11: Sequential Recommendation 2
Research
Equivariant Contrastive Learning for Sequential Recommendation
Peilin Zhou (HKUST (Guangzhou)), Jingqi Gao (Upstage), Yueqi Xie (HKUST), Qichen Ye (Peking University), Yining Hua (Harvard Medical School), Jaeboum Kim (The Hong Kong University of Science and Technology, Upstage), Shoujin Wang (Data Science Institute, University of Technology Sydney) and Sunghun Kim (The Hong Kong University of Science and Technology)
Abstract
Contrastive learning (CL) benefits the training of sequential recommendation (SR) models by providing informative self-supervision signals. Existing solutions apply general sequential data augmentation strategies to generate positive pairs and encourage their representations to be invariant. However, due to the inherent properties of user behavior sequences, some augmentation strategies, such as item substitution, can change the underlying user intent. Indiscriminately learning invariant representations for all augmentation strategies may therefore be sub-optimal. We propose Equivariant Contrastive Learning for Sequential Recommendation (ECL-SR), which endows SR models with strong discriminative power, making the learned user behavior representations sensitive to invasive augmentations (e.g., item substitution) and insensitive to mild augmentations (e.g., feature-level dropout masking). Specifically, we use a conditional discriminator to capture behavioral differences caused by item substitution, which encourages the user behavior encoder to be equivariant to invasive augmentations. Comprehensive experiments on four benchmark datasets show that the proposed ECL-SR framework achieves competitive performance against state-of-the-art SR models. The source code will be released.