Knowledge-augmented relation learning for complementary recommendation with large language models


September 22, 2025

Slide overview

Chihiro Yamasaki, Kai Sugahara, Kazushi Okamoto: Knowledge-augmented relation learning for complementary recommendation with large language models, The 2nd Workshop on Generative AI for E-Commerce 2025 in conjunction with the 19th ACM Conference on Recommender Systems (RecSys 2025), 2025.9, Prague, Czech Republic.


Data Science Research Group, The University of Electro-Communications


Text on each page
1.

Knowledge-Augmented Relation Learning for Complementary Recommendation with Large Language Models
Chihiro Yamasaki, Kai Sugahara, Kazushi Okamoto
[email protected], [email protected], [email protected]
The University of Electro-Communications, Tokyo, Japan

Summary: We propose KARL (Knowledge-Augmented Relation Learning), which combines active learning with large language models to efficiently expand high-quality function-based labels for complementary recommendation. It aims to clarify how to resolve the trade-off between label quality and cost, and how data diversity affects model generalization in different prediction contexts.

Background: Complementary Recommendation
Identify functionally compatible item pairs to enhance user satisfaction and boost sales. Example: for a phone (query item), another branded phone is a substitute, while its case is a complement.

Function-based Labels (FBLs)
A high-quality definition of complementary relationships based on item functions [1] (item symbols A and B restored here; the extraction dropped the original notation):
i. Items A and B have the same function and usage = Subst.
ii. Item A can be replenished with item B = Compl.
iii. Item B can be replenished with item A = Compl.
iv. Items A and B must be combined to be usable = Compl.
v. When combined with item B, item A becomes more useful = Compl.
vi. When combined with item A, item B becomes more useful = Compl.
vii. Combining items A and B makes them more useful = Compl.
viii. Items A and B have no relationship.
ix. Items A and B seem to have a relationship, but it is difficult to verbalize.
Strength: independence from users' browsing or purchase logs, avoiding the noise they carry (unlike behavior-based labels [2]).
Weakness: requires human annotation, limiting the model's ability to generalize across diverse items.

Problem: How can diverse item complementary relationships be classified accurately and efficiently under limited FBL datasets and annotation resources?

KARL: Knowledge-Augmented Relation Learning
A framework designed to mitigate this problem for FBLs: an iterative four-step process that improves the generalization capability of the model using the limited FBLs and LLM knowledge (Fig.1). It addresses the risk that models trained on small, domain-limited datasets may perform well only in ID settings.
Uncertainty sampling: Random, Query-by-Committee, or Margin. ML classifier: Logistic Regression. LLM: GPT-4o-mini.
(Notation: n = number of training pairs; x_i = feature vector of the i-th pair.)
Fig.1: General overview of KARL

Experiment
We conducted experiments on the ASKUL* dataset.
* https://github.com/okamoto-lab/fbl_dataset
Human FBLs: one subset was used to train a model with 5-fold nested CV and evaluate it in the in-distribution (ID) setting; another was used to evaluate it in the out-of-distribution (OOD) setting.

RQ1 & RQ2: ID/OOD Accuracy Analysis
ID setting: the improvement was less than 0.5%, and prolonged learning could degrade accuracy (Fig.2).
OOD setting: the improvement reached 37%, showing that the data diversity introduced by KARL enhanced knowledge acquisition (Fig.3).
Fig.2: ID Accuracy over Iterations (RQ1)
Fig.3: OOD Accuracy over Iterations (RQ2)

RQ3: Diversity–Accuracy Relation Analysis
Result: diversity improves performance in OOD settings, though it can reduce accuracy in ID settings (Fig.4). This suggests that diversity expands knowledge in unfamiliar spaces while disrupting well-learned distributions.
Fig.4: Diversity–Accuracy Relation (RQ3)

[1] C. Yamasaki, K. Sugahara, Y. Nagi, and K. Okamoto. 2025. Function-based Labels for Complementary Recommendation: Definition, Annotation, and LLM-as-a-Judge. arXiv:2507.03945.
[2] J. McAuley, R. Pandey, and J. Leskovec. 2015. Inferring Networks of Substitutable and Complementary Products. In Proc. 21st ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. 785–794.
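The iterative loop the poster describes (train a classifier on the labeled FBLs, select uncertain pairs, have the LLM annotate them, expand the label set) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy data, the stub `llm_annotate` function, and all variable names are assumptions; margin-based uncertainty sampling with scikit-learn's `LogisticRegression` stands in for the paper's setup.

```python
# Hedged sketch of a KARL-style active-learning loop: margin-based
# uncertainty sampling with a logistic-regression classifier, where an
# "LLM" (a stub here) annotates the queried pairs. Toy data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature vectors x_i for a pool of candidate item pairs.
pool_X = rng.normal(size=(200, 8))
hidden_w = rng.normal(size=8)
pool_y = (pool_X @ hidden_w > 0).astype(int)  # hidden binary label (Compl. or not)

# Small seed set standing in for the human FBLs.
seed = rng.choice(len(pool_X), size=20, replace=False)
labels = {int(i): int(pool_y[i]) for i in seed}

def llm_annotate(indices):
    """Stand-in for the LLM annotator (GPT-4o-mini in the poster);
    here it simply reveals the hidden label."""
    return [int(pool_y[i]) for i in indices]

clf = LogisticRegression()
for step in range(5):  # iterative refinement rounds
    idx = np.array(sorted(labels))
    clf.fit(pool_X[idx], np.array([labels[i] for i in idx]))

    unlabeled = np.array([i for i in range(len(pool_X)) if i not in labels])
    proba = clf.predict_proba(pool_X[unlabeled])
    # Margin sampling: smallest gap between the two most likely classes.
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    query = unlabeled[np.argsort(margin)[:10]]  # 10 most uncertain pairs

    for i, lab in zip(query, llm_annotate(query)):
        labels[int(i)] = lab  # expand the label set for the next round

acc = clf.score(pool_X, pool_y)
```

Each round queries the ten pairs with the smallest prediction margin; swapping that selection rule out gives the Random and Query-by-Committee variants the poster lists as alternatives.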