Related Researcher

Lee, Yeon-Chang (이연창)
Data Intelligence Lab

Full metadata record

DC Field Value Language
dc.citation.endPage 365 -
dc.citation.startPage 351 -
dc.citation.title INFORMATION SCIENCES -
dc.citation.volume 605 -
dc.contributor.author Park, Junha -
dc.contributor.author Lee, Yeon-Chang -
dc.contributor.author Kim, Sang-Wook -
dc.date.accessioned 2024-01-19T12:05:29Z -
dc.date.available 2024-01-19T12:05:29Z -
dc.date.created 2024-01-16 -
dc.date.issued 2022-08 -
dc.description.abstract In this paper, we start by pointing out a problem with the nearest-NS (NNS) negative sampling (NS) strategy used in metric learning (ML)-based recommendation methods. NNS samples a user's unrated items that are nearer to her with higher probability. This can push her preferred items far away from her, thereby excluding the preferred items from top-K recommendation. To address this problem, we first define the concept of a cage for a user: a region that contains the items she is highly likely to prefer. Based on this concept, we propose a novel NS strategy, named cage-based NS (CNS), that rarely samples her preferred items as negative items, thereby improving the accuracy of top-K recommendation. Furthermore, we propose CNS+, an improved version of CNS that reduces its computational overhead. The CNS+ strategy provides much higher performance than CNS without sacrificing accuracy. Through extensive experiments on four real-life datasets, we validate the effectiveness (i.e., accuracy) and efficiency (i.e., performance) of the proposed approach. We first demonstrate that our CNS strategy successfully addresses the problem of the NNS strategy. In addition, we show that applying our CNS strategy to three existing ML-based recommendation methods (i.e., CML, LRML, and SML) consistently and significantly improves their accuracy on all datasets and with all metrics. We also confirm that the CNS+ strategy significantly reduces execution times with (almost) no loss of CNS's accuracy. Finally, we show that our CNS and CNS+ strategies scale linearly with the number of ratings. (c) 2022 Elsevier Inc. All rights reserved. -
dc.identifier.bibliographicCitation INFORMATION SCIENCES, v.605, pp.351 - 365 -
dc.identifier.doi 10.1016/j.ins.2022.05.039 -
dc.identifier.issn 0020-0255 -
dc.identifier.scopusid 2-s2.0-85130814785 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/68072 -
dc.identifier.wosid 000807200000004 -
dc.language English -
dc.publisher ELSEVIER SCIENCE INC -
dc.title Effective and efficient negative sampling in metric learning based recommendation -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems -
dc.relation.journalResearchArea Computer Science -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Recommender systems -
dc.subject.keywordAuthor Metric learning -
dc.subject.keywordAuthor Negative sampling -
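The abstract above describes cage-based negative sampling (CNS): a user's "cage" contains the items she likely prefers, and negatives are drawn from outside it. The minimal Python sketch below illustrates one plausible reading of that idea only; the function name, the `cage_size` parameter, and the approximation of the cage as the user's nearest unrated items in the embedding space are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def cage_based_negative_sample(user_vec, item_vecs, rated, cage_size, rng):
    """Sample one negative item for a user, skipping items inside her 'cage'.

    Hypothetical sketch: the cage is approximated as the `cage_size` unrated
    items nearest to the user in the embedding space; since nearby unrated
    items are likely preferred, they are excluded from the negative pool.
    """
    n_items = item_vecs.shape[0]
    # Candidate negatives come only from the user's unrated items
    unrated = np.setdiff1d(np.arange(n_items), rated)
    # Euclidean distance from the user to every unrated item
    dists = np.linalg.norm(item_vecs[unrated] - user_vec, axis=1)
    # Sort unrated items by distance; the nearest `cage_size` form the cage
    by_distance = unrated[np.argsort(dists)]
    candidates = by_distance[cage_size:]  # everything outside the cage
    return int(rng.choice(candidates))

# Toy usage with random embeddings
rng = np.random.default_rng(0)
item_vecs = rng.normal(size=(10, 4))   # 10 items in a 4-d metric space
user_vec = rng.normal(size=4)
neg = cage_based_negative_sample(user_vec, item_vecs,
                                 rated=[0, 1], cage_size=3, rng=rng)
```

Under this reading, nearest-NS (NNS) would instead *favor* the items CNS excludes, which is exactly the failure mode the abstract describes.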

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.