Related Researcher

Na, Seung-Hoon (나승훈)
Natural Language Processing Lab


Detailed Information


Full metadata record

dc.citation.number: 4
dc.citation.startPage: 1508
dc.citation.title: APPLIED SCIENCES-BASEL
dc.citation.volume: 14
dc.contributor.author: Shafqat, Wafa
dc.contributor.author: Na, Seung-Hoon
dc.date.accessioned: 2025-04-25T15:10:47Z
dc.date.available: 2025-04-25T15:10:47Z
dc.date.created: 2025-04-08
dc.date.issued: 2024-02
dc.description.abstract: In today's world, where information keeps growing rapidly and changing constantly, language models play a crucial role in making our lives easier across different fields. However, it is tough to keep these models updated with all the new data while making sure they stay accurate and relevant. To tackle this challenge, our study proposes an innovative approach to facilitate the propagation of complex entity knowledge within language models through extensive triplet representation. Using a specially curated dataset (CTR-KE) derived from reliable sources like Wikipedia and Wikidata, the research assesses the efficacy of editing methods in handling intricate relationships between entities across multiple tiers of information. By employing a comprehensive triplet representation strategy, the study aims to enrich contextual understanding while mitigating the risks associated with distorting or forgetting critical information. The study evaluates its proposed methodology using various evaluation metrics and four distinct editing methods across three diverse language models (GPT2-XL, GPT-J, and Llama-2-7b). The results indicate the superiority of mass-editing memory in a transformer (MEMIT) and in-context learning for knowledge editing (IKE) in efficiently executing multiple updates within the triplet representation framework. This research signifies a promising pathway for deeper exploration of data representation for knowledge editing within large language models, and improved understanding of contexts to facilitate continual learning.
dc.identifier.bibliographicCitation: APPLIED SCIENCES-BASEL, v.14, no.4, pp.1508
dc.identifier.doi: 10.3390/app14041508
dc.identifier.issn: 2076-3417
dc.identifier.scopusid: 2-s2.0-85192466886
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/86770
dc.identifier.wosid: 001168244800001
dc.language: English
dc.publisher: MDPI
dc.title: Evaluating Complex Entity Knowledge Propagation for Knowledge Editing in LLMs
dc.type: Article
dc.description.isOpenAccess: TRUE
dc.relation.journalWebOfScienceCategory: Chemistry, Multidisciplinary; Engineering, Multidisciplinary; Materials Science, Multidisciplinary; Physics, Applied
dc.relation.journalResearchArea: Chemistry; Engineering; Materials Science; Physics
dc.type.docType: Article
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: entity knowledge propagation (EKP)
dc.subject.keywordAuthor: comprehensive knowledge representation
dc.subject.keywordAuthor: knowledge graph
dc.subject.keywordAuthor: knowledge editing
dc.subject.keywordAuthor: large language models (LLMs)
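The abstract describes knowledge editing over (subject, relation, object) triplets drawn from sources such as Wikidata. As a rough illustration of that data model, the sketch below shows a triplet store and a single counterfactual edit; the `Triplet` class, the `edit` helper, and the example facts are all hypothetical and are not taken from the paper or the CTR-KE dataset.

```python
# Minimal sketch of the (subject, relation, object) triplet idea used in
# knowledge-editing work; every name here is illustrative, not the paper's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    subject: str
    relation: str
    obj: str

# A tiny store of entity knowledge, as it might be extracted from Wikidata.
knowledge = {
    Triplet("Eiffel Tower", "located_in", "Paris"),
}

def edit(store: set, old: Triplet, new: Triplet) -> set:
    """Replace one fact with another: the basic unit of a knowledge edit."""
    store.discard(old)
    store.add(new)
    return store

# Apply one counterfactual edit, the kind of update editing methods must
# then propagate consistently through a model's related knowledge.
edit(knowledge,
     Triplet("Eiffel Tower", "located_in", "Paris"),
     Triplet("Eiffel Tower", "located_in", "Rome"))

print(sorted((t.subject, t.relation, t.obj) for t in knowledge))
```

Methods such as MEMIT or IKE would then be evaluated on whether the edited fact, and facts that depend on it, are reflected in the model's outputs.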


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.