File Download

There are no files associated with this item.

Related Researcher

나승훈

Na, Seung-Hoon
Natural Language Processing Lab


Full metadata record

DC Field Value Language
dc.citation.number C -
dc.citation.startPage 129675 -
dc.citation.title EXPERT SYSTEMS WITH APPLICATIONS -
dc.citation.volume 298 -
dc.contributor.author Nyange, Roseline -
dc.contributor.author Qiao, Shanbao -
dc.contributor.author Na, Seung-Hoon -
dc.date.accessioned 2025-11-26T09:52:31Z -
dc.date.available 2025-11-26T09:52:31Z -
dc.date.created 2025-10-17 -
dc.date.issued 2026-03 -
dc.description.abstract Updating language models with new information through targeted edits, without resorting to expensive full model retraining, remains a critical challenge, particularly when aiming to preserve pre-existing capabilities. In this work, we introduce Modular Editing via Customized expert networks and Adaptors (MECA), a unified framework that selectively integrates new knowledge into language models. MECA employs a module-level deferral router to evaluate whether incoming queries fall within the scope of existing edit requests. Queries are then dynamically routed to either customized editing experts or key-value adaptors. This modular strategy ensures that updates are localized, thereby mitigating the risk of unintended alterations to unrelated outputs. We validate our approach on sequential editing tasks using Llama2-7B, Llama2-13B, and Falcon 11B, benchmarked across two diverse datasets, ZsRE and Hallucination. Experimental results show that MECA consistently outperforms several state-of-the-art knowledge editing techniques, achieving improved integration of new information while preserving the model's original performance. Our analysis further demonstrates that the deferral routing mechanism for selecting modules effectively balances editing precision with overall model stability. -
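The abstract describes a module-level deferral router that sends a query to a customized editing expert only when the query falls within the scope of a stored edit request, and otherwise defers to the original model. The following is a minimal, illustrative sketch of that routing idea only; the class, threshold, and cosine-similarity scope test are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

class DeferralRouter:
    """Toy sketch of deferral routing: a query representation is compared
    against one key vector per stored edit; if the best match exceeds a
    threshold, the matching editing expert handles the query, otherwise
    the query is deferred to the frozen base module."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # scope cutoff (illustrative value)
        self.edit_keys = []         # one unit key vector per edit request
        self.experts = []           # one expert callable per edit request

    def add_edit(self, key, expert_fn):
        # Store a normalized key so the dot product below is cosine similarity.
        self.edit_keys.append(np.asarray(key, dtype=float) / np.linalg.norm(key))
        self.experts.append(expert_fn)

    def route(self, query, base_fn):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        if self.edit_keys:
            sims = np.array([k @ q for k in self.edit_keys])
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                # In scope of an existing edit: use the customized expert.
                return self.experts[best](query)
        # Out of scope: defer to the unmodified base module.
        return base_fn(query)

router = DeferralRouter(threshold=0.8)
router.add_edit([1.0, 0.0, 0.0], lambda q: "edited")
print(router.route([0.9, 0.1, 0.0], lambda q: "base"))  # near the edit key
print(router.route([0.0, 0.0, 1.0], lambda q: "base"))  # unrelated query
```

The design point this sketch captures is locality: only queries that match a stored edit key touch an expert, so unrelated queries see exactly the base model's behavior, which is how the paper frames preserving original performance.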
dc.identifier.bibliographicCitation EXPERT SYSTEMS WITH APPLICATIONS, v.298, no.C, pp.129675 -
dc.identifier.doi 10.1016/j.eswa.2025.129675 -
dc.identifier.issn 0957-4174 -
dc.identifier.scopusid 2-s2.0-105020576839 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/88548 -
dc.identifier.wosid 001584970400003 -
dc.language English -
dc.publisher PERGAMON-ELSEVIER SCIENCE LTD -
dc.title MECA: Modular editing via customized expert networks and adaptors in large language models -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Operations Research & Management Science -
dc.relation.journalResearchArea Computer Science; Engineering; Operations Research & Management Science -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Large language models (LLM) -
dc.subject.keywordAuthor Mixture of experts (MoE) -
dc.subject.keywordAuthor Knowledge editing -
dc.subject.keywordAuthor Continual learning -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.