File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)
Related Researcher

이용재

Lee, Yongjae
Financial Engineering Lab.


Full metadata record

DC Field Value Language
dc.citation.conferencePlace SI -
dc.citation.endPage 637 -
dc.citation.startPage 632 -
dc.citation.title 6th ACM International Conference on AI in Finance, ICAIF 2025 -
dc.contributor.author Choi, Chanyeol -
dc.contributor.author Kwon, Jihoon -
dc.contributor.author Lopez-Lira, Alejandro -
dc.contributor.author Kim, Chaewoon -
dc.contributor.author Kim, Minjae -
dc.contributor.author Hwang, Juneha -
dc.contributor.author Ha, Jaeseon -
dc.contributor.author Choi, Hojun -
dc.contributor.author Yun, Suyeol -
dc.contributor.author Kim, Yongjin -
dc.contributor.author Lee, Yongjae -
dc.date.accessioned 2025-12-29T15:27:00Z -
dc.date.available 2025-12-29T15:27:00Z -
dc.date.created 2025-12-25 -
dc.date.issued 2025-11-14 -
dc.description.abstract Accurate information retrieval (IR) is critical in the financial domain, where investors must identify relevant information from large collections of documents. Traditional IR methods - whether sparse or dense - often fall short in retrieval accuracy, as the task requires not only capturing semantic similarity but also performing fine-grained reasoning over document structure and domain-specific knowledge. Recent advances in large language models (LLMs) have opened up new opportunities for retrieval with multi-step reasoning, where the model ranks passages through iterative reasoning about which information is most relevant to a given query. However, no benchmark exists to evaluate such capabilities in the financial domain. To address this gap, we introduce FinAgentBench, the first large-scale benchmark for evaluating retrieval with multi-step reasoning in finance - a setting we term agentic retrieval. The benchmark consists of 26K expert-annotated examples on S&P-500 listed firms and assesses whether LLM agents can (1) identify the most relevant document type among candidates, and (2) pinpoint the key passage within the selected document. Our evaluation framework explicitly separates these two reasoning steps to address context limitations, providing a quantitative basis for understanding retrieval-centric LLM behavior in finance. We evaluate a suite of state-of-the-art models and further demonstrate how targeted fine-tuning can significantly improve agentic retrieval performance. Our benchmark provides a foundation for studying retrieval-centric LLM behavior in complex, domain-specific tasks for finance. -
dc.identifier.bibliographicCitation 6th ACM International Conference on AI in Finance, ICAIF 2025, pp.632 - 637 -
dc.identifier.doi 10.1145/3768292.3770362 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/89409 -
dc.language English -
dc.publisher Association for Computing Machinery, Inc -
dc.title FinAgentBench: A Benchmark Dataset for Agentic Retrieval in Financial Question Answering -
dc.type Conference Paper -
dc.date.conferenceDate 2025-11-15 -
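
The abstract describes a two-step "agentic retrieval" setup: first select the most relevant document type among candidates, then pinpoint the key passage within the selected document. The following is a minimal sketch of that two-step structure only; the `score_relevance` function is a stand-in keyword-overlap heuristic, not the paper's LLM-based method, and all names here are illustrative assumptions.

```python
def score_relevance(query: str, text: str) -> int:
    """Toy relevance score: count of query tokens appearing in the text.
    A stand-in for an LLM relevance judgment (assumption, not the paper's method)."""
    tokens = set(query.lower().split())
    return sum(1 for t in tokens if t in text.lower())

def agentic_retrieve(query, documents):
    """Two-step retrieval over `documents`, a mapping of doc_type -> list of passages.
    Step 1: choose the document type whose best passage scores highest for the query.
    Step 2: choose the single best passage within that document."""
    best_type = max(
        documents,
        key=lambda d: max(score_relevance(query, p) for p in documents[d]),
    )
    best_passage = max(
        documents[best_type],
        key=lambda p: score_relevance(query, p),
    )
    return best_type, best_passage
```

Separating the two steps, as the benchmark does, keeps each decision within a bounded context: the model never has to rank every passage of every document at once.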


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.