Related Researcher

Lee, Yongjae (이용재)
Financial Engineering Lab.

Detailed Information


FinAgentBench: A Benchmark Dataset for Agentic Retrieval in Financial Question Answering

Author(s)
Choi, Chanyeol; Kwon, Jihoon; Lopez-Lira, Alejandro; Kim, Chaewoon; Kim, Minjae; Hwang, Juneha; Ha, Jaeseon; Choi, Hojun; Yun, Suyeol; Kim, Yongjin; Lee, Yongjae
Issued Date
2025-11-14
DOI
10.1145/3768292.3770362
URI
https://scholarworks.unist.ac.kr/handle/201301/89409
Citation
6th ACM International Conference on AI in Finance, ICAIF 2025, pp.632 - 637
Abstract
Accurate information retrieval (IR) is critical in the financial domain, where investors must identify relevant information from large collections of documents. Traditional IR methods - whether sparse or dense - often fall short in retrieval accuracy, as the task requires not only capturing semantic similarity but also performing fine-grained reasoning over document structure and domain-specific knowledge. Recent advances in large language models (LLMs) have opened up new opportunities for retrieval with multi-step reasoning, where the model ranks passages through iterative reasoning about which information is most relevant to a given query. However, no benchmark exists to evaluate such capabilities in the financial domain. To address this gap, we introduce FinAgentBench, the first large-scale benchmark for evaluating retrieval with multi-step reasoning in finance - a setting we term agentic retrieval. The benchmark consists of 26K expert-annotated examples on S&P-500 listed firms and assesses whether LLM agents can (1) identify the most relevant document type among candidates, and (2) pinpoint the key passage within the selected document. Our evaluation framework explicitly separates these two reasoning steps to address context limitations, providing a quantitative basis for understanding retrieval-centric LLM behavior in finance. We evaluate a suite of state-of-the-art models and further demonstrate how targeted fine-tuning can significantly improve agentic retrieval performance. Our benchmark provides a foundation for studying retrieval-centric LLM behavior in complex, domain-specific tasks for finance.
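The two-stage evaluation described in the abstract can be illustrated with a minimal sketch. All names here (`relevance_score`, `stage1_select_doc_type`, `stage2_select_passage`, the toy term-overlap scorer) are hypothetical stand-ins, not the FinAgentBench implementation; in the benchmark, the ranking decisions would be made by an LLM agent rather than a lexical heuristic.

```python
def relevance_score(query: str, text: str) -> float:
    # Stand-in for an LLM relevance judgment: simple term overlap.
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def stage1_select_doc_type(query: str, docs_by_type: dict) -> str:
    # Stage 1: rank candidate document types (e.g. 10-K, 10-Q, 8-K)
    # and pick the most relevant one for the query.
    return max(docs_by_type, key=lambda dt: relevance_score(query, docs_by_type[dt]))

def stage2_select_passage(query: str, passages: list) -> str:
    # Stage 2: within the selected document, pinpoint the key passage.
    return max(passages, key=lambda p: relevance_score(query, p))

# Toy example with two candidate document types and two passages.
docs_by_type = {
    "10-K": "annual report revenue risk factors",
    "8-K": "material event press release",
}
query = "what are the risk factors"
doc_type = stage1_select_doc_type(query, docs_by_type)
passages = [
    "Revenue grew 10 percent year over year.",
    "Key risk factors include market volatility.",
]
passage = stage2_select_passage(query, passages)
```

Separating the two stages, as the benchmark does, keeps each step within a bounded context: the agent first narrows the search to one document type, then reasons over only that document's passages.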
Publisher
Association for Computing Machinery, Inc

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.