File Download

There are no files associated with this item.



Full metadata record

DC Field Value Language
dc.citation.endPage 2316 -
dc.citation.number 3 -
dc.citation.startPage 2307 -
dc.citation.title IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS -
dc.citation.volume 19 -
dc.contributor.author Ha, Junhyoung -
dc.contributor.author An, Byungchul -
dc.contributor.author Kim, Soonkyum -
dc.date.accessioned 2025-07-02T14:30:04Z -
dc.date.available 2025-07-02T14:30:04Z -
dc.date.created 2025-07-02 -
dc.date.issued 2023-03 -
dc.description.abstract In a graph search algorithm, a given environment is represented as a graph comprising a set of feasible system configurations and their neighboring connections. A path is generated by connecting the initial and goal configurations through graph exploration, where the resulting path is desired to be optimal or bounded-suboptimal. The computational performance of optimal path generation depends on avoiding unnecessary exploration. Accordingly, heuristic functions have been widely adopted to guide the exploration efficiently by providing estimated costs to the goal configurations. The exploration is efficient when the heuristic function estimates the optimal cost closely, which remains challenging because it requires a comprehensive understanding of the environment. This challenge leaves room to improve computational efficiency over existing methods. Herein, we propose reinforcement learning heuristic A* (RLHA*), which adopts an artificial neural network as a learning heuristic function to closely estimate the optimal cost while achieving a bounded-suboptimal path. Instead of being trained on precomputed paths, the learning heuristic function improves continually using self-generated paths. Numerous simulations were performed to demonstrate the consistent and robust performance of RLHA* by comparing it with existing methods. -
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, v.19, no.3, pp.2307 - 2316 -
dc.identifier.doi 10.1109/TII.2022.3188359 -
dc.identifier.issn 1551-3203 -
dc.identifier.scopusid 2-s2.0-85134225033 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/87271 -
dc.identifier.wosid 000967277300001 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title Reinforcement Learning Heuristic A* -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Automation & Control Systems; Computer Science, Interdisciplinary Applications; Engineering, Industrial -
dc.relation.journalResearchArea Automation & Control Systems; Computer Science; Engineering -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Costs -
dc.subject.keywordAuthor Heuristic algorithms -
dc.subject.keywordAuthor Path planning -
dc.subject.keywordAuthor Signal processing algorithms -
dc.subject.keywordAuthor Robots -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordAuthor Planning -
dc.subject.keywordAuthor path planning -
dc.subject.keywordAuthor reinforcement learning -
dc.subject.keywordAuthor Graph search -
dc.subject.keywordPlus NEURAL-NETWORK -
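The abstract describes A* search in which the heuristic's accuracy governs how much of the graph is explored; RLHA* replaces a hand-crafted estimate with a learned one. The following minimal sketch, assuming a plain 4-connected unit-cost grid (not the paper's experimental setup), shows the mechanism the abstract relies on: the same A* routine with an informative heuristic (Manhattan distance here, standing in for the learned estimator) reaches the same optimal cost as an uninformed search while expanding fewer nodes.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* graph search guided by a heuristic estimate of cost-to-goal.

    Entries are ordered by f = g + h, breaking ties toward larger g so
    the search commits to deeper nodes first. A tighter heuristic prunes
    more of the graph; a zero heuristic degenerates to Dijkstra's search.
    Returns (path, cost, number of expanded nodes).
    """
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, neg_g, node, path = heapq.heappop(frontier)
        g = -neg_g
        if node == goal:
            return path, g, expanded
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found later
        expanded += 1
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + heuristic(nxt), -ng, nxt, path + [nxt]))
    return None, float("inf"), expanded

def grid_neighbors(w, h):
    """4-connected neighbors on a w-by-h obstacle-free grid, unit edge cost."""
    def fn(p):
        x, y = p
        return [((nx, ny), 1)
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if 0 <= nx < w and 0 <= ny < h]
    return fn

def manhattan(goal):
    """Admissible hand-crafted heuristic; RLHA* would learn this estimate."""
    return lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])

path, cost, n_informed = a_star((0, 0), (9, 9), grid_neighbors(10, 10),
                                manhattan((9, 9)))
bpath, bcost, n_blind = a_star((0, 0), (9, 9), grid_neighbors(10, 10),
                               lambda p: 0)
# Both searches find an optimal path; the informed one expands fewer nodes.
```

An admissible heuristic (one that never overestimates) preserves optimality; inflating a learned heuristic by a factor ε instead yields the bounded-suboptimal guarantee the abstract mentions, at the cost of optimality but with still less exploration.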


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.