File Download

There are no files associated with this item.

Related Researcher

오현동

Oh, Hyondong
Autonomous Systems Lab.

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.endPage 22 -
dc.citation.startPage 1 -
dc.citation.title INFORMATION FUSION -
dc.citation.volume 85 -
dc.contributor.author Ladosz, Pawel -
dc.contributor.author Weng, Lilian -
dc.contributor.author Kim, Minwoo -
dc.contributor.author Oh, Hyondong -
dc.date.accessioned 2023-12-21T13:43:01Z -
dc.date.available 2023-12-21T13:43:01Z -
dc.date.created 2022-05-26 -
dc.date.issued 2022-09 -
dc.description.abstract This paper reviews exploration techniques in deep reinforcement learning. Exploration techniques are of primary importance when solving sparse reward problems. In sparse reward problems, the reward is rare, which means that the agent will rarely find the reward by acting randomly. In such a scenario, it is challenging for reinforcement learning to learn the association between rewards and actions, so more sophisticated exploration methods need to be devised. This review provides a comprehensive overview of existing exploration approaches, which are categorised based on their key contributions as: reward novel states, reward diverse behaviours, goal-based methods, probabilistic methods, imitation-based methods, safe exploration and random-based methods. Then, unsolved challenges are discussed to provide valuable future research directions. Finally, the approaches of different categories are compared in terms of complexity, computational effort and overall performance. -
dc.identifier.bibliographicCitation INFORMATION FUSION, v.85, pp.1 - 22 -
dc.identifier.doi 10.1016/j.inffus.2022.03.003 -
dc.identifier.issn 1566-2535 -
dc.identifier.scopusid 2-s2.0-85128759155 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/58575 -
dc.identifier.url https://www.sciencedirect.com/science/article/pii/S1566253522000288?via%3Dihub -
dc.identifier.wosid 000794853400001 -
dc.language English -
dc.publisher ELSEVIER -
dc.title Exploration in deep reinforcement learning: A survey -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Computer Science, Theory & Methods -
dc.relation.journalResearchArea Computer Science -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Deep reinforcement learning -
dc.subject.keywordAuthor Exploration -
dc.subject.keywordAuthor Intrinsic motivation -
dc.subject.keywordAuthor Sparse reward problems -
dc.subject.keywordPlus CURIOSITY -
dc.subject.keywordPlus DIVERSITY -
dc.subject.keywordPlus STATE -
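The abstract above groups exploration methods into categories such as rewarding novel states. As a purely illustrative sketch (the class name, the `beta` parameter, and the tabular-state assumption are mine, not details from the surveyed paper), one classic instance of that category is a count-based intrinsic bonus of the form beta / sqrt(N(s)):

```python
from collections import defaultdict
from math import sqrt

class CountBasedBonus:
    """Toy count-based exploration bonus: r_int = beta / sqrt(N(s)).

    Assumes hashable (e.g. small discrete) states; beta scales the
    intrinsic reward. Both are illustrative assumptions, not choices
    taken from the paper this record describes.
    """

    def __init__(self, beta: float = 0.1):
        self.beta = beta
        self.counts = defaultdict(int)  # N(s): visit count per state

    def bonus(self, state) -> float:
        """Increment the visit count for `state` and return its bonus."""
        self.counts[state] += 1
        return self.beta / sqrt(self.counts[state])

# A rarely visited state yields a larger bonus than a familiar one,
# nudging the agent toward novelty even when the extrinsic reward is sparse.
tracker = CountBasedBonus(beta=0.1)
first = tracker.bonus((0, 0))   # first visit: 0.1 / sqrt(1)
second = tracker.bonus((0, 0))  # second visit: 0.1 / sqrt(2), smaller
```

In practice such a bonus is simply added to the environment reward before the update step; deep variants replace the exact count N(s) with density models or hashing, since raw counts do not scale to continuous state spaces.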

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.