Related Researcher

Kim, Youngdae (김영대)


Full metadata record

DC Field Value Language
dc.citation.startPage 108546 -
dc.citation.title ELECTRIC POWER SYSTEMS RESEARCH -
dc.citation.volume 212 -
dc.contributor.author Zeng, Sihan -
dc.contributor.author Kody, Alyssa -
dc.contributor.author Kim, Youngdae -
dc.contributor.author Kim, Kibaek -
dc.contributor.author Molzahn, Daniel K. -
dc.date.accessioned 2024-08-09T10:35:07Z -
dc.date.available 2024-08-09T10:35:07Z -
dc.date.created 2024-08-09 -
dc.date.issued 2022-11 -
dc.description.abstract With the increasing penetration of distributed energy resources, distributed optimization algorithms have attracted significant attention for power systems applications due to their potential for superior scalability, privacy, and robustness to a single point of failure. The Alternating Direction Method of Multipliers (ADMM) is a popular distributed optimization algorithm; however, its convergence performance is highly dependent on the selection of penalty parameters, which are usually chosen heuristically. In this work, we use reinforcement learning (RL) to develop an adaptive penalty parameter selection policy for the alternating current optimal power flow (ACOPF) problem solved via ADMM, with the goal of minimizing the number of iterations until convergence. We train our RL policy using deep Q-learning and show that this policy can result in significantly accelerated convergence (up to a 59% reduction in the number of iterations compared to existing, curvature-informed penalty parameter selection methods). Furthermore, we show that our RL policy demonstrates promise for generalizability, performing well under unseen loading schemes as well as under unseen losses of lines and generators (up to a 50% reduction in iterations). This work thus provides a proof-of-concept for using RL for parameter selection in ADMM for power systems applications. -
dc.identifier.bibliographicCitation ELECTRIC POWER SYSTEMS RESEARCH, v.212, pp.108546 -
dc.identifier.doi 10.1016/j.epsr.2022.108546 -
dc.identifier.issn 0378-7796 -
dc.identifier.scopusid 2-s2.0-85134590405 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/83428 -
dc.identifier.wosid 000856610700009 -
dc.language English -
dc.publisher ELSEVIER SCIENCE SA -
dc.title A reinforcement learning approach to parameter selection for distributed optimal power flow -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Engineering, Electrical & Electronic -
dc.relation.journalResearchArea Engineering -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Distributed optimization -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordAuthor Deep Q-learning -
dc.subject.keywordAuthor Alternating direction method of multipliers -
dc.subject.keywordAuthor Alternating current optimal power flow -
dc.subject.keywordPlus ADAPTIVE ADMM -
dc.subject.keywordPlus OPF -
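
As a rough illustration of the idea described in the abstract above, the sketch below uses tabular Q-learning (a simpler stand-in for the paper's deep Q-learning policy) to pick the ADMM penalty parameter at each iteration of a toy two-agent consensus least-squares problem. The toy problem, the discrete action set RHO_CHOICES, the residual-based state discretization, and all function names are illustrative assumptions, not the authors' ACOPF formulation or implementation.

```python
# Hypothetical sketch: ADMM for a toy consensus least-squares problem in which the
# penalty parameter rho is chosen each iteration by a tabular Q-learning policy.
# This is NOT the paper's ACOPF / deep Q-learning implementation; everything here
# is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two agents, each with a local least-squares objective ||A_i x_i - b_i||^2,
# coupled through the consensus constraint x_i = z.
A = [rng.standard_normal((20, 5)) for _ in range(2)]
b = [rng.standard_normal(20) for _ in range(2)]

RHO_CHOICES = [0.1, 1.0, 10.0]          # discrete action set of penalty values
Q = np.zeros((10, len(RHO_CHOICES)))    # Q-table over discretized residual states
EPS, ALPHA, GAMMA = 0.2, 0.5, 0.9       # exploration rate, learning rate, discount

def residual_state(r):
    """Map the primal residual norm to a coarse discrete state index."""
    return min(int(-np.log10(r + 1e-12)), 9) if r < 1.0 else 0

def admm_episode(learn=True, max_iter=200, tol=1e-6):
    n = A[0].shape[1]
    x = [np.zeros(n) for _ in range(2)]
    y = [np.zeros(n) for _ in range(2)]  # scaled dual variables
    z = np.zeros(n)
    state = 0
    for k in range(max_iter):
        # Policy step: epsilon-greedy choice of the penalty parameter.
        if learn and rng.random() < EPS:
            a = rng.integers(len(RHO_CHOICES))
        else:
            a = int(np.argmax(Q[state]))
        rho = RHO_CHOICES[a]

        # Standard scaled-form ADMM updates for the consensus problem.
        for i in range(2):
            H = A[i].T @ A[i] + rho * np.eye(n)
            x[i] = np.linalg.solve(H, A[i].T @ b[i] + rho * (z - y[i]))
        z = np.mean([x[i] + y[i] for i in range(2)], axis=0)
        for i in range(2):
            y[i] += x[i] - z

        r = np.linalg.norm(np.concatenate([x[i] - z for i in range(2)]))
        next_state = residual_state(r)
        if learn:
            # Reward of -1 per iteration: maximizing return minimizes the iteration count.
            target = -1.0 + GAMMA * np.max(Q[next_state])
            Q[state, a] += ALPHA * (target - Q[state, a])
        state = next_state
        if r < tol:
            return k + 1
    return max_iter

for episode in range(50):               # train the penalty-selection policy
    admm_episode(learn=True)
print("iterations with learned policy:", admm_episode(learn=False))
```

The per-iteration reward of -1 mirrors the paper's stated objective of minimizing the number of ADMM iterations until convergence; in the actual work this role is played by a deep Q-network acting on the ACOPF subproblem structure rather than a small Q-table.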

