<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection:</title>
  <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/140" />
  <subtitle />
  <id>https://scholarworks.unist.ac.kr/handle/201301/140</id>
  <updated>2026-04-08T00:39:52Z</updated>
  <dc:date>2026-04-08T00:39:52Z</dc:date>
  <entry>
    <title>Decentralized Optimal Control for Leader-Follower Tilted-Hexarotors</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/81276" />
    <author>
      <name>Lee, Myoung Hoon</name>
    </author>
    <author>
      <name>Moon, Jun</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/81276</id>
    <updated>2024-01-31T16:40:32Z</updated>
    <published>2018-06-23T15:00:00Z</published>
    <summary type="text">Title: Decentralized Optimal Control for Leader-Follower Tilted-Hexarotors
Author(s): Lee, Myoung Hoon; Moon, Jun
Abstract: In this paper, we consider leader-follower decentralized optimal control for a hexarotor group with one leader and a large population of followers. Our hexarotor is modeled in the quaternion framework to resolve the singularity of the Euler-angle rotation representation, and it has 6-DoF due to its six tilted propellers, which allows it to control translation and attitude simultaneously. Using the mean field Stackelberg game framework, we obtain a set of decentralized optimal controls for the leader and N follower hexarotors when N is arbitrarily large; these decentralized optimal controls constitute an ε-Stackelberg equilibrium for the leader and N followers, where ε → 0 as N → ∞. Furthermore, we validate the theoretical results with simulations of two different operating scenarios.</summary>
    <dc:date>2018-06-23T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>A Scalable Unit Differential Power Processing System Design for Photovoltaic Applications</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/81275" />
    <author>
      <name>Jeong, Hoejeong</name>
    </author>
    <author>
      <name>Cho, Hyeun-Tae</name>
    </author>
    <author>
      <name>Kim, Taewon</name>
    </author>
    <author>
      <name>Liu, Yu-Chen</name>
    </author>
    <author>
      <name>Kim, Katherine A.</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/81275</id>
    <updated>2024-01-31T16:40:31Z</updated>
    <published>2018-06-24T15:00:00Z</published>
    <summary type="text">Title: A Scalable Unit Differential Power Processing System Design for Photovoltaic Applications
Author(s): Jeong, Hoejeong; Cho, Hyeun-Tae; Kim, Taewon; Liu, Yu-Chen; Kim, Katherine A.
Abstract: Differential power processing (DPP) systems are able to achieve high system efficiency and maintain maximum power production even under mismatched lighting conditions. However, DPP in large-scale systems faces the challenges of complicated wiring connections and high voltage ratings. The unit DPP structure, which consists of bidirectional flyback converters and a bidirectional boost converter, is introduced to overcome these scalability problems and to achieve maximum power point operation while minimizing processed power. Both voltage balancing and maximum power point tracking modes are used to effectively control system operation. The unit DPP system and control algorithm are verified through simulation and hardware experimentation. The unit DPP system successfully controls the current of each PV panel to reach its maximum power point. Results show a 9-10% system efficiency increase compared to a series string under uneven lighting conditions.</summary>
    <dc:date>2018-06-24T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Reference Modulation for Performance Enhancement in Motion Control Systems</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/81259" />
    <author>
      <name>Lee, Youngwoo</name>
    </author>
    <author>
      <name>Sun, Liting</name>
    </author>
    <author>
      <name>Moon, Jun</name>
    </author>
    <author>
      <name>Tomizuka, Masayoshi</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/81259</id>
    <updated>2024-01-31T16:40:14Z</updated>
    <published>2018-06-26T15:00:00Z</published>
    <summary type="text">Title: Reference Modulation for Performance Enhancement in Motion Control Systems
Author(s): Lee, Youngwoo; Sun, Liting; Moon, Jun; Tomizuka, Masayoshi
Abstract: In control engineering, there are many situations where a system designed with a fixed feedback controller is not customizable, yet its closed-loop performance (e.g., disturbance attenuation) is not satisfactory. To further enhance the closed-loop performance of such systems, we propose a reference modulation method that is compatible with any prefixed controller. By introducing an additional modulated function and a direct feedforward channel, our method allows greater flexibility in customizing the closed-loop sensitivity function, so that both the disturbance attenuation and the tracking performance can be improved. We also provide robust stability analysis of control systems designed by the proposed method. Simulations are performed on a wafer scanner system with a pre-designed feedback controller. The simulation results show that, via reference modulation, both the tracking and disturbance rejection of the closed-loop system are enhanced.</summary>
    <dc:date>2018-06-26T15:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling</title>
    <link rel="alternate" href="https://scholarworks.unist.ac.kr/handle/201301/81172" />
    <author>
      <name>Lee, Kyowoon</name>
    </author>
    <author>
      <name>Kim, Sol-A</name>
    </author>
    <author>
      <name>Choi, Jaesik</name>
    </author>
    <author>
      <name>Lee, Seong-Whan</name>
    </author>
    <id>https://scholarworks.unist.ac.kr/handle/201301/81172</id>
    <updated>2024-01-31T16:38:53Z</updated>
    <published>2018-07-10T15:00:00Z</published>
    <summary type="text">Title: Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling
Author(s): Lee, Kyowoon; Kim, Sol-A; Choi, Jaesik; Lee, Seong-Whan
Abstract: Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces. Recently, deep neural networks have successfully been applied to games with discrete action spaces. However, deep neural networks for discrete actions are not suitable for devising strategies for games where a very small change in an action can dramatically affect the outcome. In this paper, we present a new self-play reinforcement learning framework that incorporates a continuous search algorithm, enabling search in continuous action spaces via a kernel regression method. Without any hand-crafted features, our network is trained by supervised learning followed by self-play reinforcement learning with a high-fidelity simulator for the Olympic sport of curling. The program trained under our framework outperforms existing programs equipped with several hand-crafted features and won an international digital curling competition.</summary>
    <dc:date>2018-07-10T15:00:00Z</dc:date>
  </entry>
</feed>

