Optimization and Deep Learning for Wireless Communications and Robotics Automation

Author(s)
Jang, Jonggyu
Advisor
Yoon, Sung Whan
Issued Date
2021-02
URI
https://scholarworks.unist.ac.kr/handle/201301/82444
http://unist.dcollection.net/common/orgView/200000371196
Abstract
In this dissertation, several studies on optimization theory and deep learning for wireless communications and robotics are presented. In recent years, deep learning has attracted attention owing to its outstanding performance in various fields of research. Although there are various types of deep learning methods, such as supervised learning, unsupervised learning, and reinforcement learning, most current studies concentrate on data-dependent supervised learning. Supervised learning shows remarkable performance in computer vision and pattern recognition, but it is difficult to apply to wireless communication networks because of their time-varying nature. Therefore, our focus is to apply reinforcement learning to wireless communication networks and robotics, especially to systems that must make appropriate decisions for a given situation. In the research on wireless communications, frequency resource allocation (RA), user association (UA), and power control (PC) in communication networks are studied. First, we propose a UA, RA, and PC scheme based on optimization theory. In this study, a low-complexity solution is proposed that closely achieves the outer bound of the NP-hard combinatorial problem. However, RA and PC techniques applicable under arbitrary channel state information (CSI) assumptions have long been a challenge. Therefore, second, we propose a deep-learning-based RA and PC method that maximizes the sum-rate and can be used under any CSI assumption. The weakness of this method is that UA is fixed, and some performance is lost by using distributed RA and PC to reduce the number of optimization variables. To address this weakness, we then study UA, RA, and PC that maximize α-fairness in renewable energy source (RES)-enabled heterogeneous networks (HetNets). In this study, the reinforcement learning architecture is efficiently reduced by means of optimization theory, and the solution can be obtained in a shorter time than with purely optimization-based algorithms. In addition, by utilizing deep reinforcement learning, dynamic PC can be designed, which is not possible with optimization theory alone. For the research on robotics, we study autofocus controllers for scanning electron microscopes (SEMs). As a result of this study, we have developed the world's first deep-learning-based autofocusing SEM, which achieves higher image quality and faster speed than existing autofocus algorithms. A detailed video demo is available at: https://youtu.be/MvSaoPQvDdo.
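
For reference, the sum-rate and α-fairness objectives mentioned in the abstract generally follow standard definitions; the sketch below uses the textbook α-fair utility with illustrative symbols (per-user rate R_k, fairness parameter α) rather than the dissertation's exact notation:

    % Standard alpha-fair utility of a per-user rate R_k (illustrative notation)
    U_\alpha(R_k) =
    \begin{cases}
      \dfrac{R_k^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1, \\
      \log R_k,                         & \alpha = 1,
    \end{cases}
    \qquad
    \text{network objective: } \max \sum_{k} U_\alpha(R_k).

Setting α = 0 recovers sum-rate maximization, α = 1 corresponds to proportional fairness, and α → ∞ approaches max-min fairness.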
Publisher
Ulsan National Institute of Science and Technology (UNIST)
Degree
Doctor
Major
Department of Electrical Engineering

