Related Researcher

최진석 (Choi, Jinseok)
Intelligent Wireless Communications Lab.

Full metadata record

dc.citation.endPage: 7167
dc.citation.number: 10
dc.citation.startPage: 7152
dc.citation.title: IEEE TRANSACTIONS ON COMMUNICATIONS
dc.citation.volume: 67
dc.contributor.author: Mismar, Faris B.
dc.contributor.author: Choi, Jinseok
dc.contributor.author: Evans, Brian L.
dc.date.accessioned: 2023-12-21T18:37:01Z
dc.date.available: 2023-12-21T18:37:01Z
dc.date.created: 2020-10-26
dc.date.issued: 2019-10
dc.description.abstract: Tuning cellular network performance against ever-present wireless impairments can dramatically improve reliability for end users. In this paper, we formulate cellular network performance tuning as a reinforcement learning (RL) problem and provide a solution that improves performance in both indoor and outdoor environments. By leveraging the ability of Q-learning to estimate future performance-improvement rewards, we propose two algorithms: 1) closed-loop power control (PC) for downlink voice over LTE (VoLTE) and 2) self-organizing network (SON) fault management. The VoLTE PC algorithm uses RL to adjust the indoor base station transmit power so that the signal-to-interference-plus-noise ratio (SINR) of a user equipment (UE) meets the target SINR, without the UE having to send power control requests. The SON fault management algorithm uses RL to improve the performance of an outdoor base station cluster by resolving network faults through configuration management. Both algorithms exploit measurements from the connected users, wireless impairments, and relevant configuration parameters to solve a non-convex performance optimization problem using RL. Simulation results show that our proposed RL-based algorithms outperform today's industry standards in realistic cellular communication environments.
dc.identifier.bibliographicCitation: IEEE TRANSACTIONS ON COMMUNICATIONS, v.67, no.10, pp.7152-7167
dc.identifier.doi: 10.1109/TCOMM.2019.2926715
dc.identifier.issn: 0090-6778
dc.identifier.scopusid: 2-s2.0-85077495526
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/48607
dc.identifier.url: https://ieeexplore.ieee.org/document/8758212
dc.identifier.wosid: 000502107500039
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title: A Framework for Automated Cellular Network Tuning With Reinforcement Learning
dc.type: Article
dc.description.isOpenAccess: FALSE
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic; Telecommunications
dc.relation.journalResearchArea: Engineering; Telecommunications
dc.type.docType: Article
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: Framework
dc.subject.keywordAuthor: reinforcement learning
dc.subject.keywordAuthor: artificial intelligence
dc.subject.keywordAuthor: VoLTE
dc.subject.keywordAuthor: MOS
dc.subject.keywordAuthor: QoE
dc.subject.keywordAuthor: wireless
dc.subject.keywordAuthor: tuning
dc.subject.keywordAuthor: optimization
dc.subject.keywordAuthor: SON
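
As a companion to the abstract above, here is a minimal sketch of the closed-loop power-control idea it describes: a tabular Q-learning agent nudges the base station transmit power up or down until the UE's SINR meets a target, with no power control requests from the UE. Everything in the sketch is an illustrative assumption (the scalar link model, the state discretization, the reward, and all names such as TARGET_SINR_DB and sinr_db); it is not the paper's implementation.

```python
# Toy tabular Q-learning loop for closed-loop downlink power control,
# loosely following the VoLTE PC idea in the abstract. The link model,
# state discretization, reward, and all constants below are assumptions
# made for this sketch, not values from the paper.
import random

POWER_LEVELS_DBM = list(range(20, 47, 3))  # assumed discrete BS transmit powers
ACTIONS = (-1, 0, +1)                      # lower / hold / raise the power index
TARGET_SINR_DB = 10.0                      # assumed UE target SINR
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1      # learning rate, discount, exploration

def sinr_db(tx_power_dbm, path_loss_db, interference_db):
    """Scalar toy link model: SINR = Tx power - path loss - interference."""
    return tx_power_dbm - path_loss_db - interference_db

def state_of(sinr):
    """Discretize the SINR error (dB) into a small integer state."""
    return max(-5, min(5, round(sinr - TARGET_SINR_DB)))

Q = {}  # Q[(state, action_index)] -> estimated return

def choose_action(state):
    if random.random() < EPSILON:                # explore
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)),              # exploit
               key=lambda a: Q.get((state, a), 0.0))

idx = len(POWER_LEVELS_DBM) // 2  # start at a mid-range power level
for step in range(5000):
    # Random wireless impairments for this step (assumed ranges).
    path_loss = random.uniform(95.0, 115.0)
    interference = random.uniform(-90.0, -80.0)

    s = state_of(sinr_db(POWER_LEVELS_DBM[idx], path_loss, interference))
    a = choose_action(s)
    idx = max(0, min(len(POWER_LEVELS_DBM) - 1, idx + ACTIONS[a]))
    new_sinr = sinr_db(POWER_LEVELS_DBM[idx], path_loss, interference)

    # Reward is highest when the achieved SINR sits on the target.
    r = -abs(new_sinr - TARGET_SINR_DB)
    s2 = state_of(new_sinr)
    best_next = max(Q.get((s2, a2), 0.0) for a2 in range(len(ACTIONS)))
    Q[(s, a)] = (1 - ALPHA) * Q.get((s, a), 0.0) + ALPHA * (r + GAMMA * best_next)
```

The update in the last line is the standard Q-learning recurrence; in the paper itself the state and reward are presumably built from richer measurements (the author keywords mention MOS and QoE), and the SON fault-management algorithm uses a similar RL loop over configuration parameters rather than transmit power.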

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.