There are no files associated with this item.
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.citation.endPage | 7167 | - |
dc.citation.number | 10 | - |
dc.citation.startPage | 7152 | - |
dc.citation.title | IEEE TRANSACTIONS ON COMMUNICATIONS | - |
dc.citation.volume | 67 | - |
dc.contributor.author | Mismar, Faris B. | - |
dc.contributor.author | Choi, Jinseok | - |
dc.contributor.author | Evans, Brian L. | - |
dc.date.accessioned | 2023-12-21T18:37:01Z | - |
dc.date.available | 2023-12-21T18:37:01Z | - |
dc.date.created | 2020-10-26 | - |
dc.date.issued | 2019-10 | - |
dc.description.abstract | Tuning cellular network performance against ever-present wireless impairments can dramatically improve reliability for end users. In this paper, we formulate cellular network performance tuning as a reinforcement learning (RL) problem and provide a solution to improve performance for indoor and outdoor environments. By leveraging the ability of Q-learning to estimate future performance improvement rewards, we propose two algorithms: 1) closed loop power control (PC) for downlink voice over LTE (VoLTE) and 2) self-organizing network (SON) fault management. The VoLTE PC algorithm uses RL to adjust the indoor base station transmit power so that the signal-to-interference-plus-noise ratio (SINR) of a user equipment (UE) meets the target SINR. It does so without the UE having to send power control requests. The SON fault management algorithm uses RL to improve the performance of an outdoor base station cluster by resolving faults in the network through configuration management. Both algorithms exploit measurements from the connected users, wireless impairments, and relevant configuration parameters to solve a non-convex performance optimization problem using RL. Simulation results show that our proposed RL-based algorithms outperform today's industry standards in realistic cellular communication environments. | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON COMMUNICATIONS, v.67, no.10, pp.7152 - 7167 | - |
dc.identifier.doi | 10.1109/TCOMM.2019.2926715 | - |
dc.identifier.issn | 0090-6778 | - |
dc.identifier.scopusid | 2-s2.0-85077495526 | - |
dc.identifier.uri | https://scholarworks.unist.ac.kr/handle/201301/48607 | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/8758212 | - |
dc.identifier.wosid | 000502107500039 | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | A Framework for Automated Cellular Network Tuning With Reinforcement Learning | - |
dc.type | Article | - |
dc.description.isOpenAccess | FALSE | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic; Telecommunications | - |
dc.relation.journalResearchArea | Engineering; Telecommunications | - |
dc.type.docType | Article | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Framework | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | artificial intelligence | - |
dc.subject.keywordAuthor | VoLTE | - |
dc.subject.keywordAuthor | MOS | - |
dc.subject.keywordAuthor | QoE | - |
dc.subject.keywordAuthor | wireless | - |
dc.subject.keywordAuthor | tuning | - |
dc.subject.keywordAuthor | optimization | - |
dc.subject.keywordAuthor | SON | - |
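The abstract's first algorithm uses Q-learning to step base-station transmit power until a UE's SINR meets a target, without UE power-control requests. A minimal sketch of that closed-loop idea follows; the toy link budget, the three coarse SINR-error buckets used as states, the ±1 dB power steps used as actions, and the squared-error reward are all simplifying assumptions for illustration, not the paper's actual formulation.

```python
import random

TARGET_SINR_DB = 10.0
ACTIONS = [-1.0, 0.0, 1.0]  # transmit-power step in dB: down, hold, up

def sinr_db(tx_power_dbm, path_loss_db=95.0, noise_floor_dbm=-80.0):
    """Toy link budget: received power minus an interference-plus-noise floor."""
    return tx_power_dbm - path_loss_db - noise_floor_dbm

def state_of(sinr):
    """Discretize the SINR error from target into three coarse buckets."""
    err = sinr - TARGET_SINR_DB
    return "low" if err < -2.0 else "high" if err > 2.0 else "ok"

def train(episodes=500, steps=20, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # Q-table: (state, action_index) -> estimated return
    for _ in range(episodes):
        power = rng.uniform(10.0, 40.0)  # random initial tx power (dBm)
        for _ in range(steps):
            s = state_of(sinr_db(power))
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            power += ACTIONS[a]
            # Reward penalizes squared deviation from the target SINR.
            reward = -(sinr_db(power) - TARGET_SINR_DB) ** 2
            s2 = state_of(sinr_db(power))
            best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return q

def greedy_action(q, s):
    """Power step the learned policy takes in bucket s."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))]

q = train()
print(greedy_action(q, "low"), greedy_action(q, "high"))
```

After training, the learned greedy policy raises power when SINR is below the target band and lowers it when above, which is the closed-loop behavior the abstract describes; the paper additionally applies the same Q-learning machinery to SON fault management, which this sketch does not cover.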
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.