A Bipartite Graph Neural Network Approach for Scalable Beamforming Optimization

Author(s)
Kim, Junbeom; Lee, Hoon; Hong, Seung-Eun; Park, Seok-Hwan
Issued Date
2023-01
DOI
10.1109/TWC.2022.3193138
URI
https://scholarworks.unist.ac.kr/handle/201301/65437
Fulltext
https://ieeexplore.ieee.org/document/9844981
Citation
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, v.22, no.1, pp.333 - 347
Abstract
Deep learning (DL) techniques have been intensively studied for the optimization of multi-user multiple-input single-output (MU-MISO) downlink systems owing to their capability of handling nonconvex formulations. However, the fixed computation structure of existing deep neural networks (DNNs) lacks flexibility with respect to the system size, i.e., the number of antennas or users. This paper develops a bipartite graph neural network (BGNN) framework, a scalable DL solution designed for multi-antenna beamforming optimization. The MU-MISO system is first characterized by a bipartite graph in which two disjoint vertex sets, consisting of the transmit antennas and the users respectively, are connected via pairwise edges. These vertex interconnection states are modeled by channel fading coefficients. Thus, a generic beamforming optimization process is interpreted as a computation task over a weighted bipartite graph. This approach partitions the beamforming optimization procedure into multiple suboperations dedicated to individual antenna vertices and user vertices. Separated vertex operations lead to scalable beamforming calculations that are invariant to the system size. The vertex operations are realized by a group of DNN modules that collectively form the BGNN architecture. Identical DNNs are reused at all antennas and users so that the resulting learning structure becomes flexible to the network size. Component DNNs of the BGNN are trained jointly over numerous MU-MISO configurations with randomly varying network sizes. As a result, the trained BGNN can be universally applied to arbitrary MU-MISO systems. Numerical results validate the advantages of the BGNN framework over conventional methods.
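
The following is a minimal sketch of the bipartite message-passing idea summarized in the abstract, not the authors' exact BGNN: the layer sizes, message definitions, aggregation functions, and the power normalization are illustrative assumptions. It only shows how shared antenna-side and user-side networks make one model applicable to any number of antennas M and users K.

```python
# Minimal sketch of a bipartite message-passing beamformer (assumed design,
# not the paper's exact architecture).
import torch
import torch.nn as nn


class BipartiteBeamformingGNN(nn.Module):
    """Shared per-vertex MLPs, so the same weights work for any (K, M)."""

    def __init__(self, hidden: int = 32, n_rounds: int = 2):
        super().__init__()
        # One antenna-side and one user-side update network, reused at every
        # antenna / user vertex (weight sharing is what gives scalability).
        self.antenna_update = nn.Sequential(
            nn.Linear(2 + hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.user_update = nn.Sequential(
            nn.Linear(2 + hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.readout = nn.Linear(hidden, 2)  # per-edge beam weight (Re, Im)
        self.n_rounds = n_rounds
        self.hidden = hidden

    def forward(self, channel: torch.Tensor) -> torch.Tensor:
        # channel: complex fading matrix of shape (K users, M antennas);
        # its entries act as the edge weights of the bipartite graph.
        K, M = channel.shape
        edge_feat = torch.stack([channel.real, channel.imag], dim=-1)  # (K, M, 2)
        ant_state = torch.zeros(M, self.hidden)
        usr_state = torch.zeros(K, self.hidden)
        for _ in range(self.n_rounds):
            # User vertices aggregate messages from all antennas (mean).
            msg_to_usr = torch.cat(
                [edge_feat, ant_state.unsqueeze(0).expand(K, M, -1)], dim=-1
            ).mean(dim=1)
            usr_state = self.user_update(msg_to_usr)
            # Antenna vertices aggregate messages from all users (mean).
            msg_to_ant = torch.cat(
                [edge_feat, usr_state.unsqueeze(1).expand(K, M, -1)], dim=-1
            ).mean(dim=0)
            ant_state = self.antenna_update(msg_to_ant)
        # Edge readout combines the two endpoint states into a beam weight.
        pair = usr_state.unsqueeze(1) + ant_state.unsqueeze(0)   # (K, M, hidden)
        w = self.readout(pair)                                   # (K, M, 2)
        w = torch.complex(w[..., 0], w[..., 1])
        # Illustrative sum-power normalization (assumption, not from the paper).
        return w / w.abs().pow(2).sum().sqrt().clamp_min(1e-9)


# The same module applies unchanged to any antenna/user count:
model = BipartiteBeamformingGNN()
h = torch.randn(4, 8, dtype=torch.cfloat)   # 4 users, 8 antennas
beams = model(h)                             # (4, 8) complex beamforming matrix
print(beams.shape)
```

Because the antenna-side and user-side networks are shared across all vertices and the aggregation is permutation-invariant, the same trained parameters can be evaluated on MU-MISO configurations of different sizes, which is the scalability property the abstract emphasizes.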
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
ISSN
1536-1276
Keyword (Author)
Graph neural network; message-passing; deep learning; multi-user beamforming; Array signal processing; Optimization; Antennas; Bipartite graph; Task analysis; Scalability; Training
Keyword
FREE MASSIVE MIMO; MANAGEMENT; ALLOCATION; DESIGN