File Download

There are no files associated with this item.

Related Researcher

김지수

Kim, Gi-Soo
Statistical Decision Making


Full metadata record

DC Field Value Language
dc.citation.conferencePlace AU -
dc.citation.conferencePlace Vienna -
dc.citation.endPage 17278 -
dc.citation.startPage 17246 -
dc.citation.title International Conference on Machine Learning -
dc.contributor.author Hahn, Seok-Ju -
dc.contributor.author Kim, Gi-Soo -
dc.contributor.author Lee, Junghye -
dc.date.accessioned 2024-09-26T08:35:05Z -
dc.date.available 2024-09-26T08:35:05Z -
dc.date.created 2024-09-25 -
dc.date.issued 2024-07-21 -
dc.description.abstract In traditional federated learning, a single global model cannot perform equally well for all clients. Therefore, the need to achieve client-level fairness in federated systems has been emphasized, which can be realized by modifying the static aggregation scheme for updating the global model to an adaptive one, in response to the local signals of the participating clients. Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework, in other words, a central server's sequential decision making process. To enhance the decision making capability, we propose simple and intuitive improvements for suboptimal designs within existing methods, presenting AAggFF. Considering practical requirements, we further subdivide our method into variants tailored to the cross-device and the cross-silo settings, respectively. Theoretical analyses guarantee sublinear regret upper bounds for both settings: O(√T log K) for the cross-device setting, and O(K log T) for the cross-silo setting, with K clients and T federation rounds. Extensive experiments demonstrate that the federated system equipped with AAggFF achieves a better degree of client-level fairness than existing methods in both practical settings. Code is available at https://github.com/vaseline555/AAggFF. -
dc.identifier.bibliographicCitation International Conference on Machine Learning, pp.17246 - 17278 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/83945 -
dc.language English -
dc.publisher ML Research Press -
dc.title Pursuing Overall Welfare in Federated Learning through Sequential Decision Making -
dc.type Conference Paper -
dc.date.conferenceDate 2024-07-21 -
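
The abstract above frames fairness-aware aggregation as the central server's online convex optimization problem over client mixing coefficients. The sketch below is a rough illustration of that general idea only, not the authors' AAggFF algorithm: a multiplicative-weights (exponentiated-gradient) update that up-weights clients reporting high local losses. The function names, the learning rate, and the toy data are illustrative assumptions.

```python
# Hedged sketch: generic adaptive aggregation via multiplicative weights.
# This is NOT the AAggFF method from the paper; it only illustrates the
# "server as sequential decision maker" view described in the abstract.
import numpy as np

def update_mixing_coefficients(weights, local_losses, lr=0.1):
    """One exponentiated-gradient step on the simplex of client weights.

    weights      : current mixing coefficients, shape (K,), summing to 1
    local_losses : each client's loss on the current global model, shape (K,)
    lr           : server learning rate (illustrative value)
    """
    # Up-weight clients with high loss, then renormalize onto the simplex.
    new_weights = weights * np.exp(lr * local_losses)
    return new_weights / new_weights.sum()

def aggregate(client_updates, weights):
    """Weighted average of client model updates (list of parameter arrays)."""
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: 4 clients, the third one is under-served (high loss).
K = 4
weights = np.full(K, 1.0 / K)
losses = np.array([0.2, 0.3, 1.5, 0.4])
weights = update_mixing_coefficients(weights, losses)
client_updates = [np.ones(3) * i for i in range(K)]
global_update = aggregate(client_updates, weights)
print(weights)        # the third client's coefficient grows
print(global_update)  # aggregation now leans toward that client's update
```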


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.