
Algorithms for Collaborative Machine Learning under Statistical Heterogeneity

Author(s)
Hahn, Seok-Ju
Advisor
Kim, Gi-Soo
Issued Date
2024-08
URI
https://scholarworks.unist.ac.kr/handle/201301/84116
http://unist.dcollection.net/common/orgView/200000813156
Abstract
Learning from distributed data without directly accessing them is a challenging and non-trivial task. Nevertheless, the need for distributed training of statistical models keeps growing, driven by the privacy concerns of local data owners and the cost of centralizing massively distributed data. Federated learning (FL) is currently the de facto standard for training a machine learning model across heterogeneous data owners without moving raw data out of local silos. Several challenges must still be addressed for FL to become practical in reality, and among them the statistical heterogeneity problem is the most significant and requires immediate attention. Starting from the main objective of FL, three major factors can be considered as entry points: the model parameters, the mixing coefficients, and the local data distributions.
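The three factors named above can be read directly off the standard FL objective. In common notation (a generic formulation, not quoted from the dissertation itself), the server minimizes a mixture of local empirical risks:

\begin{equation}
\min_{\theta} \; \sum_{k=1}^{K} \alpha_k F_k(\theta),
\qquad
F_k(\theta) = \mathbb{E}_{(x,y) \sim \mathcal{D}_k}\big[\ell(\theta; x, y)\big],
\qquad
\alpha_k \ge 0, \;\; \sum_{k=1}^{K} \alpha_k = 1,
\end{equation}

where $\theta$ is the shared parameter vector, $\alpha_k$ the mixing coefficient of client $k$, and $\mathcal{D}_k$ its local data distribution. Each chapter of the dissertation targets one of these three quantities.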
In alignment with these components, this dissertation is organized into three parts. Chapter II introduces SuPerFed, a novel personalization method inspired by mode connectivity, which seeks parameters that achieve enhanced generalization across all local data distributions. Chapter III introduces AAggFF, an adaptive decision-making algorithm that induces a uniform performance distribution across participating clients, realized within an online convex optimization framework; it explicitly learns fairness-inducing mixing coefficients in a sequential manner and is further specialized for two practical FL settings. Finally, Chapter IV introduces FedEvg, a collaborative synthetic data generation method that leverages the flexibility and compositionality of energy-based modeling. Its objective is to approximate the joint density of disparate local data distributions without accessing them, making it possible to emulate centralized training of a model on the resulting proxy dataset.
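The server-side loop that all three chapters build on can be sketched as a weighted-aggregation round. The following toy example (entirely illustrative; the local least-squares objective and all names are assumptions, not the dissertation's algorithms) shows where the mixing coefficients enter: FedAvg fixes them to data-size proportions, whereas fairness-aware methods such as the one in Chapter III would adapt them between rounds.

```python
import numpy as np

def local_update(params, data, lr=0.1, steps=10):
    """One client's local gradient descent on a least-squares
    objective (a toy stand-in for an arbitrary local risk F_k)."""
    X, y = data
    w = params.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def server_round(params, client_data, mixing_coeffs):
    """One FL round: broadcast params, run local training on each
    client, then aggregate with the mixing coefficients alpha_k."""
    local_models = [local_update(params, d) for d in client_data]
    return sum(a * w for a, w in zip(mixing_coeffs, local_models))

# Synthetic clients sharing one underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# FedAvg-style mixing coefficients: proportional to local data size.
sizes = np.array([len(y) for _, y in clients], dtype=float)
coeffs = sizes / sizes.sum()

w = np.zeros(2)
for _ in range(20):
    w = server_round(w, clients, coeffs)
```

After a few rounds `w` approaches the shared solution; replacing the fixed `coeffs` with a sequentially learned vector is exactly the lever that fairness-inducing aggregation manipulates.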
Taken together, all of these approaches provide practical solutions to mitigate the statistical heterogeneity problem in data-decentralized settings, paving the way for distributed systems and applications using collaborative machine learning methods.
Publisher
Ulsan National Institute of Science and Technology
Degree
Doctor
Major
Department of Industrial Engineering


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.