File Download

There are no files associated with this item.

  • Find it @ UNIST provides direct access to the published full text of this article (UNISTARs only).
Related Researcher

Park, Saerom (박새롬)

Full metadata record

DC Field Value Language
dc.citation.conferencePlace ZZ -
dc.citation.conferencePlace Online -
dc.citation.endPage 3499 -
dc.citation.startPage 3488 -
dc.citation.title International World Wide Web Conference -
dc.contributor.author Park, Saerom -
dc.contributor.author Kim, Seongmin -
dc.contributor.author Lim, Yeon-Sup -
dc.date.accessioned 2024-01-31T20:37:55Z -
dc.date.available 2024-01-31T20:37:55Z -
dc.date.created 2023-05-30 -
dc.date.issued 2022-04-25 -
dc.description.abstract Algorithmic discrimination is one of the significant concerns in applying machine learning models to a real-world system. Many researchers have focused on developing fair machine learning algorithms without discrimination based on legally protected attributes. However, the existing research has barely explored various security issues that can occur while evaluating model fairness and verifying fair models. In this study, we propose a fairness audit framework that assesses the fairness of ML algorithms while addressing potential security issues such as data privacy, model secrecy, and trustworthiness. To this end, our proposed framework utilizes confidential computing and builds a chain of trust through enclave attestation primitives combined with public scrutiny and state-of-the-art software-based security techniques, enabling fair ML models to be securely certified and clients to verify a certified one. Our micro-benchmarks on various ML models and real-world datasets show the feasibility of the fairness certification implemented with Intel SGX in practice. In addition, we analyze the impact of data poisoning, which is an additional threat during data collection for fairness auditing. Based on the analysis, we illustrate the theoretical curves of fairness gap and minimal group size and the empirical results of fairness certification on poisoned datasets. -
dc.identifier.bibliographicCitation International World Wide Web Conference, pp.3488 - 3499 -
dc.identifier.doi 10.1145/3485447.3512244 -
dc.identifier.scopusid 2-s2.0-85129894680 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/76136 -
dc.language English -
dc.publisher Association for Computing Machinery, Inc -
dc.title Fairness Audit of Machine Learning Models with Confidential Computing -
dc.type Conference Paper -
dc.date.conferenceDate 2022-04-25 -
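
The abstract above refers to certifying models against a "fairness gap" over legally protected attributes. As a rough illustration only (not code from the paper), the sketch below computes one common group-fairness statistic, the demographic parity gap, and compares it to a hypothetical certification threshold; the function name, the threshold value, and the synthetic data are all assumptions made for the example.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # Absolute difference in positive-prediction rates between two groups
        # (a standard demographic-parity statistic; illustrative only).
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        preds = rng.integers(0, 2, size=1000)   # synthetic 0/1 predictions
        groups = rng.integers(0, 2, size=1000)  # synthetic protected attribute
        gap = demographic_parity_gap(preds, groups)
        tau = 0.05  # hypothetical certification threshold, not from the paper
        print(f"fairness gap = {gap:.3f}, within threshold = {gap <= tau}")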

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.