Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices

Author(s)
Lee, Younghan; Jun, Sohee; Cho, Yungi; Han, Woorim; Moon, Hyungon; Paek, Yunheung
Issued Date
2022-09-27
DOI
10.1007/978-3-031-17143-7_18
URI
https://scholarworks.unist.ac.kr/handle/201301/75475
Citation
European Symposium on Research in Computer Security, pp. 364-383
Abstract
As deep learning (DL) grows in popularity, DL models are becoming larger, and only companies with vast training datasets and immense computing power can run a business serving such large models. Most of these models are proprietary, so the companies strive to keep them safe from the model extraction attack (MEA), which aims to steal a model by training surrogate models. Nowadays, companies are inclined to offload models from central servers to edge/endpoint devices. As the latest studies reveal, adversaries exploit this shift as a new attack vector: they launch a side-channel attack (SCA) on the device running the victim model and obtain various pieces of model information, such as the model architecture (MA) and image dimension (ID). Our work provides, for the first time, a comprehensive understanding of the relationship between such leaked information and MEA performance, and would benefit future MEA studies on both the offensive and defensive sides by showing which pieces of information exposed by SCA matter more than others. Our analysis additionally reveals that, by grasping the victim model information from SCA, MEA can become highly effective and successful even without any prior knowledge of the model. Finally, to demonstrate the practicality of our analysis, we empirically apply SCA and subsequently carry out MEA under realistic threat assumptions. The results show up to 5.8 times better performance than when the adversary has no information about the victim model.
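To make the attack pipeline concrete, the following is a minimal sketch of the surrogate-training core of an MEA, written in PyTorch. It is illustrative only, not the authors' implementation: the victim and surrogate architectures, the random query inputs, and the 3x32x32 input shape are hypothetical stand-ins for the MA and ID that SCA would leak in the paper's setting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical black-box victim; the attacker can only query it.
# In the paper's threat model, SCA leaks its architecture (MA)
# and input/image dimension (ID), not its weights.
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
victim.eval()

# Surrogate instantiated from the SCA-leaked hints: the same
# input dimension and a matching (here, identical) architecture.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(100):
    # Attacker-chosen queries; a real MEA would use a transfer set
    # of natural images instead of random noise.
    x = torch.rand(64, 3, 32, 32)  # ID assumed leaked: 3x32x32
    with torch.no_grad():
        teacher_logits = victim(x)  # one black-box query per batch
    student_logits = surrogate(x)
    # Distill the victim's soft predictions into the surrogate.
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=1),
        F.softmax(teacher_logits, dim=1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The leaked MA matters because it lets the attacker instantiate the surrogate with the victim's actual architecture rather than guessing one, improving surrogate fidelity; this is consistent with the up-to-5.8-times gain reported in the abstract.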
Publisher
Springer Science and Business Media Deutschland GmbH

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.