Using model optimization as countermeasure against model recovery attacks

Bibliographic Details
Main Authors: Jap, Dirmanto, Bhasin, Shivam
Other Authors: Applied Cryptography and Network Security Workshops (ACNS 2023)
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/173621
Description
Abstract: Machine learning (ML) and deep learning (DL) have been widely studied and adopted for applications across various fields. There is a growing demand for ML implementations, as well as ML accelerators, on small devices for Internet-of-Things (IoT) applications. Often, these accelerators enable efficient edge-based inference using pre-trained deep neural network models in an IoT setting. The model is first trained separately on a more powerful machine and then deployed on the edge device for inference. However, several attacks have been reported that can recover and steal the pre-trained model. For example, a recently reported attack on an edge-based machine learning accelerator demonstrated recovery of target neural network models (architecture and weights) using a cold-boot attack. Using this information, the adversary can reconstruct the model, albeit with certain errors due to data corruption during the recovery process. Hence, this indicates a potential vulnerability in implementations of ML/DL models on edge devices for IoT applications. In this work, we investigate generic countermeasures against model recovery attacks based on neural network (NN) model optimization techniques, such as quantization, binarization, and pruning. We first study the performance improvement these transformations offer and how they could help mitigate the model recovery process. Our experimental results show that model optimization methods, in addition to achieving better performance, can cause accuracy degradation in the recovered model, which helps mitigate model recovery attacks.
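The optimization techniques the abstract names (quantization and pruning) can be illustrated with a short NumPy sketch. This is not the paper's implementation; the function names, bit width, and sparsity level below are illustrative assumptions showing what these transformations do to a weight tensor.

```python
import numpy as np


def quantize_weights(w, n_bits=8):
    """Uniform post-training quantization: map float weights to n_bits integers.
    Returns the integer tensor and the scale needed to dequantize."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale


def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)


# Toy weight matrix standing in for one layer of a pre-trained model.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

q, scale = quantize_weights(w)          # compact int8 representation
w_deq = q.astype(np.float32) * scale    # lossy dequantized approximation
w_pruned = prune_weights(w)             # half the entries set to zero
```

Intuitively, both transforms discard information: quantization replaces each weight with a coarse approximation, and pruning removes small weights entirely. A recovered (and partially corrupted) copy of such a compressed model therefore tends to reconstruct less faithfully, which is the mitigation effect the abstract describes.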