Membership Inference Attacks Based on Graph Neural Network Model Calibration

XIE Lixia, SHI Jingchen, YANG Hongyu, HU Ze, CHENG Xiang

Citation: XIE Lixia, SHI Jingchen, YANG Hongyu, HU Ze, CHENG Xiang. Membership Inference Attacks Based on Graph Neural Network Model Calibration[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT240477


doi: 10.11999/JEIT240477
Funds: Civil Aviation Joint Research Fund Key Project of the National Natural Science Foundation of China (U2433205); the National Natural Science Foundation of China (62201576, U1833107); the Jiangsu Provincial Basic Research Program Natural Science Foundation Youth Fund (BK20230558)
Author biographies:

    XIE Lixia: Female, Master, Professor. Her research interest is network information security.

    SHI Jingchen: Male, Master's student. His research interest is artificial intelligence security.

    YANG Hongyu: Male, Ph.D., Professor, doctoral supervisor. His research interests include network and system security, software security, and network security situation awareness.

    HU Ze: Male, Ph.D., Lecturer. His research interests include artificial intelligence, natural language processing, and network information security.

    CHENG Xiang: Male, Ph.D., Lecturer. His research interests include network and system security, network security situation awareness, and APT attack detection.

    Corresponding author: YANG Hongyu, yhyxlx@hotmail.com

  • CLC number: TN915.08; TP309

  • Abstract: Graph Neural Network (GNN) models are often under-confident in their predictions, which makes membership inference attacks hard to mount and leads to a high attack miss rate. To address this problem, this paper proposes a membership inference attack method based on GNN model calibration (MIAs-MC). First, a causal-inference-based GNN model calibration method is designed, which constructs causal association graphs for training GNN models through attention-based causal graph extraction, decoupling of the causal and non-causal graphs, a backdoor-path adjustment strategy, and a causal association graph generation process. Second, a shadow GNN model is built on a shadow causal association graph drawn from the same data distribution as the target causal association graph, so as to mimic the prediction behavior of the target GNN model. Finally, the posterior probabilities of the shadow GNN model are used to construct an attack dataset for training the attack model, and the membership of a target node is inferred from the posterior probability that the target GNN model outputs for it. Experimental results on four datasets show that, when attacking GNN models of different architectures under two attack modes, the proposed method achieves an attack accuracy of up to 92.6% and an attack miss rate as low as 6.7%, outperforming the baseline attack methods and effectively mounting membership inference attacks.
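The shadow-model stage described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch example that assumes the shadow and target GNNs are already trained and their softmax posteriors are available as tensors; the AttackMLP, its sorted-posterior features, and all hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a shadow-model membership inference step: posteriors of a
# shadow GNN on member/non-member nodes form the attack dataset; an attack
# classifier is trained on them and then applied to the target GNN's posteriors.
import torch
import torch.nn as nn

class AttackMLP(nn.Module):
    """Binary classifier over sorted posterior vectors (member vs. non-member)."""
    def __init__(self, num_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, posteriors: torch.Tensor) -> torch.Tensor:
        # Sorting the posterior entries keeps only the confidence shape and
        # discards class-label information, a common choice in shadow-model MIAs.
        feats, _ = torch.sort(posteriors, dim=1, descending=True)
        return self.net(feats)

def build_attack_dataset(member_post, nonmember_post):
    x = torch.cat([member_post, nonmember_post], dim=0)
    y = torch.cat([torch.ones(len(member_post), dtype=torch.long),
                   torch.zeros(len(nonmember_post), dtype=torch.long)])
    return x, y

def train_attack_model(member_post, nonmember_post, epochs=100, lr=1e-3):
    x, y = build_attack_dataset(member_post, nonmember_post)
    model = AttackMLP(num_classes=x.size(1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def infer_membership(attack_model, target_posteriors):
    # Returns 1 for "member of the target GNN's training set", 0 otherwise.
    return attack_model(target_posteriors).argmax(dim=1)
```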
  • Figure 1  Architecture of the MIAs-MC attack method

    Figure 2  Causal-inference-based model calibration method

    Figure 3  Structural causal model in GNNs

    Figure 4  Attack accuracy of MIAs under attack mode 2

    Figure 5  Attack precision of MIAs under attack mode 2

    Figure 6  Attack miss rate of MIAs under attack mode 2

    Algorithm 1  Causal association graph generation

     Input: initial training subgraphs G (G_t, G_s ⊆ G) of the GNN model; number of iterations T
     Output: target causal association graph G_target; shadow causal association graph G_shadow
     (1) for t = 1 to T
     (2)   G_c, G_u ← Attention(G)              // extract the causal graph via the attention mechanism
     (3)   L_c, L_u ← Decouple(G)               // decouple the causal and non-causal graphs and build the corresponding losses
     (4)   L_cau ← BackdoorAdjustment(G_c, G_u) // backdoor-path adjustment, yielding the adjustment loss
     (5)   L ← L_c, L_u, L_cau                  // compute the total model loss
     (6)   θ_{t+1} ← Update(θ_t)                // update the model parameters
     (7)   G_{t+1} ← G_t                        // iteratively update the causal attention graph
     (8) end for
     (9) G_target, G_shadow ← G_T               // generate the target and shadow causal association graphs
     (10) return G_target and G_shadow
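To make the loop above concrete, the following is a simplified, self-contained PyTorch sketch loosely in the spirit of causal attention learning on graphs: an edge-attention module softly splits the input graph into a causal graph G_c and a non-causal graph G_u, a supervised loss L_c is computed on the causal branch, a uniformity loss L_u pushes the non-causal branch toward uninformative predictions, and a backdoor-adjustment loss L_cau pairs causal representations with randomly permuted non-causal ones. The module structure, loss forms, and weights are assumptions for illustration; the paper's exact calibration method is not reproduced here.

```python
# Illustrative training loop for causal/non-causal graph decoupling with a
# backdoor-adjustment term. All module names and loss forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalAttentionGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.edge_att = nn.Sequential(nn.Linear(2 * in_dim, hid_dim),
                                      nn.ReLU(), nn.Linear(hid_dim, 1))
        self.encode = nn.Linear(in_dim, hid_dim)
        self.classify = nn.Linear(hid_dim, num_classes)

    def propagate(self, x, adj):
        # One-step mean-style propagation over a (soft) weighted adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return F.relu(self.encode(adj @ x / deg + x))

    def forward(self, x, edge_index):
        src, dst = edge_index
        # Attention score per edge -> soft split into causal / non-causal graphs.
        alpha = torch.sigmoid(self.edge_att(torch.cat([x[src], x[dst]], dim=1))).squeeze(-1)
        n = x.size(0)
        adj_c = torch.zeros(n, n, device=x.device)
        adj_u = torch.zeros(n, n, device=x.device)
        adj_c[src, dst] = alpha          # causal graph G_c
        adj_u[src, dst] = 1.0 - alpha    # non-causal graph G_u
        h_c = self.propagate(x, adj_c)
        h_u = self.propagate(x, adj_u)
        return self.classify(h_c), self.classify(h_u), h_c, h_u

def backdoor_adjustment_loss(model, h_c, h_u, y, train_mask):
    # Pair each causal representation with a randomly permuted non-causal one,
    # so the prediction is trained to be invariant to the non-causal part.
    perm = torch.randperm(h_u.size(0))
    logits = model.classify(h_c + h_u[perm].detach())
    return F.cross_entropy(logits[train_mask], y[train_mask])

def train_calibrated_gnn(x, edge_index, y, train_mask, num_classes,
                         epochs=200, lr=1e-2, lam_u=0.5, lam_cau=0.5):
    model = CausalAttentionGNN(x.size(1), 64, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    uniform = torch.full((int(train_mask.sum()), num_classes), 1.0 / num_classes)
    for _ in range(epochs):
        opt.zero_grad()
        out_c, out_u, h_c, h_u = model(x, edge_index)
        loss_c = F.cross_entropy(out_c[train_mask], y[train_mask])           # L_c
        loss_u = F.kl_div(F.log_softmax(out_u[train_mask], dim=1),
                          uniform, reduction="batchmean")                    # L_u
        loss_cau = backdoor_adjustment_loss(model, h_c, h_u, y, train_mask)  # L_cau
        (loss_c + lam_u * loss_u + lam_cau * loss_cau).backward()
        opt.step()
    return model
```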

    Table 1  Statistics of the datasets

    Dataset    Classes  Nodes    Edges    Node feature dim.  Nodes used
    Cora       7        2,708    5,429    1,433              2,520
    CiteSeer   6        3,327    4,732    3,703              2,400
    PubMed     3        19,717   44,338   500                18,000
    Flickr     7        89,250   449,878  500                42,000
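For reference, assuming an implementation on PyTorch Geometric (the paper does not state its framework), the four datasets in Table 1 are available as built-in benchmarks; note that the library may count each undirected edge in both directions, so edge counts can differ from the table, and its default splits differ from the node subsets used by the paper.

```python
# Minimal sketch of loading the Table 1 datasets with PyTorch Geometric.
from torch_geometric.datasets import Planetoid, Flickr

cora = Planetoid(root="data/Planetoid", name="Cora")[0]       # 2,708 nodes, 7 classes
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")[0]
pubmed = Planetoid(root="data/Planetoid", name="PubMed")[0]
flickr = Flickr(root="data/Flickr")[0]                        # 89,250 nodes, 7 classes

print(cora.num_nodes, cora.num_edges, cora.x.size(1))
```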

    Table 2  Attack results of MIAs-MC under attack mode 1

    Dataset  GNN architecture  Accuracy  Precision  AUC  Recall  F1-score
    Cora GCN 0.926 0.920 0.912 0.913 0.912
    GAT 0.911 0.914 0.910 0.911 0.911
    GraphSAGE 0.905 0.908 0.904 0.905 0.905
    SGC 0.914 0.923 0.915 0.914 0.914
    CiteSeer GCN 0.918 0.912 0.917 0.918 0.918
    GAT 0.857 0.879 0.857 0.857 0.855
    GraphSAGE 0.933 0.936 0.931 0.933 0.933
    SGC 0.930 0.938 0.929 0.930 0.930
    PubMed GCN 0.750 0.784 0.750 0.751 0.743
    GAT 0.642 0.686 0.643 0.642 0.621
    GraphSAGE 0.748 0.754 0.747 0.748 0.748
    SGC 0.690 0.702 0.691 0.690 0.690
    Flickr GCN 0.841 0.846 0.841 0.841 0.841
    GAT 0.786 0.801 0.787 0.786 0.785
    GraphSAGE 0.732 0.764 0.732 0.732 0.725
    SGC 0.907 0.916 0.908 0.907 0.907

    Table 3  Attack results of the baseline attack method under attack mode 1

    Dataset  GNN architecture  Accuracy  Precision  AUC  Recall  F1-score
    Cora GCN 0.763 0.770 0.764 0.763 0.763
    GAT 0.721 0.728 0.718 0.721 0.720
    GraphSAGE 0.825 0.837 0.825 0.825 0.824
    SGC 0.806 0.812 0.808 0.806 0.807
    CiteSeer GCN 0.860 0.865 0.859 0.860 0.860
    GAT 0.772 0.775 0.769 0.772 0.771
    GraphSAGE 0.858 0.875 0.859 0.858 0.827
    SGC 0.863 0.868 0.862 0.863 0.863
    PubMed GCN 0.647 0.655 0.647 0.647 0.647
    GAT 0.593 0.612 0.593 0.593 0.580
    GraphSAGE 0.554 0.560 0.553 0.554 0.553
    SGC 0.664 0.685 0.665 0.664 0.658
    Flickr GCN 0.774 0.805 0.775 0.774 0.769
    GAT 0.601 0.613 0.602 0.601 0.598
    GraphSAGE 0.689 0.755 0.688 0.689 0.668
    SGC 0.877 0.893 0.878 0.877 0.876
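The metrics reported in Tables 2 and 3, together with the miss rate quoted in the abstract, can be computed from the attack model's outputs as follows; the use of scikit-learn here is an assumption about tooling, not something stated in the paper.

```python
# Attack metrics from membership labels, hard predictions, and member scores.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def attack_metrics(y_true, y_pred, y_score):
    """y_true/y_pred: 0/1 membership labels; y_score: attack model's member probability."""
    recall = recall_score(y_true, y_pred)
    return {
        "Accuracy":  accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),
        "AUC":       roc_auc_score(y_true, y_score),
        "Recall":    recall,
        "F1-score":  f1_score(y_true, y_pred),
        "Miss rate": 1.0 - recall,   # false-negative rate of the attack
    }
```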

    Table 4  Accuracy differences between the shadow and target models on the Cora dataset (%)

    GNN architecture  Train acc. gap (baseline attack)  Test acc. gap (baseline attack)  Train acc. gap (after calibration)  Test acc. gap (after calibration)
    GCN               0.32                              3.97                             0.79                                0.95
    GAT               3.65                              1.99                             1.91                                2.22
    GraphSAGE         0.32                              4.92                             0.16                                0.80
    SGC               0.66                              0.47                             1.11                                1.70

    Table 5  Accuracy differences between the shadow and target models on the PubMed dataset (%)

    GNN architecture  Train acc. gap (baseline attack)  Test acc. gap (baseline attack)  Train acc. gap (after calibration)  Test acc. gap (after calibration)
    GCN               1.45                              0.75                             1.15                                0.14
    GAT               0.36                              1.15                             0.82                                0.51
    GraphSAGE         0.20                              5.12                             0.13                                3.15
    SGC               1.58                              0.60                             0.73                                0.56
Publication history
  • Received: 2024-06-12
  • Revised: 2025-02-17
  • Published online: 2025-02-26
