

Research on Efficient Federated Learning Communication Mechanism Based on Adaptive Gradient Compression

TANG Lun, WANG Zhiping, PU Hao, WU Zhuang, CHEN Qianbin

Citation: TANG Lun, WANG Zhiping, PU Hao, WU Zhuang, CHEN Qianbin. Research on Efficient Federated Learning Communication Mechanism Based on Adaptive Gradient Compression[J]. Journal of Electronics & Information Technology, 2023, 45(1): 227-234. doi: 10.11999/JEIT211262


doi: 10.11999/JEIT211262
Funds: The National Natural Science Foundation of China (62071078), The Science and Technology Research Program of Chongqing Municipal Education Commission (KJZD-M201800601), Sichuan and Chongqing Key R&D Projects (2021YFQ0053)
Detailed information
    Author biographies:

    TANG Lun: Male, Professor, Ph.D. His research interests include next-generation wireless communication networks, heterogeneous cellular networks, and software-defined networking

    WANG Zhiping: Male, M.S. candidate. His research interests include cooperation mechanisms for edge intelligent computing and communication optimization for federated learning

    PU Hao: Male, M.S. candidate. His research interests include resource allocation and cooperation mechanisms in edge intelligent computing

    WU Zhuang: Male, M.S. candidate. His research interests include resource allocation in edge intelligent computing and dynamic UAV planning

    CHEN Qianbin: Male, Professor, Ph.D. supervisor. His research interests include personal communications, multimedia information processing and transmission, and heterogeneous cellular networks

    Corresponding author: WANG Zhiping, 2609116705@qq.com

  • CLC number: TN929.5

  • Abstract: To address the non-negligible communication cost caused by redundant gradient exchanges among the many device nodes involved in Federated Learning (FL) in Internet of Things (IoT) scenarios, this paper proposes a threshold-adaptive gradient communication compression mechanism. First, a Communication-Efficient EDge-Federated Learning (CE-EDFL) mechanism is adopted, in which edge servers act as intermediaries that aggregate the local models of devices, while the cloud aggregates the edge-server models and distributes the new parameters. Second, to further reduce the communication overhead of federated-learning-based detection, a threshold-adaptive gradient compression mechanism (ALAG) is proposed, which compresses local model gradient parameters to cut redundant communication between devices and edge servers. Experimental results show that, in large-scale IoT device scenarios, the proposed algorithm effectively improves the overall communication efficiency of the model by reducing the number of gradient exchanges while preserving the accuracy of the deep learning task.
  • Figure 1  Communication-efficient detection model based on edge-federated learning

    Figure 2  Threshold-adaptive selection mechanism

    Figure 3  CCI values under different α

    Figure 4  Model performance comparison under different numbers of clients

    Figure 5  Training loss comparison of the four models

    Figure 6  Detection accuracy comparison of the four models

    Figure 7  Total communication rounds required by the four models

    Algorithm 1  Communication-efficient algorithm based on edge-federated learning
     Input: cloud initialization parameter $ {\omega _0} $, number of clients N, number of edge servers L
     Output: global model parameter $ \omega (k) $
     (1) for $ k = 1,2, \cdots ,K $ do
     (2)   for each client $ i = 1,2, \cdots ,N $ in parallel do
     (3)    compute the local update gradient $ \omega _i^l(k) $ using Eq. (3)
     (4)   end for
     (5)   if $ k|{K_1} = 0 $ then
     (6)     for each edge server $ l = 1,2, \cdots ,L $ in parallel do
     (7)      compute the parameter $ {\omega ^l}(k) $ using Eq. (4)
     (8)      if $ k|{K_1}{K_2} \ne 0 $ then
     (9)       the parameters of all devices under this edge server stay unchanged:
            $ {\omega ^l}(k) \leftarrow \omega _i^l(k) $
     (10)      end if
     (11)     end for
     (12)   end if
     (13)   if $ k|{K_1}{K_2} = 0 $ then
     (14)     compute the parameter $ \omega (k) $ using Eq. (5)
     (15)     for each client $ i = 1,2, \cdots ,N $ in parallel do
     (16)     update the device-side parameters to the cloud parameters: $ \omega (k) \leftarrow \omega _i^l(k) $
     (17)     end for
     (18)   end if
     (19) end for
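The two-level aggregation loop of Algorithm 1 can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the helper names (`average`, `hierarchical_fl`, `local_step`) are assumptions, model parameters are flat lists of floats, and both aggregation steps use an unweighted FedAvg-style mean in place of Eqs. (3)-(5).

```python
# Hypothetical sketch of Algorithm 1's client-edge-cloud aggregation.
# Assumptions (not from the paper): unweighted means for both aggregation
# levels, and `local_step` standing in for the local gradient update Eq. (3).

def average(params_list):
    """Element-wise mean of a list of equal-length parameter vectors."""
    n = len(params_list)
    return [sum(p[j] for p in params_list) / n for j in range(len(params_list[0]))]

def hierarchical_fl(clients, edges, K, K1, K2, local_step):
    """clients: list of parameter vectors; edges: list of client-index lists.

    Every K1 rounds each edge server aggregates its own clients (steps 5-12);
    every K1*K2 rounds the cloud aggregates the edge models and broadcasts
    the new global parameters to all devices (steps 13-18)."""
    for k in range(1, K + 1):
        # Steps (1)-(4): every client takes a local update in parallel.
        clients = [local_step(c) for c in clients]
        if k % K1 == 0:
            # Steps (5)-(12): edge-level aggregation of each server's clients.
            for members in edges:
                edge_params = average([clients[i] for i in members])
                for i in members:
                    clients[i] = edge_params[:]
        if k % (K1 * K2) == 0:
            # Steps (13)-(18): cloud aggregates edge models, then broadcasts.
            edge_models = [average([clients[i] for i in m]) for m in edges]
            global_params = average(edge_models)
            clients = [global_params[:] for _ in clients]
    return clients
```

Because devices only reach the cloud through their edge server, device-to-cloud traffic occurs once every K1·K2 local rounds instead of every round, which is the source of the communication saving.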
    Algorithm 2  A threshold-adaptive gradient compression algorithm
     Input: current iteration k of device node m, total number of iterations K, initialized global
        gradient $ \nabla F $
     Output: device nodes $ {M_{\text{L}}} $ that complete training and meet the model requirement, where M is the set of
        device nodes
     (1) initialize the globally distributed parameter $ \omega (k - 1) $
     (2)  for $ k = 1,2, \cdots ,K $
     (3)    for $ m = 1,2, \cdots ,M $ do
     (4)    compute the local parameter gradient $ \nabla {F_m}(\theta (k - 1)) $ at the current node m
     (5)    check whether the gradient satisfies the gradient self-check condition, Eq. (16)
     (6)    if satisfied, skip this round of communication and accumulate the gradient locally
     (7)    gradient update: $ \nabla {F_m}(\theta (k)) \leftarrow \nabla {F_m}(\theta (k - 1)) $
     (8)    otherwise, upload the parameter gradient $ \nabla {F_m}(\theta (k - 1)) $ to the edge server
     (9)    end for
     (10)  end for
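The skip-or-upload decision in steps (5)-(8) of Algorithm 2 can be sketched as a per-node rule. This is an illustrative stand-in only: the paper's self-check condition Eq. (16) is not reproduced here, so the threshold below (change in gradient no larger than a fraction α of the last uploaded gradient's norm) and the names `lazy_upload`, `grad_old`, `residual` are assumptions.

```python
# Hypothetical sketch of Algorithm 2's lazy gradient upload. The threshold
# test here is an assumed stand-in for the paper's self-check Eq. (16).
import math

def l2(v):
    """Euclidean norm of a flat gradient vector."""
    return math.sqrt(sum(x * x for x in v))

def lazy_upload(grad_new, grad_old, residual, alpha):
    """Decide whether node m uploads this round.

    grad_old is the gradient last uploaded to the edge server; skipped
    gradients are accumulated into `residual` (local gradient accumulation,
    step (6)) and flushed together on the next upload.
    Returns (uploaded?, gradient to send or None, updated residual)."""
    delta = [g + r for g, r in zip(grad_new, residual)]
    diff = [d - o for d, o in zip(delta, grad_old)]
    if l2(diff) <= alpha * l2(grad_old):
        # Self-check passes: skip this round, keep accumulating locally.
        return False, None, delta
    # Self-check fails: upload the accumulated gradient, clear the residual.
    return True, delta, [0.0] * len(grad_new)
```

A small α forces frequent uploads (little compression, accuracy close to uncompressed training), while a large α skips more rounds at some cost in accuracy, matching the trade-off reported in Table 1.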

    Table 1  Model detection accuracy and compression rate under different $ \alpha $ values

    $\alpha $   Avg. communications     Avg. communications    Avg. model test   Compression
                before compression      after compression      accuracy          rate (%)
    0.1         400                     32                     0.9175            8.00
    0.2         400                     258                    0.9298            64.50
    0.3         400                     270                    0.9301            67.50
    0.4         400                     295                    0.9314            73.75
    0.5         400                     328                    0.9335            82.00
    0.6         400                     342                    0.9341            85.50
    0.7         400                     351                    0.9336            87.75
    0.8         400                     365                    0.9352            91.25
    0.9         400                     374                    0.9351            93.75
    1.0         400                     400                    0.9349            100.00

    Table 2  Performance comparison of the algorithms under different α and β

    Metric                                        LAG      EAFLM    ALAG
    Acc (train set)                               0.8890   0.9368   0.9342
    CR (%)                                        5.1100   8.7700   8.0000
    CCI ($ {\beta _1} = 0.4,{\beta _2} = 0.6 $)   0.9274   0.9206   0.9318
    CCI ($ {\beta _1} = 0.5,{\beta _2} = 0.5 $)   0.9220   0.9226   0.9331
    CCI ($ {\beta _1} = 0.6,{\beta _2} = 0.4 $)   0.9167   0.9247   0.9315
Figures (7) / Tables (4)
Publication history
  • Received: 2021-11-12
  • Revised: 2022-04-22
  • Available online: 2022-04-28
  • Published: 2023-01-17
