


Task Offloading Algorithm for Large-scale Multi-access Edge Computing Scenarios

LU Xianling, LI Dekang

LU Xianling, LI Dekang. Task Offloading Algorithm for Large-scale Multi-access Edge Computing Scenarios[J]. Journal of Electronics & Information Technology, 2025, 47(1): 116-127. doi: 10.11999/JEIT240624


doi: 10.11999/JEIT240624
Funds: The National Natural Science Foundation of China (61773181)
Details
    Author biographies:

    LU Xianling: male, professor, Ph.D. supervisor; research interests include wireless sensor networks, big data, and mobile edge computing

    LI Dekang: male, master's student; research interests include edge computing and reinforcement learning

    Corresponding author:

    LU Xianling, jnluxl@jiangnan.edu.cn

  • CLC number: TN929.5


  • Abstract: When task-offloading algorithms based on single-agent reinforcement learning are applied to large-scale Multi-access Edge Computing (MEC) systems, the agents interfere with one another and their policies degrade. Conventional multi-agent algorithms, represented by Multi-Agent Deep Deterministic Policy Gradient (MADDPG), suffer instead from a joint action space whose dimensionality grows in proportion to the number of agents in the system, which limits scalability. To address both problems, this paper formulates large-scale MEC task offloading as a Partially Observable Markov Decision Process (POMDP) and proposes a task-offloading algorithm based on mean-field multi-agent reinforcement learning. A Long Short-Term Memory (LSTM) network is introduced to handle partial observability, and mean-field approximation theory is used to reduce the dimensionality of the joint action space. Simulation results show that the proposed algorithm outperforms single-agent offloading algorithms in task delay and task drop rate, and matches MADDPG on both metrics while operating in a lower-dimensional joint action space.
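The core of the mean-field approximation described above is that each agent's Q-function no longer conditions on the full joint action, but only on the agent's own action and the average action of the other agents, so its input size stays fixed as the number of agents grows. A minimal sketch of that averaging step, with illustrative names (not the paper's implementation):

```python
import numpy as np

def mean_field_action(actions: np.ndarray, agent_idx: int) -> np.ndarray:
    """Average one-hot action of every agent except `agent_idx`.

    actions: (M, A) array with one one-hot action row per agent.
    Returns the (A,) mean action that is fed to the Q-network together
    with the agent's own action, keeping the input independent of M.
    """
    mask = np.ones(actions.shape[0], dtype=bool)
    mask[agent_idx] = False          # exclude the agent's own action
    return actions[mask].mean(axis=0)
```

Because the Q-network sees only `(own action, mean action)`, the input dimension is 2A regardless of whether the system holds 50 or 100 mobile devices, which is what restores scalability relative to MADDPG.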
  • Figure 1  Task-offloading queue model

    Figure 2  Block diagram of the MF-MATO algorithm

    Figure 3  Unrolled policy network

    Figure 4  Average cumulative reward curves

    Figure 5  Average delay curves of different algorithms

    Figure 6  Average delay and MDR versus the number of MDs

    Figure 7  Average delay and MDR versus the task arrival rate

    Figure 8  Average delay and MDR versus the number of ESs

    Algorithm 1  The MF-MATO algorithm

     Input: observation vectors of all MDs in the MEC system in time slot t
     Output: task-offloading policies of all MDs in the MEC system
     (1) Initialize each Agent's policy-network parameters w_m with hidden state H_a, and Q-value-network parameters θ_m with hidden state H_c. Select the Adam optimizer, set the learning rates η_c and η_a, and set the target-network soft-update coefficients τ_c and τ_a;
     (2) for episode = 1,2,···,I do
     (3)   for m = 1,2,···,M do
     (4)     for t = 1,2,···,T do
     (5)       Each Agent feeds its observation vector o_t^m into the policy network to obtain the action a_t^m = μ_m(o_t^m);
     (6)       Generate the offloading decision from a_t, interact with the environment, and obtain the reward r_t^m;
     (7)     end for
     (8)     Store the experience E collected in the finished episode in the replay buffer;
     (9)     Sample experience E uniformly at random from the replay buffer;
     (10)    Compute the policy-network loss by Eq. (27) and update the parameters w_m;
     (11)    Compute the Q-value-network loss by Eq. (28) and update the parameters θ_m;
     (12)    Soft-update the target-network parameters:
             θ̃_m ← τ_c θ_m + (1 − τ_c) θ̃_m,  w̃_m ← τ_a w_m + (1 − τ_a) w̃_m;
     (13)  end for
     (14) end for
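The Polyak soft update in step (12) can be sketched generically as follows; here plain numpy arrays stand in for the network parameters (the paper's actual networks are LSTM-based), and the function name is illustrative:

```python
import numpy as np

def soft_update(target: dict, online: dict, tau: float) -> dict:
    """Polyak averaging: target ← τ·online + (1 − τ)·target, per parameter.

    With a small τ (e.g. 0.001 as in Table 1), the target network tracks
    the online network slowly, which stabilizes the bootstrapped Q-targets.
    """
    return {k: tau * online[k] + (1.0 - tau) * target[k] for k in target}
```

The same update is applied twice per training step, once with τ_c for the Q-value (critic) network and once with τ_a for the policy (actor) network.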

    Table 1  Simulation parameters

    Parameter             Value         | Parameter             Value
    Δ (s)                 0.1           | f_m^device (GHz)      2.5
    λ                     [0.35, 0.90]  | f_n^edge (GHz)        41.8
    T                     200           | r_{n,m}^tran (Mbps)   24
    ρ_m (cycle·Mbit⁻¹)    0.297         | τ^local (slots)       10
    η_c                   0.0001        | τ^tran (slots)        10
    η_a                   0.0001        | τ^edge (slots)        10
    τ_c                   0.001         | M                     50–100
    τ_a                   0.001         | N                     5–10
    Task size (Mbit)      2–5           | γ                     0.9
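For reproduction, the simulation parameters above map naturally onto a single configuration object. The sketch below mirrors Table 1's values; the field names are assumptions for illustration, not identifiers from the paper:

```python
from dataclasses import dataclass

@dataclass
class SimConfig:
    """Hypothetical container for the Table 1 simulation parameters."""
    delta_s: float = 0.1        # slot length Δ (s)
    T: int = 200                # time slots per episode
    rho_m: float = 0.297        # computation density ρ_m (cycle·Mbit⁻¹)
    f_device_ghz: float = 2.5   # MD CPU frequency f_m^device
    f_edge_ghz: float = 41.8    # ES CPU frequency f_n^edge
    r_tran_mbps: float = 24.0   # transmission rate r_{n,m}^tran
    eta_c: float = 1e-4         # critic learning rate η_c
    eta_a: float = 1e-4         # actor learning rate η_a
    tau_c: float = 1e-3         # critic soft-update coefficient τ_c
    tau_a: float = 1e-3         # actor soft-update coefficient τ_a
    gamma: float = 0.9          # discount factor γ
```

Ranged entries (λ ∈ [0.35, 0.90], M ∈ 50–100, N ∈ 5–10, task size 2–5 Mbit) vary per experiment and would be set per run rather than fixed as defaults.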
Publication history
  • Received: 2024-07-18
  • Revised: 2024-12-02
  • Published online: 2024-12-09
  • Issue published: 2025-01-31
