

Correntropy-based Fusion Extreme Learning Machine for Representation Level Feature Fusion and Classification

Chao WU, Yaqian LI, Yaru ZHANG, Bin LIU

Citation: Chao WU, Yaqian LI, Yaru ZHANG, Bin LIU. Correntropy-based Fusion Extreme Learning Machine for Representation Level Feature Fusion and Classification[J]. Journal of Electronics & Information Technology, 2020, 42(2): 386-393. doi: 10.11999/JEIT190186


doi: 10.11999/JEIT190186
Funds: The National Natural Science Foundation of China (51641609)
Author information:

    Chao WU: male, born in 1990, Ph.D. candidate; research interest: computer vision

    Yaqian LI: female, born in 1982, associate professor; research interest: computer vision

    Yaru ZHANG: female, born in 1995, Ph.D. candidate; research interest: computer vision

    Bin LIU: male, born in 1953, professor; research interest: computer vision

    Corresponding author: Yaqian LI, yaqianli@126.com

  • CLC number: TP391

Correntropy-based Fusion Extreme Learning Machine for Representation Level Feature Fusion and Classification

Funds: The National Natural Science Foundation of China (51641609)
  • Abstract:

    Building on the network structure and training scheme of the Extreme Learning Machine (ELM), this paper proposes the Correntropy-based Fusion Extreme Learning Machine (CF-ELM). To address the insufficient fusion of representation-level features in most classification methods, kernel mapping is combined with coefficient weighting to obtain the Fusion Extreme Learning Machine (F-ELM), which fuses representation-level features effectively. On this basis, the Mean Squared Error (MSE) loss function is replaced with a correntropy loss function, and an iterative correntropy update formula is derived for training the weight matrices of each layer of F-ELM, strengthening its classification ability and robustness. To verify the feasibility of the method, experiments are conducted on the Caltech 101, MSRC, and 15 Scene databases. The results show that the proposed CF-ELM further fuses representation-level features and thereby improves classification accuracy.

  • Figure 1. Network structure of the fusion extreme learning machine (F-ELM)

    Figure 2. Effect of the number of update iterations in CF-ELM on accuracy for the Caltech 101 and MSRC databases

    Figure 3. Accuracy surfaces for 2 feature combinations on the Caltech 101 database

    Figure 4. Accuracy surfaces for 3 feature combinations on the MSRC database

    Figure 5. Accuracy of F-ELM and CF-ELM on images with cluttered backgrounds in the two databases

    Figure 6. Effect of the number of update iterations in CF-ELM on accuracy for the 15 Scene database

    Figure 7. Accuracy surfaces for 3 feature combinations on the 15 Scene database

    Figure 8. Accuracy of F-ELM and CF-ELM on the 15 Scene database
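    As a rough illustration of the representation-level fusion idea behind Figure 1, the sketch below maps each feature type through its own ELM-style random hidden layer and blends the mappings with weighting coefficients. The function name, the sigmoid mapping, and the additive blending are assumptions for illustration, not the paper's exact formulation:

    ```python
    import numpy as np

    def fuse_representations(features, coeffs, n_hidden=64, seed=0):
        """Coefficient-weighted fusion of several feature types (sketch).

        Each feature matrix (one row per sample) gets its own random
        projection + sigmoid, i.e. an ELM-style hidden mapping; the mapped
        representations are then summed with per-feature coefficients to
        form a single fused hidden representation.
        """
        rng = np.random.default_rng(seed)
        fused = np.zeros((features[0].shape[0], n_hidden))
        for X, a in zip(features, coeffs):
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            fused += a * (1.0 / (1.0 + np.exp(-(X @ W + b))))  # weighted hidden map
        return fused
    ```

    The fused representation would then feed the output-weight solve; in the paper, the weighting coefficients are tuned per feature combination.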

  • Table 1. Accuracy and training time on Caltech 101 and MSRC

    | Method | Combination | Caltech 101 (%) | Training time (s) | MSRC (%) | Training time (s) |
    |--------|-------------|-----------------|-------------------|----------|-------------------|
    | SVM    | 1 | 72.04 | 862.16 | 90.26 | 79.59 |
    | SVM    | 2 | 77.93 | 540.30 | 88.57 | 17.11 |
    | SVM    | 3 | –     | –      | 94.13 | 17.11 |
    | KELM   | 1 | 79.43 | 860.37 | 91.42 | 79.51 |
    | KELM   | 2 | 78.84 | 538.56 | 90.42 | 17.01 |
    | KELM   | 3 | –     | –      | 91.74 | 17.01 |
    | F-ELM  | 1 | 80.31 | 861.76 | 91.74 | 79.54 |
    | F-ELM  | 2 | 80.19 | 539.77 | 90.58 | 17.04 |
    | F-ELM  | 3 | –     | –      | 93.49 | 17.04 |
    | CF-ELM | 1 | 80.59 | 902.13 | 90.95 | 80.68 |
    | CF-ELM | 2 | 83.65 | 569.80 | 92.06 | 18.20 |
    | CF-ELM | 3 | –     | –      | 95.76 | 18.20 |

  • Table 2. Comparison of results on Caltech 101

    | Method | Dictionary size | Accuracy (%) |
    |--------|-----------------|--------------|
    | SPM[9]                 | 200  | 64.60 |
    | LLC[17]                | 2048 | 73.44 |
    | Ref. [18]              | 1000 | 74.30 |
    | SDCD+PHOW[14]          | 1024 | 75.37 |
    | Ref. [19]              | 400  | 76.00 |
    | ScSPM+DVM[20]          | 1024 | 77.70 |
    | Ref. [12]              | 2048 | 77.93 |
    | Ref. [8]               | 600  | 78.00 |
    | Ref. [21]              | 400  | 79.70 |
    | Ref. [16]              |      | 83.90 |
    | CF-ELM (combination 2) |      | 83.65 |

  • Table 3. Accuracy and training time on 15 Scene

    | Method | Combination | 15 Scene (%) | Training time (s) |
    |--------|-------------|--------------|-------------------|
    | SVM    | 1 | 74.34 | 347.44 |
    | SVM    | 2 | 77.92 | 106.25 |
    | SVM    | 4 | 86.46 | 106.25 |
    | KELM   | 1 | 72.00 | 347.12 |
    | KELM   | 2 | 80.73 | 106.11 |
    | KELM   | 4 | 83.53 | 106.11 |
    | F-ELM  | 1 | 77.20 | 347.37 |
    | F-ELM  | 2 | 82.06 | 106.41 |
    | F-ELM  | 4 | 84.33 | 106.41 |
    | CF-ELM | 1 | 79.00 | 367.50 |
    | CF-ELM | 2 | 83.06 | 120.10 |
    | CF-ELM | 4 | 87.76 | 120.10 |

  • Table 4. Comparison of results on 15 Scene

    | Method | Dictionary size | Accuracy (%) |
    |--------|-----------------|--------------|
    | SPM[9]                 | 200  | 81.10 |
    | LLC[17]                | 1000 | 81.73 |
    | SLC[22]                | 1024 | 81.89 |
    | LSVQ[22]               | 1024 | 83.08 |
    | Ref. [23]              | 1024 | 85.70 |
    | LGF[15]                | 400  | 85.80 |
    | Ref. [12]              | 600  | 86.46 |
    | MFS[24]                | 400  | 87.10 |
    | Ref. [16]              |      | 90.10 |
    | CF-ELM (combination 4) |      | 87.76 |
  • [1] HUANG Guangbin, ZHU Qinyu, and SIEW C K. Extreme learning machine: Theory and applications[J]. Neurocomputing, 2006, 70(1/3): 489–501. doi: 10.1016/j.neucom.2005.12.126
    [2] HUANG Guangbin, ZHOU Hongming, DING Xiaojian, et al. Extreme learning machine for regression and multiclass classification[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2012, 42(2): 513–529. doi: 10.1109/TSMCB.2011.2168604
    [3] KASUN L L C, ZHOU Hongming, HUANG Guangbin, et al. Representational learning with extreme learning machine for big data[J]. IEEE Intelligent Systems, 2013, 28(6): 31–34.
    [4] XING Hongjie and WANG Xinmei. Training extreme learning machine via regularized correntropy criterion[J]. Neural Computing and Applications, 2013, 23(7/8): 1977–1986. doi: 10.1007/s00521-012-1184-y
    [5] CHEN Liangjun, HONEINE P, QU Hua, et al. Correntropy-based robust multilayer extreme learning machines[J]. Pattern Recognition, 2018, 84: 357–370. doi: 10.1016/j.patcog.2018.07.011
    [6] LUO Xiong, SUN Jiankun, WANG Long, et al. Short-term wind speed forecasting via stacked extreme learning machine with generalized correntropy[J]. IEEE Transactions on Industrial Informatics, 2018, 14(11): 4963–4971. doi: 10.1109/TII.2018.2854549
    [7] HAN Honggui, WANG Lidan, and QIAO Junfei. Hierarchical extreme learning machine for feedforward neural network[J]. Neurocomputing, 2014, 128: 128–135. doi: 10.1016/j.neucom.2013.01.057
    [8] LI Qing, PENG Qiang, CHEN Junzhou, et al. Improving image classification accuracy with ELM and CSIFT[J]. Computing in Science & Engineering, 2019, 21(5): 26–34. doi: 10.1109/MCSE.2018.108164708
    [9] LAZEBNIK S, SCHMID C, and PONCE J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[C]. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, USA, 2006: 2169–2178. doi: 10.1109/CVPR.2006.68.
    [10] JÉGOU H, DOUZE M, SCHMID C, et al. Aggregating local descriptors into a compact image representation[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 3304–3311. doi: 10.1109/CVPR.2010.5540039.
    [11] SÁNCHEZ J, PERRONNIN F, MENSINK T, et al. Image classification with the Fisher vector: Theory and practice[J]. International Journal of Computer Vision, 2013, 105(3): 222–245. doi: 10.1007/s11263-013-0636-x
    [12] LI Yaqian, WU Chao, LI Haibin, et al. Image classification method combining local position feature with global contour feature[J]. Acta Electronica Sinica, 2018, 46(7): 1726–1731. doi: 10.3969/j.issn.0372-2112.2018.07.026
    [13] AHMED K T, IRTAZA A, and IQBAL M A. Fusion of local and global features for effective image extraction[J]. Applied Intelligence, 2017, 47(2): 526–543. doi: 10.1007/s10489-017-0916-1
    [14] MANSOURIAN L, ABDULLAH M T, ABDULLAH L N, et al. An effective fusion model for image retrieval[J]. Multimedia Tools and Applications, 2018, 77(13): 16131–16154. doi: 10.1007/s11042-017-5192-x
    [15] ZOU Jinyi, LI Wei, CHEN Chen, et al. Scene classification using local and global features with collaborative representation fusion[J]. Information Sciences, 2016, 348: 209–226. doi: 10.1016/j.ins.2016.02.021
    [16] KONIUSZ P, YAN Fei, GOSSELIN P H, et al. Higher-order occurrence pooling for bags-of-words: Visual concept detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2): 313–326. doi: 10.1109/TPAMI.2016.2545667
    [17] WANG Jinjun, YANG Jianchao, YU Kai, et al. Locality-constrained linear coding for image classification[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 3360–3367.
    [18] ZHU Qihai, WANG Zhezheng, MAO Xiaojiao, et al. Spatial locality-preserving feature coding for image classification[J]. Applied Intelligence, 2017, 47(1): 148–157. doi: 10.1007/s10489-016-0887-7
    [19] XIONG Wei, ZHANG Lefei, DU Bo, et al. Combining local and global: Rich and robust feature pooling for visual recognition[J]. Pattern Recognition, 2017, 62: 225–235. doi: 10.1016/j.patcog.2016.08.006
    [20] GUI Jie, LIU Tongliang, TAO Dacheng, et al. Representative vector machines: A unified framework for classical classifiers[J]. IEEE Transactions on Cybernetics, 2016, 46(8): 1877–1888. doi: 10.1109/TCYB.2015.2457234
    [21] GOH H, THOME N, CORD M, et al. Learning deep hierarchical visual feature coding[J]. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(12): 2212–2225. doi: 10.1109/TNNLS.2014.2307532
    [22] XIAO Wenhua, BAO Weidong, CHEN Lidong, et al. A semantic enhanced linear coding for image classification[J]. Journal of Electronics & Information Technology, 2015, 37(4): 791–797. doi: 10.11999/JEIT140743
    [23] LI Lijia, SU Hao, LIM Y, et al. Object bank: An object-level image representation for high-level visual recognition[J]. International Journal of Computer Vision, 2014, 107(1): 20–39. doi: 10.1007/s11263-013-0660-x
    [24] SONG Xinhang, JIANG Shuqiang, and HERRANZ L. Multi-scale multi-feature context modeling for scene recognition in the semantic manifold[J]. IEEE Transactions on Image Processing, 2017, 26(6): 2721–2735. doi: 10.1109/TIP.2017.2686017
Publication history
  • Received: 2019-03-27
  • Revised: 2019-09-03
  • Published online: 2019-09-12
  • Issue published: 2020-02-19
