
基于改進(jìn)深度卷積神經(jīng)網(wǎng)絡(luò)的紙幣識(shí)別研究

Shan GAI, Zhongyun BAO

蓋杉, 鮑中運(yùn). 基于改進(jìn)深度卷積神經(jīng)網(wǎng)絡(luò)的紙幣識(shí)別研究[J]. 電子與信息學(xué)報(bào), 2019, 41(8): 1992-2000. doi: 10.11999/JEIT181097
引用本文: 蓋杉, 鮑中運(yùn). 基于改進(jìn)深度卷積神經(jīng)網(wǎng)絡(luò)的紙幣識(shí)別研究[J]. 電子與信息學(xué)報(bào), 2019, 41(8): 1992-2000. doi: 10.11999/JEIT181097
Shan GAI, Zhongyun BAO. Banknote Recognition Research Based on Improved Deep Convolutional Neural Network[J]. Journal of Electronics & Information Technology, 2019, 41(8): 1992-2000. doi: 10.11999/JEIT181097
Citation: Shan GAI, Zhongyun BAO. Banknote Recognition Research Based on Improved Deep Convolutional Neural Network[J]. Journal of Electronics & Information Technology, 2019, 41(8): 1992-2000. doi: 10.11999/JEIT181097

基于改進(jìn)深度卷積神經(jīng)網(wǎng)絡(luò)的紙幣識(shí)別研究

doi: 10.11999/JEIT181097
基金項(xiàng)目: 國(guó)家自然科學(xué)基金(61563037),江西省杰出青年計(jì)劃(20171BCB23057)
詳細(xì)信息
    作者簡(jiǎn)介:

    蓋杉:男,1980年生,副教授,碩士生導(dǎo)師,研究方向?yàn)橛?jì)算機(jī)視覺、圖像處理、深度學(xué)習(xí)

    鮑中運(yùn):男,1990年生,碩士生,研究方向?yàn)橛?jì)算機(jī)視覺、圖像處理、深度學(xué)習(xí)

    通訊作者:

    蓋杉 gaishan@nchu.edu.cn

  • CLC number: TP391.41; TP181

Banknote Recognition Research Based on Improved Deep Convolutional Neural Network

Funds: The National Natural Science Foundation of China (61563037), The Outstanding Youth Scheme of Jiangxi Province (20171BCB23057)
  • Abstract: To address the problem of improving banknote recognition rates, this paper proposes a banknote recognition algorithm based on an improved Deep Convolutional Neural Network (DCNN). The algorithm first constructs deep convolutional layers that combine transfer learning, the Leaky Rectified Linear Unit (Leaky ReLU) activation, Batch Normalization (BN), and multi-level residual units, enabling stable and fast feature extraction and learning on input banknotes of different sizes. An improved multi-level spatial pyramid pooling algorithm then converts the extracted banknote features into a fixed-size output representation. Finally, the network's fully connected layer and softmax layer classify the banknote images. Experimental results show that the algorithm clearly outperforms commonly used banknote classification algorithms in classification performance, generalization ability, and stability, while also meeting the real-time requirements of banknote sorting systems.
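The Leaky ReLU activation named in the abstract can be illustrated with a minimal sketch (not the authors' code); the negative slope alpha = 0.01 is a common default and an assumption here, since this excerpt does not state the value used in the paper.

```python
def leaky_relu(x, alpha=0.01):
    """Leaky Rectified Linear Unit.

    Positive inputs pass through unchanged; negative inputs are scaled
    by a small slope alpha instead of being zeroed, so gradients do not
    vanish for negative pre-activations.
    """
    return x if x > 0 else alpha * x
```

Compared with the plain ReLU, the nonzero slope on the negative side keeps "dead" units trainable, which is consistent with the stability claim in the abstract.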
  • Figure 1  Schematic of the algorithm structure

    Figure 2  Banknote image preprocessing (RMB-100)

    Figure 3  Banknote image preprocessing (USD-100)

    Figure 4  Banknote image preprocessing (EUR-500)

    Figure 5  Structure of the multi-level residual unit

    Figure 6  Structural framework of the multi-level spatial pyramid pooling algorithm

    Figure 7  Four orientations of a banknote image
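The fixed-size output produced by the spatial pyramid pooling stage can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation; the pyramid levels (1, 2, 4) are assumed for demonstration. Whatever the feature map's height and width, the pooled vector always has sum(n*n for n in levels) entries, which is what lets the network accept banknotes of different sizes.

```python
import math

def spp_max_pool(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid max-pooling over a 2D feature map (list of rows).

    Each level n partitions the map into an n x n grid and takes the max
    of each cell, so the output length is fixed regardless of input size.
    """
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # floor/ceil boundaries guarantee every bin is non-empty
                r0, r1 = i * h // n, math.ceil((i + 1) * h / n)
                c0, c1 = j * w // n, math.ceil((j + 1) * w / n)
                out.append(max(feature_map[r][c]
                               for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out
```

For example, a 5x7 map and a 10x12 map both pool to a 21-dimensional vector (1 + 4 + 16 bins), so the subsequent fully connected layer sees a constant input size.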

    Table 1  Banknote databases

    Currency | Denominations | Classes | Total samples | Training samples | Test samples
    RMB | 5, 10, 20, 50, 100 | 20 | 46000 | 36000 | 10000
    USD | 1, 2, 10, 20, 50, 100 | 24 | 38000 | 25000 | 13000
    EUR | 5, 10, 20, 50, 100, 200, 500 | 28 | 35000 | 26000 | 9000

    Table 2  Average recognition rate on database DB1 (%)

    Denomination (RMB) | Grid features [3] | Free mask [2] | VGGNet19 [10] | PReLU-net [18] | BN-inception [16] | ResNet-34B [13] | Proposed
    100 | 74.25 | 76.44 | 91.52 | 91.45 | 92.38 | 94.16 | 96.68
    50 | 74.02 | 74.75 | 90.83 | 91.76 | 92.11 | 95.98 | 97.80
    20 | 75.23 | 76.88 | 92.34 | 91.56 | 93.64 | 94.88 | 95.03
    10 | 80.12 | 83.34 | 94.06 | 94.76 | 95.67 | 96.86 | 96.97
    5 | 83.24 | 80.57 | 93.16 | 93.27 | 95.53 | 95.66 | 97.82

    Table 3  Average recognition rate on database DB2 (%)

    Denomination (USD) | Grid features [3] | Free mask [2] | VGGNet19 [10] | PReLU-net [18] | BN-inception [16] | ResNet-34B [13] | Proposed
    100 | 70.13 | 72.24 | 89.26 | 91.33 | 93.25 | 94.46 | 95.67
    50 | 73.14 | 72.28 | 91.35 | 91.49 | 92.98 | 94.29 | 94.96
    20 | 74.56 | 77.82 | 90.23 | 92.14 | 93.05 | 95.11 | 95.89
    10 | 76.21 | 75.34 | 91.25 | 93.34 | 93.67 | 94.28 | 95.15
    2 | 78.11 | 80.12 | 92.13 | 92.86 | 93.58 | 95.67 | 96.75
    1 | 81.23 | 80.02 | 91.24 | 90.36 | 94.27 | 96.16 | 97.98

    Table 4  Average recognition rate on database DB3 (%)

    Denomination (EUR) | Grid features [3] | Free mask [2] | VGGNet19 [10] | PReLU-net [18] | BN-inception [16] | ResNet-34B [13] | Proposed
    500 | 81.12 | 84.23 | 93.25 | 92.91 | 94.56 | 94.93 | 96.98
    200 | 81.65 | 82.32 | 93.24 | 94.13 | 94.68 | 95.12 | 98.20
    100 | 85.46 | 86.94 | 94.12 | 94.67 | 95.23 | 96.11 | 97.75
    50 | 79.25 | 83.24 | 93.20 | 93.12 | 94.35 | 95.29 | 96.79
    20 | 83.24 | 84.52 | 94.25 | 95.28 | 95.64 | 96.33 | 98.76
    10 | 85.33 | 87.12 | 94.24 | 94.76 | 94.19 | 97.20 | 97.88
    5 | 84.20 | 83.52 | 94.16 | 93.26 | 95.12 | 95.78 | 97.89

    Table 5  Recognition rate in actual tests on soiled banknotes (%)

    Soiled samples | Grid features [3] | Free mask [2] | VGGNet19 [10] | PReLU-net [18] | BN-inception [16] | ResNet-34B [13] | Proposed
    DB1 (16100) | 78.65 | 82.49 | 92.45 | 93.18 | 94.37 | 95.06 | 97.58
    DB2 (15960) | 75.42 | 79.16 | 88.24 | 91.07 | 92.53 | 94.84 | 96.75
    DB3 (10500) | 80.28 | 83.17 | 91.52 | 93.65 | 95.18 | 96.78 | 97.29

    Table 6  Running time of different recognition algorithms (s)

    Free mask [2] | Grid features [3] | VGGNet19 [10] | PReLU-Net [18] | BN-inception [16] | ResNet-34B [13] | Proposed
    0.98 | 0.85 | 1.97 | 1.72 | 1.58 | 1.24 | 1.06
  • References

    [1] KATO N, SUZUKI M, OMACHI S, et al. A handwritten character recognition system using directional element feature and asymmetric Mahalanobis distance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(3): 258–262. doi: 10.1109/34.754617
    [2] TAKEDA F and OMATU S. High speed paper currency recognition by neural networks[J]. IEEE Transactions on Neural Networks, 1995, 6(1): 73–77. doi: 10.1109/72.363448
    [3] LIU Jiafeng, LIU Songbo, and TANG Xianglong. An algorithm of real-time paper currency recognition[J]. Journal of Computer Research and Development, 2003, 40(7): 1057–1061.
    [4] CHOI E, LEE J, and YOON J. Feature extraction for bank note classification using wavelet transform[C]. The IEEE 18th International Conference on Pattern Recognition (ICPR), Hong Kong, China, 2006: 934–937. doi: 10.1109/ICPR.2006.553.
    [5] GAI Shan, YANG Guowei, and WAN Minghua. Employing quaternion wavelet transform for banknote classification[J]. Neurocomputing, 2013, 118: 171–178. doi: 10.1016/j.neucom.2013.02.029
    [6] JIN Ye, SONG Ling, TANG Xianglong, et al. A hierarchical approach for banknote image processing using homogeneity and FFD model[J]. IEEE Signal Processing Letters, 2008, 15: 425–428. doi: 10.1109/LSP.2008.921470
    [7] WU Zhendong, WANG Yani, and ZHANG Jianwu. Fouling and damaged fingerprint recognition based on deep learning[J]. Journal of Electronics & Information Technology, 2017, 39(7): 1585–1591. doi: 10.11999/JEIT161121
    [8] FAN Yangyu, LI Zuhe, WANG Fengqin, et al. Affective abstract image classification based on convolutional sparse autoencoders across different domains[J]. Journal of Electronics & Information Technology, 2017, 39(1): 167–175. doi: 10.11999/JEIT160241
    [9] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Nevada, USA, 2012: 1097–1105.
    [10] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. International Conference on Learning Representations (ICLR), Banff, Canada, 2015: 168–175.
    [11] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA, 2015: 1–9. doi: 10.1109/CVPR.2015.7298594.
    [12] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 2818–2826. doi: 10.1109/CVPR.2016.308.
    [13] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90.
    [14] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904–1916. doi: 10.1109/TPAMI.2015.2389824
    [15] PENG Peixi, TIAN Yonghong, XIANG Tao, et al. Joint semantic and latent attribute modeling for cross-class transfer learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(7): 1625–1638. doi: 10.1109/TPAMI.2017.2723882
    [16] IOFFE S and SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]. Proceedings of the 32nd International Conference on International Conference on Machine Learning, Lille, France, 2015: 448–456.
    [17] KINGMA D P and BA J. Adam: A method for stochastic optimization[C]. Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, USA, 2015: 1–8.
    [18] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification[C]. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015: 1026–1034. doi: 10.1109/ICCV.2015.123.
Figures (7) / Tables (6)
計(jì)量
  • 文章訪問(wèn)數(shù):  4230
  • HTML全文瀏覽量:  1281
  • PDF下載量:  116
  • 被引次數(shù): 0
Publication history
  • Received: 2018-11-28
  • Revised: 2019-03-27
  • Published online: 2019-04-21
  • Issue date: 2019-08-01
