
Transfer Weight Based Conditional Adversarial Domain Adaptation

Jin WANG, Ke WANG, Zijian MIN, Kaiwei SUN, Xin DENG

Citation: Jin WANG, Ke WANG, Zijian MIN, Kaiwei SUN, Xin DENG. Transfer Weight Based Conditional Adversarial Domain Adaptation[J]. Journal of Electronics & Information Technology, 2019, 41(11): 2729-2735. doi: 10.11999/JEIT190115


doi: 10.11999/JEIT190115
Funds: The National Natural Science Foundation of China (61806033), The National Social Science Foundation of China, Western Project (18XGL013)
More Information
    About the authors:

    Jin WANG: Male, born in 1979, professor. His research interests include machine learning and data mining

    Ke WANG: Male, born in 1993, master's student. His research interest is machine learning

    Zijian MIN: Male, born in 1995, master's student. His research interest is machine learning

    Kaiwei SUN: Male, born in 1987, lecturer. His research interests include machine learning and data mining

    Xin DENG: Male, born in 1981, associate professor. His research interests include machine learning and cognitive computing

    Corresponding author:

    Jin WANG, wangjin@cqupt.edu.cn

  • CLC number: TP391.41

  • Abstract: Conditional adversarial Domain Adaptation (CDAN) does not fully exploit the transferability of individual samples, so source-domain samples that are hard to transfer can still disturb the distribution of the target-domain data. To address this problem, a Transfer Weight based Conditional adversarial Domain Adaptation (TW-CDAN) method is proposed. First, the output of the domain discriminator is taken as the main measure of a sample's transferability, so that different samples carry different transfer capability. Second, this transferability is applied as a per-sample weight on the classification loss and the minimum-entropy loss, in order to eliminate the influence that hard-to-transfer samples exert on the model. Finally, experiments are conducted on the 6 transfer tasks of the Office-31 dataset and the 12 transfer tasks of the Office-Home dataset; the proposed method improves on 14 of these transfer tasks, raising the average accuracy by 1.4% and 3.1%, respectively.
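    The weighting idea described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch-style sketch, assuming a simple weight function w = 1 − |2d − 1| built from the domain discriminator output d(x); the function names and this particular formula are illustrative assumptions, not the exact formulation used in the paper.

    import torch
    import torch.nn.functional as F

    def transfer_weights(d_out):
        # d_out: domain-discriminator outputs in (0, 1), shape (batch,).
        # Samples the discriminator separates confidently (d near 0 or 1)
        # are treated as hard to transfer and down-weighted; ambiguous
        # samples (d near 0.5) get larger weights. Hypothetical formula.
        w = 1.0 - torch.abs(2.0 * d_out - 1.0)
        return (w / (w.mean() + 1e-8)).detach()  # normalize; block gradients

    def weighted_losses(src_logits, src_labels, tgt_logits, d_src, d_tgt):
        # Transferability-weighted source classification loss.
        cls_loss = (transfer_weights(d_src) *
                    F.cross_entropy(src_logits, src_labels,
                                    reduction="none")).mean()
        # Transferability-weighted minimum-entropy loss on unlabeled
        # target-domain predictions.
        p = F.softmax(tgt_logits, dim=1)
        entropy = -(p * torch.log(p + 1e-8)).sum(dim=1)
        ent_loss = (transfer_weights(d_tgt) * entropy).mean()
        return cls_loss, ent_loss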
  • Figure 1  Structure of the TW-CDAN model

    Figure 2  The Office-Home dataset

    Figure 3  Convergence comparison of the algorithms

    Figure 4  t-SNE feature visualization

    Table 1  Results on the Office-31 dataset (evaluated by average accuracy, %)

    Method         A→W   D→W   W→D    A→D   D→A   W→A   Avg.
    ResNet-50[17]  68.4  96.7  99.3   68.9  62.5  60.7  76.1
    DAN[7]         80.5  97.1  99.6   78.6  63.6  62.8  80.4
    RTN[8]         84.5  96.8  99.4   77.5  66.2  64.8  81.6
    DANN[10]       82.0  96.9  99.1   79.7  68.2  67.4  82.2
    ADDA[11]       86.2  96.2  98.4   77.8  69.5  68.9  82.9
    JAN[18]        85.4  97.4  99.8   84.7  68.6  70.0  84.3
    GTA[19]        89.5  97.9  99.8   87.7  72.8  71.4  86.5
    CDAN[13]       93.1  98.6  100.0  92.9  71.0  69.3  87.5
    TW-CDAN        94.9  99.2  100.0  94.0  72.7  72.5  88.9

    Table 2  Results on the Office-Home dataset (evaluated by average accuracy, %)

    Method         Ar→Cl  Ar→Pr  Ar→Rw  Cl→Ar  Cl→Pr  Cl→Rw  Pr→Ar  Pr→Cl  Pr→Rw  Rw→Ar  Rw→Cl  Rw→Pr  Avg.
    ResNet-50[17]  34.9   50.0   58.0   37.4   41.9   46.2   38.5   31.2   60.4   53.9   41.2   59.9   46.1
    DAN[7]         43.6   57.0   67.9   45.8   56.5   60.4   44.0   43.6   67.7   63.1   51.5   74.3   56.3
    DANN[10]       45.6   59.3   70.1   47.0   58.5   60.9   46.1   43.7   68.5   63.2   51.8   76.8   57.6
    JAN[18]        45.9   61.2   68.9   50.4   59.7   61.0   45.8   43.4   70.3   63.9   52.4   76.8   58.3
    CDAN[13]       50.6   65.9   73.4   55.7   62.7   64.2   51.8   49.1   74.5   68.2   56.9   80.7   62.8
    TW-CDAN        48.8   71.1   76.7   61.6   68.9   70.2   60.4   46.6   77.9   71.3   55.4   81.9   65.9

    Table 3  Results of different transfer-weight settings on the Office-31 dataset (evaluated by average accuracy, %)

    Method       A→W   D→W   W→D    A→D   D→A   W→A   Avg.
    CDAN[13]     93.1  98.6  100.0  92.9  71.0  69.3  87.5
    CDAN(S)      93.0  98.7  100.0  92.7  71.0  69.1  87.4
    TW-CDAN(E)   93.7  98.8  100.0  93.4  71.5  71.3  88.1
    TW-CDAN(C)   94.2  98.9  100.0  93.1  72.1  71.8  88.4
    TW-CDAN      94.9  99.2  100.0  94.0  72.7  72.5  88.9
YOSINSKI J, CLUNE J, BENGIO Y, et al. How transferable are features in deep neural networks?[C]. Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 2014: 3320–3328.
    PAN S J and YANG Qiang. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345–1359. doi: 10.1109/TKDE.2009.191
    GEBRU T, HOFFMAN J, LI Feifei, et al. Fine-grained recognition in the wild: A multi-task domain adaptation approach[C]. Proceedings of IEEE International Conference on Computer Vision, Venice, Italy, 2017: 1358–1367.
    GLOROT X, BORDES A, and BENGIO Y. Domain adaptation for large-scale sentiment classification: A deep learning approach[C]. Proceedings of the 28th International Conference on Machine Learning, Bellevue, USA, 2011: 513–520.
    WANG Mei and DENG Weihong. Deep visual domain adaptation: A survey[J]. Neurocomputing, 2018, 312: 135–153. doi: 10.1016/j.neucom.2018.05.083
    GRETTON A, BORGWARDT K, RASCH M, et al. A kernel method for the two-sample-problem[C]. Proceedings of the 19th Conference on Neural Information Processing Systems, Vancouver, Canada, 2007: 513–520.
    LONG Mingsheng, CAO Yue, WANG Jianmin, et al. Learning transferable features with deep adaptation networks[C]. Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015: 97–105.
    LONG Mingsheng, ZHU Han, WANG Jianmin, et al. Deep transfer learning with joint adaptation networks[C]. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 2017: 2208–2217.
    GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]. Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 2014: 2672–2680.
    GANIN Y, USTINOVA E, AJAKAN H, et al. Domain-adversarial training of neural networks[J]. The Journal of Machine Learning Research, 2016, 17(1): 2096–2030.
    TZENG E, HOFFMAN J, SAENKO K, et al. Adversarial discriminative domain adaptation[C]. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2962–2971.
    MIRZA M and OSINDERO S. Conditional generative adversarial nets[EB/OL]. https://arxiv.org/abs/1411.1784, 2014.
    LONG Mingsheng, CAO Zhangjie, WANG Jianmin, et al. Conditional adversarial domain adaptation[C]. Proceedings of the 32nd Conference on Neural Information Processing Systems, Montréal, Canada, 2018: 1647–1657.
    GRANDVALET Y and BENGIO Y. Semi-supervised learning by entropy minimization[C]. Proceedings of the 17th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2004: 529–536.
    SAENKO K, KULIS B, FRITZ M, et al. Adapting visual category models to new domains[C]. Proceedings of the 11th European Conference on Computer Vision, Heraklion, Greece, 2010: 213–226.
    VENKATESWARA H, EUSEBIO J, CHAKRABORTY S, et al. Deep hashing network for unsupervised domain adaptation[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 5385–5394.
    HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778. doi: 10.1109/CVPR.2016.90
    LONG Mingsheng, ZHU Han, WANG Jianmin, et al. Unsupervised domain adaptation with residual transfer networks[C]. Proceedings of the 30th Conference on Neural Information Processing Systems, Barcelona, Spain, 2016: 136–144.
    SANKARANARAYANAN S, BALAJI Y, CASTILLO C D, et al. Generate to adapt: Aligning domains using generative adversarial networks[C]. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 8503–8512.
Figures (4) / Tables (3)
Metrics
  • Article views:  3770
  • Full-text HTML views:  1424
  • PDF downloads:  93
  • Cited by: 0
Publication history
  • Received:  2019-02-27
  • Revised:  2019-06-11
  • Available online:  2019-06-24
  • Published:  2019-11-01
