Research on Face Reduction Algorithm Based on Generative Adversarial Nets with Semi-supervised Learning
doi: 10.11999/JEIT170357
Funds: The National Natural Science Foundation of China (61370195, U1536121)
Abstract: Research on generative adversarial nets that generate high-confidence images from large numbers of training samples has produced good results, but existing work only generates images for known training samples; the trained parameters are not used to generate images beyond the training set. This paper proposes an improved generative adversarial net model that adds a reduction layer to the existing network, so that a test image can be passed through the improved adversarial network to generate a corresponding high-confidence image. Experimental results show that the parameters of the improved generative adversarial nets can be applied to ordinary samples outside the training set. In addition, the loss algorithm of the generative model is improved, which greatly shortens the convergence time of the network.
Key words:
- Generative adversarial nets /
- Semi-supervised learning /
- Generative model /
- Loss function
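The abstract describes adding a reduction layer to an existing GAN so that images outside the training set can be mapped through the trained generator and reconstructed as high-confidence images. The page does not give the layer's exact form, so the following is only a minimal PyTorch sketch of one way such an architecture could be wired, assuming the reduction layer acts as an encoder from image space into the generator's latent space; all module names, layer sizes, and the placeholder loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a generator G maps a latent
# vector z to an image, a discriminator D scores images, and a hypothetical
# "reduction layer" R encodes a real test image back into z, so G(R(x)) yields
# a reconstructed, high-confidence image for samples outside the training set.
import torch
import torch.nn as nn

IMG_DIM, Z_DIM = 64 * 64, 100   # assumed 64x64 grayscale faces, flattened

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class ReductionLayer(nn.Module):
    """Hypothetical encoder standing in for the paper's reduction layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.ReLU(),
            nn.Linear(256, Z_DIM))
    def forward(self, x):
        return self.net(x)

G, D, R = Generator(), Discriminator(), ReductionLayer()

# After ordinary adversarial training of G and D, the reduction layer could be
# fit so that G(R(x)) reconstructs x; a plain pixel-wise loss is used here only
# as a placeholder for the paper's improved generator loss.
opt_r = torch.optim.Adam(R.parameters(), lr=2e-4)
x = torch.rand(16, IMG_DIM) * 2 - 1          # stand-in batch of test images
recon = G(R(x))
loss = nn.functional.mse_loss(recon, x)
opt_r.zero_grad()
loss.backward()
opt_r.step()
print(loss.item())
```

Treating the reduction layer as an encoder into the latent space leaves G and D unchanged, which is consistent with the abstract's statement that the layer is added on top of the existing network.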