Acceleration Performance Study of Convolutional Neural Network Based on Split-radix-2/(2a) FFT Algorithms
doi: 10.11999/JEIT160357
The National Natural Science Foundation of China (61201344, 61271312, 61401085); The Specialized Research Fund for the Doctoral Program of Higher Education (20120092120036)
Abstract: Convolutional Neural Networks (CNN) have made breakthrough progress in many fields such as speech recognition and image recognition. A major factor limiting their large-scale application is their computational complexity, especially the computation of spatial-domain linear convolution. Using the convolution theorem to implement spatial linear convolution in the frequency domain is regarded as a highly efficient approach. This paper first proposes a unified decimation-in-time split-radix-2/(2a) one-dimensional FFT algorithm, where a is an arbitrary natural number, and then presents a comparative study of the acceleration performance of the proposed FFT algorithms in a class of convolutional neural networks on a CPU platform. Experiments on the MNIST handwritten digit database and the Cifar-10 object recognition dataset show that CNNs implemented with the split-radix-2/4 FFT and radix-2 FFT algorithms lose no accuracy compared with CNNs using direct spatial convolution, and that the split-radix-2/4 FFT achieves the best speedup, reducing computation time by 38.56% and 72.01% on the two datasets, respectively. Therefore, implementing the linear convolution of convolutional neural networks in the frequency domain is a highly efficient approach.
Keywords:
- Signal processing
- Deep learning
- Convolutional neural network
- Fast Fourier Transform (FFT)
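To make the frequency-domain implementation concrete, the sketch below illustrates the convolution theorem the abstract relies on: zero-pad both operands, multiply their spectra pointwise, invert the transform to obtain the linear convolution, then crop the "valid" region a CNN layer normally keeps. This is a minimal NumPy sketch of the general idea, not the authors' implementation; the function name `fft_conv2d_valid` and the use of `numpy.fft` are illustrative assumptions.

```python
import numpy as np

def fft_conv2d_valid(image, kernel):
    """Linear 2-D convolution via the convolution theorem (illustrative sketch).

    Zero-pads both operands to the full linear-convolution size so the circular
    convolution implied by the DFT equals linear convolution, multiplies the
    spectra pointwise, and crops the 'valid' output region.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    fh, fw = ih + kh - 1, iw + kw - 1          # full linear-convolution size
    # Pointwise product in the frequency domain == linear convolution in space.
    spectrum = np.fft.fft2(image, s=(fh, fw)) * np.fft.fft2(kernel, s=(fh, fw))
    full = np.real(np.fft.ifft2(spectrum))
    # Keep only the 'valid' part, as a CNN convolutional layer usually does.
    return full[kh - 1:ih, kw - 1:iw]

# Quick self-check against direct spatial convolution (flipped-kernel correlation).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((28, 28))   # e.g. one MNIST-sized feature map
    w = rng.standard_normal((5, 5))     # e.g. a 5x5 convolution kernel
    direct = np.zeros((28 - 5 + 1, 28 - 5 + 1))
    for i in range(direct.shape[0]):
        for j in range(direct.shape[1]):
            direct[i, j] = np.sum(x[i:i + 5, j:j + 5] * w[::-1, ::-1])
    assert np.allclose(direct, fft_conv2d_valid(x, w))
```

The benefit is the usual trade: the per-output multiply-accumulate loop of direct convolution is replaced by a few transforms whose cost grows as O(N log N), which is where the FFT-based CNNs in the experiments save their time.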
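The classic member of the split-radix family used in the experiments is split-radix-2/4. In its decimation-in-time form, an N-point DFT is split into an N/2-point DFT of the even-indexed samples x[2m] and two N/4-point DFTs of x[4m+1] and x[4m+3], which are recombined with the twiddle factors W_N^k and W_N^{3k}. The recursive sketch below implements this standard 2/4 decomposition for power-of-two lengths; it is a readable reference only, not the unified split-radix-2/(2a) algorithm proposed in the paper, and the function name `splitradix_fft` is an illustrative choice.

```python
import numpy as np

def splitradix_fft(x):
    """Recursive decimation-in-time split-radix-2/4 FFT (illustrative sketch).

    Splits an N-point DFT (N a power of two) into one N/2-point DFT over the
    even-indexed samples and two N/4-point DFTs over the samples at indices
    4m+1 and 4m+3, then recombines them with twiddle factors W_N^k and W_N^{3k}.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    if n == 2:
        return np.array([x[0] + x[1], x[0] - x[1]])
    u = splitradix_fft(x[0::2])        # N/2-point DFT of even samples
    z1 = splitradix_fft(x[1::4])       # N/4-point DFT of x[4m+1]
    z3 = splitradix_fft(x[3::4])       # N/4-point DFT of x[4m+3]
    k = np.arange(n // 4)
    t1 = np.exp(-2j * np.pi * k / n) * z1       # W_N^k  * Z1[k]
    t3 = np.exp(-2j * np.pi * 3 * k / n) * z3   # W_N^3k * Z3[k]
    y = np.empty(n, dtype=complex)
    # Butterfly recombination for k = 0 .. N/4-1.
    y[k]              = u[k] + (t1 + t3)
    y[k + n // 2]     = u[k] - (t1 + t3)
    y[k + n // 4]     = u[k + n // 4] - 1j * (t1 - t3)
    y[k + 3 * n // 4] = u[k + n // 4] + 1j * (t1 - t3)
    return y

# Sanity check against NumPy's FFT.
if __name__ == "__main__":
    sig = np.random.default_rng(1).standard_normal(64)
    assert np.allclose(splitradix_fft(sig), np.fft.fft(sig))
```

Split-radix mixes a radix-2 split on the even-indexed samples with a radix-4 split on the odd-indexed ones, which is known to reduce the number of real multiplications and additions below that of the pure radix-2 algorithm; this is consistent with the split-radix-2/4 speed advantage reported in the abstract.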