Efficient and Accurate Approximations of Nonlinear Convolutional Networks

Abstract

This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint that helps to reduce the complexity of the filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is increased by only 0.9%. Our accelerated model runs at a speed comparable to "AlexNet" [11], but is 4.7% more accurate.

Introduction

This paper addresses efficient test-time computation of deep convolutional neural networks (CNNs) [12, 11]. Since the success of CNNs [11] for large-scale image classification, the accuracy of the newly developed CNNs [24, 17, 8, 18, 19] ha...
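To make the abstract's idea concrete, the sketch below illustrates the kind of low-rank factorization being discussed: a layer's filter matrix W is replaced by a rank-d' product PQ, and the approximation is judged on the *nonlinear* (ReLU) responses rather than on W itself. This is a minimal, hypothetical illustration using a plain truncated SVD of W as the factorization; the paper's actual method minimizes the nonlinear reconstruction error directly, which this sketch does not do. All names (W, X, P, Q, d_prime) are illustrative.

```python
import numpy as np

# Hypothetical sketch: approximate a layer's filter matrix W (d x k) by a
# rank-d' factorization W ~= P @ Q, then measure the error on the
# nonlinear (ReLU) responses r(Wx) vs r(P Q x). The paper optimizes the
# nonlinear error directly; here a truncated SVD of W serves as a baseline.

rng = np.random.default_rng(0)
d, k, d_prime = 64, 256, 16          # output filters, filter size, target rank

W = rng.standard_normal((d, k))      # original filters (one row per filter)
X = rng.standard_normal((k, 1000))   # sample input patches, one per column

# Truncated SVD gives the best rank-d' approximation of W in Frobenius norm.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
P = U[:, :d_prime] * s[:d_prime]     # d x d'
Q = Vt[:d_prime]                     # d' x k

relu = lambda z: np.maximum(z, 0.0)
Y = relu(W @ X)                      # original nonlinear responses
Y_hat = relu(P @ (Q @ X))            # responses of the low-rank layer

# Per-position cost drops from d*k to d'*(d + k) multiply-adds,
# since the layer is now two thinner layers applied in sequence.
err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(f"relative nonlinear reconstruction error: {err:.3f}")
```

In practice the two factors P and Q become two consecutive convolutional layers with d' intermediate channels, which is where the measured speedup comes from.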