
A neural network pruning approach based on compressive sampling

Conference Paper


Download full-text (Open Access)

Abstract


  • The balance between computational complexity and architecture bottlenecks the development of Neural Networks (NNs). An architecture that is too large or too small strongly affects performance in terms of generalization and computational cost. In the past, saliency analysis has been employed to determine the most suitable structure; however, it is time-consuming and its performance is not robust. In this paper, a family of new algorithms for pruning elements (weights and hidden neurons) in Neural Networks is presented, based on Compressive Sampling (CS) theory. The proposed framework makes it possible to locate the significant elements, and hence find a sparse structure, without computing their saliency. Experimental results are presented which demonstrate the effectiveness of the proposed approach.
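The abstract's central idea, selecting a network's significant weights by sparse recovery rather than by computing saliency, can be illustrated with a minimal sketch. The code below is not the authors' algorithm: it applies Orthogonal Matching Pursuit (a standard CS recovery method) to a toy linear layer, and the activation matrix, sparsity level, and weight values are all hypothetical.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with A @ x ~ y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

# Toy "layer": 20 candidate weights, of which only 3 are significant.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))       # hidden activations over 50 samples
w_true = np.zeros(20)
w_true[[2, 7, 15]] = [1.5, -2.0, 0.8]   # the truly significant weights
y = A @ w_true                          # layer output

# "Prune" by recovering a 3-sparse weight vector; zeros are removed weights.
w_pruned = omp(A, y, k=3)
print(np.nonzero(w_pruned)[0])          # indices of the retained weights
```

Note that no per-weight saliency score is ever computed: the sparse-recovery step identifies the significant weights directly from input/output data, which is the property the paper exploits.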

Publication Date


  • 2009

Citation


  • Yang, J., Bouzerdoum, A. & Phung, S. (2009). A neural network pruning approach based on compressive sampling. International Joint Conference on Neural Networks 2009 (pp. 3428-3435). New Jersey, USA: IEEE.

Scopus Eid


  • 2-s2.0-70449448556

Ro Full-text Url


  • http://ro.uow.edu.au/cgi/viewcontent.cgi?article=1810&context=infopapers

Ro Metadata Url


  • http://ro.uow.edu.au/infopapers/796

Start Page


  • 3428

End Page


  • 3435

Place Of Publication


  • New Jersey, USA
