
Feed-forward neural network training using sparse representation

Journal Article


Abstract


  • The feed-forward neural network (FNN) has drawn great interest in many applications due to its universal approximation capability. In this paper, a novel algorithm for training FNNs is proposed using the concept of sparse representation. The major advantage of the proposed algorithm is that it is capable of training the initial network and optimizing the network structure simultaneously. The proposed algorithm consists of two core stages: structure optimization and weight update. In the structure optimization stage, the sparse representation technique is employed to select important hidden neurons that minimize the residual output error. In the weight update stage, a dictionary-learning-based method is implemented to update network weights by maximizing the output diversity of the hidden neurons. This weight-updating process is designed to improve the performance of the structure optimization. Based on several benchmark classification and regression problems, we present experimental results comparing the proposed algorithm with state-of-the-art methods. Simulation results show that the proposed algorithm offers comparable performance in terms of the final network size and generalization ability.
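The structure optimization stage described above treats the hidden neurons' outputs as dictionary atoms and greedily selects the few that best explain the target, minimizing the residual output error. The paper's exact procedure is not given here, but the core idea can be sketched with a generic orthogonal-matching-pursuit-style selection (the function name, the toy data, and the choice of OMP as the sparse solver are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def select_neurons_omp(H, y, k):
    """Illustrative OMP-style sketch: greedily pick k hidden neurons
    (columns of the activation matrix H) whose outputs best explain the
    target y, minimizing the residual output error.  This is a generic
    sparse-representation step, not the paper's exact algorithm."""
    # Normalize each neuron's output column so correlations are comparable.
    norms = np.linalg.norm(H, axis=0)
    norms[norms == 0] = 1.0
    Hn = H / norms

    selected = []
    residual = y.astype(float).copy()
    for _ in range(k):
        # Pick the not-yet-selected neuron most correlated with the residual.
        corr = np.abs(Hn.T @ residual)
        corr[selected] = -np.inf
        selected.append(int(np.argmax(corr)))
        # Re-fit output weights on the selected subset by least squares.
        w, *_ = np.linalg.lstsq(H[:, selected], y, rcond=None)
        residual = y - H[:, selected] @ w
    return selected, w

# Toy demo: the target is generated by exactly two of ten hidden neurons,
# so the greedy selection should recover neurons 3 and 7.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 10))   # 100 samples, 10 hidden-neuron outputs
y = 2.0 * H[:, 3] - 1.5 * H[:, 7]
sel, w = select_neurons_omp(H, y, k=2)
```

In this toy setting the residual drops to (numerically) zero once the two generating neurons are found; in the paper's setting such a selection step would then alternate with the dictionary-learning weight update that diversifies the hidden-neuron outputs.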

Publication Date


  • 2019

Citation


  • Yang, J. & Ma, J. (2019). Feed-forward neural network training using sparse representation. Expert Systems with Applications, 116, 255-264.

Scopus EID


  • 2-s2.0-85053792168

RO Metadata URL


  • http://ro.uow.edu.au/smartpapers/248

Number Of Pages


  • 9

Start Page


  • 255

End Page


  • 264

Volume


  • 116

Place Of Publication


  • United Kingdom
