Jo Jan 15, 2025

Feedforward neural networks are the most common neural network architecture: signals are transmitted from the input layer to the output layer in one direction, with no recurrent connections in the hidden layers. Because a feedforward network can model complex nonlinear relationships from a training data set, it has a wide range of applications; in recent years it has been applied to many fields such as pattern recognition, decision making, forecasting and the simulation of inaccessible objects. Currently, there is growing interest in improving the convergence and generalization ability of feedforward neural networks in applications.
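The one-directional signal flow described above can be sketched as a simple forward pass. This is a minimal illustration, not any specific network from the article; the layer sizes and the tanh activation are assumptions.

```python
import numpy as np

def forward(x, weights, biases):
    """Feedforward pass: signals move from input to output in one direction,
    with no recurrent connections between hidden-layer units."""
    a = x
    for W, b in zip(weights, biases):
        # A nonlinear activation at each layer lets the network
        # model complex nonlinear relationships.
        a = np.tanh(W @ a + b)
    return a

rng = np.random.default_rng(0)
# A 3-4-2 network: one hidden layer; weight matrices are shaped (out, in).
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]
y = forward(np.ones(3), weights, biases)
```

Each layer only ever reads the previous layer's output, which is what distinguishes this structure from a recurrent network.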

With increasing research on convolutional neural networks, image recognition performance has improved greatly. A convolutional neural network extracts features from data by stacking multiple convolutional layers, each performing local filtering. Convolutional neural networks have been used to solve many problems such as satellite image analysis, object detection in natural images, face recognition and object recognition.
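The local-filtering operation at the heart of a convolutional layer can be sketched as follows. This is a generic illustration (a plain "valid" 2-D correlation with a hand-picked difference filter), not code from any system mentioned in the article.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a local filter over the image; each output value is a
    weighted sum of one small patch ("valid" mode, no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25.0).reshape(5, 5)
edge = np.array([[1.0, -1.0]])   # horizontal difference filter
feat = conv2d_valid(image, edge)
```

A real convolutional layer learns many such filters and stacks several of these layers, so that deeper layers respond to progressively more abstract features.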

Recently, sparse representations have attracted much attention in the field of pattern recognition. Although sparse representations have been studied for nearly a century and applied in various fields, it is in the last decade that the signal processing community has taken particular interest in them for the compression and interpretation of speech, images and animation.

Previous work presented a sparse dictionary generation method based on singular value decomposition, i.e., the K-SVD algorithm. Because this algorithm requires the order of the dictionary matrix to be no less than that of the measurement matrix, it needs a large number of iterations and has low computational efficiency. Thus, it is not suitable for processing high-dimensional measurement signals.
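For context, the classic K-SVD step referred to here updates one dictionary column at a time via a rank-1 SVD of the residual. The sketch below is a minimal illustration of that standard algorithm (sparse coding by a simple orthogonal matching pursuit); the sparsity level and matrix sizes are assumptions, and this is not the improved multi-column variant discussed later in the article.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding: pick k atoms of dictionary D that best explain y."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd_update(D, Y, X):
    """Classic K-SVD dictionary update: for each atom in turn, take the
    residual restricted to the signals that use it and replace the atom
    (and its coefficients) with the leading rank-1 SVD factors."""
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]
        if users.size == 0:
            continue
        # Residual with atom k's contribution removed, on the signals using it.
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k, users] = s[0] * Vt[0]
    return D, X

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 12))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
Y = rng.standard_normal((8, 40))            # measurement signals as columns
X = np.column_stack([omp(D, Y[:, i], 3) for i in range(40)])
err_before = np.linalg.norm(Y - D @ X)
D, X = ksvd_update(D, Y, X)
err_after = np.linalg.norm(Y - D @ X)
```

Because each rank-1 SVD minimizes the restricted residual for its atom, one full update pass cannot increase the overall reconstruction error; the serial one-column-at-a-time loop is also exactly where the iteration cost criticized above comes from.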

Hwang Chol Hyon, a section head at the Faculty of Information Science and Technology, has studied a sparse representation computation method based on singular value decomposition that updates several columns of data at a time, and proposed an improved K-SVD algorithm. In addition, he has constructed a three-layer BP neural network using the sparse representation calculation procedure for multiple measurement vectors, and implemented feature extraction from multiple measurement vectors.
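A three-layer BP (backpropagation) network of the kind mentioned here can be sketched generically. This is an illustrative input-hidden-output network trained by gradient descent on squared error; the layer sizes, learning rate, and the toy XOR data are assumptions, and the sparse-representation inputs of the actual work are not reproduced.

```python
import numpy as np

def train_bp(X, T, hidden=8, lr=1.0, epochs=10000, seed=0):
    """Minimal three-layer BP network: one hidden layer, sigmoid units,
    full-batch gradient descent on the squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, T.shape[1])) * 0.5
    b2 = np.zeros(T.shape[1])
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # forward pass, hidden layer
        Y = sigmoid(H @ W2 + b2)          # forward pass, output layer
        dY = (Y - T) * Y * (1 - Y)        # output delta (squared error + sigmoid)
        dH = (dY @ W2.T) * H * (1 - H)    # error propagated back to hidden layer
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return W1, b1, W2, b2

# XOR as a tiny smoke test of learning a nonlinear mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
W1, b1, W2, b2 = train_bp(X, T)
```

In the work described above, the network inputs would be the sparse representations computed from the multiple measurement vectors rather than raw data.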

The feature extraction method based on sparse representation can improve the coupling characteristics of the neural network, thus greatly improving its learning speed.