Li*, Zhao*, Xiao*, and Wang*: Face Recognition Based on the Combination of Enhanced Local Texture Feature and DBN under Complex Illumination Conditions

# Face Recognition Based on the Combination of Enhanced Local Texture Feature and DBN under Complex Illumination Conditions

Abstract: To combat the adverse impact of illumination variation in the face recognition process, an effective and feasible algorithm is proposed in this paper. Firstly, an enhanced local texture feature is presented by applying the central symmetric encoding principle to the fused component images acquired from wavelet decomposition. The proposed local texture features are then combined with a Deep Belief Network (DBN) to obtain robust deep features of face images under severe illumination conditions. Abundant experiments with different test schemes are conducted on both the CMU-PIE and Extended Yale-B databases, which contain face images under various illumination conditions. Compared with DBN, LBP combined with DBN, and CSLBP combined with DBN, our proposed method achieves the most satisfying recognition rate regardless of the database used, the test scheme adopted, or the illumination condition encountered, especially for face recognition under severe illumination variation.

Keywords: Deep Belief Network , Enhanced Local Texture Feature , Face Recognition , Illumination Variation

## 1. Introduction

Face recognition has been a hotspot in both biometric recognition and commercial activity in the past few years due to its friendliness and convenience [1]. Under constrained circumstances with frontal face images, most previous studies can achieve satisfying recognition rates [2,3]. However, in practical applications of face recognition, many unconstrained factors occur, such as occlusion, pose variation, complex background, and illumination variation, the last of which is common and inevitable. Illumination variation can greatly change the grayscale distribution of the candidate face images due to the different intensity and incidence angle of the ambient lighting [4,5].

In order to decrease the impact of illumination variation on face recognition, two categories of solution are normally studied: one is conducting illumination preprocessing, and the other is extracting effective face representations that are insensitive to illumination change. However, most preprocessing methods remove useful information to some extent while eliminating the effect of illumination change [6]. Therefore, effective face representation that is insensitive to illumination change has become a challenging issue and attracted much attention. In recent years, many face feature representation methods have been studied. Local binary pattern (LBP) [7-10] is an effective local texture descriptor, widely used in face recognition because of its advantages in image texture description. However, its dimension is relatively high, and its excessively detailed description makes it sensitive to noise. On the basis of LBP, a modified central symmetric local binary pattern (CSLBP) descriptor was proposed [11-13]. Due to the center symmetric principle CSLBP adopts, its dimension is much lower than that of LBP and it is relatively robust to noise. However, for face images with severe illumination effects, it still cannot achieve satisfying results.

Hence, many researchers aim to explore deeper and more robust feature representation methods on the basis of these shallow-layer features, and deep learning appears to be a feasible way [14,15]. Deep learning simulates the organizational structure of the human brain: it can obtain more precise and efficient high-level feature representations by combining low-level features [16-18], and the feature extraction process is automatic, without artificial interference. However, deep learning methods may ignore local features when the input of the multi-layer network is a pixel-level image. Liang, Zhang, and others [19-21] propose using LBP features as the input of the deep learning network, which improves the performance of both LBP and the deep learning algorithm. However, under severe illumination effects, the results still cannot meet the requirements of practical application.

Therefore, this paper presents an effective and feasible way to extract robust deep features of face images under severe illumination conditions, based on combining the enhanced CSLBP (ECSLBP) with the Deep Belief Network (DBN) [22-24], an effective deep learning network. This paper is organized as follows: Section 2 describes the proposed ECSLBP descriptor and the construction of the DBN combined with ECSLBP. In Section 3, extensive experimental results are presented and discussed to show the validity of the proposed algorithm. Conclusions are drawn in Section 4.

## 2. Technical Approach

### 2.1 Center Symmetric Local Binary Pattern

Fig. 1.

Encoding process of the CSLBP descriptor.

The original local binary pattern has been proven effective for local texture feature description. However, features extracted by LBP are excessively detailed and high-dimensional, which leads to high computational complexity and sensitivity to noise. To improve its performance, the central symmetric local binary pattern [12] was derived from LBP. It applies the center symmetric encoding principle to describe each pixel of the image: by comparing the gray value variation between symmetric pixel pairs around the center pixel, the texture feature is extracted.

As shown in Fig. 1, [TeX:] $$p _ { i } ( i = 0 , \ldots , K - 1 )$$ denotes the K pixels sampled around the center pixel. Each centrally symmetric pair [TeX:] $$\left( p _ { i } , p _ { i + K / 2 } \right)$$ is compared, and the comparison function g(x) is calculated as follows:

##### (1)
[TeX:] $$g ( x ) = \left\{ \begin{array} { l l } { 1 , } \ { x \geq T } \\ { 0 , \text { otherwise } } \end{array} \right.$$

where T is a threshold representing the image intensity variation. CSLBP has lower computational complexity due to its calculation rule, and it is more robust to some extent compared with the original LBP [12,13].
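The encoding above can be sketched in a few lines of NumPy. This is a minimal illustration assuming K = 8 neighbours at radius 1; `cslbp` is an illustrative helper name, not from the paper:

```python
import numpy as np

def cslbp(img, T=0.01):
    """Minimal CSLBP sketch for an 8-neighbour (K=8, R=1) layout.

    Each pixel is coded by comparing the 4 centre-symmetric pixel
    pairs around it: bit i = 1 if p_i - p_{i+4} >= T, giving a
    4-bit code (0..15) instead of LBP's 8-bit code (0..255).
    """
    img = img.astype(np.float64)
    # 8 neighbours in circular order: E, NE, N, NW, W, SW, S, SE
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(4):  # the 4 centre-symmetric pairs
        dy1, dx1 = offsets[i]
        dy2, dx2 = offsets[i + 4]
        p1 = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        p2 = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        code |= ((p1 - p2 >= T).astype(np.uint8) << i)
    return code
```

The 4-bit code is why CSLBP's histogram has only 16 bins against LBP's 256, which is the dimension reduction the text refers to.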

### 2.2 Enhanced Center Symmetric Local Binary Pattern

However, the performance of CSLBP on images under severe illumination variation is still not satisfying. To compensate for this shortcoming, an improved CSLBP based on the combination of wavelet transform and CSLBP is presented in this section; the procedure is detailed in the following part. Wavelet decomposition is conducted on the face image to acquire the corresponding low frequency and high frequency component images [25]. The decomposition results of the face images are shown as follows:

Fig. 2.

Wavelet decomposition of images under normal illumination (a) and severe illumination variation (b).

As shown in Fig. 2(a), four sub-band images are generated by wavelet decomposition of the two-dimensional image: the low frequency sub-band, also known as the approximate component image A, which contains the main information of the image, and three high frequency sub-band images H, V, and D, which respectively reflect the horizontal, vertical, and diagonal direction information of the image. As shown in Fig. 2(b), for an image under severe illumination variation, such as a very dark environment, the wavelet decomposition retains much less useful information than for an image under a good illumination condition.

Hence, a nonlinear grayscale enhancement transform is introduced as a preprocessing step before wavelet decomposition. After the nonlinear grayscale enhancement, the wavelet decomposition result of the very dark image improves greatly, becoming comparable even to that of images under normal illumination conditions. The results of the wavelet decomposition are shown in Fig. 3.
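The enhancement-plus-decomposition step can be sketched as follows. The paper specifies neither the exact nonlinear transform nor the wavelet basis, so a log mapping normalised to [0, 1] and a one-level Haar decomposition are assumed here; `log_enhance` and `haar_dwt2` are illustrative names:

```python
import numpy as np

def log_enhance(img):
    """Nonlinear grayscale enhancement. The paper does not give the
    exact transform, so a log mapping normalised to [0, 1] is assumed;
    it stretches dark regions far more than bright ones."""
    img = img.astype(np.float64)
    return np.log1p(img) / np.log1p(max(img.max(), 1.0))

def haar_dwt2(img):
    """One-level 2-D Haar decomposition into the A, H, V, D sub-bands
    (even height and width assumed)."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    A = (a + b + c + d) / 4.0   # low-frequency approximation
    H = (a + b - c - d) / 4.0   # horizontal detail
    V = (a - b + c - d) / 4.0   # vertical detail
    D = (a - b - c + d) / 4.0   # diagonal detail
    return A, H, V, D
```

A production implementation would typically use a wavelet library rather than this hand-rolled Haar step, but the sketch shows how the four sub-bands of Fig. 2 arise.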

After acquiring the component images, wavelet fusion and encoding with the center symmetric encoding principle are conducted. The process is illustrated in Fig. 4.

Fig. 3.

Wavelet decomposition after nonlinear grayscale enhancement.

Fig. 4.

Fusion process.

As shown in Fig. 4, only the vertical and horizontal component images are reserved to construct the improved descriptor. The vertical and horizontal component images are fused according to the fusion rules illustrated in Fig. 4, and the center symmetric encoding principle is then applied to describe the fused data. The improved feature extraction method is named the enhanced central symmetric local binary pattern. The extracted features describe the images robustly, since the combination of wavelet fusion and the center symmetric encoding principle not only suppresses the influence of severe illumination variation but also enhances the information that is effective for identification.
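Since the exact fusion rule is given only in Fig. 4, the sketch below assumes a simple per-pixel maximum-magnitude rule for combining the horizontal and vertical detail sub-bands; the fused result would then be encoded with the center symmetric principle as in Section 2.1. `fuse_hv` is an illustrative name:

```python
import numpy as np

def fuse_hv(H, V):
    """Hypothetical per-pixel fusion of the horizontal (H) and
    vertical (V) detail sub-bands: keep whichever coefficient has
    the larger magnitude, preserving the strongest edge response
    at each location."""
    return np.where(np.abs(H) >= np.abs(V), H, V)
```

A maximum-magnitude rule is a common choice in wavelet fusion because detail coefficients with large magnitude correspond to salient edges; the paper's actual rule may differ.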

### 2.3 Deep Belief Network

A deep learning architecture is an unsupervised multi-layer neural network in which the output of each layer is used as the input of the next. The learning aim is to make the original input information and the final output information as similar as possible by constructing the network architecture and training its parameters. Some typical deep architectures have been proposed, such as the DBN [23,24] and the Convolutional Neural Network (CNN) [26]. The DBN consists of a number of unsupervised Restricted Boltzmann Machines (RBM), each of which contains a visible layer and a hidden layer. A three-layer DBN model is demonstrated in Fig. 5.

Fig. 5.

DBN structure of three-layer RBM model.

As shown, v represents the visible layer and [TeX:] $$h ^ { i } ( i = 1,2 , \ldots )$$ represents the i-th hidden layer. By training and expressing the input data through the multi-layer network, essential features that reveal hidden information and high-order correlations of the data can be extracted.

For a DBN with c layers, the joint probability distribution between the visible units and the hidden units can be represented by the following equation:

##### (2)
[TeX:] $$f \left( v , h ^ { 1 } , h ^ { 2 } , \ldots , h ^ { c } \right) = F ( v | h ^ { 1 } ) F \left( h ^ { 1 } | h ^ { 2 } \right) \cdots F \left( h ^ { c - 2 } | h ^ { c - 1 } \right) F \left( h ^ { c - 1 } , h ^ { c } \right)$$

Here, [TeX:] $$v = h ^ { 0 }$$ denotes the visible layer of the DBN, and [TeX:] $$h ^ { i } ( i = 1,2 , \ldots , c )$$ denotes the i-th hidden layer.

Two adjacent hidden units hi and hi+1 should satisfy the following formulas:

##### (3)
[TeX:] $$F \left( h ^ { i } | h ^ { i + 1 } \right) = \prod _ { k } F \left( h _ { k } ^ { i } | h ^ { i + 1 } \right)$$

##### (4)
[TeX:] $$F \left( h _ { k } ^ { i } = 1 | h ^ { i + 1 } \right) = \delta \left( b _ { k } ^ { i } + \sum _ { u } W _ { k u } ^ { i } h _ { u } ^ { i + 1 } \right) , \quad \delta ( x ) = \frac { 1 } { 1 + \exp ( - x ) }$$

where [TeX:] $$F \left( h ^ { i } , h ^ { i + 1 } \right)$$ represents an RBM model, [TeX:] $$b _ { k } ^ { i }$$ represents the bias of the k-th unit in the i-th layer, and [TeX:] $$W _ { k u } ^ { i }$$ denotes the weight between the i-th layer and the (i+1)-th layer.

For each RBM model [TeX:] $$F \left( h ^ { i } , h ^ { i + 1 } \right)$$, its energy function can be expressed as:

##### (5)
[TeX:] $$E ( v , h | \varphi ) = - \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { m } v _ { i } W _ { i j } h _ { j } - \sum _ { i = 1 } ^ { n } c _ { i } ^ { \prime } v _ { i } - \sum _ { j = 1 } ^ { m } c _ { j } ^ { * } h _ { j }$$

where [TeX:] $$\varphi = \left\{ W _ { i j } , c _ { i } ^ { \prime } , c _ { j } ^ { * } \right\}$$ denotes the parameters of the RBM, [TeX:] $$W _ { i j }$$ is the weight between visible unit i and hidden unit j, [TeX:] $$c _ { i } ^ { \prime } \text { and } c _ { j } ^ { * }$$ denote the visible unit bias and hidden unit bias respectively, and n and m are the numbers of visible units and hidden units.

Training the DBN model consists of pre-training and fine-tuning procedures. Pre-training trains each RBM layer by layer; an unsupervised greedy algorithm can be adopted to train each RBM [20], during which the learned features of one RBM are fed into the next RBM as its input data. After the pre-training procedure finishes and the DBN is constructed, the back propagation algorithm is used to optimize the whole network and obtain the final DBN.
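As a concrete illustration of a single contrastive-divergence (CD-1) update for one binary RBM, consistent with the conditional probability of Eq. (4) and the energy of Eq. (5), the following NumPy sketch may help; the learning rate, seed, and sampling details are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update for a binary RBM.

    v0: (batch, n_vis) data; W: (n_vis, n_hid) weights; b_vis, b_hid:
    biases. Hidden probabilities follow the sigmoid rule of Eq. (4);
    the gradient approximates the log-likelihood of the energy model
    in Eq. (5) by a single Gibbs step.
    """
    ph0 = sigmoid(v0 @ W + b_hid)                     # P(h=1|v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden
    pv1 = sigmoid(h0 @ W.T + b_vis)                   # reconstruct v
    ph1 = sigmoid(pv1 @ W + b_hid)                    # P(h=1|v1)
    batch = v0.shape[0]
    # positive phase minus negative phase
    W = W + lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
    b_vis = b_vis + lr * (v0 - pv1).mean(axis=0)
    b_hid = b_hid + lr * (ph0 - ph1).mean(axis=0)
    return W, b_vis, b_hid
```

Repeating this step over many mini-batches trains one RBM; the greedy procedure then moves on to the next layer.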

### 2.4 Face Recognition Based on ECSLBP and DBN

However, when pixel-level images are imported to the DBN directly, recognition performance normally declines, since the DBN tends to ignore the local features of the images. Especially for face images in complex illumination environments, computational complexity increases due to information redundancy, and robustness is weakened by noise. Hence, a novel face recognition method based on the combination of local texture features and DBN is proposed in this paper. Considering that the original LBP has a relatively high dimension and is sensitive to illumination variation, the presented ECSLBP is adopted to extract the local texture feature. The extracted features are then used to construct and train the DBN, which enables the multi-layer network to learn and extract deep features more efficiently and improves the recognition rate accordingly.

Given that ECSLBP features describe local texture well and are robust to severe illumination changes, they are used in this paper to initialize the DBN, through pre-training and fine-tuning, in order to excavate the deep features of the sample images more effectively.

Pre-training: The main purpose of pre-training is to obtain the network parameters of each RBM layer by layer. In this paper, ECSLBP features are imported to the bottom visible layer to initialize the DBN. The weight parameters of the first layer W1 and the hidden layer h1 are trained through the contrastive divergence algorithm. After the first layer's network parameters are obtained, its output is imported to the second layer as the input data. In this way, the network parameters of each RBM are obtained layer by layer. In this paper, the DBN has three layers.
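The greedy layer-wise procedure just described can be sketched as follows, with `train_rbm` standing in for any single-RBM trainer (such as a CD-1 loop); the function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_dbn(data, layer_sizes, train_rbm, epochs=10):
    """Greedy layer-wise pre-training sketch.

    Each RBM is trained on the (mean-field) hidden activations of the
    previous one; `train_rbm(inputs, n_hid, epochs)` is any routine
    returning (W, b_hid) for the given input, e.g. contrastive
    divergence. For this paper, `data` would be the ECSLBP features
    and `layer_sizes` e.g. [130, 100, 50].
    """
    inputs, params = data, []
    for n_hid in layer_sizes:
        W, b_hid = train_rbm(inputs, n_hid, epochs)
        params.append((W, b_hid))
        inputs = sigmoid(inputs @ W + b_hid)  # feed the next layer
    return params
```

After pre-training, the stacked weights initialize a feed-forward network that the fine-tuning step optimizes with back propagation.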

Fine-tuning: After pre-training the multiple layers DBN, some tagged training data are imported to the constructed DBN in the pre-training step, and the back propagation algorithm is used to optimize the network parameters.

The algorithm framework is shown in Fig. 6.

Fig. 6.

Illustration of the DBN framework.

## 3. Experimental Results

In order to verify the effectiveness of the proposed algorithm under severe illumination conditions, the Extended Yale-B and CMU-PIE face databases are used in the following experiments, both of which contain face images captured under severe illumination variation.

### 3.1 CMU-PIE

The CMU-PIE database contains face images of 68 people with obvious illumination variation and slight pose and expression variation. In this experiment, a total of 200 sample images with severe illumination variation are selected from the CMU-PIE database to test the effectiveness of the proposed algorithm. These images come from 20 persons, with 10 face images under different illumination per person. We use the entire images, including background, for recognition without face detection and segmentation. Using entire images with background is more challenging than using pure face images; we do so in order to verify the robustness of the proposed method for face recognition with background. Sample images utilized in this experiment are shown in Fig. 7.

Fig. 7.

Examples of CMU-PIE face image.

As shown in Fig. 7, face images of one person exhibit great variations, such as uneven grayscale distribution and severe shadow or reflection, due to changes in the ambient light and different backgrounds. The first seven images are used to form the training subset, and the remaining three form the test subset. The number of iterations is 50 and the DBN structure is 130-100-50. To further verify the superiority of the proposed algorithm based on the combination of ECSLBP and DBN, other typical algorithms are run on this database too. The experimental results are shown as follows.

Fig. 8.

Recognition rate comparison on CMU-PIE database.

The comparison between the proposed method, DBN, LBP combined with DBN, and CSLBP combined with DBN is shown in Fig. 8. The rank-1 recognition rates of DBN, LBP+DBN, and CSLBP+DBN are 73.33%, 71.67%, and 80% respectively, while the proposed ECSLBP combined with DBN achieves 95%, at least 15% higher than the other three methods. This shows that, compared with the three existing algorithms, our proposed method is more robust to illumination variation and improves recognition performance effectively.

### 3.2 Extended Yale-B

The Extended Yale-B face database, established by the Yale University Center for Computational Vision and Control, is the most commonly used database for testing the robustness of algorithms under severe illumination variation. 640 face images from Extended Yale-B are used in this experiment. To clarify the effectiveness of the proposed method under different illumination conditions, these face images are divided into 5 subsets according to their lighting conditions (subset 1 has 7 images, subset 2 has 12 images, subset 3 has 12 images, subset 4 has 14 images, and subset 5 has 19 images). When constructing the deep belief network, the number of iterations is 50 and the network structure is 50-30-20. In this part, two experimental schemes are designed to further demonstrate the effectiveness of the proposed method.

Experiment One

In the first experiment, one image (the first of each subset) is selected from each subset to form the training set, and the remaining images of each subset are used as test subsets, so there are one training set and five test sets. Examples of the training set and test sets are shown in Fig. 9. To further illustrate the effectiveness of the proposed method, comparisons with other algorithms are also conducted. The experimental results on the five test sets are shown in Fig. 10.

Fig. 9.

First experiment scheme on the Extended Yale-B database.

Fig. 10.

Results comparison of the first experiment on Extended Yale-B database.

It can be seen from the experimental results that, for the first and second subsets, the recognition rates of the proposed method and CSLBP+DBN are almost equal, and both are higher than those of LBP+DBN and DBN. As the illumination condition worsens, ECSLBP combined with DBN achieves much higher recognition rates on subsets 3, 4, and 5. Especially on subset 5, although its recognition rate declines to nearly 70% compared with the ideal illumination condition, the recognition rates of the other three algorithms face a sharper drop and are at least 40% lower than our proposed method. This means that the proposed method adapts to varying illumination better than the other three methods; especially when the face images are under really bad illumination conditions, the proposed method shows a significant advantage.

Experiment Two

In practical applications, registered face images are usually captured under relatively ideal lighting conditions, while the recognition system normally encounters face images under various lighting conditions during the recognition phase. Hence, in the second experiment, to verify the effectiveness of the proposed method for practical application, subset 1, captured under a relatively ideal lighting condition, is selected as the training set, while the remaining subsets are used as the test subsets. The experimental scheme is shown in Fig. 11 and the experimental results are shown in Fig. 12.

Fig. 11.

Second experiment scheme on the Extended Yale-B database.

Fig. 12.

Results comparison of the second experiment on Extended Yale B database.

It can be seen from Fig. 12 that, for subset 2, the recognition rates of the proposed method, CSLBP+DBN, and LBP+DBN are all 100%, since the illumination condition of subset 2 is very similar to that of subset 1. For subset 3, ECSLBP combined with DBN achieves a 94.17% recognition rate, while the recognition rates of the other three algorithms drop to 88.33%, 70.83%, and 40% respectively. As the illumination condition becomes much worse, although the recognition rate of the proposed method also drops, it remains much more robust than the other three algorithms, which all drop to 20%–30%. These experimental results show that our proposed method is much more robust to illumination variation and can satisfy the demands of practical face recognition applications.

## 4. Conclusion

Face recognition performance normally declines significantly due to inevitable illumination variation. Aiming at this problem, we propose a novel, effective, and feasible identification method in this paper. Firstly, image fusion based on wavelet decomposition is conducted to eliminate the useless information caused by illumination variation. Then, inspired by CSLBP, an ECSLBP is obtained by applying the central symmetric encoding principle to the fused component images to extract relatively robust features of the face image. On this basis, ECSLBP and DBN are combined to compensate for the DBN's tendency to ignore local information and to extract discriminative, illumination-robust features. The effectiveness of the proposed method is verified by extensive experiments on both the CMU-PIE and Extended Yale-B databases. Two test schemes are designed to further testify to the advantages of the proposed method against the DBN, LBP combined with DBN, and CSLBP combined with DBN algorithms. The experimental results show that our proposed method greatly improves the recognition rate compared with the other three methods; especially for face images under really bad lighting conditions, its advantage is significant.

## Acknowledgement

This work is supported by the National Key R&D Program of China under Grant 2017YFB0802300, the National Natural Science Foundation of China under Grant No. 61503005 and Research Project of Beijing Municipal Education Commission under Grant No. SQKM201810009005.

## Biography

##### Chen Li
https://orcid.org/0000-0001-5983-5895

She received her B.S. degree from the University of Science and Technology Beijing in 2001, and her Ph.D. degree in 2013 from the University of Science and Technology Beijing. She is currently a lecturer in the School of Computer Science, North China University of Technology, Beijing, China. Her research interests include image processing, pattern recognition, and 3D reconstruction.

## Biography

##### Shuai Zhao
https://orcid.org/0000-0002-8408-6977

He received B.S. degree from Jinzhong University in 2015. He is currently a postgraduate student in North China University of Technology. His current research interests include image processing and pattern recognition.

## Biography

##### Ke Xiao
https://orcid.org/0000-0002-8654-1339

He received his B.S. degree from Jilin University in 2002, his M.S. degree from Nankai University in 2005, and his Ph.D. degree from Beijing University of Posts and Telecommunications in 2008. He is currently an associate professor at the School of Computer Science, North China University of Technology. His main research interests include communication security and pattern recognition.

## Biography

##### Yanjie Wang
https://orcid.org/0000-0003-3263-5978

He received B.S. degree from North China University of Technology in 2016. He is currently a postgraduate student in North China University of Technology. His research interests include image processing and pattern recognition.

## References

• 1 M. P. Beham, S. M. M. Roomi, "A review of face recognition methods," International Journal of Pattern Recognition and Artificial Intelligence, vol. 27, no. 4, article no. 1356005, 2013. doi: 10.1142/S0218001413560053
• 2 P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, M. Sharpe, "FRVT 2006 and ICE 2006 large-scale experimental results," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 831-846, 2010. doi: 10.1109/TPAMI.2009.59
• 3 P. J. Grother, G. W. Quinn, P. J. Phillips, "Report on the evaluation of 2D still-image face recognition algorithms," NIST Interagency Report 7709, 2010. doi: 10.6028/nist.ir.7709
• 4 X. Zhao, Z. He, S. Zhang, S. Kaneko, Y. Satoh, "Robust face recognition using the GAP feature," Pattern Recognition, vol. 46, no. 10, pp. 2647-2657, 2013. doi: 10.1016/j.patcog.2013.03.015
• 5 K. C. Lin, X. Wang, Y. Q. Tan, "Adaptive region enhancement based facial feature localization under complex illumination," Chinese Journal of Scientific Instrument, vol. 35, no. 2, pp. 292-298, 2014.
• 6 X. Y. Tan, B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635-1650, 2010. doi: 10.1109/TIP.2010.2042645
• 7 L. Lei, D. H. Kim, W. J. Park, S. J. Ko, "Face recognition using LBP Eigenfaces," IEICE Transactions on Information and Systems, vol. 97, no. 7, pp. 1930-1932, 2014. doi: 10.1587/transinf.E97.D.1930
• 8 G. Zhang, J. S. Chen, J. Wang, "Local information enhanced LBP," Journal of Central South University, vol. 20, no. 11, pp. 3150-3155, 2013. doi: 10.1007/s11771-013-1838-7
• 9 W. S. Li, L. Chen, "The LBP integrating neighbor pixels on face recognition," Advanced Materials Research, vol. 756, pp. 3834-3840, 2013. doi: 10.4028/www.scientific.net/amr.756-759.3835
• 10 W. Yuan, L. Huanhuan, W. Kefeng, T. Hengqing, "Fusion with layered features of LBP and HOG for face recognition," Journal of Computer-Aided Design & Computer Graphics, vol. 27, no. 4, pp. 640-650, 2015.
• 11 R. Davarzani, S. Mozaffari, K. Yaghmaie, "Perceptual image hashing using center-symmetric local binary patterns," Multimedia Tools and Applications, vol. 75, no. 8, pp. 4639-4667, 2016. doi: 10.1007/s11042-015-2496-6
• 12 Y. Huixian, H. Dilong, L. Fan, L. Yang, L. Zhao, "Face recognition based on bidirectional gradient center-symmetric local binary patterns," Journal of Computer-Aided Design & Computer Graphics, vol. 29, pp. 130-136, 2017.
• 13 J. D. Li, Z. X. Chen, C. Y. Liu, "Low-resolution face recognition based on blocking CS-LBP and weighted PCA," International Journal of Pattern Recognition and Artificial Intelligence, vol. 30, no. 8, article no. 1656005, 2016.
• 14 K. Gu, G. Zhai, X. Yang, W. Zhang, "Deep learning network for blind image quality assessment," in Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 2014, pp. 511-515. doi: 10.1109/ICIP.2014.7025102
• 15 J. Wu, Y. Yu, C. Huang, K. Yu, "Deep multiple instance learning for image classification and auto-annotation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 3460-3469. doi: 10.1109/CVPR.2015.7298968
• 16 X. Ma, J. Geng, H. Wang, "Hyperspectral image classification via contextual deep learning," EURASIP Journal on Image and Video Processing, vol. 2015, article no. 20, 2015. doi: 10.1186/s13640-015-0071-8
• 17 M. Wu, L. Chen, "Image recognition based on deep learning," in Proceedings of the 2015 Chinese Automation Congress (CAC), Wuhan, China, 2015, pp. 542-546. doi: 10.1109/CAC.2015.7382560
• 18 L. Shao, Z. Cai, L. Liu, K. Lu, "Performance evaluation of deep feature learning for RGB-D image/video classification," Information Sciences, vol. 385, pp. 266-283, 2017. doi: 10.1016/j.ins.2017.01.013
• 19 S. F. Liang, Y. H. Liu, L. C. Li, "Face recognition under unconstrained conditions based on LBP and deep learning," Journal on Communications, vol. 35, no. 6, pp. 154-160, 2014. doi: 10.3969/j.issn.1000-436x.2014.06.020
• 20 W. Zhang, W. Wang, "Face recognition based on local binary pattern and deep learning," Journal of Computer Applications, vol. 35, no. 5, pp. 1474-1478, 2015. doi: 10.11772/j.issn.1001-9081.2015.05.1474
• 21 X. Ma, Q. Sang, "Handwritten signature verification algorithm based on LBP and deep learning," Chinese Journal of Quantum Electronics, vol. 34, no. 1, pp. 23-31, 2017.
• 22 P. Liu, H. Zhang, K. B. Eom, "Active deep learning for classification of hyperspectral images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 2, pp. 712-724, 2017. doi: 10.1109/JSTARS.2016.2598859
• 23 P. Zhong, Z. Gong, S. Li, C. B. Schonlieb, "Learning to diversify deep belief networks for hyperspectral image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 6, pp. 3516-3530, 2017. doi: 10.1109/TGRS.2017.2675902
• 24 Y. Chen, X. Zhao, X. Jia, "Spectral-spatial classification of hyperspectral data based on deep belief network," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2381-2392, 2015. doi: 10.1109/JSTARS.2015.2388577
• 25 S. Zhai, Q. Cao, W. Xie, "Hierarchical face recognition framework based on wavelet feature and sparse representation classification," Computer Engineering and Applications, vol. 52, no. 14, pp. 142-145, 2016.
• 26 L. Chang, X. M. Deng, M. Q. Zhou, Z. K. Wu, Y. Yuan, S. Yang, H. A. Wang, "Convolutional neural networks in image understanding," Acta Automatica Sinica, vol. 42, no. 9, pp. 1300-1312, 2016.