Hot Spot Detection of Thermal Infrared Image of Photovoltaic Power Station Based on Multi-Task Fusion

Xu Han, Xianhao Wang, Chong Chen, Gong Li, and Changhao Piao

Abstract: Manual inspection of photovoltaic (PV) panels cannot meet the inspection requirements of large-scale PV power plants. We present a hot spot detection and positioning method that detects hot spots in batches and locates their latitudes and longitudes. First, a network based on the YOLOv3 architecture is utilized to identify hot spots. The innovation is to modify the RU_1 unit in the YOLOv3 model for hot spot detection in the far field of view and to add a neural network residual unit for fusion. In addition, because of the misidentification problem in infrared images of solar PV panels, the DeepLab v3+ model is adopted to segment the PV panels and filter out misidentifications caused by bright spots on the ground. Finally, the latitude and longitude of each hot spot are calculated with a geometric positioning method utilizing known information such as the drone's yaw angle, shooting height, and lens field of view. The experimental results indicate that the hot spot recognition accuracy is above 98%. When the drone is kept 25 m off the ground, the hot spot positioning error is at the decimeter level.

Keywords: DeepLab v3+, Geometric Location, Hot Spot Location, Hot Spot Recognition, YOLOv3

1. Introduction

An increasing number of people favor solar energy because it is clean, pollution-free, and sustainable, and countries around the world are actively developing solar power technologies. However, the overall quality of photovoltaic (PV) power plants remains unsatisfactory. In particular, the power decline of PV modules is severe, and the longer a module is in service, the higher the probability of damage. Because PV modules are the core of a power station, even a minor fault can cause substantial economic losses. PV power generation projects are typically large and remote, which makes power station inspection highly complex, and manual inspection cannot meet the requirements for future inspections of large-scale PV power plants. PV power plants primarily utilize solar panels to generate electricity. Affected by many factors, the panels are easily shaded, which produces different sunlight intensities across the power generation components; this causes excessive local temperatures, known as the hot spot effect [1].

Given the many problems caused by hot spots in PV power plants, the following hot spot detection solutions have been proposed.

1) Parallel bypass diode method [2]: This type of method is commonly employed in PV modules. Its primary purpose is to reduce the reverse voltage and current across the shaded part of a cell using a bypass diode. However, the approach has shortcomings: the diode consumes excessive power, which lowers the output efficiency of the plant.

2) Fault detection methods based on thermal imaging technology [3]: Because PV modules show noticeable temperature differences under different working conditions, a fault diagnosis method based on infrared image analysis has been proposed. To rectify its deficiencies in identification and localization, this study proposes an optimization strategy.

3) Fault detection method based on current and voltage [4]: This type of method is primarily adopted for the fault diagnosis of PV arrays. Based on the series-parallel (SP) structure of PV arrays, changes in array voltage and current under fault conditions are utilized to detect faults.

Wen et al. [5] calculated the equivalent series resistance of modules based on an equivalent computational model of PV arrays and then determined their operational status with a parameter comparison method to judge whether the arrays were malfunctioning. Guo and Xu [6] combined image pre-processing, transfer learning, an improved feature extraction network, and an improved anchor frame selection scheme based on the original Faster R-CNN to obtain a hot spot defect detection model. However, the images captured by a drone carrying a thermal imager are low-resolution infrared images, and the hot spots are usually small, resulting in low hot spot identification accuracy and difficulty in pinpointing hot spot latitudes and longitudes. The main contributions of this study are as follows:

· The UAV-YOLO model is utilized to strengthen the features of the front (shallow) part of the network so that far-field hot spots can be identified. Moreover, the DeepLab v3+ model is adopted to segment PV panels and filter out misidentifications caused by bright spots on the ground.

· By utilizing available information such as the unmanned aerial vehicle (UAV) yaw angle, the UAV shooting height, the lens field-of-view angle, the lens focal length, the latitude and longitude of the image center point, and the pixel coordinates of the hot spot and the center point in the pixel coordinate system, the latitude and longitude of the hot spot can be calculated with a geometric positioning method.

2. Hot Spot Recognition and Location Model

The entire process can be divided into two parts: hot spot identification and hot spot location. The UAV is equipped with a gimbal to obtain visible and infrared images, and the improved YOLOv3 model is employed to identify hot spots. The DeepLab v3+ model is utilized to segment the infrared images and filter out misidentifications caused by bright spots on the ground. The identified hot spots are then geometrically located, and their latitude and longitude information is output, which is convenient for staff to check and repair the panels later.

2.1 Improving YOLOv3 into the UAV-YOLO Model

An optimized hot spot detection model was established to rectify the low accuracy of far-field hot spot detection from the UAV's perspective. In addition, the structure of the model and the hot spot detection process are explained. This section introduces the improved fusion method for neural network residual units and the optimized backbone network structure [7].

2.1.1 Residual unit structure in fusion network

To improve the accuracy of UAV far-field hot spot detection, this paper proposes an improved YOLOv3 model designed for the drone's perspective, referred to here as the UAV-YOLO model. This model significantly improves the accuracy of UAV hot spot detection in the far field. The proposed UAV-YOLO model structure is illustrated in Fig. 1 [8].

Fig. 1.

UAV-YOLO model structure: (a) UAV-YOLO structure, (b) Res unit_1, (c) Resblock_body, and (d) DarknetConv2D_BN_Leaky [8].

A single DBL unit, presented in Fig. 1(d), mainly comprises a convolutional layer connected to a batch normalization layer and a leaky ReLU activation function. As illustrated in Fig. 1(b), the RU_1 unit adds its input to the result of two consecutive DBL units. The Resn units are presented in Fig. 1(c): a Resn unit first performs zero-padding and one DBL operation on the input and then connects n RU_1 units. The residual unit structure of the UAV-YOLO model is depicted in pink (the RU_2 unit), and its detailed diagram is the RU_2 residual unit in Fig. 2. Two RU_1 units are combined into one RU_2 unit by overlaying the network layers of feature maps of the same size. Compared with two separate RU_1 units, the RU_2 unit adds one layer to the network, and a red network branch is added to connect the first DBL outputs of the two RU_1 units, as illustrated in Fig. 2 by the red added unit and the network branch above it.
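To make the fusion concrete, the following is a minimal tf.keras sketch of the DBL, RU_1, and RU_2 units as we read them from Figs. 1 and 2. The filter counts, the 1×1/3×3 bottleneck layout, and the exact fusion point of the extra branch are our assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the DBL, RU_1, and RU_2 units (assumed layout).
import tensorflow as tf
from tensorflow.keras import layers

def dbl(x, filters, kernel_size):
    """DBL: Conv2D + batch normalization + leaky ReLU (Fig. 1(d))."""
    x = layers.Conv2D(filters, kernel_size, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU(alpha=0.1)(x)

def ru_1(x, filters):
    """RU_1: the input is added to the result of two consecutive DBL units."""
    shortcut = x
    x = dbl(x, filters // 2, 1)  # 1x1 bottleneck (assumed, as in YOLOv3)
    x = dbl(x, filters, 3)
    return layers.Add()([shortcut, x])

def ru_2(x, filters):
    """RU_2: two RU_1 units fused, with an extra branch (red in Fig. 2)
    connecting the first DBL outputs of the two RU_1 units."""
    shortcut = x
    a1 = dbl(x, filters // 2, 1)      # first DBL of the first RU_1
    y = dbl(a1, filters, 3)
    y = layers.Add()([shortcut, y])   # end of the first RU_1
    a2 = dbl(y, filters // 2, 1)      # first DBL of the second RU_1
    a2 = layers.Add()([a1, a2])       # extra fusion branch between the units
    y2 = dbl(a2, filters, 3)
    return layers.Add()([y, y2])

# Example: apply one RU_2 unit to a dummy 76x76x256 feature map.
inputs = tf.keras.Input(shape=(76, 76, 256))
model = tf.keras.Model(inputs, ru_2(inputs, filters=256))
```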

Fig. 2.

RU_2 residual unit fusion diagram [8].

First, the UAV's perspective image is adjusted to the size set by the detection model (608×608×3) and imported into the model's backbone network for the first convolution operation. In the Res2 unit, the convolution size is 3×3 with a stride of two, so the feature map size becomes 304×304; it is then downsampled twice more by the convolution operations in Res3 and Res7, reducing the spatial size to 76×76. The first branch of the model appears at ① in Fig. 1(a); one branch is concatenated in the depth dimension (the cat module in Fig. 1(a)) with the deep features (at ④ in Fig. 1(a)) obtained by convolution and upsampling later in the network. After concatenation, five convolution operations are performed, plus one convolution operation to adjust the feature map size. Finally, a convolution layer that sets the output size changes the feature map to 76×76×18, giving the output module y3 of the YOLO layer in Fig. 1(a).

The other branch continues the convolution process to the right in the backbone, downsampling through a Res1 unit and convolving through alternating RU_2 and RU_1 units, which reduces the feature map to 38×38. The second branch occurs at ② in Fig. 1(a); its operation is similar to that of the y3 branch, concatenating the deep feature information (at ③ in Fig. 1(a)), after convolution and upsampling, with the feature information of this branch, followed by convolution operations, and finally yielding the YOLO layer output module y2 with a feature map size of 38×38×18. One more downsampling is performed with a Res1 unit, followed by RU_2 and RU_1 convolution operations; a multilayer convolution then resizes the feature map to 19×19×18, giving the YOLO layer y1 in Fig. 1(a).
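As a sanity check on these shapes, the three YOLO heads correspond to strides of 8, 16, and 32 on the 608×608 input, and the 18 output channels follow from the standard YOLO formula of 3 anchors × (4 box offsets + 1 objectness score + number of classes), with hot spot as the single class:

```python
# Sanity check of the three YOLO head shapes for a 608x608 input.
# Strides 8/16/32 and the channel formula are standard YOLOv3 conventions.
input_size = 608
num_anchors_per_head = 3
num_classes = 1  # "hot spot" is the only class

channels = num_anchors_per_head * (4 + 1 + num_classes)  # box + objectness + class
for head, stride in [('y3', 8), ('y2', 16), ('y1', 32)]:
    grid = input_size // stride
    print(f'{head}: {grid}x{grid}x{channels}')
# Prints y3: 76x76x18, y2: 38x38x18, y1: 19x19x18, matching Fig. 1(a).
```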

2.1.2 Improved backbone network structure of YOLOv3 model

Compared to the YOLOv3 model, the structural difference of the UAV-YOLO model is mainly reflected in its backbone network. The structure of the UAV-YOLO model is illustrated in Fig. 1(a); the Darknet-77 backbone marked by the black dashed box is the backbone network of the UAV-YOLO model, while the backbone network of the YOLOv3 model is illustrated in Fig. 3. A comparison of the YOLOv3 and UAV-YOLO backbone networks is presented in Fig. 4.

Fig. 3.

YOLOv3 model backbone network structure [8].

A comparison of the two backbone networks demonstrates that the UAV-YOLO backbone has three more layers than the YOLOv3 backbone, and the number of RU_1 units in each Resn unit of the backbone changes. As illustrated in Fig. 4, the YOLOv3 backbone does not contain a residual unit structure like RU_2, and it has 74 layers; red dashed boxes indicate Res1 and RU_1 units. The UAV-YOLO backbone contains three RU_2 units alternately arranged with RU_1 units, for a total of 77 layers. Before the first output of the backbone network, the numbers of RU_1 residual units in the Resn units were changed to two, three, and seven. The numbers of convolution units in the YOLOv3 backbone convolution sets are 1, 2, 8, 8, and 4, whereas the UAV-YOLO backbone adjusts the first three convolution sets to 2, 3, and 7; on the path to the first YOLO output (corresponding to y3 in Fig. 4), the UAV-YOLO model therefore performs one more convolution unit operation. The final detection accuracy indicates that the additional convolution operations help improve hot spot detection accuracy, and allocating the convolution units in this manner is also more convenient for the alternating arrangement of the RU_2 units.

Fig. 4.

Comparison of backbone networks between (a) the YOLOv3 model and (b) the UAV-YOLO model [8].

2.2 PV Panel Segmentation based on DeepLab v3+ Model

Here, the PV panels are segmented with the DeepLab v3+ model to filter out bright spots that do not belong to PV panels. The main design framework of DeepLab v3+ is an encoder-decoder model, with several major innovations [9]. First, a better backbone network is utilized: the ResNet network is improved to increase its receptive field, pooling layers are replaced with atrous separable convolutions, and the receptive field is effectively expanded while reducing the number of calculations and improving computing performance [10]. Second, atrous spatial pyramid pooling (ASPP) is utilized to gather multi-scale information: atrous convolutions with different dilation rates are applied to the features, and the convolution results are stitched together into new features, thereby enhancing the model's ability to recognize the same object at different sizes [11]. Fig. 5 presents a PV panel segmentation diagram based on the DeepLab v3+ model.

As illustrated in Fig. 5, the upper half of the structure is the encoder, and the lower half is the decoder [12]. The encoder utilizes ASPP with four different dilation rates, plus an additional global average pooling branch. The decoder first upsamples the encoder output by a factor of four, concatenates it with the Conv2 low-level features taken from ResNet before downsampling, performs a 3×3 convolution, and finally upsamples by a factor of four to obtain the final result.
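The ASPP branch structure can be sketched as follows. The dilation rates (a 1×1 branch plus rates 6, 12, and 18) and the 256-filter width are common DeepLab v3+ defaults assumed here for illustration; the paper does not report the exact values.

```python
# Minimal sketch of an ASPP module with a global-average-pooling branch.
# Dilation rates and filter width are assumed DeepLab v3+ defaults.
import tensorflow as tf
from tensorflow.keras import layers

def aspp(x, filters=256, rates=(6, 12, 18)):
    h, w = x.shape[1], x.shape[2]
    branches = [layers.Conv2D(filters, 1, padding='same')(x)]  # 1x1 branch
    for r in rates:  # atrous branches with different dilation rates
        branches.append(
            layers.Conv2D(filters, 3, padding='same', dilation_rate=r)(x))
    gap = layers.GlobalAveragePooling2D(keepdims=True)(x)  # image-level branch
    gap = layers.Conv2D(filters, 1)(gap)
    gap = layers.UpSampling2D(size=(h, w), interpolation='bilinear')(gap)
    branches.append(gap)
    x = layers.Concatenate()(branches)  # stitch the branch outputs together
    return layers.Conv2D(filters, 1, padding='same')(x)  # 1x1 projection

# Example: encoder features of size 38x38x512.
inputs = tf.keras.Input(shape=(38, 38, 512))
model = tf.keras.Model(inputs, aspp(inputs))
```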

Fig. 5.

PV panel segmentation diagram based on DeepLab v3+ model.

2.2.1 Experimental steps

The PV panel dataset was labeled with the LabelMe tool: all PV panels in the photos were annotated, yielding label information for every picture. The label information is saved in JSON files, which are batch-converted to PNG masks with the labelme_json_to_dataset command. After sorting and labeling, the pictures were randomly divided into training and test sets at a ratio of 10:1 and fed into the DeepLab v3+ model for training. The goal is to filter out bright spots on the ground and reduce the misidentification of hot spots. Fig. 6(a) shows an infrared image of a PV panel, and Fig. 6(b) shows the corresponding semantic segmentation result produced by the DeepLab v3+ model.
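A minimal sketch of the random 10:1 split described above; the directory layout and file extension are illustrative assumptions.

```python
# Randomly split the labeled mask images into training and test sets (10:1).
# Paths and extension are illustrative assumptions.
import random
import shutil
from pathlib import Path

random.seed(42)
images = sorted(Path('dataset/labeled').glob('*.png'))
random.shuffle(images)

n_test = len(images) // 11  # 10:1 train:test ratio
splits = {'test': images[:n_test], 'train': images[n_test:]}
for subset, files in splits.items():
    out_dir = Path('dataset') / subset
    out_dir.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out_dir / f.name)
```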

Fig. 6.

Comparison of (a) an infrared image of a PV panel and (b) the image after semantic segmentation.

3. Hot Spot Location Model

3.1 Establishing the Mathematical Model for Hot Spot Location

According to the thermal infrared map captured by the quadrotor UAV, the following information can be obtained from the image properties and parameters: the image size is 608×608, the height at the time of shooting is 25 m, and the latitude and longitude of the image center point are recorded. According to actual measurements, the aircraft initially faces north. When the aircraft veers to the right, the yaw angle is in the range (0° to 180°); when it veers to the left, the yaw angle is in the range (0° to −180°). The latitude and longitude information is converted to decimal degrees according to Eq. (1). Distances DA and OA are obtained from the pixel differences between Points D and O, and the angle ∠DOA can then be obtained with a trigonometric function.

(1)
$$\text{decimal degrees} = \text{degrees} + \text{minutes} \times \frac{1}{60} + \text{seconds} \times \frac{1}{3600}$$

The pixel size of the image is 608×608, and the origin of the pixel coordinates is the upper-left corner, with coordinate value (0, 0). Two coordinate systems are established in Fig. 7: the XY coordinate system with Point O (the image center) as the origin, and the latitude-longitude coordinate system. Suppose that Point D is the hot spot in the image, DA is perpendicular to the X-axis, DC is perpendicular to the north-south axis, and DB is perpendicular to the east-west axis. ∠DOC is the angle between Line OD and the north-south axis, ∠COX is the yaw angle, and ∠DOA is the angle between the X-axis and Line OD. From the pixel difference between Points D and O, the distances DA and OA are obtained; tan(∠DOA) equals DA divided by OA, so ∠DOA is obtained by the inverse tangent. For a drone at a fixed height of 25 m, the measured ground distance represented by one pixel in the image is 3.26 cm. From the pixel difference along OD, the actual distance of the OD line is calculated. Finally, OC = OD × cos(∠DOC) gives the actual distance of the hot spot's projection on the meridian relative to the center point, and OB = OD × sin(∠DOC) gives the actual distance of the hot spot's projection on the parallel relative to the center point.
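The pixel-geometry steps above can be collected into a short sketch, assuming the 3.26 cm/pixel scale measured at the 25-m flight height; the helper names are ours, not the paper's.

```python
# Sketch of the pixel-geometry step: Eq. (1), the angle DOA from pixel
# offsets, and the OC/OB projections. Helper names are illustrative.
import math

CM_PER_PIXEL = 3.26  # measured ground distance per pixel at 25 m height

def dms_to_degrees(degrees, minutes, seconds):
    """Eq. (1): decimal degrees = degrees + minutes/60 + seconds/3600."""
    return degrees + minutes / 60.0 + seconds / 3600.0

def angle_doa(hot_px, center_px):
    """Angle between line OD and the X-axis, from pixel differences."""
    da = abs(hot_px[1] - center_px[1])  # vertical pixel difference (DA)
    oa = abs(hot_px[0] - center_px[0])  # horizontal pixel difference (OA)
    return math.degrees(math.atan2(da, oa))

def projections(hot_px, center_px, angle_doc_deg):
    """Metric projections of OD on the meridian (OC) and parallel (OB), in cm."""
    dx = hot_px[0] - center_px[0]
    dy = hot_px[1] - center_px[1]
    od = math.hypot(dx, dy) * CM_PER_PIXEL           # actual OD distance
    oc = od * math.cos(math.radians(angle_doc_deg))  # north-south component
    ob = od * math.sin(math.radians(angle_doc_deg))  # east-west component
    return oc, ob
```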

Fig. 7.

Hot spot map.

Specify ∠a (for example, ∠DOA) as the angle, in the pixel coordinate system, between the X-axis and the line connecting the hot spot and the center point. Specify ∠b (for example, ∠DOC) as the angle between that line and the north-south axis, ensuring that the angle is always acute. The value of ∠b is obtained by a case analysis over the quadrant containing the hot spot and the yaw angle. The determination of ∠DOC in Fig. 7 depends on the UAV's yaw angle and the position of the hot spot in the image. The idea proposed here is to first determine which of the following ranges the yaw angle belongs to: (0° to 90°), (90° to 180°), (0° to −90°), and (−90° to −180°). We then determine which quadrant of the pixel coordinate system the hot spot is in and obtain the desired direction angle by geometric construction.

3.2 Further Positioning Angle

Fig. 8 illustrates the calculation of angle ∠DOC for a yaw angle in the range 0° to 90°, with the corresponding coordinate system set up. Four sign patterns are obtained by subtracting the center pixel coordinates from the hot spot pixel coordinates: (+, −), (−, −), (−, +), and (+, +). Here (+, −) places the hot spot in the first quadrant of the XY coordinate system, (−, −) in the second quadrant, (−, +) in the third quadrant, and (+, +) in the fourth quadrant.
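A compact sketch of this sign-pattern test; recall that the pixel origin is the upper-left corner, so a negative vertical difference means the hot spot lies above the center.

```python
# Determine the XY-quadrant of a hot spot from the sign pattern of
# (hot spot pixel - center pixel), with the pixel origin at the upper left.
def quadrant(hot_px, center_px):
    dx = hot_px[0] - center_px[0]
    dy = hot_px[1] - center_px[1]
    if dx > 0 and dy < 0:
        return 1  # (+, -): upper right
    if dx < 0 and dy < 0:
        return 2  # (-, -): upper left
    if dx < 0 and dy > 0:
        return 3  # (-, +): lower left
    return 4      # (+, +): lower right
```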

Combined with the east-west and north-south coordinate system, the north-south line represents the meridian (longitude line), and the east-west line represents the parallel (latitude line). The calculated distances are directly added to or subtracted from the center's latitude and longitude to obtain the latitude and longitude of the hot spot. On the same meridian, a 1° difference in latitude corresponds to a distance of approximately 111 km. On the same parallel (assuming its latitude is Φ), the arc length corresponding to a 1° difference in longitude is approximately 111,000 × cos(Φ) meters [13]. When the hot spot is to the left of the north-south axis, its longitude equals the longitude of the center point minus the longitudinal distance between them; when it is to the right, its longitude equals the center longitude plus that distance. When the hot spot is above the east-west axis, its latitude equals the center latitude plus the latitudinal distance between them; when it is below, its latitude equals the center latitude minus that distance.
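The following sketch turns the metric offsets into a hot spot latitude and longitude using the 111-km-per-degree approximation and the cos(Φ) longitude scaling; the signed-offset convention is our compact restatement of the four cases just described.

```python
# Convert metric offsets (cm) into hot spot latitude/longitude using
# ~111 km per degree of latitude and 111 km * cos(lat) per degree of
# longitude. Positive north_cm means north of the east-west axis;
# positive east_cm means east of the north-south axis.
import math

METERS_PER_DEG = 111_000.0

def offset_to_lat_lon(center_lat, center_lon, north_cm, east_cm):
    dlat = (north_cm / 100.0) / METERS_PER_DEG
    dlon = (east_cm / 100.0) / (METERS_PER_DEG *
                                math.cos(math.radians(center_lat)))
    return center_lat + dlat, center_lon + dlon
```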

Fig. 8.

Schematic of ∠DOC positioning.

4. Experimental Results

This study utilized the Xiaoguanshan PV power station in Weining City, Guizhou Province, as an experimental site to test the hot spot detection model. The computer configuration was as follows: CPU, Intel i7-7700K; GPU, NVIDIA GeForce GTX 1080 Ti; operating system, Ubuntu 16.04. The software versions were Python 3.5, TensorFlow-GPU 1.3, and MySQL 5.7. LabelMe was employed to create the hot spot and PV panel segmentation datasets for model training and testing. Because our method was developed based on YOLOv3, we compared it with YOLOv3, as presented in Table 1. Fig. 9(a) and 9(b) present the hot spot detection results of the improved YOLOv3 model and of the multi-task fusion hot spot location method proposed here, respectively.

The experimental results indicate that the improved YOLOv3 model improves the hot spot recognition rate. Moreover, after panel segmentation with DeepLab v3+, hot regions on the ground were effectively filtered out.

Fig. 10 shows the visualization software used for marking the latitude and longitude information. The precision of the positioning algorithm was verified by comparing the true longitudes and latitudes of marked hot spots with the predicted values. Table 2 presents three data groups, each containing both true and predicted values. The errors in the first, second, and third groups were 18, 22, and 36 cm, respectively.

Table 1.

Performance comparison with other methods

Method        mAP (%)   IoU (%)   FPS
YOLOv3        95.63     90.34     62.1
Our method    97.91     92.56     51.06

mAP=mean average precision, IoU=intersection-over-union, FPS=frames per second.

Fig. 9.

Multi-task fusion hot spot detection map: (a) detection results based on the improved YOLOv3 model and (b) detection results based on the multi-task fusion algorithm.

Fig. 10.

Visualization software for marking latitude and longitude information.

Table 2.

Analysis of true and predicted values of three groups of hotspots
                   Point 1        Point 2        Point 3
Predicted value
  Longitude      104.7343518    104.7346307    104.7521689
  Latitude        25.0452698     25.0452681     24.9989847
True value
  Longitude      104.7343535    104.7346327    104.7521656
  Latitude        25.0452698     25.0452673     24.9989832
Error (cm)            18             22             36
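As a cross-check, the decimeter-level errors in Table 2 can be approximately reproduced from the coordinate pairs with the same flat-earth approximation used in Section 3.2 (a sketch, not the authors' evaluation script):

```python
# Reproduce the Table 2 positioning errors from true vs. predicted
# coordinates, using the flat-earth approximation from Section 3.2.
import math

def error_cm(true_lat, true_lon, pred_lat, pred_lon):
    dy = (pred_lat - true_lat) * 111_000.0
    dx = (pred_lon - true_lon) * 111_000.0 * math.cos(math.radians(true_lat))
    return math.hypot(dx, dy) * 100.0

print(round(error_cm(25.0452698, 104.7343535, 25.0452698, 104.7343518)))
# ~17 cm for the first point, consistent with the reported 18 cm.
```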

5. Conclusion

Owing to the large area and wide coverage of PV power stations, manual inspection of PV panels cannot meet the inspection requirements of large-scale PV power stations, and the results of manual inspection are less than satisfactory. This article proposes a drone-based multi-task fusion method for hot spot detection on the PV panels of PV power plants. First, we present an improved YOLOv3 model for detecting the full range of hot spots. Because the model adds and fuses neural network residual units, which increase the effective information available to feature extraction, it is better suited to small-target detection such as hot spots. Given the misrecognition problem in infrared images of solar PV panels, the DeepLab v3+ model is adopted to segment the PV panels and filter out misrecognitions caused by bright spots on the ground. Finally, field data were utilized to test the model. The experimental results indicate that the improved YOLOv3 model improves the hot spot recognition rate. Moreover, after panel segmentation with DeepLab v3+, hot regions on the ground are effectively filtered out, significantly reducing the false recognition rate of hot spots, and the difference between the predicted and true positions is at the decimeter level.

Biography

Xu Han
https://orcid.org/0000-0002-0544-4773

He received his B.E. from Weifang College in 2020. In September 2020, he entered Chongqing University of Posts and Telecommunications for a master's degree in transportation engineering. He is currently a student, and his research direction is active safety in unmanned vehicles.

Biography

Xianhao Wang
https://orcid.org/0000-0002-3402-4620

He received his B.E. from Chongqing University of Technology in 2017. Then, he received his M.E. from Chongqing University of Posts and Telecommunications in 2020. He is currently an engineer of Lenovo Future Communication Technology Co. Ltd., engaged in the development of 5G base stations. He has published one SCI paper and three patents.

Biography

Chong Chen
https://orcid.org/0000-0003-2686-7561

He is a deputy general manager and intermediate engineer of Chongqing SPIC ZINENG Science & Technology Co. Ltd., State Power Investment Group. He has been engaged in the electric power industry for 20 years and currently works on digitalization in the energy industry.

Biography

Gong Li
https://orcid.org/0000-0002-9344-1862

He received his B.E. in engineering from China Jiliang University in 2018. He then received his master's degree in control engineering from Chongqing University of Posts and Telecommunications in 2021. He is currently working in the Network Center of the Southwest Regional Air Traffic Administration of Civil Aviation of China. He has published one patent and one software copyright.

Biography

Changhao Piao
https://orcid.org/0000-0003-3795-2286

He received his B.E. from Xi'an Jiaotong University, China, in 2001. He then received his M.E. and D.E. from Inha University, South Korea, in 2007. He is now a professor at Chongqing University of Posts and Telecommunications, China. His research interests include automobile electronics, energy electronics, and EHW. He has published more than 80 publications, including papers, books, and patents.

References

  • 1 W. Liu, L. Zhao, Y. Zhou, S. Zong, and Y. Luo, "Deep learning based unmanned aerial vehicle landcover image segmentation method," Transactions of the Chinese Society of Agricultural Machinery, vol. 51, no. 2, pp. 221-229, 2020.
  • 2 M. W. Y. Tu, C. Li, H. Yu, and H. Yao, "Non-adiabatic Hall effect at Berry curvature hot spot," 2D Materials, vol. 7, no. 4, article no. 045004, 2020. https://doi.org/10.1088/2053-1583/ab89e8
  • 3 D. Li, "Research on intelligent patrol inspection technology of photovoltaic power plant based on augmented reality," Automation Application, vol. 2019, no. 6, pp. 120-121, 124, 2019. https://doi.org/10.19769/j.zdhy.2019.06.048
  • 4 R. S. Eckard, B. A. Bergamaschi, B. A. Pellerin, T. Kraus, and P. J. Hernes, "Trihalomethane precursors: land use hot spots, persistence during transport, and management options," Science of The Total Environment, vol. 742, article no. 140571, 2020. https://doi.org/10.1016/j.scitotenv.2020.140571
  • 5 J. Wen, K. Guo, H. Wang, D. Yuan, and M. Mao, "Research on the online detection method of photovoltaic array hot spot fault," Electronic Products, vol. 2019, no. 15, pp. 55-58, 76, 2019.
  • 6 M. H. Guo and H. W. Xu, "Research on hot spot defect detection of infrared thermal image based on Faster RCNN," Computer Systems & Applications, vol. 28, no. 11, pp. 265-270, 2019.
  • 7 J. Redmon and A. Farhadi, "YOLOv3: an incremental improvement," 2018 (Online). Available: https://arxiv.org/abs/1804.02767.
  • 8 M. Liu, X. Wang, A. Zhou, X. Fu, Y. Ma, and C. Piao, "UAV-YOLO: small object detection on unmanned aerial vehicle perspective," Sensors, vol. 20, no. 8, article no. 2238, 2020. https://doi.org/10.3390/s20082238
  • 9 L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, "Encoder-decoder with atrous separable convolution for semantic image segmentation," 2018 (Online). Available: https://arxiv.org/abs/1802.02611.
  • 10 D. Wu, X. Yin, B. Jiang, M. Jiang, Z. Li, and H. Song, "Detection of the respiratory rate of standing cows by combining the Deeplab V3+ semantic segmentation model with the phase-based video magnification algorithm," Biosystems Engineering, vol. 192, pp. 72-89, 2020. https://doi.org/10.1016/j.biosystemseng.2020.01.012
  • 11 M. T. N. Truong and S. Kim, "A study on visual saliency detection in infrared images using Boolean map approach," Journal of Information Processing Systems, vol. 16, no. 5, pp. 1183-1195, 2020. https://doi.org/10.3745/JIPS.02.0145
  • 12 X. Feng and K. Hu, "Perceptual fusion of infrared and visible image through variational multiscale with guide filtering," Journal of Information Processing Systems, vol. 15, no. 6, pp. 1296-1305, 2019. https://doi.org/10.3745/JIPS.04.0144
  • 13 D. Cao, Z. Chen, and L. Gao, "An improved object detection algorithm based on multi-scaled and deformable convolutional neural networks," Human-centric Computing and Information Sciences, vol. 10, article no. 14, 2020. https://doi.org/10.1186/s13673-020-00219-9