Wang, Wang, and Jiang: A Method of Image Identification in Instrumentation

# A Method of Image Identification in Instrumentation

Abstract: The smart city is currently a main direction of development, and the automatic management of instrumentation is one of its tasks. Because cities contain a large amount of old instrumentation that cannot be replaced promptly, retrofitting it at low cost with Internet of Things (IoT) technology becomes a problem. This article gives a low-cost method that can identify the information on a code wheel instrument. The method effectively converts the image information into digital information. Because it requires neither a large memory nor complicated calculation, it can be deployed on a cheap microcontroller unit (MCU) with little read-only memory (ROM). Test results are given at the end of this article. Using this method to retrofit old instrumentation enables its automatic management and can help build a smart city.

Keywords: Code Wheel Instrument, Image Identification, IoT, Smart City, Template Matching

## 1. Introduction

The smart city is currently a main direction of development, and connecting instrumentation to the Internet of Things (IoT) is one of its tasks. However, cities contain much old instrumentation, such as gas meters. Its quantity is very large and the cost of replacing it is very high, so transforming the old instrumentation at low cost to automate it with the IoT becomes important. Image recognition is a viable approach, but there is still no image recognition method for the low-cost retrofit of instrumentation. Other industries, such as medicine and precision machining, have applied image recognition methods, but the deployment costs of the existing methods are too high [1-4]. Texture recognition methods require a large read-only memory (ROM), and convolutional neural network methods involve complicated calculation [5-7]. Neither is conducive to low-cost transformation. Old code wheel instruments share two features: a relatively fixed image area and relatively fixed digit patterns. Based on these features, this article gives a method that effectively converts image information into digital information. The method locates the area containing valid information and cuts it out; with template matching, it then identifies the meaning of the image information. The method runs in less than 128 KB of memory and can be deployed on a cheap microcontroller unit (MCU) such as the STM32 or MK series. Thanks to its simple calculation, it identifies the eight digits of a code wheel instrument in about 1 second. With it, old instruments can read their information and upload the data automatically. Section 2 gives the hardware of the image recognition device, Section 3 the image segmentation method, and Section 4 the image identification method. Section 5 gives a series of test results, and Section 6 concludes the article.

## 2. Hardware Design

The hardware of the image identification device has five modules. They are the power module, the camera module, the liquid crystal display (LCD) module, the wireless module and the MCU module. The structure of the modules is shown in Fig. 1.

Fig. 1.

System of image identification device.

Fig. 2.

Circuit design of camera module (a), MCU module (b), LCD module (c), and wireless module (d).

Fig. 3.

Image taken by camera module.

The MCU module uses the MKL17 as its MCU; the KL17 series is optimized for cost-sensitive, battery-powered applications requiring low-power general-purpose connectivity. The camera module takes a photo of the instrument's code wheel and sends it to the MCU. The wireless module uses the LSD4RF; it works in the 470 MHz band and communicates with the server. The LCD module is responsible for real-time display of the information. The circuit design of these modules is shown in Fig. 2, and the image of the instrument's code wheel taken by the camera is shown in Fig. 3.

## 3. Image Cutting Method

To identify the image information effectively and reduce the amount of computation, the image must be cut so as to extract the part carrying the valid information. The original image shown in Fig. 3 consists of three colors: red, black, and white. To avoid interference from other light and reduce the computational complexity, the red layer is removed to make a new picture, which is then binarized and inverted. The method given by this article first looks for the vertices of the image. By calculating the sum of the nine pixels in a 3×3 window, the test for a vertex is given by Eq. (1).

##### (1)
[TeX:] $$\sum _ { m = i - 2 } ^ { i } \sum _ { n = j - 2 } ^ { j } P I C ( m , n ) > 1040$$

where PIC(i, j) is the image matrix of the new picture. If the sum is larger than 1040, at least eight white pixels have been found. To reduce the calculation, the four vertices are found by the following steps.

Step 1. Search the image from top to bottom and left to right to find the first vertex, called the up-left vertex.

Step 2. The second vertex is called the down-left vertex. Its X-axis coordinate is searched near the X-axis coordinate of the up-left vertex, and its Y-axis coordinate is searched from the bottom of the image to the top. This determines the down-left vertex's position.

Step 3. The third vertex is called the up-right vertex. Its Y-axis coordinate is searched near the Y-axis coordinate of the up-left vertex, and its X-axis coordinate is searched from the right of the image to the left. This determines the up-right vertex's position.

Step 4. The fourth vertex is called the down-right vertex. Its X-axis coordinate is searched near the X-axis coordinate of the up-right vertex, and its Y-axis coordinate is searched near the Y-axis coordinate of the down-left vertex.
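The vertex test and the first search step above can be sketched as follows. This is a minimal illustration, not the authors' firmware: the image is assumed to be a binarized matrix with pixel values 0 (black) and 255 (white), the 3×3 window is taken with its bottom-right corner at (i, j), and the function names are illustrative.

```python
# Sketch of the vertex search in Section 3 (illustrative assumptions:
# binary image as a list of rows, white = 255, black = 0).

def window_sum(pic, i, j):
    """Sum of the 3x3 window whose bottom-right corner is (i, j) -- Eq. (1)."""
    return sum(pic[r][c] for r in range(i - 2, i + 1)
                          for c in range(j - 2, j + 1))

def find_up_left_vertex(pic, threshold=1040):
    """Step 1: scan top-to-bottom, left-to-right for the first window
    whose pixel sum exceeds the threshold."""
    height, width = len(pic), len(pic[0])
    for i in range(2, height):        # rows, top to bottom
        for j in range(2, width):     # columns, left to right
            if window_sum(pic, i, j) > threshold:
                return i, j
    return None

# A toy 8x8 image with a white block in its lower-right quadrant:
img = [[0] * 8 for _ in range(8)]
for r in range(4, 8):
    for c in range(4, 8):
        img[r][c] = 255
print(find_up_left_vertex(img))  # → (5, 6)
```

The remaining three steps reuse the same window test, only restricting the scan to the neighborhood of a coordinate already found and reversing the scan direction.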

After determining the four vertices, the original image is cut along them. Then the image rotation angle is calculated by Eq. (2).

##### (2)
[TeX:] $$\theta = \arctan \left( \frac { x _ { L D } - x _ { L U } } { y _ { L U } - y _ { L D } } \right)$$

where θ is the tilt angle, x_LD and y_LD are the X-axis and Y-axis coordinates of the down-left vertex, and x_LU and y_LU are the X-axis and Y-axis coordinates of the up-left vertex.

After the tilt angle is determined, the image is rotated by this angle using bilinear interpolation.
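The tilt-angle formula and a bilinear-interpolated rotation can be sketched as follows. This is an illustrative sketch under the stated assumptions (grayscale image stored as a list of rows, rotation about the image center); it is not the deployed MCU code.

```python
import math

def tilt_angle(x_lu, y_lu, x_ld, y_ld):
    """Eq. (2): tilt angle from the up-left and down-left vertices."""
    return math.atan((x_ld - x_lu) / (y_lu - y_ld))

def rotate_bilinear(pic, theta):
    """Rotate a grayscale image (list of rows) by theta radians about its
    center, sampling the source with bilinear interpolation."""
    h, w = len(pic), len(pic[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Inverse-map the output pixel back into the source image.
            y = cos_t * (i - cy) - sin_t * (j - cx) + cy
            x = sin_t * (i - cy) + cos_t * (j - cx) + cx
            i0, j0 = int(math.floor(y)), int(math.floor(x))
            if 0 <= i0 < h - 1 and 0 <= j0 < w - 1:
                dy, dx = y - i0, x - j0
                # Weighted average of the four neighboring source pixels.
                out[i][j] = (pic[i0][j0] * (1 - dy) * (1 - dx)
                             + pic[i0][j0 + 1] * (1 - dy) * dx
                             + pic[i0 + 1][j0] * dy * (1 - dx)
                             + pic[i0 + 1][j0 + 1] * dy * dx)
    return out
```

Inverse mapping (output pixel back to source) avoids holes in the rotated image, which is why it is the usual choice on memory-constrained targets.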

Next, each digit's image must be cut out. The method given by this article computes the column projection of the image by Eq. (3).

##### (3)
[TeX:] $$A ( i ) = \sum _ { j = 1 } ^ { N } P I C ( i , j )$$

where A(i) is a one-dimensional vector whose i-th element is the sum of the pixel values in column i, and N is the height of the image.

The method differentiates this vector; the positions where it changes dramatically are the edges of the digits. The image is then cut along these edges.

This yields an image of each digit, which can be processed further to match its value.
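The projection-and-differentiation step can be sketched as follows. This is an illustrative sketch, with hypothetical function names, assuming the binarized and inverted image has digit pixels equal to 1 on a 0 background.

```python
# Sketch of the column-projection segmentation (Eq. (3)); illustrative
# names, binary image: digit ink = 1, background = 0.

def column_projection(pic):
    """A(i): sum of the pixel values in each column i."""
    h, w = len(pic), len(pic[0])
    return [sum(pic[j][i] for j in range(h)) for i in range(w)]

def digit_edges(pic):
    """Differentiate the projection; large jumps out of / into empty
    columns mark the left and right boundary of each digit."""
    a = column_projection(pic)
    diff = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    rising = [i + 1 for i, d in enumerate(diff) if d > 0 and a[i] == 0]
    falling = [i + 1 for i, d in enumerate(diff) if d < 0 and a[i + 1] == 0]
    return list(zip(rising, falling))  # (start, end) columns per digit

# Two toy "digits": ink in columns 1-2 and 5-6, background elsewhere.
img = [[0, 1, 1, 0, 0, 1, 1, 0],
       [0, 1, 1, 0, 0, 1, 1, 0]]
print(digit_edges(img))  # → [(1, 3), (5, 7)]
```

Each (start, end) pair can then be used to slice out one digit's sub-image for the matching stage.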

## 4. Image Matching Method

This article gives a simple method to identify the digit in each image. Old code wheel instruments share a common feature: the fixed mechanical structure of the code wheel keeps the position of the image relatively fixed as well, and each mechanical position of the code wheel has its corresponding image. The mechanical structure is shown in Fig. 4.

Fig. 4.

Mechanical structure of the code wheel.

The method given by this article performs image matching with a minimal amount of calculation. The image templates are generated by the image cutting method of Section 3; they have the same size as the image to be identified and are bitmaps after binarization. Each digit can appear in six mechanical positions, so there are 60 image templates covering all ten digits in every position. The image templates are shown in Fig. 5.

The image is matched against the image templates using Eq. (4).

##### (4)
[TeX:] $$\operatorname { Sum } ( n ) = \sum _ { i = 1 } ^ { M } \sum _ { j = 1 } ^ { N } \left| I ( i , j ) - T _ { n } ( i , j ) \right|$$

where I(i, j) is the pixel of the image to be identified at X-axis coordinate i and Y-axis coordinate j, T_n(i, j) is the corresponding pixel of image template n, and M and N are the image dimensions. Sum(n) is the accumulated pixel difference between the image and template n.

Fig. 5.

Image templates of number.

With Eq. (4), the identified image is compared with all the image templates one by one. If Sum(n) equals 0, the identified image completely matches template n; mapping the template index to its digit then gives the identified result. If no Sum(n) equals 0, the template with the minimum Sum(n) is chosen and its index is mapped to the digit in the same way.
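The matching rule can be sketched as follows. This is an illustrative sketch, not the authors' firmware: it uses the absolute pixel difference, so that opposite-signed mismatches cannot cancel, and assumes image and templates are binary bitmaps of identical size.

```python
# Sketch of the template matching in Eq. (4); illustrative names,
# binary bitmaps of identical size assumed.

def diff_sum(image, template):
    """Sum(n): accumulated absolute pixel difference between the image
    and one template.  0 means an exact match."""
    return sum(abs(p - t)
               for row_i, row_t in zip(image, template)
               for p, t in zip(row_i, row_t))

def match_template(image, templates):
    """Compare the image with every template and return the index n
    of the template with the minimum Sum(n)."""
    sums = [diff_sum(image, t) for t in templates]
    return min(range(len(sums)), key=sums.__getitem__)

# Toy 2x2 "templates" for two patterns; the image equals the second one:
templates = [[[1, 0], [0, 1]],
             [[0, 1], [1, 0]]]
image = [[0, 1], [1, 0]]
print(match_template(image, templates))  # → 1
```

On binary bitmaps, Sum(n) simply counts mismatched pixels, which is why it is cheap enough for an MCU: one subtraction and one accumulation per pixel, with no multiplications.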

## 5. Test Results

To make the experimental test closer to the real environment, noise is manually added to the test picture. The test image is shown in Fig. 6.

The test image is then processed with the method given by this article. The images after binarization, cutting, rotation, and inversion are shown in Fig. 7.

Fig. 6.

Test image.

Fig. 7.

(a) Binarized image, (b) cut image, (c) cut image after rotation, and (d) cut image after inversion.

Each cut single-digit image is then matched against the image templates. The result is shown in Fig. 8.

Fig. 8.

Result of image identification.

The result is completely correct. The running time of this method on the hardware platform of Section 2 is about 1.1 seconds, which is within reasonable limits. The image identification method is therefore reliable and feasible.

## 6. Conclusion

This article gives a method that can identify digit information. It detects the information area of the image and cuts it out, then finds and cuts out the small image of each single digit, and uses template matching to identify the digit in each image. The method runs with little memory and so little computation that it can be deployed on a cheap MCU. It makes it easy to retrofit old code wheel instruments so that they can perform automatic meter reading and send the data to the server. Existing old code wheel instruments can thus be transformed at low cost and on a large scale, which can help accelerate the pace of smart city construction. This article also gives the hardware design of the image identification device. Together, the code wheel image identification method and the hardware design can help accelerate smart city construction.

## Biography

##### Xiaoli Wang
https://orcid.org/0000-0002-7420-321X

He received his B.S. degree in electronic information science and technology and his M.E. degree in circuits and systems from Shandong University, Weihai, China, in 2002 and 2008, respectively. He is currently a senior experimentalist in the School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China. His current research interests include micro-grid and embedded system development.

## Biography

##### Shilin Wang
https://orcid.org/0000-0002-2873-9240

He received the B.S. degree in electronic information science and technology from Shandong University, China in 2015. Now he is a postgraduate in the School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China. His research interests cover signal processing and embedded development.

## Biography

##### Baochen Jiang
https://orcid.org/0000-0002-6055-2355

He received his B.S. degree in radio electronics from Shandong University, China, in 1983 and his M.E. degree in communication and electronic systems from Tsinghua University, China, in 1990. He is currently a professor and supervisor of postgraduate students at Shandong University, Weihai, China. His current research interests include signal and information processing, digital image/video processing and analysis, and smart grid technology.

## References

• 1 A. Vishnuvarthanan, M. P. Rajasekaran, V. Govinadaraj, Y. Zhang, A. Thiyagarajan, "Development of a combinational framework to concurrently perform tissue segmentation and tumor identification in T1-W, T2-W, FLAIR and MPR type magnetic resonance brain images," Expert Systems with Applications, 2018, vol. 95, pp. 280-311. doi:[[[10.1016/j.eswa.2017.11.040]]]
• 2 T. Go, H. Byeon, S. J. Lee, "Label-free sensor for automatic identification of erythrocytes using digital in-line holographic microscopy and machine learning," Biosensors and Bioelectronics, 2018, vol. 103, pp. 12-18. doi:[[[10.1016/j.bios.2017.12.020]]]
• 3 X. Ma, M. Tian, J. Zhang, L. Tang, F. Liu, "Flow pattern identification for two-phase flow in a U-bend and its contiguous straight tubes," Experimental Thermal and Fluid Science, 2018, vol. 93, pp. 218-234. doi:[[[10.1016/j.expthermflusci.2017.12.024]]]
• 4 H. Z. M. Shah, M. Sulaiman, A. Z. Shukor, Z. Kamis, A. Ab Rahman, "Butt welding joints recognition and location identification by using local thresholding," Robotics and Computer-Integrated Manufacturing, 2018, vol. 51, pp. 181-188. doi:[[[10.1016/j.rcim.2017.12.007]]]
• 5 Q. Zhao, F. Sun, W. Li, P. Liu, "Flame detection using generic color model and improved block-based PCA in active infrared camera," International Journal of Pattern Recognition and Artificial Intelligence, 2018, vol. 32, article no. 1850014. doi:[[[10.1142/S0218001418500143]]]
• 6 Z. Liu, S. Wang, M. Zhang, "Improved sparse 3D transform-domain collaborative filter for screen content image denoising," International Journal of Pattern Recognition and Artificial Intelligence, 2018, vol. 32, article no. 1854006. doi:[[[10.1142/S021800141854006X]]]
• 7 L. Lambert, G. Grusova, A. Buraetova, P. Matras, A. Lambertova, P. Kuchynaka, "The predictive value of computed tomography in the detection of reflux esophagitis in patients undergoing upper endoscopy," Clinical Imaging, 2018, vol. 49, pp. 97-100. doi:[[[10.1016/j.clinimag.2017.11.009]]]