Fang-li Guan*,**, Ai-jun Xu*,** and Guang-yu Jiang*

An Improved Fast Camera Calibration Method for Mobile Terminals

Abstract: Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods fail to efficiently obtain ideal internal and external camera parameters with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Based on the traditional camera calibration method, the new method introduces two-order radial distortion and tangential distortion models to establish a camera model with nonlinear distortion items. Meanwhile, the nonlinear least square Levenberg-Marquardt (L-M) algorithm is used to optimize the parameter iteration, so the new method can quickly obtain high-precision internal and external camera parameters. The experimental results show that the new method improves the efficiency and precision of camera calibration. A simulation experiment on PC indicates that the time consumed by parameter iteration was reduced from 0.220 seconds to 0.063 seconds (0.234 seconds on mobile terminals) and the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. Therefore, the new method is an ideal camera calibration method for mobile terminals, which can expand the application range of 3D reconstruction and close-range photogrammetry technology on mobile terminals.

Keywords: Camera Calibration Technique, Camera Distortion Correction, Close-Range Photogrammetry, Machine Vision, Mobile Terminals, Pinhole Model

1. Introduction

Camera calibration is a crucial step in the process of retrieving 3D information from 2D images, and it has extensive applications in the fields of photogrammetry, machine vision and 3D reconstruction [1]. Since the inception of the two-stage calibration method proposed by Tsai [2,3] in 1986, camera calibration technology has been a research hotspot in the fields of machine vision and 3D reconstruction [4,5].
Up to now, the mainstream camera calibration methods include the traditional calibration method, the active vision-based calibration method and the self-calibration method [6]. The active vision-based calibration method uses a simple algorithm to obtain an ideal linear solution with high robustness. However, limited by its high systematic cost and expensive experimental equipment, it cannot be popularized in practical production and application [7,8]. The self-calibration method has strong flexibility and can realize on-line calibration [9-11]. However, it has poor robustness because of the indefiniteness of the repeated dual images and the least square constraints on the absolute dual conic [12]. As for the traditional calibration method, the two-stage calibration method proposed by Tsai [2] and Zhang's calibration method [13,14] are the most classical. Zhang [13,14] introduced a method that uses multiple planar patterns to calibrate the overall internal and external camera parameters based on Tsai's two-stage method. Based on planar pattern statistics and image statistics, this method calculates the homography matrix between images and patterns to constrain the internal camera parameters, and meanwhile adopts the absolute conic principle to calculate the internal and external parameters. Although Zhang's calibration method is complex and computationally intensive, it has good robustness, which directly promoted machine vision technology from the laboratory to practical application. Following the proposal of Zhang's calibration method, many scholars have conducted further research in this field. Aiming at the problem of camera lens distortion, Liu and Li [15] proposed an improved two-stage calibration method based on Zhang's calibration method, and improved the analytical solution by using the least square method, which improved the robustness of the initial value. Liu and Xie [16] proposed a camera calibration method based on coplanar points.
Through the study of the camera model and the distortion model, that calibration method uses the nonlinear least square method to optimize the internal and external calculation processes. In addition, many scholars have applied camera calibration technology to machine vision and 3D reconstruction and achieved favorable results [17-20]. Considering the continuous development of camera calibration technology, Li and Zhao [21] point out that existing camera calibration methods rely on complex software systems and operating procedures. According to the relevant literature, current camera calibration methods all operate on PC terminals; there is little research on camera calibration methods for mobile terminals. Besides, corner detection occupies a significant amount of computation with low time efficiency, and the internal and external parameter iteration process is tedious, which hinders the application of visual measurement and 3D reconstruction technology on mobile terminals. In order to realize fast camera calibration on mobile terminals and solve the problem that traditional calibration methods cannot adapt to the mobile terminal environment with limited computing resources, this paper proposes an improved fast camera calibration method based on Zhang's calibration method. The method introduces camera lens radial distortion and tangential distortion to elevate calibration precision, and simultaneously optimizes the internal and external parameter iteration process through the nonlinear least square Levenberg-Marquardt algorithm so as to obtain the ideal linear solution more efficiently. Fast camera calibration on mobile terminals can expand the application range of machine vision and close-range photogrammetry technology.

2. Design of Improved Camera Calibration

2.1 Overview of Camera Calibration

Camera calibration includes two coordinate conversion processes.
The first is the transformation from the 3D world coordinate system to the 3D camera coordinate system. The corresponding parameters of this transformation include the rotation matrix, rotation vector and translation vector. The second is the transformation from the 3D camera coordinate system to the 2D image plane coordinate system (pixel coordinate system). The corresponding parameters of this transformation include the focal distance and the principal point. Since existing camera calibration methods are not suitable for fast and accurate calibration of internal and external camera parameters on mobile terminals with limited computing resources, this paper presents an improved fast camera calibration method, which focuses on the influence of camera lens distortion on calibration precision and the low efficiency of initial estimate parameter iteration.

2.2 Overview of the Design

In light of the constancy of the phone camera (hereinafter referred to as 'camera') lens focal distance and its lens distortion of varying degrees, this paper improves Zhang's planar calibration method based on the pinhole imaging model (hereinafter referred to as the pinhole model). Firstly, for camera lens distortion, the method introduces lens radial distortion and tangential distortion and establishes a nonlinear distortion camera model. Secondly, the method uses the calibration pattern to retrieve calibration pictures and detect corner information in the calibration planar images during the camera calibration process. Subsequently, initial estimates of the internal and external camera parameters are calculated according to the homography relationship between the world coordinate system and the camera coordinate system in the camera model, and are optimized by the Levenberg-Marquardt algorithm to derive ideal internal and external camera parameters. Finally, the lens distortion parameters are calculated and the parameter precision is improved with the nonlinear least square multi-iteration calculation method.
The camera calibration technical route is shown in Fig. 1.

3. Improved Camera Calibration Model

According to the linear relationship between spatial coordinates and image coordinates in the pinhole model, the problem of solving the camera parameters is transformed into a linear formula. Since the calibration methods mentioned in the literature [2,3,13] take only radial distortion into consideration and exclude tangential distortion, this paper proposes an improved fast camera calibration method to address the camera distortion problem.

3.1 Camera Imaging Model

The camera image shooting process is actually an optical imaging process which involves the world coordinate system, camera coordinate system, image coordinate system and pixel coordinate system, and the transitions among the four systems. The linear camera model solves the problem of the correspondence between the world coordinate system and the pixel coordinate system in a 3D scene, and provides the basis for the calculation of the internal and external camera parameters. The relationship between the coordinate systems in the linear pixel model is shown in Fig. 2. According to the characteristics of coplanar points, a camera calibration model based on the pinhole model is established. The relationship between the pixel coordinate [TeX:] $$m=\left[\begin{array}{ll}{u} & {v}\end{array}\right]^{\mathrm{T}}$$ of a 2D image point and the coordinate [TeX:] $$M=\left[\begin{array}{lll}{X} & {Y} & {Z}\end{array}\right]^{\mathrm{T}}$$ of a 3D scene point can be seen in the linear pinhole model. Here [TeX:] $$\tilde{x}$$ denotes the augmented vector of x (the homogeneous coordinate obtained by appending 1 as the last element), so the homogeneous coordinates of m and of a point M on the planar pattern can be represented as [TeX:] $$\widetilde{m}=[u, v, 1]^{\mathrm{T}}$$ and [TeX:] $$\widetilde{M}=[X, Y, 1]^{T}.$$ By reference to the pinhole model, the relationship between a 3D point M and its image projection m is as follows:
(1)
[TeX:] $$s \widetilde{m}=K[R \quad t] \widetilde{M} \quad K=\left[\begin{array}{ccc}{\frac{f}{d_{x}}} & {c} & {u_{0}} \\ {0} & {\frac{f}{d_{y}}} & {v_{0}} \\ {0} & {0} & {1}\end{array}\right]$$

where s is an arbitrary scale factor, K is the internal camera parameter matrix, [R t] is the external camera parameter matrix, [TeX:] $$\left(u_{0}, v_{0}\right)$$ is the principal point coordinate, f is the camera lens focal distance, [TeX:] $$d_{x}$$ and [TeX:] $$d_{y}$$ are respectively the physical sizes of each pixel on the image plane in the x and y directions (because of technical limitations, each physical pixel is rectangular rather than strictly square), and c is the parameter describing the skew between the two image coordinate axes.

3.2 Nonlinear Distortion Optimization Model

The ideal camera model is the pinhole model, which is a linear model. In the pinhole model, objects and images form similar triangles. The derivation of the above coordinate system formula conforms to the linear pinhole model. However, due to the limitations of camera manufacturing and assembly, an image point, the projection center and the corresponding space point are not collinear. Pixel offset (i.e., image distortion) mainly includes radial distortion, tangential distortion and thin prism distortion (thin prism distortion is very small, so it is not considered here). Image distortion is nonlinear; it not only causes irregular image deformation, but also affects the accurate calibration of the internal and external camera parameters. Therefore, it is necessary to correct the distorted images. The distortion correction model can be expressed as:
(2)
[TeX:] $$\left\{\begin{array}{l}{x_{u}=x+\delta_{x}(x, y)} \\ {y_{u}=y+\delta_{y}(x, y)}\end{array}\right.$$

where [TeX:] $$\left(x_{u}, y_{u}\right)$$ is the ideal point coordinate calculated by the linear pinhole model, (x, y) is the coordinate of the actual image point, and [TeX:] $$\delta_{x} \text { and } \delta_{y}$$ represent the nonlinear distortion values related to the position of the image point in the image pattern. According to the characteristics of mobile terminal cameras, radial distortion and tangential distortion are introduced respectively. Radial distortion is caused by the imperfect form of the camera lens. Formula (3) is the radial distortion model function without high-order terms:
(3)
[TeX:] $$\left\{\begin{array}{l}{\delta_{x r}=k_{1} x\left(x^{2}+y^{2}\right)} \\ {\delta_{y r}=k_{2} y\left(x^{2}+y^{2}\right)}\end{array}\right.$$

Tangential distortion of the lens is caused by the eccentricity of the optical system. Eccentricity means that the optical centers of the lens assembly are not exactly collinear. Formula (4) is the tangential distortion model function without high-order terms:
(4)
[TeX:] $$\left\{\begin{array}{l}{\delta_{x d}=p_{1}\left(3 x^{2}+y^{2}\right)+2 p_{2} x y} \\ {\delta_{y d}=p_{2}\left(x^{2}+3 y^{2}\right)+2 p_{1} x y}\end{array}\right.$$

The distortion correction function model can be derived from formulas (2), (3) and (4); [TeX:] $$k_{1}, k_{2}, p_{1}, p_{2}$$ in the model are the four nonlinear distortion coefficients:
(5)
[TeX:] $$\left\{\begin{array}{l}{\delta_{x}=k_{1} x\left(x^{2}+y^{2}\right)+p_{1}\left(3 x^{2}+y^{2}\right)+2 p_{2} x y} \\ {\delta_{y}=k_{2} y\left(x^{2}+y^{2}\right)+p_{2}\left(x^{2}+3 y^{2}\right)+2 p_{1} x y}\end{array}\right.$$

Under ideal conditions, the intersection of the optical axis and the image plane should be located at the center of the image. However, deviation may occur due to the limitations of the camera manufacturing process. Suppose the origin of the image physical coordinate system (x, y) is located at [TeX:] $$\left(u_{0}, v_{0}\right)$$ in the pixel coordinate system [TeX:] $$(u, v),$$ and the physical sizes of each pixel on the image plane in the x and y directions are [TeX:] $$d_{x} \text { and } d_{y},$$ respectively. Limited by the manufacturing technique, these two side lengths of a physical pixel cannot be made identical. Therefore, each pixel of the image in the two coordinate systems satisfies the following conditions:
(6)
[TeX:] $$\left\{\begin{array}{l}{u=x_{u} / d_{x}+u_{0}} \\ {v=y_{u} / d_{y}+v_{0}}\end{array}\right.$$

The corresponding homogeneous coordinate and matrix form is:
(7)
[TeX:] $$\mathrm{Z}_{\mathrm{C}}\left[\begin{array}{l}{\mathrm{u}} \\ {\mathrm{v}} \\ {1}\end{array}\right]=\left[\begin{array}{ccc}{\frac{1}{\mathrm{d}_{\mathrm{x}}}} & {0} & {\mathrm{u}_{0}} \\ {0} & {\frac{1}{\mathrm{d}_{\mathrm{y}}}} & {\mathrm{v}_{0}} \\ {0} & {0} & {1}\end{array}\right]\left[\begin{array}{cccc}{\mathrm{f}} & {0} & {0} & {0} \\ {0} & {\mathrm{f}} & {0} & {0} \\ {0} & {0} & {1} & {0}\end{array}\right]\left(\begin{array}{cc}{\mathrm{R}} & {\mathrm{T}} \\ {0^{\mathrm{T}}} & {1}\end{array}\right)\left[\begin{array}{c}{\mathrm{x}_{\mathrm{w}}} \\ {\mathrm{y}_{\mathrm{w}}} \\ {\mathrm{z}_{\mathrm{w}}} \\ {1}\end{array}\right]=\left[\begin{array}{cccc}{\frac{\mathrm{f}}{\mathrm{d}_{\mathrm{x}}}} & {0} & {\mathrm{u}_{0}} & {0} \\ {0} & {\frac{\mathrm{f}}{\mathrm{d}_{\mathrm{y}}}} & {\mathrm{v}_{0}} & {0} \\ {0} & {0} & {1} & {0}\end{array}\right]\left(\begin{array}{cc}{\mathrm{R}} & {\mathrm{T}} \\ {0^{\mathrm{T}}} & {1}\end{array}\right)\left[\begin{array}{c}{\mathrm{x}_{\mathrm{w}}} \\ {\mathrm{y}_{\mathrm{w}}} \\ {\mathrm{z}_{\mathrm{w}}} \\ {1}\end{array}\right]=M_{1} M_{2} \widetilde{x_{w}}$$

[TeX:] $$M_{1}$$ is the internal camera parameter matrix while [TeX:] $$M_{2}$$ is the external camera parameter matrix, which contains the rotation matrix and the translation matrix.

4. Parameter Calculation and Optimization Method

In light of the homography relationship between the calibration planar pattern and the image and the constraint conditions of the internal and external parameters [13], this method utilizes the homography relationship between the 3D coordinates in the calibration planar pattern and the pixel coordinates in the imaging pattern to solve the initial estimates of the internal and external camera parameters and find the optimal solution through iteration. To obtain more accurate internal and external parameters, this method uses the nonlinear least square Levenberg-Marquardt algorithm to optimize the iterative calculation of the internal and external camera parameters.
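As a minimal sketch of the distortion model of Section 3.2 (not the paper's Java implementation), formulas (2)–(5) can be written in pure Python. The coefficient values in the comments and test are invented for illustration, and the radial term of δ_y is written to act on y, mirroring the x component:

```python
# Sketch of the nonlinear distortion model of formulas (2)-(5).
# k1, k2 are radial and p1, p2 tangential coefficients; all numeric
# values used with these functions here are illustrative only.

def distortion(x, y, k1, k2, p1, p2):
    """Formula (5): distortion offsets (delta_x, delta_y) for a point (x, y)."""
    r2 = x * x + y * y  # squared radial distance
    dx = k1 * x * r2 + p1 * (3 * x * x + y * y) + 2 * p2 * x * y
    dy = k2 * y * r2 + p2 * (x * x + 3 * y * y) + 2 * p1 * x * y
    return dx, dy

def correct_point(x, y, k1, k2, p1, p2):
    """Formula (2): ideal (undistorted) coordinates of an observed point."""
    dx, dy = distortion(x, y, k1, k2, p1, p2)
    return x + dx, y + dy
```

With all four coefficients set to zero the model reduces to the linear pinhole model and `correct_point` returns its input unchanged.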
4.1 Homography Relationship and Internal Parameter Constraint

Internal and external parameter calculation is a key step in achieving accurate camera calibration. Taking the homography relationship between the planar pattern and the image as the constraint condition, this method supposes the Z coordinate of the planar pattern in the world coordinate system is 0; then [TeX:] $$s \widetilde{m}=H \widetilde{M} \text { and } H=K\left[\begin{array}{lll}{r_{1}} & {r_{2}} & {t}\end{array}\right].$$ In addition, as stipulated by the internal parameter constraint condition, if [TeX:] $$\mathrm{H}= \left[\begin{array}{lll}{h_{1}} & {h_{2}} & {h_{3}}\end{array}\right],$$ then the following formula can be derived: [TeX:] $$\left\{\begin{array}{c}{h_{1}^{\mathrm{T}} K^{-\mathrm{T}} K^{-1} h_{2}=0} \\ {h_{1}^{\mathrm{T}} K^{-\mathrm{T}} K^{-1} h_{1}=h_{2}^{\mathrm{T}} K^{-\mathrm{T}} K^{-1} h_{2}}\end{array}\right..$$ A given homography matrix has eight degrees of freedom, and there are six external parameters (three in the rotation matrix and three in the translation vector), from which the two basic constraints on the internal parameters can be derived. In the process of parameter calculation, a closed-form solution [13] is first given and the initial estimate of the internal parameter matrix is calculated on this basis; the external parameter matrix can then be calculated from the internal parameter matrix. The second step is to estimate the nonlinear optimal solution according to maximum likelihood estimation. The last step is to detect the radial distortion of the lens and derive the analytic and nonlinear solutions.

4.2 Nonlinear Optimization of Initial Estimate Parameters

Since the closed-form solution is calculated by minimizing an algebraic distance without physical meaning, it needs to be refined by maximum likelihood estimation.
Assuming that the noise of the pixels in the image follows the same independent distribution, the maximum likelihood estimate can be obtained by minimizing formula (8), wherein n is the number of pattern images and m is the number of corner points in each pattern image:
(8)
[TeX:] $$\sum_{i=1}^{n} \sum_{j=1}^{m}\left\|m_{i j}-\widetilde{m}\left(K, R_{i}, t_{i}, M_{j}\right)\right\|^{2}$$

Minimizing formula (8), namely the nonlinear optimization problem, can be solved by the Levenberg-Marquardt algorithm [22,23].

4.3 Distortion Correction and Parameter Optimization

The classic Zhang's calibration method only considers radial distortion in two directions and ignores tangential distortion, so there is a certain degree of error in its calibration results. Among the other existing methods, the method in [16] considers both radial distortion and tangential distortion and improves the calibration precision, but it involves a large amount of calculation and cannot be effectively implemented on mobile terminals with limited computing resources. This paper optimizes the parameter iteration process while considering both radial distortion and tangential distortion, and improves both efficiency and accuracy. High-precision calibration results can be obtained by applying the distortion model to correct the linear method. Based on the camera parameters in Section 3.2 and the distortion model derived from formulas (2)–(5), the optimization problem of the distortion model is transformed into a least squares problem. The distortion parameters are obtained by selecting the 3D coordinates of the checkerboard corners and calculating the corresponding image coordinates. In addition, taking the internal and external camera parameters solved in Section 4.2 as the initial estimates, this paper uses the nonlinear least square Levenberg-Marquardt algorithm to solve for the minimum value of the objective function F and thereby estimate more accurate internal and external camera parameters. In n calibration pattern images, there are n×m corner points. The objective function F is established by using the minimal residual to optimize the calibration parameters of formula (8).
(9)
[TeX:] $$\mathrm{F}=\sum_{i=1}^{n} \sum_{j=1}^{m}\left\|m_{i j}-\widetilde{m}\left(K, k_{1}, k_{2}, p_{1}, p_{2}, R_{i}, T_{i}, M_{j}\right)\right\|^{2}$$

wherein m represents the number of control points in image i, [TeX:] $$M_{j}$$ represents the corresponding model point in the world coordinate system, [TeX:] $$\widetilde{m}\left(K, k_{1}, k_{2}, p_{1}, p_{2}, R_{i}, T_{i}, M_{j}\right)$$ represents the projection of [TeX:] $$M_{j}$$ on image i, and [TeX:] $$R_{i} \text { and } T_{i}$$ represent the external parameters of image i.

5. Algorithm Implementation and Results Analysis

5.1 Algorithm Implementation

Based on the aforementioned principle, workflow and algorithms, this paper uses the Android system as the development platform and the Java language to develop a fast calibration application for the Android camera. The test device is a Xiaomi Mi 3 (Nvidia Tegra 4 CPU@1.8 GHz, 2 GB memory, Android 4.4 (API level 19), 13-megapixel rear camera, camera lens focal distance of 29 mm). The calibration processes and results are shown in Figs. 3 and 4 (the data in Fig. 4 are described in detail in Table 1). Table 1.
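The Levenberg-Marquardt iteration used in Sections 4.2–4.3 can be illustrated with a deliberately small least-squares problem. This is only a sketch of the damping idea, not the paper's Java code: the two-parameter model, the data and the damping schedule are all invented for illustration.

```python
# Minimal pure-Python Levenberg-Marquardt (damped Gauss-Newton) sketch.
# For illustration the "reprojection" model is a simple line y = a*x + b;
# in the paper's formula (9) the same scheme updates K, k1, k2, p1, p2, R, T.

def lm_fit(xs, ys, a, b, iters=50, lam=1e-3):
    """Fit y = a*x + b by minimizing the sum of squared residuals."""
    def cost(a, b):
        return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # Residual r = y - (a*x + b); Jacobian dr/da = -x, dr/db = -1,
        # so the normal equations use JtJ and -Jt r as below.
        JtJ = [[sum(x * x for x in xs), sum(xs)],
               [sum(xs), float(len(xs))]]
        Jtr = [sum(x * (y - (a * x + b)) for x, y in zip(xs, ys)),
               sum(y - (a * x + b) for x, y in zip(xs, ys))]
        # Damped normal equations: (JtJ + lam*diag(JtJ)) * step = -Jt r
        A = [[JtJ[0][0] * (1 + lam), JtJ[0][1]],
             [JtJ[1][0], JtJ[1][1] * (1 + lam)]]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        da = (Jtr[0] * A[1][1] - Jtr[1] * A[0][1]) / det  # Cramer's rule
        db = (Jtr[1] * A[0][0] - Jtr[0] * A[1][0]) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam * 0.5  # accept: relax damping
        else:
            lam *= 10.0                            # reject: increase damping
    return a, b
```

Large damping makes the step behave like gradient descent (robust far from the minimum); small damping recovers the fast Gauss-Newton step near the minimum, which is why L-M converges quickly from the closed-form initial estimates.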
5.2 Analysis of Experimental Results

In order to verify the feasibility and accuracy of the proposed calibration method, calibration experiments were carried out on Android devices, and the camera calibration accuracy evaluation method of [24] was adopted to test feasibility. Then, the accuracy was verified by visual reconstruction.

Experiment Ⅰ. In the feasibility verification experiment, the calibration model is a checkerboard (9×9 array, 10 mm×10 mm squares). The test device captured 100 calibration model images from different angles, which were randomly divided into five groups for experiments. Some model images, after corner information was retrieved with the Harris corner detection algorithm [25], are shown in Fig. 5. In these figures, each corner has arrows of different lengths and directions, representing pixel offsets (distortion) of different sizes and directions. Fig. 6 shows some of the model images after distortion correction, in which the shadow region is the distortion, stretched to eliminate part of the image distortion. The projection error of each corner point can be calculated through the pixel errors, as shown in Figs. 7 and 8. Fig. 7 shows the back-projected coordinate error of each corner point in the checkerboard. The reprojection error ranges from 0 to 0.6 pixel, and the maximum error is less than 1 pixel. Fig. 8 shows that the average reprojection error over all corner points in the model images is 0.44 pixel. The reprojection error diagram directly expresses the reprojection error of each corner point, indicating the calibration accuracy. After corner point retrieval, this paper derives the camera parameters and achieves the distortion correction optimization results in Table 1. The results of Experiment Ⅰ show that the method can realize fast online camera calibration on Android devices, with an average pixel error of less than 0.14 pixel. Experiment Ⅱ.
The accuracy verification experiment of the camera calibration method was realized by MATLAB simulation on a PC. The first step is to randomly select a group of calibration models from Experiment Ⅰ, and successively use Zhang's calibration method and the method in [16] to conduct the simulation experiments. The second step is to use the automatic checkerboard corner detection algorithm [26] to retrieve the corner pixel coordinates of all calibrated model images. The third step is to perform visual reconstruction according to the homography relationship between the pixel coordinates and the world coordinates in the camera model, restore the 2D pixel coordinates to 3D world coordinates, and compare them with the 3D coordinates of the ideal checkerboard calibration model. The camera parameters, calibration accuracy and calibration time efficiency of the three calibration methods are shown in Table 2. Table 2.
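The reprojection-error metric used throughout these experiments projects each 3D corner through formula (1) and compares it with the detected pixel position. The following sketch assumes made-up intrinsic values and an identity rotation; it is not the paper's evaluation code.

```python
import math

# Sketch of the reprojection-error metric: project each 3D corner with the
# pinhole model of formula (1) and average the pixel distance to the
# detected corner. K, R, t and the point data are illustrative values.

def project(K, R, t, M):
    """Pinhole projection s*m~ = K [R t] M~ of a 3D point M = (X, Y, Z)."""
    Xc = [sum(R[i][j] * M[j] for j in range(3)) + t[i] for i in range(3)]
    u = (K[0][0] * Xc[0] + K[0][1] * Xc[1]) / Xc[2] + K[0][2]  # K[0][1] is skew c
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v

def mean_reprojection_error(K, R, t, corners3d, corners2d):
    """Average Euclidean pixel distance between projected and detected corners."""
    errs = []
    for M, (u_obs, v_obs) in zip(corners3d, corners2d):
        u, v = project(K, R, t, M)
        errs.append(math.hypot(u - u_obs, v - v_obs))
    return sum(errs) / len(errs)
```

In the paper's evaluation this average is taken over all n×m corners of all calibration images, giving the 0.44-pixel and 0.15-pixel figures reported above.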
Table 3 shows the pixel coordinates [TeX:] $$\left(u_{i}, v_{i}\right)$$ of all corner points in the calibration model calculated by the automatic checkerboard corner detection algorithm. Using the three sets of camera calibration parameters in Table 2, the corresponding world coordinates are reconstructed from the corner pixel coordinates in Table 3, and the relative positions of the three groups of experimental world coordinates and the ideal coordinates are obtained, as shown in Fig. 9. Table 3.
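The reconstruction step of Experiment Ⅱ, restoring planar world coordinates from pixel coordinates, can be sketched through the inverse of the homography H = K[r1 r2 t] of Section 4.1. The H used below is an invented example, not a calibrated matrix.

```python
# Sketch of mapping pixel coordinates back to planar world coordinates
# through the inverse homography. The matrix values are illustrative only.

def apply_homography(H, u, v):
    """Map a homogeneous point through a 3x3 matrix and dehomogenize."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def invert_3x3(H):
    """Inverse via the adjugate; sufficient for a well-conditioned H."""
    a, b, c = H[0]; d, e, f = H[1]; g, h, i = H[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def reconstruct_world(H, u, v):
    """Recover the planar world coordinate (X, Y) of pixel (u, v)."""
    return apply_homography(invert_3x3(H), u, v)
```

Round-tripping a world point through H and back recovers it up to floating-point error, which is the basis of the coordinate comparison in Fig. 9.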
Fig. 9(a) shows the comparison between the ideal coordinates and the coordinates rebuilt from the parameters obtained by Zhang's calibration method, which considers radial distortion. Fig. 9(b) shows the comparison for the calibration method in [16], which considers both radial distortion and tangential distortion. Fig. 9(c) shows the comparison between the ideal coordinates and the coordinates reconstructed from the parameters obtained by the proposed method. For the three sets of coordinates in Fig. 9, the offset of each red (reconstructed) marker from the corresponding ideal coordinate marker intuitively shows the error between the reconstructed coordinates and the ideal coordinates for each of the three calibration methods. It can be clearly seen from Fig. 9 that the real world coordinates obtained from the camera parameters of the proposed calibration method are closer to the ideal coordinates, which shows that the calibration method proposed in this paper has high calibration accuracy. As indicated by the results of Experiment Ⅱ, the proposed calibration method improves both efficiency and accuracy: the iteration time of the internal and external parameters was reduced from 0.22 seconds to 0.13 seconds (0.234 seconds on mobile terminals), and the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. These results show that the proposed calibration method is superior to Zhang's calibration method and the method in [16].
Through the world coordinate reconstruction experiment, in which world coordinates are reconstructed from the parameters obtained by each calibration method and compared with the ideal coordinates, the results show that the errors of the proposed method are smaller than those of the other two reconstruction methods. In addition, this method has certain advantages in algorithmic time efficiency.

6. Conclusions

Since current calibration methods cannot efficiently obtain camera parameters with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Based on traditional camera calibration methods, this paper builds two-order radial distortion and tangential distortion models to describe the features of phone camera lenses and establishes a nonlinear camera model with distortion items. Then, a homography matrix is established based on the features of coplanar points, and the nonlinear least square Levenberg-Marquardt algorithm is used to optimize the efficiency and precision of the initial estimate parameter iteration. Through the comparison of Android device experiments and MATLAB simulation experiments, we find that this method is suitable for fast camera calibration on mobile terminals, whose computing capability is weaker than that of a PC. In comparison with the calibration methods in the available literature, the proposed method has higher efficiency and smaller parameter errors. As a consequence, this paper offers a precise and practical method for camera calibration on mobile terminals with favorable application prospects in the fields of mobile terminal image measurement and machine vision measurement. Our future work will focus on optimizing the corner point retrieval algorithm, reducing spatial complexity, improving the calibration efficiency and precision of the internal and external parameters, and developing image measurement and machine vision measurement application products.
Acknowledgement

This work was supported by the National Natural Science Foundation of China (No. 31670641, The research of tree's height and DBH measurement method based on the intelligent mobile terminals) and the Zhejiang Science and Technology Innovation Program for College Students (Xinmiao Talents Program; No. 2016R412044, Research and extension of intelligent tree measurement system based on Android platform). This work was also supported by the Scientific Research Fund of Zhejiang Provincial Education Department (No. Y201432809).

Biography

Ai-jun Xu
https://orcid.org/0000-0001-6789-6938
He was born in 1976 in Anhui Province, China. He received the Ph.D. degree in Photogrammetry and Remote Sensing from Wuhan University in 2007. He is currently working as a Professor in the School of Information Engineering, Zhejiang Agriculture and Forestry University, Hangzhou, China. His current research interests include computer application technology, computer vision, etc.

Biography

Guang-yu Jiang
https://orcid.org/0000-0001-9704-3063
He received the B.S. degree from Henan University of Technology, Henan, China, in 2005, and the M.S. degree from Zhejiang Agriculture and Forestry University, Zhejiang, China, in 2013. He is currently working as an experimenter in the School of Information Engineering, Zhejiang Agriculture and Forestry University, Hangzhou, China. His current research interests include camera calibration, close-range photogrammetry, and remote sensing.

References