GLIBP: Gradual Locality Integration of Binary Patterns for Scene Images Retrieval

Salah Bougueroua* and Bachir Boucheham*

Abstract

We propose an enhanced version of the local binary pattern (LBP) operator for texture extraction from images in the context of image retrieval. The novelty of our proposal stems from the observation that the LBP exploits only the lowest level of local information, through the global histogram. Such global histograms reflect only the statistical distribution of the various LBP codes in the image. The block-based LBP, which uses local histograms of the LBP, was one of the few attempts to capture higher-level textural information. We believe that important and useful local information lying between these two levels is simply ignored by both schemes. The newly developed method, gradual locality integration of binary patterns (GLIBP), is a novel attempt to capture as much local information as possible, in a gradual fashion. Indeed, GLIBP aggregates the texture features extracted by LBP from grayscale images through a complex structure. The framework comprises a multitude of ellipse-shaped regions arranged in circular-concentric forms of increasing size, and is derived from a simple parameterized generator. The elliptic forms allow targeting texture directionality, a very useful property in texture characterization, and the general framework of ellipses also allows taking spatial information (specifically rotation) into account. The effectiveness of GLIBP was investigated on the Corel-1K (Wang) dataset and compared to published works, including the very effective DLEP. Results show significantly higher or comparable performance of GLIBP with regard to the other methods, which qualifies it as a good tool for scene image retrieval.

Keywords: CBIR, Elliptic Region, Global Information, LBP, Local Information, Texture

1. Introduction

The accumulation of multimedia information, and specifically of images, on web servers, due to large amounts of uploads from different sources and for different purposes, has made the development of efficient algorithms for exploring and searching images an important and mandatory task. The first generation of such techniques associated with each image a set of descriptive words. Accordingly, searching for images similar to a query in this paradigm is guided by textual information (keywords). This type of search is referred to by the acronym TBIR (text-based image retrieval). Although this technique is widely used by existing search engines, it suffers from a set of disadvantages. Firstly, annotation, which is mostly performed manually on large databases, is a hard and time-consuming task. Secondly, it is widely admitted that manual annotation is subjective and relative to the human perception of image content and to human feelings, culture, and so on. To overcome these limitations, a new image search paradigm based on the visual content of the images was proposed. This technique is designated in the literature by the acronym CBIR (content-based image retrieval). It has seen extensive research activity since the 1990s [1]. It mainly consists in searching for pertinent images based on visual attributes extracted from the images, essentially color, texture, and shape. The color attribute has been exploited successfully since the early years of CBIR. For instance, it was used by Swain and Ballard [2] through color histograms for indexing large image databases. Despite their simplicity and robustness against affine transformations (such as rotation, translation, and scaling), histograms are a coarse characterization of the image and lead to a loss of location information. For the sake of higher robustness, Stricker and Orengo [3] proposed cumulative color histograms. The use of major features of the distributions is another approach proposed in [3] to increase the robustness of the retrieval. Pass and Zabih [4] proposed the color coherence vector (CCV), a refinement of the color histogram that partially incorporates local information about colors.

From another perspective, texture is a ubiquitous property of objects and images. Indeed, many objects can be distinguished mainly by their texture. However, texture features are generally hard to extract and exploit. There even seems to be a difficulty in coming up with a formal and precise definition of texture [5]. A quite modern and distinguished texture feature extraction method, introduced by Ojala et al. [6], is the local binary pattern (LBP). Due to its strong ties with our approach, the LBP will be described briefly in the next section. Combining the features of several attributes is an alternative followed by some authors for boosting the effectiveness of their retrieval systems. For instance, Murala et al. [7] combined color and texture features extracted by a histogram and Gabor wavelets, respectively. Instead of color histograms, color moments are combined with Gabor wavelets by Singh and Hemachandran [8]. In [9], the authors combined these two properties (color and texture), but used the Haar wavelet for extracting texture features. In [10] and [11], the authors combined all three properties. Another alternative for boosting retrieval system effectiveness is the use of machine learning techniques. For instance, Wan et al. [12] tackled the problem of CBIR with deep learning techniques and obtained encouraging results. In the same context, Xu et al. [13] investigated different deep learning methods for image retrieval with region and global features, and concluded that the former is more effective.

Another interesting descriptor that has proved its effectiveness in indexing large databases is the scale-invariant feature transform (SIFT) [14]. This descriptor is calculated within a square grid (neighborhood) centered at each point of interest; these points of interest are detected through a staged filtering approach. To handle the location information lacking in the SIFT descriptor, Mikolajczyk and Schmid [15] proposed the gradient location-orientation histogram (GLOH), which uses a log-polar grid. Square, log-polar, and other grid forms were compared by Winder and Brown [16]. Among the tested grid forms, one outperformed the others and bears some resemblance to our framework, although it uses circular shapes. In fact, a deeper look reveals many differences between that framework and ours. For instance, in that paper the forms are centered at each interest point, whereas our proposed framework is centered in the middle of the image. Furthermore, we establish a histogram of LBP codes, whereas Winder and Brown [16] are interested in the gradient around the interest points. The designs also differ. The Daisy method [17] is another technique that uses a framework resembling ours, but here too there are many significant differences. For instance, and as mentioned by its authors, the Daisy method is designed for efficient dense computation (i.e., at each pixel location), which is used for dense wide-baseline matching. In contrast, our method centers the framework in the middle of the image and establishes an LBP histogram for each elliptic region, as will be explained below.

The technique proposed in this paper addresses the limitations engendered by global histograms of the LBP. It uses a distinguished design, inspired by the Gabor filter bank, to delimit regions of elliptic form, organized in a circular-concentric manner of increasing size. Compared to circle-shaped forms, the elliptic ones are more flexible in the sense that they allow targeting directionality, since some of the ellipses share the same direction as a given oriented texture. The aim is to compute one local histogram for each such region. The method therefore inherits several advantages. Firstly, it is able to capture both high- and low-level local features (within regions). Secondly, the gradual arrangement of these regions and the selective manner of matching give the method the ability to deal with spatial information (specifically rotation). Thirdly, generating the regions of the same class (scale) by rotation gives the method the ability to take oriented textured regions into account. Indeed, the experiments demonstrate the effectiveness of the proposed method and its outperformance of some highly competitive published methods.

The main contributions of this paper are then as follows:

1. The current work is substantially more mature than the previously published work in [18].

2. An enhanced design, which yields a novel, more effective, and more efficient version (a technical comparison is given in Subsection 3.2).

3. An extensive comparison with additional significant published works. In particular, GLIBP is compared to the quite recent and very effective DLEP method of Murala et al. [19].

The remainder of this paper is organized as follows. In Section 2, we overview some related works, including the original LBP. In Section 3, we describe the proposal: gradual locality integration of binary patterns (GLIBP). The experiments and results are reported and discussed in Section 4. Finally, the paper is concluded in Section 5.

2. Related Works

2.1 Texture-Based Approaches for CBIR

The existing texture methods can be categorized according to the domain in which they were developed: the spatial domain, the frequency domain, or the spatial-frequency domain. For instance, the co-occurrence matrices of Haralick et al. [20] constitute one of the earliest, yet most significant, methods operating in the spatial domain. Each computed matrix represents the frequency of occurrence of pairs of gray levels at a specified direction and distance, as illustrated in the sketch following this paragraph. Although this method was introduced in the 1970s, it is still considered one of the most important methods for local information integration. Other methods operate in the frequency domain, such as the Fourier transform [21,22] and the discrete cosine transform (DCT) [23]. The spatial and frequency approaches each miss the power of the other: frequency methods lack spatial information, and spatial approaches lack frequency information. Wavelets are the basis of another class of methods, which operate in the spatial-frequency domain. Many works in the literature have exploited the power of these methods and investigated the effectiveness of different wavelet types, e.g., the Haar wavelet [24], the Daubechies wavelet [25], and the Gabor wavelet [26].
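As an illustration only, the construction of one co-occurrence matrix can be sketched in C# as follows. This is a minimal sketch under the assumption that pixel values are already quantized to the range [0, levels); the class, method, and parameter names are ours and are not taken from [20].

```csharp
static class GlcmSketch
{
    // Gray-level co-occurrence matrix for one displacement (dr, dc):
    // glcm[g1, g2] counts how often gray level g1 occurs at (r, c)
    // while g2 occurs at the displaced position (r + dr, c + dc).
    static int[,] Glcm(byte[,] img, int levels, int dr, int dc)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        var glcm = new int[levels, levels];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
            {
                int r2 = r + dr, c2 = c + dc;
                // Skip pairs whose displaced pixel falls outside the image.
                if (r2 < 0 || r2 >= rows || c2 < 0 || c2 >= cols) continue;
                glcm[img[r, c], img[r2, c2]]++;   // count the gray-level pair
            }
        return glcm;
    }
}
```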

2.2 Local Binary Patterns

The LBP is a very simple, yet very effective, method for texture characterization. This statistical method was originally introduced by Ojala et al. [6]. Later, it was extended by Ojala et al. [27] to yield the rotation-invariant LBP and other derivatives. The basic version of the LBP uses a neighborhood of 8 pixels (P=8) in a 3×3 block to capture texture information. The central pixel is used as a threshold value (Fig. 1(a)). Neighbors whose values are less than that of the central pixel are set to 0, whereas those greater than or equal to it are set to 1 (Fig. 1(b)). The LBP code of the central pixel is then the sum of the neighbors' thresholded values weighted by powers of 2, as illustrated in Fig. 1(c)–(d).

Fig. 1.
Original LBP: an illustration of the texture information coding at the central pixel (153).
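For concreteness, the coding step just illustrated can be sketched in C# as follows. This is a sketch of the standard 3×3 operator; the neighbor ordering (i.e., which neighbor receives which power of 2) is an illustrative choice, since any fixed ordering yields a valid LBP code.

```csharp
static class LbpSketch
{
    // Basic 3x3 LBP code at an interior pixel (r, c) of a grayscale image.
    // Neighbors >= the central value contribute their weight 2^k to the code.
    static int LbpCode(byte[,] img, int r, int c)
    {
        // Eight neighbors, enumerated clockwise from the top-left corner.
        int[] dr = { -1, -1, -1, 0, 1, 1, 1, 0 };
        int[] dc = { -1, 0, 1, 1, 1, 0, -1, -1 };
        int code = 0;
        for (int k = 0; k < 8; k++)
            if (img[r + dr[k], c + dc[k]] >= img[r, c])
                code |= 1 << k;   // add the weight 2^k of neighbor k
        return code;
    }
}
```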
2.3 Extensions and Variants of LBP

The LBP operator has seen many successful applications in many fields, where numerous enhanced versions have been proposed. These include the improved LBP (ILBP) [28] for face detection. In this method, in addition to the 8 neighbors (3×3 patch) considered by LBP, the central pixel is also considered and is given the largest weight; the mean of all pixels (9 pixels for a 3×3 patch) is taken as the threshold. Local ternary patterns (LTP) [29] is another variant, which associates with each patch a 3-valued code (-1, 0, 1) instead of the binary code of LBP (0, 1). To this end, values within pc±t (where t is some threshold and pc is the central pixel value) are quantized to 0, values above this interval are quantized to 1, and values below it are quantized to -1 (see the sketch at the end of this subsection). Another variant is the local derivative patterns (LDP), used for face recognition [30] and content-based image retrieval [31]. In contrast to LBP, which captures only the non-directional first-order local pattern, LDP tries to capture higher-order derivative information. Recently, the local spatial binary patterns (LSBP) were proposed by Xia et al. [32] for content-based image retrieval. This method encodes the relationship between the reference pixel and its neighbors using gray-level variation patterns (GVP) for each of the four directions considered. After that, the spatial information between these variations and those of the 4 neighbors is computed using the LSBP. Additionally, the magnitudes of these variations are computed using the mean and the standard deviation.
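The LTP quantization rule described above can be written directly; the following is a minimal C# sketch in which the class and method names are illustrative.

```csharp
static class LtpSketch
{
    // Ternary quantization of LTP: values within pc +/- t map to 0,
    // values above the interval to +1, and values below it to -1.
    static int TernaryCode(int neighbor, int pc, int t)
    {
        if (neighbor > pc + t) return 1;
        if (neighbor < pc - t) return -1;
        return 0;
    }
}
```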

2.4 LBP and Local Histograms

Generally, works that use the LBP or one of its derivatives or enhanced versions have used global histograms as descriptors. However, global histograms lack two types of information: local information and spatial information. One alternative for addressing these two shortcomings is the use of segmentation to delimit regions. However, segmentation is itself a problematic and difficult task. Another alternative, followed by some authors, is the division of the image into blocks (sub-images). To the best of our knowledge, the work of Takala et al. [33] is the only one that handles the texture features extracted by the LBP with local histograms. In their interesting work, Takala et al. [33] match image blocks of a given size in order to integrate spatial information.

In the following, we present an enhanced version of LBP, called GLIBP (gradual locality integration of binary patterns). GLIBP is a novel attempt to capture as much local information as possible, in a gradual fashion. Indeed, GLIBP aggregates the texture features extracted by LBP from grayscale images through a complex structure comprised of a multitude of ellipse-shaped regions arranged in circular-concentric forms of increasing size. This complex framework of ellipses is in fact derived from a simple parameterized generator.

The fact that GLIBP calculates histograms within regions delimited by ellipses means that it exploits local information (high and low locality). Moreover, the fact that comparison is allowed only between histograms computed within regions delimited by ellipses of the same scale means that it also exploits some spatial information.

3. Gradual Locality Integration of Binary Patterns

Let I(l,m) be an LBP map. The generation of the series of ellipses used in GLIBP is performed as follows. A pixel p(i,j), i=1..l, j=1..m, belongs to the region delimited by an ellipse e if it satisfies the following inequality:

(1)
[TeX:] $$\frac { \left( i - c _ { x } \right) ^ { 2 } } { a ^ { 2 } } + \frac { \left( j - c _ { y } \right) ^ { 2 } } { b ^ { 2 } } \leq 1$$

where a and b are the semi-major and semi-minor axes of the ellipse e, respectively, and c(cx, cy) is its center.
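In code, the membership test of Eq. (1) reads as follows. This is a C# sketch for the axis-aligned case (rotated ellipses are handled in Subsection 3.1); the names are illustrative.

```csharp
static class EllipseSketch
{
    // Eq. (1): pixel (i, j) belongs to the region delimited by an ellipse
    // with center (cx, cy), semi-major axis a, and semi-minor axis b.
    static bool InEllipse(double i, double j, double cx, double cy,
                          double a, double b)
    {
        double di = i - cx, dj = j - cy;
        return (di * di) / (a * a) + (dj * dj) / (b * b) <= 1.0;
    }
}
```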

Fig. 2.
Ellipses positioning (S=3, N=7).

Let es,n be an ellipse of scale s, where s = 0, ..., S-1, and of orientation n, where n = 0, ..., 2N-1.

The internal ellipses e0,n (Fig. 2) are generated by rotating the ellipse e0,0, first with respect to the image center, then with respect to the ellipse center. The external ellipses es,n (s>0) are created by dilation of the ellipses e0,n. For each region, one normalized histogram is calculated, as shown in Fig. 3. For comparison, each normalized histogram of a region delimited by e's,n in the query image is compared with the histograms of all regions delimited by ellipses of the same scale s, e''s,j, in the target image, where j = 0, ..., 2N-1. Only the minimum distance is kept. The final distance between the query image Q and a database image B is the sum of these minimum distances. The method is further explained in the following.

Fig. 3.
Illustration of elliptic regions on an LBP grayscale image (S=3, N=7). (a) Original image and (b) elliptic regions on the LBP image.
3.1 Ellipses Rotation

To calculate the internal ellipses e0,n, with parameters a0 and b0, we proceed as follows. Firstly, we rotate the center c0(cx0, cy0) of e0,0 with respect to the image center (i0, j0) using the two equations below. Secondly, we determine the pixels belonging to the ellipse with this rotated center using Eq. (1). Finally, the ellipse's pixels are rotated with respect to its center.

(2)
[TeX:] $$c x _ { 0 } ^ { \prime } = \left( c x _ { 0 } - i _ { 0 } \right) * \cos ( \Theta ) + \left( c y _ { 0 } - j _ { 0 } \right) * \sin ( \Theta )$$

(3)
[TeX:] $$c y _ { 0 } ^ { \prime } = - \left( c x _ { 0 } - i _ { 0 } \right) ^ { * } \sin ( \Theta ) + \left( c y _ { 0 } - j _ { 0 } \right) * \cos ( \Theta )$$

(4)
[TeX:] $$\Theta = n ^ { * } \pi / N$$

In these equations, Θ represents the ellipse orientation; the rotated coordinates (cx'0, cy'0) are expressed relative to the image center (i0, j0).
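Eqs. (2)–(4) translate directly into code. The following C# sketch transcribes the equations as written (the returned coordinates are, as in the equations, relative to the image center); the names are illustrative.

```csharp
using System;

static class RotationSketch
{
    // Eqs. (2)-(4): rotate the center (cx0, cy0) of ellipse e0,0 by the
    // angle theta = n * pi / N around the image center (i0, j0).
    static (double X, double Y) RotateCenter(double cx0, double cy0,
                                             double i0, double j0,
                                             int n, int N)
    {
        double theta = n * Math.PI / N;                                          // Eq. (4)
        double x = (cx0 - i0) * Math.Cos(theta) + (cy0 - j0) * Math.Sin(theta);  // Eq. (2)
        double y = -(cx0 - i0) * Math.Sin(theta) + (cy0 - j0) * Math.Cos(theta); // Eq. (3)
        return (x, y);   // coordinates relative to the image center
    }
}
```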

3.2 Scale and Ellipses Chaining

To generate the ellipses of the other scales, which have bigger sizes, we must first calculate their parameters as and bs, where s>0. For this, we use the following equations:

(5)
[TeX:] $$a _ { s } = A ^ { s } * a _ { 0 }$$

(6)
[TeX:] $$b _ { s } = A ^ { s } * b _ { 0 }$$

where

(7)
[TeX:] $$A = U ^ { 1 / ( S - 1 ) }$$

U is an empirical value.

Fig. 4 shows the ellipse chaining; the center cs+1,0 of an ellipse es+1,0 is calculated as follows:

(8)
[TeX:] $$c _ { s + 1,0 } = c _ { s , 0 } + a _ { s + 1 }$$
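Eqs. (5)–(8) can be sketched together as follows. This C# sketch assumes S ≥ 2 (so that Eq. (7) is well defined) and stores the chained centers as scalar offsets along the major-axis direction, which is one possible reading of Eq. (8); the offset of e0,0 being zero is our assumption.

```csharp
using System;

static class ScaleSketch
{
    // Eqs. (5)-(8): semi-axes a[s], b[s] and chained center offsets c[s]
    // for the scales s = 0..S-1, starting from the innermost ellipse e0,0.
    static void GenerateScales(double a0, double b0, double U, int S,
                               out double[] a, out double[] b, out double[] c)
    {
        double A = Math.Pow(U, 1.0 / (S - 1));   // Eq. (7), requires S >= 2
        a = new double[S]; b = new double[S]; c = new double[S];
        a[0] = a0; b[0] = b0; c[0] = 0.0;        // offset of e0,0 (assumed)
        for (int s = 1; s < S; s++)
        {
            a[s] = Math.Pow(A, s) * a0;          // Eq. (5)
            b[s] = Math.Pow(A, s) * b0;          // Eq. (6)
            c[s] = c[s - 1] + a[s];              // Eq. (8)
        }
    }
}
```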

The proposed method presents the following differences with respect to ElLBP [18]:

1. The ellipses in GLIBP are chained through their centers rather than through the linear eccentricity used in ElLBP (i.e., greater simplicity). Therefore, Eq. (8) in this paper replaces the fifth equation of ElLBP.

2. In ElLBP, a simple histogram is established, whereas in the current work we use normalized histograms, which are more stable against rounding.

3. The internal regions are bigger than those of ElLBP, and thus better capture the texture of the local regions.

Fig. 4.
Position of ellipses.
3.3 The Comparison Phase

The distance D between two images, a query Q and a target image B, is calculated as follows:

(9)
[TeX:] $$D ( Q , B ) = \sum _ { s = 0 } ^ { S - 1 } \sum _ { n = 0 } ^ { 2 N - 1 } \min _ { k } d \left( H q _ { s , n } , H b _ { s , k } \right)$$

where Hqs,n (respectively, Hbs,k) is the normalized histogram of the region delimited by the ellipse es,n (respectively, es,k) in image Q (respectively, B), and d is a distance metric. In this work, we used the Manhattan and d1 distances, defined respectively by the following equations:

(10)
[TeX:] $$d \left( H q _ { s , n } , H b _ { s , k } \right) = \sum _ { i } \left| H q _ { s , n } [ i ] - H b _ { s , k } [ i ] \right|$$

(11)
[TeX:] $$d \left( H q _ { s , n } , H b _ { s , k } \right) = \sum _ { i } \left| \frac { H q _ { s , n } [ i ] - H b _ { s , k } [ i ] } { 1 + H q _ { s , n } [ i ] + H b _ { s , k } [ i ] } \right|$$
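A C# sketch of Eqs. (9)–(11) is given below. Histograms are stored as jagged arrays indexed by scale and orientation; the names and the storage layout are illustrative choices, and the matching loop uses the d1 distance (the Manhattan distance can be substituted directly).

```csharp
using System;

static class MatchingSketch
{
    // Eq. (10): Manhattan (L1) distance between two normalized histograms.
    static double Manhattan(double[] hq, double[] hb)
    {
        double d = 0.0;
        for (int i = 0; i < hq.Length; i++)
            d += Math.Abs(hq[i] - hb[i]);
        return d;
    }

    // Eq. (11): d1 distance between two normalized histograms.
    static double D1(double[] hq, double[] hb)
    {
        double d = 0.0;
        for (int i = 0; i < hq.Length; i++)
            d += Math.Abs((hq[i] - hb[i]) / (1.0 + hq[i] + hb[i]));
        return d;
    }

    // Eq. (9): each query histogram Hq[s][n] is compared with all target
    // histograms Hb[s][k] of the same scale s; only the minimum distance
    // is kept, and the minima are summed over all scales and orientations.
    static double Distance(double[][][] Hq, double[][][] Hb, int S, int N)
    {
        double D = 0.0;
        for (int s = 0; s < S; s++)
            for (int n = 0; n < 2 * N; n++)
            {
                double best = double.MaxValue;
                for (int k = 0; k < 2 * N; k++)
                    best = Math.Min(best, D1(Hq[s][n], Hb[s][k]));
                D += best;
            }
        return D;
    }
}
```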

The different steps of our method are illustrated by the descriptive flowchart of Fig. 5.

Fig. 6 shows an illustrative comparison between two images from the Corel dataset (subset, http://vision.stanford.edu/resources_links.html). The region delimited by the red ellipse in the query image is matched with all regions of its class (the regions delimited by the green ellipses) in the target image. A distance is calculated for each comparison, as shown in the figure (note that the distances indicated are actual distances). Finally, the smallest distance is kept; the red distance, corresponding to the region delimited by the red ellipse in the target image, is the smallest among all the distances (only some of the distances are displayed). The procedure is repeated for all regions delimited by the green ellipses and by the brown ellipses in the query image, each being compared with the regions of the same class in the target image (i.e., the brown-ellipse regions of the query with the brown-ellipse regions of the target).

Fig. 5.
Descriptive flowchart of the general algorithm.
Fig. 6.
An illustrative comparison with (a, b, U, N, S) = (50, 25, 1.4, 10, 2).

4. Experimentation

To test our method (GLIBP), we used the Corel-1K (Wang) database [34]. This dataset is composed of 1,000 images classified into 10 categories: Africans, beaches, buildings, buses, dinosaurs, elephants, flowers, horses, mountains, and food. All the images are of size 256×384 or 384×256. One sample from each class is shown in Fig. 7.

Fig. 7.
Some samples from the Corel database, one per class.

For the evaluation phase, we used visual inspection of individual queries and the precision criterion defined by:

(12)
[TeX:] $$p ( i , w ) = \frac { \text { number of relevant images retrieved } } { \text { number of images retrieved } }$$

Note that the number of retrieved images depends on the window size (w).

The average precision (AP) over all the images of a class I containing m images is defined as:

(13)
[TeX:] $$A P ( I , k ) = \frac { 1 } { m } \sum _ { i = 1 } ^ { m } p ( i , k )$$

The average of the precision values of a query i over the first w retrieved images is defined by Eq. (14); this measure is also called the "weighted precision" [35].

(14)
[TeX:] $$W P ( i , w ) = \frac { 1 } { w } \sum _ { k = 1 } ^ { w } p ( i , k )$$

Therefore, the average weighted precision (AWP) is:

(15)
[TeX:] $$A W P ( I , w ) = \frac { 1 } { w } \sum _ { k = 1 } ^ { w } A P ( I , k )$$
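Assuming the precision values p(i, k) of Eq. (12) are available for every window size k, Eqs. (14) and (15) reduce to simple averages, as in the following C# sketch (the names are illustrative; arrays are 1-indexed here, so index 0 is unused).

```csharp
static class PrecisionSketch
{
    // Eq. (14): weighted precision of a query over the first w retrieved
    // images; p[k] holds the precision p(i, k) for window size k = 1..w.
    static double WeightedPrecision(double[] p, int w)
    {
        double sum = 0.0;
        for (int k = 1; k <= w; k++) sum += p[k];
        return sum / w;
    }

    // Eq. (15): average weighted precision of a class; ap[k] holds the
    // class-average precision AP(I, k) of Eq. (13) for window size k = 1..w.
    static double Awp(double[] ap, int w)
    {
        double sum = 0.0;
        for (int k = 1; k <= w; k++) sum += ap[k];
        return sum / w;
    }
}
```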

We implemented the proposed method in the C# programming language. For the LBP, we used the function available at http://www.cse.oulu.fi/CMV/Downloads/LBPMatlab.

We performed the experiments on a personal laptop with an Intel Core 2 Duo at 2 GHz and 3 GB of RAM, running the Windows Vista SP1 operating system.

We conducted two experiments. In the first, we fixed the parameters (a, b, U, S) to (50, 25, 1.4, 2), then to (30, 15, 2, 3), with different values of N for each, using the Manhattan and d1 distance metrics (Tables 1–3). In the second, we investigated the contribution of the regions delimited by the ellipses of each scale separately (Table 4).

Table 1.
The results of the GLIBP with the parameters (a, b, U, S) = (50, 25, 1.4, 2) and different values of N, using the Manhattan distance metric, in terms of average weighted precision (w=20)
Table 2.
The results of the GLIBP with the parameters (a, b, U, S) = (50, 25, 1.4, 2) and different values of N, using the d1 distance metric, in terms of average weighted precision (w=20)
Table 3.
The results of the GLIBP with the parameters (a, b, U, S) = (30, 15, 2, 3) and different values of N, using the Manhattan and d1 distance metrics, in terms of average weighted precision (w=20)
Table 4.
The contribution of the regions delimited by ellipses of each scale separately, using the d1 distance metric, in terms of average weighted precision (w=20)
4.1 Results and Discussion

Improvements in terms of AWP are observed in Table 1 (+0.13%, +0.22%, and +0.22% with N equal to 9, 10, and 11, respectively) compared to N=8. For the parameter set (a, b, U, S) = (30, 15, 2, 3) in Table 3, we obtained improvements of +0.15% and +0.32% with N equal to 8 and 9, respectively, compared to N=7, using the results obtained with the d1 metric. Analysis of the results on a per-class basis shows that the classes "Dinosaurs," "Buses," "Horses," and "Flowers" registered an AWP over 90%. On the other hand, the classes "Africans," "Mountains," and "Food" registered an AWP lower than 65%. The lowest AWP, about 48%, was registered by the class "Mountains."

Table 4 shows the contribution of the regions delimited by the ellipses of each scale separately. For instance, when using the regions delimited by ellipses of scales s=0 and s=1 for the set (a, b, U, N, S) = (30, 15, 2, 9, 3), we can observe classes that maintain high precision (over 90%), namely "Buses" and "Dinosaurs." These classes share one property: the existence of one central object (bus or dinosaur). However, they are distinguished by their backgrounds: while all the images of the class "Dinosaurs" have the same simple background, the images of the class "Buses" have different backgrounds. This explains why "Dinosaurs" keeps a high precision while the "Buses" precision drops to 91% when using only the regions delimited by ellipses of scale s=2. An example illustrating this point is given in Fig. 8. In that figure, we see that 11 of the 12 retrieved images are relevant when using the scales s=0 and s=1. As expected, this precision drops to 7/12 when using only the scale s=2; however, one can note that the retrieved images share a common characteristic, namely the background (grass).

Fig. 8.
The impact on retrieval of using some scales separately, with the parameters (a, b, U, N, S) = (30, 15, 2, 9, 3). (a) Using only the scales s=0 and s=1 and (b) using only the scale s=2.

Note also the significant precision improvement (+12%) of "Horses" when using s=2, because all the images share a common background (grass). The other classes generally showed high (or comparable) precision when using only s=2 compared to using s=0 and s=1. This can be explained by the size of the external regions, which are bigger than the internal ones and accordingly capture more information. The same remarks hold for the set (a, b, U, N, S) = (50, 25, 1.4, 10, 2), but with different precision values.

The improvements obtained by adding s=2 and s=1 for the sets (30, 15, 2, 9, 3) and (50, 25, 1.4, 10, 2), compared to using (s=0 and s=1 only) and (s=0 only), respectively, are not negligible for the classes "Africans" and "Food." This can be interpreted by two reasons. Firstly, the images of these classes do not contain a specific central object, unlike the images of the buses class. Secondly, the images of these two classes do not contain different regions with different textures, unlike the beach images (sand and water). Furthermore, these two reasons clarify why the LBP, which uses global histograms, gives better or comparable AWP compared to GLIBP (Table 5) for these two classes only.

The effectiveness of our method is clearly shown in Table 5, where it is compared, in terms of AWP, with the ElLBP of Bougueroua and Boucheham [18], the LBP of Ojala et al. [6], and the improved LBP of Jin et al. [28].

In Tables 6 and 7, we compare the proposal with ElLBP [18], DLEP [19], the DCT-based method (with different filters) [23], LDP16,2 [31], and the block-based (BLK-based) method [33], as reported by Murala et al. [19]. These comparisons also clearly show the effectiveness of our method in terms of average precision (AP). As an exception, the class "Africans" obtains a better result with LBP than with our method. This can be explained by the fact that this class consists of images of people, in which there is no variety of textured regions, so global histograms are more suitable than local ones.

Screenshots of the results obtained using the proposed GLIBP method and the LBP method are shown in Figs. 9–12.

Table 5.
Comparison of the proposed method (a, b, U, N, S) = (50, 25, 1.4, 10, 2) with the results of some existing methods, using the Manhattan distance metric, in terms of average weighted precision
Table 6.
Comparison of the proposed method (a, b, U, N, S) = (50, 25, 1.4, 10, 2) with the results of some existing methods in terms of average precision (w=9)
4.2 GLIBP Efficiency vs. ElLBP [18]

To demonstrate the higher efficiency of the proposed GLIBP method with respect to ElLBP [18], we take the number of regions as a metric. The best average precision obtained by ElLBP in [18] was for the parameter set (a, b, U, N, S) = (50, 25, 20, 10, 4), so the number of histograms used is 2×N×S = 2×10×4 = 80. The proposed GLIBP method, however, reached its best average precision with the set (a, b, U, N, S) = (50, 25, 1.4, 10, 2), i.e., with only 40 histograms. In other words, GLIBP used only half the number of histograms used by ElLBP. Furthermore, the AWP of the proposal is better than that of ElLBP (GLIBP 76.36% vs. ElLBP 74.64%).

Table 7.
Comparison of the proposed method (a, b, U, N, S) = (50, 25, 1.4, 10, 2) with the results of some existing methods in terms of average precision
Fig. 9.
Results obtained by querying with image #127 (Class: Beaches). (a) GLIBP (precision=12/12) and (b) LBP (precision=9/12).
Fig. 10.
Results obtained by querying with image #232 (Class: Building). (a) GLIBP (precision=12/12) and (b) LBP (precision=7/12).
Fig. 11.
Results obtained by querying with image #548 (Class: Elephants). (a) GLIBP (precision=9/12) and (b) LBP (precision=4/12).
Fig. 12.
Results obtained by querying with image #787 (Class: Horses). (a) GLIBP (precision=12/12) and (b) LBP (precision=9/12).

5. Conclusions

In this work, we have proposed an improved version of LBP. The improvement is achieved by better exploiting the locality property, spatial information, and the orientation of regions in an original fashion. The effectiveness of the proposal is clearly shown through the precisions obtained on the Corel dataset using the two distance metrics, Manhattan and d1. The results obtained in terms of average weighted precision and average precision show significantly higher or comparable performance of GLIBP with regard to some published methods, which qualifies it as a good tool for scene image retrieval.

For future work, we will further investigate the contribution of the regions of each scale separately. This should eventually improve the effectiveness of the proposed method through an appropriate weighting scheme over the different scales. In addition, we plan to generalize the presented approach to other histogram-based methods for texture and color feature extraction. Advanced tools such as machine learning and deep learning are also interesting enough to be considered in the future.

Biography

Salah Bougueroua
https://orcid.org/0000-0003-0921-2037

He received his Bachelor's and Master's degrees in Computer Science from the University of Skikda, Algeria, in 2010 and 2012, respectively, and his Ph.D. degree in Computer Science from the same university in 2017. He is currently an Assistant Professor at the Department of Computer Science, University of Skikda, Algeria. His current research interests include information processing and retrieval, image processing and retrieval, texture analysis, feature extraction, and data reduction.

Biography

Bachir Boucheham
https://orcid.org/0000-0001-7286-659X

He received the Doctor of Science (Ph.D.) degree and the HDR (post-doctoral degree for research supervision) in 2005 and 2009, respectively, both in Computer Science from Mentouri University of Constantine, Algeria. He is a full professor of Computer Science at the Department of Informatics, University of Skikda, Algeria, and a member of the LRES research laboratory at the same university. His main areas of interest and expertise include pattern recognition, computer vision, image processing and retrieval, time series and signal processing and retrieval, and data reduction and compression. Prof. Boucheham has authored/co-authored several papers in high-impact international journals and conferences.

References

  • 1 A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349-1380, 2000. doi: 10.1109/34.895972
  • 2 M. J. Swain and D. H. Ballard, "Color indexing," International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, 1991. doi: 10.1007/BF00130487
  • 3 M. A. Stricker and M. Orengo, "Similarity of color images," in Proceedings of SPIE 2420: Storage and Retrieval for Image and Video Databases III, Bellingham, WA: International Society for Optics and Photonics, 1995, pp. 381-393. doi: 10.1117/12.205308
  • 4 G. Pass and R. Zabih, "Histogram refinement for content-based image retrieval," in Proceedings of the 3rd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, 1996, pp. 96-100. doi: 10.1109/ACV.1996.572008
  • 5 R. M. Haralick, "Statistical and structural approaches to texture," Proceedings of the IEEE, vol. 67, no. 5, pp. 786-804, 1979. doi: 10.1109/proc.1979.11328
  • 6 T. Ojala, M. Pietikainen, and D. Harwood, "A comparative study of texture measures with classification based on feature distributions," Pattern Recognition, vol. 29, no. 1, pp. 51-59, 1996. doi: 10.1016/0031-3203(95)00067-4
  • 7 S. Murala, A. B. Gonde, and R. P. Maheshwari, "Color and texture features for image indexing and retrieval," in Proceedings of the IEEE International Advance Computing Conference, Patiala, India, 2009, pp. 1411-1416. doi: 10.1109/iadcc.2009.4809223
  • 8 S. M. Singh and K. Hemachandran, "Content-based image retrieval using color moment and Gabor texture feature," International Journal of Computer Science Issues, vol. 9, no. 5, pp. 299-309, 2012.
  • 9 M. Singha and K. Hemachandran, "Content based image retrieval using color and texture," Signal & Image Processing: An International Journal, vol. 3, no. 1, pp. 39-57, 2012.
  • 10 X. Y. Wang, Y. J. Yu, and H. Y. Yang, "An effective image retrieval scheme using color, texture and shape features," Computer Standards & Interfaces, vol. 33, no. 1, pp. 59-68, 2011. doi: 10.1016/j.csi.2010.03.004
  • 11 M. V. Lande, P. Bhanodiya, and P. Jain, "An effective content-based image retrieval using color, texture and shape feature," in Intelligent Computing, Networking, and Informatics. New Delhi, India: Springer, 2014, pp. 1163-1170. doi: 10.1007/978-81-322-1665-0_119
  • 12 J. Wan, D. Wang, S. C. H. Hoi, P. Wu, J. Zhu, Y. Zhang, and J. Li, "Deep learning for content-based image retrieval: a comprehensive study," in Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, 2014, pp. 157-166. doi: 10.1145/2647868.2654948
  • 13 Q. Xu, S. Jiang, W. Huang, F. Ye, and S. Xu, "Feature fusion based image retrieval using deep learning," Journal of Information & Computational Science, vol. 12, no. 6, pp. 2361-2373, 2015. doi: 10.12733/jics20105681
  • 14 D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 1999, pp. 1150-1157. doi: 10.1109/iccv.1999.790410
  • 15 K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005. doi: 10.1109/TPAMI.2005.188
  • 16 S. A. J. Winder and M. Brown, "Learning local image descriptors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, 2007, pp. 1-8. doi: 10.1109/cvpr.2007.382971
  • 17 E. Tola, V. Lepetit, and P. Fua, "DAISY: an efficient dense descriptor applied to wide-baseline stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 815-830, 2010. doi: 10.1109/TPAMI.2009.77
  • 18 S. Bougueroua and B. Boucheham, "Ellipse based local binary pattern for color image retrieval," in Proceedings of the 4th International Symposium on ISKO-Maghreb: Concepts and Tools for Knowledge Management, Algiers, Algeria, 2014, pp. 1-8. doi: 10.1109/isko-maghreb.2014.7033459
  • 19 S. Murala, R. P. Maheshwari, and R. Balasubramanian, "Directional local extrema patterns: a new descriptor for content based image retrieval," International Journal of Multimedia Information Retrieval, vol. 1, no. 3, pp. 191-203, 2012. doi: 10.1007/s13735-012-0008-2
  • 20 R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, no. 6, pp. 610-621, 1973. doi: 10.1109/TSMC.1973.4309314
  • 21 F. Zhou, J. F. Feng, and Q. Y. Shi, "Texture feature based on local Fourier transform," in Proceedings of the International Conference on Image Processing, Thessaloniki, Greece, 2001, pp. 610-613. doi: 10.1109/icip.2001.958567
  • 22 H. B. Kekre and D. Mishra, "CBIR using upper six FFT sectors of color images for feature vector generation," International Journal of Engineering and Technology, vol. 2, no. 2, pp. 49-54, 2010.
  • 23 F. Malik and B. Baharudin, "The statistical quantized histogram texture features analysis for image retrieval based on median and Laplacian filters in the DCT domain," The International Arab Journal of Information Technology, vol. 10, no. 6, pp. 1-9, 2013.
  • 24 H. B. Kekre and D. Mishra, "Content based image retrieval using full Haar sectorization," International Journal of Image Processing, vol. 5, no. 1, pp. 1-12, 2011.
  • 25 J. Z. Wang, G. Wiederhold, O. Firschein, and S. X. Wei, "Content-based image indexing and searching using Daubechies' wavelets," International Journal on Digital Libraries, vol. 1, no. 4, pp. 311-328, 1998. doi: 10.1007/s007990050026
  • 26 B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 837-842, 1996. doi: 10.1109/34.531803
  • 27 T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002. doi: 10.1109/TPAMI.2002.1017623
  • 28 H. Jin, Q. Liu, H. Lu, and X. Tong, "Face detection using improved LBP under Bayesian framework," in Proceedings of the 3rd International Conference on Image and Graphics, Hong Kong, China, 2004, pp. 306-309. doi: 10.1109/icig.2004.62
  • 29 X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," in Analysis and Modeling of Faces and Gestures. Heidelberg: Springer, 2007, pp. 168-182. doi: 10.1109/TIP.2010.2042645
  • 30 B. Zhang, Y. Gao, S. Zhao, and J. Liu, "Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor," IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 533-544, 2010. doi: 10.1109/TIP.2009.2035882
  • 31 P. V. N. Reddy and K. S. Prasad, "Content based image retrieval using local derivative patterns," Journal of Theoretical and Applied Information Technology, vol. 28, no. 2, pp. 95-103, 2011.
  • 32 Y. Xia, S. Wan, and L. Yue, "Local spatial binary pattern: a new feature descriptor for content-based image retrieval," in Proceedings of the 5th International Conference on Graphic and Image Processing, Hong Kong, China, 2013. doi: 10.1117/12.2049916
  • 33 V. Takala, T. Ahonen, and M. Pietikainen, "Block-based methods for image retrieval using local binary patterns," in Image Analysis. Heidelberg: Springer, 2005, pp. 882-891. doi: 10.1007/11499145_89
  • 34 J. Z. Wang (Online). Available: http://wang.ist.psu.edu/docs/related/
  • 35 J. Z. Wang, J. Li, and G. Wiederhold, "SIMPLIcity: semantics-sensitive integrated matching for picture libraries," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 9, pp. 947-963, 2001. doi: 10.1109/34.955109

Table 1.

The results of the GLIBP with the parameters (a, b, U, S) = (50, 25, 1.4, 2) and different values of N, using the Manhattan distance metric, in terms of average weighted precision (w=20)
N=8 N=9 N=10 N=11
Africans 62.66 63.25 63.50 63.71
Beaches 68.69 68.58 69.02 68.63
Buildings 70.51 70.60 70.45 70.63
Buses 97.12 97.14 97.30 97.07
Dinosaurs 99.46 99.48 99.50 99.44
Elephants 73.43 73.18 73.40 74.10
Flowers 90.25 90.44 90.40 90.50
Horses 90.63 91.02 91.00 90.36
Mountains 47.46 47.50 47.12 47.08
Food 61.21 61.53 61.92 62.12
Average 76.14 76.27 76.36 76.36

Table 2.

The results of the GLIBP with the parameters (a, b, U, S) = (50, 25, 1.4, 2) and different values of N, using the d1 distance metric, in terms of average weighted precision (w=20)
N=8 N=9 N=10 N=11
Africans 63.18 63.62 63.93 64.17
Beaches 69.15 69.16 69.67 69.25
Buildings 71.44 71.71 71.49 71.64
Buses 97.35 97.33 97.46 97.35
Dinosaurs 99.51 99.54 99.48 99.48
Elephants 73.89 73.49 74.00 74.52
Flowers 90.47 90.79 90.68 90.70
Horses 90.72 91.13 91.06 90.50
Mountains 47.71 47.97 47.49 47.81
Food 61.63 62.02 62.38 62.59
Average 76.50 76.67 76.76 76.80

Table 3.

The results of the GLIBP with the parameters (a, b, U, S) = (30, 15, 2, 3) and different values of N, using the Manhattan and d1 distance metrics, in terms of average weighted precision (w=20)
a=30, b=15, U=2, N=7, S=3 a=30, b=15, U=2, N=8, S=3 a=30, b=15, U=2, N=9, S=3
Manhattan d1 Manhattan d1 Manhattan d1
Africans 59.77 60.15 59.90 60.34 60.63 60.99
Beaches 66.49 67.30 67.58 68.64 67.26 68.09
Buildings 69.69 70.66 70.32 71.43 70.57 71.47
Buses 96.72 96.98 96.57 96.86 96.96 97.19
Dinosaurs 99.54 99.59 99.61 99.64 99.67 99.70
Elephants 70.38 70.94 69.63 70.21 70.31 70.66
Flowers 88.54 88.91 88.95 89.44 89.05 89.38
Horses 90.81 90.88 90.38 90.53 90.78 90.78
Mountains 48.04 48.29 48.34 48.50 48.06 47.99
Food 58.03 58.35 57.78 57.89 58.66 58.93
Average 74.80 75.20 74.90 75.35 75.19 75.52

Table 4.

The contribution of the regions delimited by ellipses of each scale separately, using the d1 distance metric, in terms of average weighted precision (w=20)
a=30, b=15, U=2, N=9, S=3 a=50, b=25, U=1.4, N=10, S=2
(s=0 & s=1) only (s=2) only All (s=0,1,2) (s=0) only (s=1) only All (s=0,1)
Africans 50.22 59.62 60.99 50.84 62.72 63.93
Beaches 65.58 59.79 68.09 65.56 63.54 69.67
Buildings 67.97 66.97 71.47 64.71 69.83 71.49
Buses 97.29 91.44 97.19 96.87 94.87 97.46
Dinosaurs 98.25 99.36 99.70 96.41 99.68 99.48
Elephants 62.33 68.07 70.66 63.70 70.02 74.00
Flowers 76.56 92.70 89.38 75.11 92.92 90.68
Horses 81.17 89.41 90.78 79.82 89.92 91.06
Mountains 43.93 44.70 47.99 43.83 44.03 47.49
Food 50.64 62.08 58.93 48.39 66.47 62.38
Average 69.39 73.41 75.52 68.52 75.40 76.76

Table 5.

Comparison of the proposed method (a, b, U, N, S) = (50, 25, 1.4, 10, 2) with the results of some existing methods, using the Manhattan distance metric, in terms of average weighted precision
(w=10) (w=20)
GLIBP (proposed) ElLBP [18] LBP [6] ILBP [28] GLIBP (proposed) ElLBP [18] LBP [6] ILBP [28]
Africans 71.40 66.61 75.19 73.44 63.50 59.24 67.17 66.43
Beaches 76.67 74.40 67.01 68.63 69.02 66.62 58.75 61.46
Buildings 77.69 76.47 68.57 70.58 70.45 69.17 59.23 61.33
Buses 97.89 97.37 96.85 97.77 97.30 96.05 95.19 96.95
Dinosaurs 99.56 99.88 98.36 98.16 99.50 99.64 97.71 97.53
Elephants 82.33 78.28 63.16 74.01 73.40 68.72 51.51 63.00
Flowers 94.06 92.88 92.15 93.94 90.40 88.89 89.12 90.62
Horses 94.78 94.46 87.24 90.29 91.00 90.34 79.25 84.31
Mountains 56.66 51.62 52.61 53.66 47.12 43.26 43.86 44.70
Food 70.24 72.49 68.79 71.05 61.92 64.45 60.04 62.44
Average 82.13 80.45 76.99 79.15 76.36 74.64 70.18 72.88

Table 6.

Comparison of the proposed method (a, b, U, N, S) = (50, 25, 1.4, 10, 2) with the results of some existing methods in terms of average precision (w=9)

GLIBP (proposed) ElLBP [18] DCT: median filter [23] DCT: median with edge extraction [23] DCT: Laplacian filter [23]
Africans 62.1 56.6 100 100 100
Beaches 68.3 65 67 78 80
Buildings 70.4 67.3 56 56 57
Buses 97.7 96 79 89 82
Dinosaurs 99.4 99.6 100 100 100
Elephants 74 67.7 51 57 57
Flowers 90.4 89.3 61 62 78
Horses 91.6 90.3 80 89 92
Mountains 43.9 39.1 39 34 42
Food 60.9 62.3 31 30 51
Average 75.87 73.32 66.4 69.5 73.9

Table 7.

Comparison of the proposed method (a, b, U, N, S) = (50, 25, 1.4, 10, 2) with the results of some existing methods in terms of average precision
(w=10) (w=20)
GLIBP (proposed) ElLBP [18] DLEP [19] BLK-based [33] LBP [6] ILBP [28] LDP16,2 [31] GLIBP (proposed) ElLBP [18] LBP [6] ILBP [28]
Africans 60.9 56.1 69.3 53.7 63.7 63.1 66 52.6 49.5 56.2 57
Beaches 66.6 64.2 60.5 52.8 55.2 58.2 58 59.5 55.7 47.3 51.3
Buildings 69.7 66.2 72.0 64.3 55.9 58.4 66 61.1 58 45.4 48.6
Buses 97.3 95.8 97.9 89.8 95 96.9 95.4 96.5 93.5 92.1 95.5
Dinosaurs 99.4 99.4 98.5 99.6 97.8 97.1 96.6 99.5 99.4 96.4 96.6
Elephants 72.4 65.5 55.9 58.4 46.4 59.9 60 60.6 54.9 36.3 47.2
Flowers 89.6 88.1 91.9 91.9 87.9 89.9 89.1 84.5 82.9 84.2 85.9
Horses 91.1 89.6 76.9 83.8 78.9 82.9 75.8 84.2 83.3 66.5 73.8
Mountains 42.4 38.2 42.7 44.5 38.8 39 44.3 35.2 32.3 32.6 33.4
Food 59.6 60.5 82 62.2 56.3 59.5 76.9 51.5 54 47.8 50.5
Average 74.9 72.36 74.8 70.1 67.59 70.49 72.81 68.52 66.35 60.48 63.98