1. Introduction
Zadeh [1] proposed the fuzzy set (FS) to handle uncertain information, and it has since been successfully used in different areas [2-7]. Atanassov [8] extended the FS to the intuitionistic fuzzy set (IFS), which is very effective in dealing with the vagueness of information. The IFS can express the fuzziness and uncertainty of practical situations: each element has a membership degree, a non-membership degree, and a hesitation degree, all expressed by exact numbers whose sum is 1. In general, however, these degrees may not be exact numbers but interval values, so the theory of the IFS was extended to the interval-valued IFS [9]. With the development of the theory of the FS and its extensions, many uncertainties in different real-life problems have been handled. However, there are still many phenomena that cannot be dealt with by the FS and its extensions. For example, when we ask an expert about a complex statement, he or she may not give an exact answer and may instead say that the possibility that the statement is true is 0.6, the possibility that it is false is 0.5, and the degree to which he or she is not sure is 0.3. Such a situation cannot be expressed by the FS or the IFS, and a new kind of set is needed. The neutrosophic set (NS) was first proposed by Smarandache [10,11] from a philosophical point of view. An NS is a set in which each element has a truth-membership degree, an indeterminacy-membership degree, and a falsity-membership degree.
As is well known, it is difficult to apply the NS directly in real science and engineering fields. Wang et al. [12,13] therefore proposed the single valued neutrosophic set (SVNS) and the interval valued neutrosophic set (IVNS). The SVNS and the IVNS are subclasses of the simplified neutrosophic set (SNS), which was proposed by Ye [14], who presented the operators and relations of these sets and gave their properties. Subsequently, some new results on SNSs appeared [15,16]. A similarity measure is used to estimate the degree of similarity between two sets. Similarity measures have been applied to various areas, such as personnel assessment, ecology, medical diagnosis, psychology, and clustering analysis. Broumi and Smarandache [17] defined several similarity measures of neutrosophic sets based on the Hausdorff distance. Majumdar and Samanta [18] presented several similarity measures of SVNSs based on distance measures. Ye [14,19-22] also presented similarity measures of IVNSs and SVNSs and applied them to multi-attribute decision-making problems. Based on the tangent function, Pramanik and Mondal [23] proposed a weighted fuzzy similarity measure and applied it to medical diagnosis. On the basis of the existing similarity measures, we propose cotangent similarity measures of SNSs and their weighted versions. We then apply the proposed similarity measures to pattern recognition in the SVNS framework and to multi-criteria decision-making in the IVNS framework. The results show that the proposed similarity measure methods are effective and reasonable.
The remainder of the paper is organized as follows: Section 2 gives some preliminaries. Section 3 presents new cotangent similarity measures of SVNSs. Section 4 presents new cotangent similarity measures of IVNSs. Section 5 gives applications of the proposed similarity measures to pattern recognition and multi-criteria decision-making. Section 6 concludes the paper.
2. Preliminaries
2.1 Neutrosophic Set (NS)
Suppose X is a universe of discourse with generic element x. A neutrosophic set A in X is characterized by a truth-membership function TA(x), an indeterminacy-membership function IA(x), and a falsity-membership function FA(x). The functions TA(x), IA(x), FA(x) on X are real standard or nonstandard subsets of ]-0,1+[, i.e., [TeX:] $$T _ { A } ( x ) : X \rightarrow ] ^ { - } 0,1 ^ { + } [ , I _ { A } ( x ) : X \rightarrow ] ^ { - } 0,1 ^ { + } [ , F _ { A } ( x ) : X \rightarrow ] ^ { - } 0,1 ^ { + } [$$,
where ]-0,1+[ is the nonstandard unit interval, an extension of the standard interval [0,1], with [TeX:] $$^ { - } 0 = 0 - \varepsilon , 1 ^ { + } = 1 + \varepsilon , \varepsilon > 0$$. The sum of [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ is unrestricted, that is, [TeX:] $$^ { - } 0 \leq \sup T _ { A } ( x ) + \sup I _ { A } ( x ) + \sup F _ { A } ( x ) \leq 3 ^ { + }$$.
Since it is difficult to apply NSs to practical problems, Ye [20] reduced NSs from nonstandard intervals to standard intervals while retaining the operations of NSs. If the functions [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ are singleton subintervals/subsets of the real standard interval [0,1], then a simplification of the NS A is denoted by [TeX:] $$A = \{ \langle x , T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x ) \rangle | x \in X \}$$.
This simplified set is a subclass of the NS and includes the SVNS and the IVNS.
On the one hand, if the membership functions [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ in an SNS A are exact numbers in the real unit interval [0,1], then their sum satisfies the inequality [TeX:] $$0 \leq T _ { A } ( x ) + I _ { A } ( x ) + F _ { A } ( x ) \leq 3$$ for every x in X. In this case, the SNS reduces to an SVNS. Let A and B be two SVNSs; A is contained in B, denoted by A ⊆ B, if and only if [TeX:] $$T _ { A } ( x ) \leq T _ { B } ( x ) , I _ { A } ( x ) \geq I _ { B } ( x ) , F _ { A } ( x ) \geq F _ { B } ( x )$$
for every x in X .
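For illustration only (this sketch is not part of the original formulation), an SVNS over a finite universe can be stored as a list of (T, I, F) triples and the containment condition above can then be checked pointwise; the Python sketch below assumes this simple representation.

from typing import List, Tuple

# An SVNS over a finite universe X, stored as one (T, I, F) triple per element of X;
# all values lie in the real unit interval [0, 1].
SVNS = List[Tuple[float, float, float]]

def svns_contains(a: SVNS, b: SVNS) -> bool:
    """Check A ⊆ B: T_A(x) <= T_B(x), I_A(x) >= I_B(x), F_A(x) >= F_B(x) for every x."""
    return all(ta <= tb and ia >= ib and fa >= fb
               for (ta, ia, fa), (tb, ib, fb) in zip(a, b))

# Hypothetical two-element example in which A is contained in B.
A: SVNS = [(0.3, 0.4, 0.5), (0.5, 0.3, 0.4)]
B: SVNS = [(0.6, 0.2, 0.3), (0.7, 0.1, 0.2)]
print(svns_contains(A, B))  # True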
On the other hand, if the membership functions [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ in an SNS A are subintervals of the real unit interval [0,1], then [TeX:] $$0 \leq \sup T _ { A } ( x ) + \sup I _ { A } ( x ) + \sup F _ { A } ( x ) \leq 3$$
for any point x in X. In this case, the SNS reduces to the IVNS.
For two IVNSs A and B, A is contained in B, denoted by A ⊆ B, if and only if [TeX:] $$\inf T _ { A } ( x ) \leq \inf T _ { B } ( x ) , \sup T _ { A } ( x ) \leq \sup T _ { B } ( x ) , \inf I _ { A } ( x ) \geq \inf I _ { B } ( x ) , \sup I _ { A } ( x ) \geq \sup I _ { B } ( x ) , \inf F _ { A } ( x ) \geq \inf F _ { B } ( x ) , \sup F _ { A } ( x ) \geq \sup F _ { B } ( x )$$
for every x in X.
2.2 Similarity Measure of SNSs
A similarity measure is supposed to depict the degree of similarity between sets. We use fuzzy evaluation theory to determine the evaluation criteria, compute the similarity measure of each alternative, and obtain the best alternative. Let A, B, C be three SNSs and let S: SNS(X) × SNS(X) → [0,1] be a real-valued function satisfying the following axiomatic conditions:
(S1) 0 ≤ S(A,B) ≤ 1;
(S2) S(A,B) = 1 if and only if A = B;
(S3) S(A,B) = S(B,A);
(S4) if A ⊆ B ⊆ C, then S(A,C) ≤ S(A,B) and S(A,C) ≤ S(B,C).
Then we call S a similarity measure on SNS(X).
3. Cotangent Similarity Measure of SVNSs
The cosine similarity measure [14] is based on the inner product of two vectors divided by the product of their moduli, and the similarity measure in [22] is based on the tangent function. We first recall these two formulas and then give examples showing that their results can be unreasonable.
Example 1. Let [TeX:] $$A _ { 1 } = \langle 0.4,0.2,0.6 \rangle , B _ { 1 } = \langle 0.2,0.1,0.3 \rangle$$ be two SVNSs. We use Eqs. (1) and (2) to calculate the similarity measures of A1 and B1.
By computing, we get [TeX:] $$C \left( A _ { 1 } , B _ { 1 } \right) = 1 , T \left( A _ { 1 } , B _ { 1 } \right) = 0.6897$$; the two results differ considerably. The membership functions of the two sets clearly differ, yet the cosine measure gives a similarity of 1. A similarity of 1 should mean that the two sets are identical, which contradicts our intuition.
Example 2. Let [TeX:] $$A _ { 2 } = \langle 0.3,0.2,0.4 \rangle , B _ { 2 } = \langle 0.4,0.2,0.3 \rangle$$ be two SVNSs. We use Eqs. (1) and (2) to calculate the similarity measures of A2 and B2.
By computing, we get [TeX:] $$C \left( A _ { 2 } , B _ { 2 } \right) = 0.9945 , T \left( A _ { 2 } , B _ { 2 } \right) = 0.9958$$; the two results are almost the same. The membership functions of the two sets differ, yet both measures report a similarity very close to 1, which would indicate that the sets are almost identical and again contradicts our intuition.
From the above examples, we see that the existing similarity measures of SVNSs are sometimes not consistent with our intuition. Motivated by the existing similarity measures, we propose a cotangent similarity measure as follows.
Let [TeX:] $$A = \left\{ \left\langle x _ { i } , T _ { A } \left( x _ { i } \right) , I _ { A } \left( x _ { i } \right) , F _ { A } \left( x _ { i } \right) \right\rangle \right\} \text { and } B = \left\{ \left\langle x _ { i } , T _ { B } \left( x _ { i } \right) , I _ { B } \left( x _ { i } \right) , F _ { B } \left( x _ { i } \right) \right\rangle \right\}$$
be two SVNSs in [TeX:] $$X = \left\{ x _ { 1 } , x _ { 2 } , \cdots x _ { n } \right\}$$. We propose the cotangent similarity measure as follows:
where the symbol "∨" denotes the maximum operation. We can easily prove that Eq. (3) satisfies the definition of a similarity measure. Conditions (S1), (S2), and (S3) are easily obtained; we only prove (S4).
If A ⊆ B ⊆ C , then
then we have the following inequalities:
Combining these inequalities with the fact that the cotangent function is decreasing on the interval (0, π/2], we obtain S(A,C) ≤ S(A,B) and S(A,C) ≤ S(B,C). This completes the proof.
In practical applications, the elements usually differ in importance; thus, we need to take the weight of each element xi (i = 1,2,...,n) into account.
In the following, we develop the weighted similarity measure between SNSs. Let [TeX:] $$\omega _ { i } ( i = 1,2 , \cdots , n )$$ be the weight of each element [TeX:] $$x _ { i } ( i = 1,2 , \cdots , n ) , \omega _ { i } \in [ 0,1 ] , \sum _ { i = 1 } ^ { n } \omega _ { i } = 1$$. Then, the weighted similarity measure is obtained as follows:
In particular, if [TeX:] $$\omega _ { i } = \frac { 1 } { n } , i = 1,2 , \cdots , n$$, Eq. (4) reduces to Eq. (3).
Now we use the proposed Eq. (3) to calculate the similarity measure of Example 1, then we get S(A1, B1) = 0.6128.
Similarly, we use the proposed Eq. (3) to calculate the similarity measure of Example 2, then we get S(A2, B2) = 0.8540 .
From these two examples, we find that the proposed formula coincides with our intuition and is effective for calculating the similarity measure of SVNSs.
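Since the explicit formula of Eq. (3) is not reproduced here, the following Python sketch implements the measure as reconstructed from the description above: the cotangent of π/4 plus π/4 times the maximum of the componentwise differences, averaged over the universe, with Eq. (4) as its weighted variant. This reading is an assumption, but it reproduces the values reported for Examples 1 and 2.

import math
from typing import List, Sequence, Tuple

SVNS = List[Tuple[float, float, float]]  # one (T, I, F) triple per element of X

def cot_similarity(a: SVNS, b: SVNS) -> float:
    """Cotangent similarity of two SVNSs (a reconstruction of Eq. (3)):
    the average of cot(pi/4 + (pi/4) * max(|dT|, |dI|, |dF|)) over all x_i."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(a, b):
        d = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += 1.0 / math.tan(math.pi / 4 + math.pi / 4 * d)
    return total / len(a)

def weighted_cot_similarity(a: SVNS, b: SVNS, w: Sequence[float]) -> float:
    """Weighted variant (a reconstruction of Eq. (4)); the weights sum to 1."""
    return sum(wi / math.tan(math.pi / 4 + math.pi / 4 *
                             max(abs(ta - tb), abs(ia - ib), abs(fa - fb)))
               for wi, (ta, ia, fa), (tb, ib, fb) in zip(w, a, b))

# Examples 1 and 2 of this section (single-element SVNSs):
print(round(cot_similarity([(0.4, 0.2, 0.6)], [(0.2, 0.1, 0.3)]), 4))  # 0.6128
print(round(cot_similarity([(0.3, 0.2, 0.4)], [(0.4, 0.2, 0.3)]), 4))  # 0.8541 (reported as 0.8540)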
4. Cotangent Similarity Measure of IVNSs
Using the same approach, we present the similarity measure formula for IVNSs.
Let
be two IVNSs in X = {x1, x2,...,xn}. We propose the new similarity measure of IVNSs as follows:
where the symbol "∨" denotes the maximum operation. We can easily prove that Eq. (5) satisfies the definition of a similarity measure; the proof is omitted here.
Let [TeX:] $$\omega _ { i } ( i = 1,2 , \cdots , n )$$ be the weight of each element [TeX:] $$x _ { i } ( i = 1,2 , \cdots , n ) , \omega _ { i } \in [ 0,1 ] , \sum _ { i = 1 } ^ { n } \omega _ { i } = 1$$. Then, we have the following weighted similarity measure:
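Since Eqs. (5) and (6) are likewise not reproduced here, the sketch below shows one natural reading of the weighted interval-valued measure, assuming that the maximum in the cotangent argument is taken over the six differences of the interval endpoints of T, I, and F; this reading is consistent with the weighted similarity values reported in Section 5.2.

import math
from typing import List, Sequence, Tuple

Interval = Tuple[float, float]              # [lower, upper] bounds
IVNV = Tuple[Interval, Interval, Interval]  # ([T], [I], [F]) of one element
IVNS = List[IVNV]                           # one IVNV per element of X

def ivns_weighted_cot_similarity(a: IVNS, b: IVNS, w: Sequence[float]) -> float:
    """Weighted cotangent similarity of two IVNSs (one reading of Eq. (6)):
    the sum of w_i * cot(pi/4 + (pi/4) * max over the six endpoint differences)."""
    s = 0.0
    for wi, (ta, ia, fa), (tb, ib, fb) in zip(w, a, b):
        d = max(abs(ta[0] - tb[0]), abs(ta[1] - tb[1]),
                abs(ia[0] - ib[0]), abs(ia[1] - ib[1]),
                abs(fa[0] - fb[0]), abs(fa[1] - fb[1]))
        s += wi / math.tan(math.pi / 4 + math.pi / 4 * d)
    return s

# With equal weights w_i = 1/n this expression reduces to the unweighted Eq. (5).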
5. Applications
The SNS is a very suitable tool for processing incomplete, uncertain, and inconsistent information, and similarity measures are important mathematical tools for pattern recognition, medical diagnosis, clustering analysis, and decision-making. In the following, we apply the proposed similarity measures to pattern recognition under the SVN environment and to multi-criteria decision-making under the IVN environment.
5.1 Application to Pattern Recognition under the SVN Environment
Here is a pattern recognition problem. Suppose that [TeX:] $$A _ { i } = \left\{ \left\langle x _ { j } ; T _ { A } \left( x _ { j } \right) , I _ { A } \left( x _ { j } \right) , F _ { A } \left( x _ { j } \right) \right\rangle \right\} , ( i = 1,2 , \cdots , m )$$ are m patterns expressed by SVNSs in the feature space [TeX:] $$X = \left\{ x _ { 1 } , x _ { 2 } , \cdots , x _ { n } \right\}$$, and let [TeX:] $$B = \left\{ \left\langle x _ { j } ; T _ { B } \left( x _ { j } \right) , I _ { B } \left( x _ { j } \right) , F _ { B } \left( x _ { j } \right) \right\rangle \right\}$$ be a sample that needs to be recognized. Our aim is to classify the sample B into one of the patterns A1, A2,..., Am according to the principle of the maximum similarity measure: the larger the similarity measure between Ai and B, the more similar Ai and B are. The steps of the pattern recognition are as follows. First, calculate the similarity measure between Ai and B, that is, S(Ai,B), i = 1,2,...,m. Second, choose the largest value S(Ak,B) among S(Ai,B), i = 1,2,...,m. Third, conclude that the sample B belongs to the pattern Ak.
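As an illustration of these three steps, the short Python sketch below classifies a sample to the pattern with the largest cotangent similarity; the patterns A1, A2 and the sample B in it are hypothetical placeholders, not the building-material data of [24].

import math
from typing import List, Tuple

SVNS = List[Tuple[float, float, float]]

def cot_similarity(a: SVNS, b: SVNS) -> float:
    # Cotangent similarity (a reconstruction of Eq. (3)), averaged over the feature space.
    return sum(1.0 / math.tan(math.pi / 4 + math.pi / 4 *
                              max(abs(ta - tb), abs(ia - ib), abs(fa - fb)))
               for (ta, ia, fa), (tb, ib, fb) in zip(a, b)) / len(a)

def classify(patterns: List[SVNS], sample: SVNS) -> int:
    """Return the index k of the pattern A_k with the largest similarity to the sample."""
    scores = [cot_similarity(p, sample) for p in patterns]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical patterns and sample over a two-feature space.
A1: SVNS = [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)]
A2: SVNS = [(0.3, 0.4, 0.6), (0.2, 0.5, 0.7)]
B:  SVNS = [(0.8, 0.2, 0.2), (0.7, 0.3, 0.2)]
print(classify([A1, A2], B))  # 0, i.e., B is classified into A_1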
In the following, a pattern recognition problem concerning the classification of building materials [24] is used to illustrate the effectiveness of the proposed similarity measure. Suppose that there are five classes of building materials represented by the SVNSs Ai, i = 1,2,...,5, the feature space is [TeX:] $$X = \left\{ x _ { 1 } , x _ { 2 } , \cdots , x _ { 5 } \right\}$$, and B is an unknown material. They are listed as follows:
We use Eq. (3) to calculate the similarity measure of Ai and B, i = 1,2,...,5
Since S(A2,B) is the biggest, we conclude that the unknown material B belongs to A2.
In this application, the proposed similarity measure classifies the unknown material B into A2. We now apply the methods of references [14] and [22] to the same problem to show that our result is more reasonable.
Method of reference [14]: If we use Eq. (1) to calculate the similarity measure, we get
Since S(A1,B) is the largest, this method concludes that the unknown material B belongs to A1.
Method of reference [22]: If we use Eq. (2) to calculate the similarity measure, we get
Since T(A2,B) is the largest, this method also concludes that the unknown material B belongs to A2.
Comparison and analysis: The result of the method of reference [14] is different from ours; Example 1 in Section 3 showed its drawback. The result of the method of reference [22] is the same as ours, but the values of T(Ai,B) are very close to one another, which suggests that the unknown material B is almost equally close to all Ai and thus weakens the discrimination. The result given by our formula shows that our method is effective and reasonable.
5.2 Application to Multi-Criteria Decision-Making under IVN Environment
In this section, we apply the similarity measures to multi-criteria decision-making problems in IVNSs.
Let [TeX:] $$X = \left\{ a _ { 1 } , a _ { 2 } , \cdots , a _ { n } \right\}$$ be a set of alternatives and [TeX:] $$C = \left\{ c _ { 1 } , c _ { 2 } , \cdots , c _ { m } \right\}$$ be a set of criteria. Assume that the weight of the criterion [TeX:] $$c _ { j } \text { is } w _ { j } , w _ { j } \geq 0 , \sum _ { j = 1 } ^ { m } w _ { j } = 1$$. The characteristic of the alternative ai is represented by an IVNS.
If there is only one element in the IVNS then, for the sake of simplicity, it is called an interval valued neutrosophic value (IVNV), which we denote as:
In multi-criteria decision-making problems, we suppose that an ideal point exists and use it to help identify the best alternative in the decision set, so we construct the ideal point to evaluate the alternatives. Generally speaking, the evaluation criteria can be classified into benefit criteria and cost criteria. If a criterion is a benefit criterion, we set the ideal point as below:
If a criterion is a cost criterion, we set the ideal point as below:
We calculate the similarity measures S(ai, a*) by Eqs. (5) and (6); then the ranking order of the alternatives can be determined and the best one chosen.
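A minimal sketch of this evaluation procedure follows, assuming that the ideal value of each criterion is built from the columnwise extrema of the decision matrix (maxima of the truth bounds and minima of the indeterminacy and falsity bounds for a benefit criterion, and the reverse for a cost criterion), and that the alternatives are ranked by their weighted cotangent similarity to the resulting ideal alternative; since the exact form of the ideal points given above is not reproduced here, this construction is an assumption.

import math
from typing import List, Sequence, Tuple

Interval = Tuple[float, float]
IVNV = Tuple[Interval, Interval, Interval]  # ([T], [I], [F]) interval values

def ideal_value(column: List[IVNV], benefit: bool) -> IVNV:
    """Ideal IVNV of one criterion column (an assumed construction): max T / min I / min F
    for a benefit criterion, and min T / max I / max F for a cost criterion."""
    hi, lo = (max, min) if benefit else (min, max)
    t = (hi(v[0][0] for v in column), hi(v[0][1] for v in column))
    i = (lo(v[1][0] for v in column), lo(v[1][1] for v in column))
    f = (lo(v[2][0] for v in column), lo(v[2][1] for v in column))
    return (t, i, f)

def weighted_cot_similarity(a: List[IVNV], b: List[IVNV], w: Sequence[float]) -> float:
    # One reading of Eq. (6): the maximum is taken over the six endpoint differences.
    return sum(wi / math.tan(math.pi / 4 + math.pi / 4 *
                             max(abs(x[k][j] - y[k][j]) for k in range(3) for j in range(2)))
               for wi, x, y in zip(w, a, b))

def rank_alternatives(matrix: List[List[IVNV]], benefit: Sequence[bool],
                      w: Sequence[float]) -> List[int]:
    """Rank alternatives (rows of the decision matrix) by similarity to the ideal, best first."""
    ideal = [ideal_value([row[j] for row in matrix], benefit[j]) for j in range(len(benefit))]
    scores = [weighted_cot_similarity(row, ideal, w) for row in matrix]
    return sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)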
Here an example adapted from [19] is utilized to illustrate the applicability and validity of the proposed similarity measure. A financial company wants to invest in a company, and four companies are considered as the potential alternatives: (1) a car company a1; (2) a food company a2; (3) a computer company a3; (4) an arms company a4. The investment company must make its decision according to the following three criteria: c1 (risk analysis); c2 (growth analysis); c3 (environmental impact analysis). Among the three criteria, c1 and c2 are benefit criteria, and c3 is a cost criterion. Assume that the criteria weight vector is [TeX:] $$w = [ 0.35,0.25,0.40 ] ^ { T }$$. The four possible alternatives are evaluated under these three criteria in the form of IVNSs, as shown in the following neutrosophic decision matrix D.
From the interval valued neutrosophic matrix, we get the ideal alternative as below:
According to Eq. (6), we get [TeX:] $$S \left( a _ { 1 } , a ^ { * } \right) = 0.6000 , S \left( a _ { 2 } , a ^ { * } \right) = 0.8906 , S \left( a _ { 3 } , a ^ { * } \right) = 0.7016 , S \left( a _ { 4 } , a ^ { * } \right) = 0.8451$$.
Then we rank the alternatives as [TeX:] $$a _ { 2 } > a _ { 4 } > a _ { 3 } > a _ { 1 }$$. Obviously, a2 is the best alternative.
Comparison and analysis: In this section, we have proposed a method to solve multi-criteria decision-making problems expressed with IVNSs. From the above example and a comparison with the method in [19], we see that its result is the same as ours. In [19], two formulas based on the Hamming and Euclidean distance measures are used to calculate the similarity measures, while our method is based on cotangent functions that satisfy the definition of a similarity measure. The result shows that our method is effective and reasonable.
6. Conclusion
In this paper, we commented on the existing similarity measures of SVNSs and IVNSs and proposed new cotangent-function-based similarity measures, all of which satisfy the definition of a similarity measure. The weighted cotangent similarity measures were then introduced to take the importance of each element into account. Finally, we applied the similarity measures to pattern recognition under the single valued neutrosophic environment and to multi-criteria decision-making under the interval valued neutrosophic environment. The results show that our methods are effective and reasonable.
Acknowledgement
This work was supported by the Fundamental Research Funds for the Central Universities (No. 2572017CB29) and Harbin Science Technology Innovation Talent Research Fund (No. 2016RQQXJ230).