New Similarity Measures of Simplified Neutrosophic Sets and Their Applications

Chunfang Liu*

Abstract

The simplified neutrosophic set (SNS) is a generalization of the fuzzy set designed for practical situations in which each element has a truth membership function, an indeterminacy membership function, and a falsity membership function. In this paper, we propose a new method for constructing similarity measures of single valued neutrosophic sets (SVNSs) and interval valued neutrosophic sets (IVNSs), respectively. We then prove that the proposed formulas satisfy the axiomatic definition of a similarity measure. Finally, we apply them to pattern recognition under the single valued neutrosophic environment and to multi-criteria decision-making under the interval valued neutrosophic environment. The results show that our methods are effective and reasonable.

Keywords: Multi-Criteria Decision-Making, Pattern Recognition, Similarity Measure, Simplified Neutrosophic Sets

1. Introduction

Zadeh [1] proposed the fuzzy set (FS) to handle uncertain information, and it has since been successfully used in many areas [2-7]. Atanassov [8] extended the FS to the intuitionistic fuzzy set (IFS), which is very effective in dealing with the vagueness of information. The IFS can express the fuzziness and uncertainty of practical situations: each element has a membership degree, a non-membership degree, and a hesitation degree, all given by exact numbers, and the sum of the three degrees equals 1. In general, however, the degrees may be interval values rather than exact numbers, so the theory of IFS was extended to the interval-valued IFS [9]. With the development of the theory of the FS and its extensions, many uncertainties in different real-life problems have been handled. Still, there are many phenomena that cannot be dealt with by the FS and its extensions. For example, when we ask an expert about a complex statement, he or she may not give an exact answer and may instead say that the possibility that the statement is true is 0.6, the possibility that it is false is 0.5, and the degree to which he or she is not sure is 0.3. This situation cannot be expressed by the FS or the IFS, and a new kind of set is needed. The neutrosophic set (NS) was first proposed by Smarandache [10,11] from a philosophical point of view. An NS is a set in which each element has a truth membership degree, an indeterminacy membership degree, and a falsity membership degree.

As is well known, it is difficult to apply the NS directly in real science and engineering fields. Wang et al. [12,13] therefore proposed the single valued neutrosophic set (SVNS) and the interval valued neutrosophic set (IVNS). The SVNS and the IVNS are subclasses of the simplified neutrosophic set (SNS), which was proposed by Ye [14]; he presented the operators and relationships of these sets and gave their various properties. Since then, further results on SNSs have appeared [15,16]. A similarity measure is used to estimate the degree of similarity between two sets. Similarity measures have been applied in various areas, such as personnel assessment, ecology, medical diagnosis, psychology, and clustering analysis. Broumi and Smarandache [17] defined several similarity measures of neutrosophic sets based on the Hausdorff distance. Majumdar and Samanta [18] presented several similarity measures of SVNSs based on distance measures. Ye [14,19-22] also presented similarity measures of IVNSs and SVNSs and applied them to multi-attribute decision-making problems. Based on the tangent function, Pramanik and Mondal [23] proposed a weighted fuzzy similarity measure and applied it to medical diagnosis. On the basis of the existing similarity measures, we propose cotangent similarity measures of SNSs and their weighted versions. We then apply the proposed similarity measures to pattern recognition in the framework of SVNSs and to multi-criteria decision-making in the framework of IVNSs. The results show that the proposed similarity measure methods are effective and reasonable.

The remainder of the paper is organized as follows: Section 2 gives some preliminaries. Section 3 presents new cotangent similarity measures of SVNSs, and Section 4 presents new cotangent similarity measures of IVNSs. Section 5 gives applications of the proposed similarity measures to pattern recognition and multi-criteria decision-making. Section 6 concludes the paper.

2. Preliminaries

2.1 Neutrosophic Set (NS)

Suppose X is a universe of discourse with generic element x. A neutrosophic set A in X is characterized by a truth-membership function TA(x), an indeterminacy-membership function IA(x), and a falsity-membership function FA(x). The functions TA(x), IA(x), FA(x) in X are real standard or nonstandard subsets of ]-0,1+[, i.e.,

[TeX:] $$T _ { A } ( x ) : X \rightarrow ] { ^{-} 0,1^{ + } [ } \\ I _ { A } ( x ) : X \rightarrow ] { ^{-} 0,1 ^{ + }[ } \\ F _ { A } ( x ) : X \rightarrow ] { ^{-} 0,1 ^{ + } [ }$$

where ]-0,1+[ is the nonstandard unit interval, which is an extension of the standard interval [0,1], [TeX:] $$- 0 = 0 - \varepsilon , 1 ^ { + } = 1 + \varepsilon , \varepsilon > 0$$. Then, the sum of [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ has no restriction, that is [TeX:] $$- 0 \leq \sup T _ { A } ( x ) + \sup I _ { A } ( x ) + \sup F _ { A } ( x ) \leq 3 ^ { + }$$.

Since it is difficult to apply NSs to practical problems, Ye [20] reduced NSs with nonstandard intervals to NSs with standard intervals in a way that preserves the operations of NSs. If the functions [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ are singleton subintervals/subsets of the real standard interval [0,1], then a simplification of the NS A is denoted by

[TeX:] $$A = \left\{ < x , T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x ) > | \quad x \in X \right\}$$

It is a subclass of the NS and contains SVNS and IVNS.

On the one hand, the membership functions [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ in an SNS A may be exact numbers in the real unit interval [0,1], with their sum satisfying the inequality [TeX:] $$0 \leq T _ { A } ( x ) + I _ { A } ( x ) + F _ { A } ( x ) \leq 3$$ for every x in X. In this case, the SNS reduces to an SVNS. Let A and B be two SVNSs. A is contained in B, denoted A ⊆ B, if and only if

[TeX:] $$T _ { A } ( x ) \leq T _ { B } ( x ) , I _ { A } ( x ) \geq I _ { B } ( x ) , F _ { A } ( x ) \geq F _ { B } ( x )$$

for every x in X .

On the other hand, the membership functions [TeX:] $$T _ { A } ( x ) , I _ { A } ( x ) , F _ { A } ( x )$$ in an SNS A may be subintervals of the real unit interval [0,1]. We then have

[TeX:] $$T _ { A } ( x ) = \left[ \inf T _ { A } ( x ) , \sup T _ { A } ( x ) \right] \\ I _ { A } ( x ) = \left[ \inf I _ { A } ( x ) , \sup I _ { A } ( x ) \right] \\ F _ { A } ( x ) = \left[ \inf F _ { A } ( x ) , \sup F _ { A } ( x ) \right]$$

for any point x in X. In this case, the SNS reduces to the IVNS.

For two IVNSs A and B, A is contained in B, that is A ⊆ B, if and only if

[TeX:] $$\inf T _ { A } ( x ) \leq \inf T _ { B } ( x ) , \sup T _ { A } ( x ) \leq \sup T _ { B } ( x ), \\ \inf I _ { A } ( x ) \geq \inf I _ { B } ( x ) , \sup I _ { A } ( x ) \geq \sup I _ { B } ( x ), \\ \inf F _ { A } ( x ) \geq \inf F _ { B } ( x ) , \sup F _ { A } ( x ) \geq \sup F _ { B } ( x )$$

for every x in X.
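For illustration, the containment relation above can be checked mechanically. The following Python sketch is ours, not from the paper: it represents an IVNS element as three endpoint pairs and tests the six endpoint inequalities.

```python
# Containment A ⊆ B between IVNS elements, each given as
# ((infT, supT), (infI, supI), (infF, supF)).
# The representation and function name are illustrative choices.
def ivns_contained(a, b):
    (ta, ia, fa), (tb, ib, fb) = a, b
    return (ta[0] <= tb[0] and ta[1] <= tb[1]       # truth endpoints grow
            and ia[0] >= ib[0] and ia[1] >= ib[1]   # indeterminacy shrinks
            and fa[0] >= fb[0] and fa[1] >= fb[1])  # falsity shrinks

a = ((0.3, 0.5), (0.2, 0.4), (0.3, 0.6))
b = ((0.4, 0.6), (0.1, 0.3), (0.2, 0.5))
print(ivns_contained(a, b))  # True
print(ivns_contained(b, a))  # False
```

Note that containment is a partial order: for many pairs of elements neither direction holds.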

2.2 Similarity Measure of SNSs

A similarity measure is supposed to depict the degree of similarity between sets. We use fuzzy evaluation theory to determine the evaluating criteria, compute the similarity measure of each alternative, and obtain the best alternative. Let A, B, C be three SNSs and let S: SNS(X) × SNS(X) → [0,1] be a real-valued function satisfying the following axiomatic conditions:

[TeX:] $$( \mathrm { S } 1 ) 0 \leq S ( A , B ) \leq 1;$$
[TeX:] $$( \mathrm { S } 2 ) S ( A , B ) = 1 \text { if and only if } A = B;$$
[TeX:] $$( \mathrm { S } 3 ) S ( A , B ) = S ( B , A );$$
[TeX:] $$( \mathrm { S } 4 ) \text { if } A \subseteq B \subseteq C , \text { then } S ( A , C ) \leq S ( A , B ) \text { and } S ( A , C ) \leq S ( B , C ).$$

Then we call S a similarity measure of SNS(X).

3. Cotangent Similarity Measure of SVNSs

The cosine similarity measure [14] is based on the inner product of two vectors divided by the product of their moduli, and the similarity measure of [22] is based on the tangent function. We first give these two formulas and then give examples showing that their results can be unreasonable.

(1)
[TeX:] $$C ( A , B ) = \frac { 1 } { n } \sum _ { j = 1 } ^ { n } \frac { T _ { A } \left( x _ { j } \right) T _ { B } \left( x _ { j } \right) + I _ { A } \left( x _ { j } \right) I _ { B } \left( x _ { j } \right) + F _ { A } \left( x _ { j } \right) F _ { B } \left( x _ { j } \right) } { \sqrt { T _ { A } \left( x _ { j } \right) ^ { 2 } + I _ { A } \left( x _ { j } \right) ^ { 2 } + F _ { A } \left( x _ { j } \right) ^ { 2 } } \sqrt { T _ { B } \left( x _ { j } \right) ^ { 2 } + I _ { B } \left( x _ { j } \right) ^ { 2 } + F _ { B } \left( x _ { j } \right) ^ { 2 } } }$$

(2)
[TeX:] $$T ( A , B ) = 1 - \frac { 1 } { n } \sum _ { j = 1 } ^ { n } \tan \left[ \frac { \pi } { 4 } \max \left\{ \left| T _ { A } \left( x _ { j } \right) - T _ { B } \left( x _ { j } \right) \right| , \left| I _ { A } \left( x _ { j } \right) - I _ { B } \left( x _ { j } \right) \right| , \left| F _ { A } \left( x _ { j } \right) - F _ { B } \left( x _ { j } \right) \right| \right\} \right]$$

Example 1. Let [TeX:] $$A _ { 1 } = \langle 0.4,0.2,0.6 \rangle , B _ { 1 } = \langle 0.2,0.1,0.3 \rangle$$ be two SVNSs. We use Eqs. (1) and (2) to calculate the similarity measure of A1 and B1.

By computing, we get [TeX:] $$C \left( A _ { 1 } , B _ { 1 } \right) = 1 , T \left( A _ { 1 } , B _ { 1 } \right) = 0.6897$$; the two results differ greatly from each other. The membership functions of the two sets clearly differ, yet the cosine measure gives a similarity of 1. A similarity measure of 1 indicates that the sets are essentially identical, which contradicts our intuition.

Example 2. Let [TeX:] $$A _ { 2 } = \langle 0.3,0.2,0.4 \rangle , B _ { 2 } = \langle 0.4,0.2,0.3 \rangle$$ be two SVNSs. We use Eqs. (1) and (2) to calculate the similarity measure of A2 and B2.

By computing, we get [TeX:] $$C \left( A _ { 2 } , B _ { 2 } \right) = 0.9945 , T \left( A _ { 2 } , B _ { 2 } \right) = 0.9958$$; the two results are almost identical. The membership functions of the two sets differ, yet both measures report that the sets are almost the same, which again contradicts our intuition.
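The degenerate behavior in Example 1 stems from the cosine measure of Eq. (1) depending only on the angle between the membership vectors: whenever one vector is a scalar multiple of the other, as with B1 = 0.5·A1 here, the measure equals 1. A short Python check of this (single-element sets; the function name is ours):

```python
from math import sqrt

# Cosine similarity measure of Eq. (1) for single-element SVNSs,
# each given as a (T, I, F) triple.
def cosine_sim(a, b):
    ta, ia, fa = a
    tb, ib, fb = b
    num = ta * tb + ia * ib + fa * fb
    den = sqrt(ta**2 + ia**2 + fa**2) * sqrt(tb**2 + ib**2 + fb**2)
    return num / den

A1, B1 = (0.4, 0.2, 0.6), (0.2, 0.1, 0.3)
# B1 is proportional to A1 (B1 = A1 / 2), so the cosine measure is
# exactly 1 even though the two sets clearly differ.
print(round(cosine_sim(A1, B1), 4))  # 1.0
```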

From the above examples, we see that the existing similarity measures of SVNSs are sometimes inconsistent with our intuition. Motivated by the existing similarity measures, we propose a cotangent similarity measure as follows.

Let

[TeX:] $$A = \left\{ \left\langle x _ { j } ; T _ { A } \left( x _ { j } \right) , I _ { A } \left( x _ { j } \right) , F _ { A } \left( x _ { j } \right) \right\rangle | x _ { j } \in X \right\} \\ B = \left\{ \left\langle x _ { j } ; T _ { B } \left( x _ { j } \right) , I _ { B } \left( x _ { j } \right) , F _ { B } \left( x _ { j } \right) \right\rangle | x _ { j } \in X \right\}$$

be two SVNSs in [TeX:] $$X = \left\{ x _ { 1 } , x _ { 2 } , \cdots x _ { n } \right\}$$. We propose the cotangent similarity measure as follows:

(3)
[TeX:] $$S ( A , B ) = \frac { 1 } { n } \sum _ { j = 1 } ^ { n } \cot \frac { \pi } { 4 } \left[ 1 + \left( \left| T _ { A } \left( x _ { j } \right) - T _ { B } \left( x _ { j } \right) \right| \vee \left| I _ { A } \left( x _ { j } \right) - I _ { B } \left( x _ { j } \right) \right| \vee \left| F _ { A } \left( x _ { j } \right) - F _ { B } \left( x _ { j } \right) \right| \right) \right]$$

where the symbol “∨” denotes the maximum operation. We can prove that Eq. (3) satisfies the axiomatic definition of a similarity measure: (S1), (S2), and (S3) are easily obtained, so we prove only (S4).

If A ⊆ B ⊆ C , then

[TeX:] $$T _ { A } \left( x _ { j } \right) \leq T _ { B } \left( x _ { j } \right) \leq T _ { C } \left( x _ { j } \right) \\ I _ { A } \left( x _ { j } \right) \geq I _ { B } \left( x _ { j } \right) \geq I _ { C } \left( x _ { j } \right) \\ F _ { A } \left( x _ { j } \right) \geq F _ { B } \left( x _ { j } \right) \geq F _ { C } \left( x _ { j } \right)$$

then we have the following inequalities:

[TeX:] $$\left| T _ { A } \left( x _ { j } \right) - T _ { B } \left( x _ { j } \right) \right| \leq \left| T _ { A } \left( x _ { j } \right) - T _ { C } \left( x _ { j } \right) \right|, \left| T _ { B } \left( x _ { j } \right) - T _ { C } \left( x _ { j } \right) \right| \leq \left| T _ { A } \left( x _ { j } \right) - T _ { C } \left( x _ { j } \right) \right| \\ \left| I _ { A } \left( x _ { j } \right) - I _ { B } \left( x _ { j } \right) \right| \leq \left| I _ { A } \left( x _ { j } \right) - I _ { C } \left( x _ { j } \right) \right|, \left| I _ { B } \left( x _ { j } \right) - I _ { C } \left( x _ { j } \right) \right| \leq \left| I _ { A } \left( x _ { j } \right) - I _ { C } \left( x _ { j } \right) \right| \\ \left| F _ { A } \left( x _ { j } \right) - F _ { B } \left( x _ { j } \right) \right| \leq \left| F _ { A } \left( x _ { j } \right) - F _ { C } \left( x _ { j } \right) \right|, \left| F _ { B } \left( x _ { j } \right) - F _ { C } \left( x _ { j } \right) \right| \leq \left| F _ { A } \left( x _ { j } \right) - F _ { C } \left( x _ { j } \right) \right|$$

Combining these inequalities with the fact that the cotangent function is decreasing on the interval [π/4, π/2], we obtain S(A,C) ≤ S(A,B) and S(A,C) ≤ S(B,C). This completes the proof.

Usually, the elements differ in importance. Thus, we need to take the weight of each element xi (i=1,2,...,n) into account.

In the following, we develop the weighted similarity measure between SNSs. Let [TeX:] $$\omega _ { j } ( j = 1,2 , \cdots , n )$$ be the weight of each element [TeX:] $$x _ { j }, \text{ with } \omega _ { j } \in [ 0,1 ] , \sum _ { j = 1 } ^ { n } \omega _ { j } = 1$$. Then, the weighted similarity measure is obtained as follows:

(4)
[TeX:] $$W S ( A , B ) = \sum _ { j = 1 } ^ { n } \omega _ { j } \cot \frac { \pi } { 4 } \left[ 1 + \left( \left| T _ { A } \left( x _ { j } \right) - T _ { B } \left( x _ { j } \right) \right| \vee \left| I _ { A } \left( x _ { j } \right) - I _ { B } \left( x _ { j } \right) \right| \vee \left| F _ { A } \left( x _ { j } \right) - F _ { B } \left( x _ { j } \right) \right| \right) \right]$$

In particular, if [TeX:] $$\omega _ { j } = \frac { 1 } { n } , j = 1,2 , \cdots , n$$, Eq. (4) reduces to Eq. (3).

Now we use the proposed Eq. (3) to calculate the similarity measure of Example 1, then we get S(A1, B1) = 0.6128.

Similarly, we use the proposed Eq. (3) to calculate the similarity measure of Example 2, then we get S(A2, B2) = 0.8540 .
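Both values can be reproduced by a direct implementation of Eq. (3). The sketch below (Python; the function name is ours) computes the measure for sets given as lists of (T, I, F) triples:

```python
from math import pi, tan

def cot(x):
    return 1.0 / tan(x)

# Cotangent similarity measure of Eq. (3); A and B are equal-length
# lists of (T, I, F) triples over the same feature space.
def cot_sim(A, B):
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        m = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += cot(pi / 4 * (1 + m))
    return total / len(A)

# Examples 1 and 2 (single-element SVNSs)
print(round(cot_sim([(0.4, 0.2, 0.6)], [(0.2, 0.1, 0.3)]), 4))  # 0.6128
print(round(cot_sim([(0.3, 0.2, 0.4)], [(0.4, 0.2, 0.3)]), 4))  # 0.8541
```

With equal weights this is also the value of the weighted measure of Eq. (4).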

From the two examples, we find that the proposed formula coincides with our intuition and is effective for calculating the similarity measure of SVNSs.

4. Cotangent Similarity Measure of IVNSs

Using the same method, we present similarity measures of IVNSs.

Let

[TeX:] $$A = \left\{ \left\langle x _ { j } ; T _ { A } \left( x _ { j } \right) , I _ { A } \left( x _ { j } \right) , F _ { A } \left( x _ { j } \right) \right\rangle | x _ { j } \in X \right\} \\ B = \left\{ \left\langle x _ { j } ; T _ { B } \left( x _ { j } \right) , I _ { B } \left( x _ { j } \right) , F _ { B } \left( x _ { j } \right) \right\rangle | x _ { j } \in X \right\}$$

be two IVNSs in X = {x1, x2,...,xn}. Then we propose the new similarity measure of IVNSs:

(5)
[TeX:] $$I S ( A , B ) = \frac{1}{n} \sum _ { j = 1 } ^ { n } \cot \frac { \pi } { 4 } \left[ 1 + \left( \begin{array} { c } { | \inf T _ { A } \left( x _ { j } \right) - \inf T _ { B } \left( x _ { j } \right) | \vee | \inf I _ { A } \left( x _ { j } \right) - \inf I _ { B } \left( x _ { j } \right) | } \\ { \vee \left| \inf F _ { A } \left( x _ { j } \right) - \inf F _ { B } \left( x _ { j } \right) \right| \vee \left| \sup T _ { A } \left( x _ { j } \right) - \sup T _ { B } \left( x _ { j } \right) \right| } \\ { \vee \left| \sup I _ { A } \left( x _ { j } \right) - \sup I _ { B } \left( x _ { j } \right) \right| \vee \left| \sup F _ { A } \left( x _ { j } \right) - \sup F _ { B } \left( x _ { j } \right) \right| } \end{array} \right) \right]$$

where the symbol “∨” denotes the maximum operation. We can easily prove that Eq. (5) satisfies the definition of the similarity measure; the proof is omitted here.

Let [TeX:] $$\omega _ { j } ( j = 1,2 , \cdots , n )$$ be the weight of each element [TeX:] $$x _ { j }, \text{ with } \omega _ { j } \in [ 0,1 ] , \sum _ { j = 1 } ^ { n } \omega _ { j } = 1$$. Then, we have the following weighted similarity measure.

(6)
[TeX:] $$W I S ( A , B ) = \sum _ { j = 1 } ^ { n } \omega _ { j } \cot \frac { \pi } { 4 } \left[ 1 + \left( \begin{array} { c } { | \inf T _ { A } \left( x _ { j } \right) - \inf T _ { B } \left( x _ { j } \right) | \vee | \inf I _ { A } \left( x _ { j } \right) - \inf I _ { B } \left( x _ { j } \right) | } \\ { \vee \left| \inf F _ { A } \left( x _ { j } \right) - \inf F _ { B } \left( x _ { j } \right) \right| \vee \left| \sup T _ { A } \left( x _ { j } \right) - \sup T _ { B } \left( x _ { j } \right) \right| } \\ { \vee \left| \sup I _ { A } \left( x _ { j } \right) - \sup I _ { B } \left( x _ { j } \right) \right| \vee \left| \sup F _ { A } \left( x _ { j } \right) - \sup F _ { B } \left( x _ { j } \right) \right| } \end{array} \right) \right]$$
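A minimal sketch of the weighted measure of Eq. (6) follows (Python; the data layout and function name are ours). With a single element and weight 1 it also evaluates the unweighted measure of Eq. (5):

```python
from math import pi, tan

# Weighted cotangent similarity measure of Eq. (6) for IVNSs; each
# element is ((infT, supT), (infI, supI), (infF, supF)).
def wis(A, B, weights):
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb), wj in zip(A, B, weights):
        # maximum absolute difference over all six interval endpoints
        m = max(abs(x - y) for x, y in
                list(zip(ta, tb)) + list(zip(ia, ib)) + list(zip(fa, fb)))
        total += wj / tan(pi / 4 * (1 + m))  # cot = 1/tan
    return total

# One-element example: the largest endpoint difference is 0.3,
# so WIS = cot(1.3 * pi / 4).
a = [((0.4, 0.5), (0.2, 0.3), (0.3, 0.4))]
b = [((0.7, 0.8), (0.0, 0.1), (0.1, 0.2))]
print(round(wis(a, b, [1.0]), 4))  # 0.6128
```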

5. Applications

The SNS is a very suitable tool for processing incomplete, uncertain, and inconsistent information, and the similarity measure is an important mathematical tool in pattern recognition, medical diagnosis, clustering analysis, and decision-making. In the following, we apply the proposed similarity measures to pattern recognition under the single valued neutrosophic (SVN) environment and to multi-attribute decision-making under the interval valued neutrosophic (IVN) environment.

5.1 Application to Pattern Recognition under the SVN Environment

Here is a pattern recognition problem. Suppose that [TeX:] $$A _ { i } = \left\{ \left\langle x _ { j } ; T _ { A } \left( x _ { j } \right) , I _ { A } \left( x _ { j } \right) , F _ { A } \left( x _ { j } \right) \right\rangle \right\} , ( i = 1,2 , \cdots , m )$$ are m patterns expressed by SVNSs in the feature space [TeX:] $$X = \left\{ x _ { 1 } , x _ { 2 } , \cdots , x _ { n } \right\}$$, and let [TeX:] $$B = \left\{ \left\langle x _ { j } ; T _ { B } \left( x _ { j } \right) , I _ { B } \left( x _ { j } \right) , F _ { B } \left( x _ { j } \right) \right\rangle \right\}$$ be a sample to be recognized. Our aim is to classify the sample B into one of the patterns A1, A2,...,Am according to the principle of maximum similarity: the larger the similarity measure between Ai and B, the more similar Ai and B are. The steps of the pattern recognition are as follows. First, calculate the similarity measures S(Ai,B), i=1,2,...,m. Second, choose the largest value S(Ak,B) among them. Third, conclude that the sample B belongs to the pattern Ak.

In the following, a pattern recognition problem concerning the classification of building materials [24] is used to illustrate the effectiveness of the proposed similarity measures. Suppose that there are five classes of building materials, represented by SVNSs Ai, i = 1,2,...,5, in the feature space [TeX:] $$X = \left\{ x _ { 1 } , x _ { 2 } , \cdots , x _ { 5 } \right\}$$, and that B is an unknown material. They are listed as follows:

[TeX:] $$\begin{array} { r } { A _ { 1 } = \left\{ \left\langle x _ { 1 } , 0.4,0.6,0.0 \right\rangle , \left\langle x _ { 2 } , 0.3,0.2,0.5 \right\rangle , \left\langle x _ { 3 } , 0.1,0.3,0.7 \right\rangle \right. } \\ { \left\langle x _ { 4 } , 0.4,0.3,0.3 \right\rangle , \left\langle x _ { 5 } , 0.1,0.2,0.7 \right\rangle \} } \end{array}$$
[TeX:] $$\begin{array} { r } { A _ { 2 } = \left\{ \left\langle x _ { 1 } , 0.7,0.3,0.0 \right\rangle , \left\langle x _ { 2 } , 0.2,0.2,0.6 \right\rangle , \left\langle x _ { 3 } , 0.0,0.1,0.9 \right\rangle \right. } \\ { \left\langle x _ { 4 } , 0.7,0.3,0.0 \right\rangle , \left\langle x _ { 5 } , 0.1,0.1,0.8 \right\rangle \} } \end{array}$$
[TeX:] $$\begin{array} { r } { A _ { 3 } = \left\{ \left\langle x _ { 1 } , 0.3,0.4,0.3 \right\rangle , \left\langle x _ { 2 } , 0.6,0.3,0.1 \right\rangle , \left\langle x _ { 3 } , 0.2,0.1,0.7 \right\rangle \right. } \\ { \left\langle x _ { 4 } , 0.2,0.2,0.6 \right\rangle , \left\langle x _ { 5 } , 0.1,0,0.9 \right\rangle \} } \end{array}$$
[TeX:] $$\begin{array} { r } { A _ { 4 } = \left\{ \left\langle x _ { 1 } , 0.1,0.2,0.7 \right\rangle , \left\langle x _ { 2 } , 0.4,0.4,0.4 \right\rangle , \left\langle x _ { 3 } , 0.8,0.2,0 \right\rangle \right. } \\ { \left\langle x _ { 4 } , 0.2,0.2,0.7 \right\rangle , \left\langle x _ { 5 } , 0.2,0.1,0.7 \right\rangle \} } \end{array}$$
[TeX:] $$\begin{array} { c } { A _ { 5 } = \left\{ \left\langle x _ { 1 } , 0.1,0.1,0.8 \right\rangle , \left\langle x _ { 2 } , 0,0.2,0.8 \right\rangle , \left\langle x _ { 3 } , 0.2,0,0.8 \right\rangle \right. } \\ { \left\langle x _ { 4 } , 0.2,0,0.8 \right\rangle , \left\langle x _ { 5 } , 0.8,0.1,0.1 \right\rangle \} } \end{array}$$
[TeX:] $$\begin{array} { r l } { B = \left\{ \left\langle x _ { 1 } , 0.8,0.2,0.1 \right\rangle , \left\langle x _ { 2 } , 0.6,0.3,0.1 \right\rangle , \left\langle x _ { 3 } , 0.2,0.1,0.8 \right\rangle \right. } \\ { \left\langle x _ { 4 } , 0.6,0.5,0.1 \right\rangle , \left\langle x _ { 5 } , 0.1,0.4,0.6 \right\rangle \} } \end{array}$$

We use Eq. (3) to calculate the similarity measure of Ai and B, i = 1,2,...,5

[TeX:] $$\begin{aligned} S \left( A _ { 1 } , B \right) = 0.6397 , S \left( A _ { 2 } , B \right) = 0.7150 , S \left( A _ { 3 } , B \right) = 0.6383 \\ S \left( A _ { 4 } , B \right) = 0.3211 , S \left( A _ { 5 } , B \right) = 0.3628 \end{aligned}$$

Since S(A2,B) is the biggest, we conclude that the unknown material B belongs to A2.
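The classification can be reproduced programmatically. The sketch below (Python; names are ours) applies Eq. (3) to the five patterns listed above and selects the pattern with maximum similarity to the sample B:

```python
from math import pi, tan

# Cotangent similarity measure of Eq. (3) for lists of (T, I, F) triples.
def cot_sim(A, B):
    total = sum(
        1.0 / tan(pi / 4 * (1 + max(abs(ta - tb), abs(ia - ib), abs(fa - fb))))
        for (ta, ia, fa), (tb, ib, fb) in zip(A, B)
    )
    return total / len(A)

# Building-material data over the features x1..x5.
patterns = {
    "A1": [(0.4, 0.6, 0.0), (0.3, 0.2, 0.5), (0.1, 0.3, 0.7), (0.4, 0.3, 0.3), (0.1, 0.2, 0.7)],
    "A2": [(0.7, 0.3, 0.0), (0.2, 0.2, 0.6), (0.0, 0.1, 0.9), (0.7, 0.3, 0.0), (0.1, 0.1, 0.8)],
    "A3": [(0.3, 0.4, 0.3), (0.6, 0.3, 0.1), (0.2, 0.1, 0.7), (0.2, 0.2, 0.6), (0.1, 0.0, 0.9)],
    "A4": [(0.1, 0.2, 0.7), (0.4, 0.4, 0.4), (0.8, 0.2, 0.0), (0.2, 0.2, 0.7), (0.2, 0.1, 0.7)],
    "A5": [(0.1, 0.1, 0.8), (0.0, 0.2, 0.8), (0.2, 0.0, 0.8), (0.2, 0.0, 0.8), (0.8, 0.1, 0.1)],
}
B = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.1), (0.2, 0.1, 0.8), (0.6, 0.5, 0.1), (0.1, 0.4, 0.6)]

# Principle of maximum similarity: assign B to the most similar pattern.
best = max(patterns, key=lambda k: cot_sim(patterns[k], B))
print(best)  # A2
```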

In this section, we have applied the similarity measure to pattern recognition and, by the similarity measure method, classified the unknown material B into A2. We now use the methods of references [14] and [22] to show that our result is more reasonable.

Method of reference [14]: If we use Eq. (1) to calculate the similarity measure, we get

[TeX:] $$\begin{aligned} S \left( A _ { 1 } , B \right) = 0.8692 , S \left( A _ { 2 } , B \right) = 0.8661 , S \left( A _ { 3 } , B \right) = 0.8206 \\ S \left( A _ { 4 } , B \right) = 0.5621 , S \left( A _ { 5 } , B \right) = 0.4244 \end{aligned}$$

Since S(A1,B) is the biggest, this method concludes that the unknown material B belongs to A1.

Method of reference [22]: If we use Eq. (2) to calculate the similarity measure, we get

[TeX:] $$\begin{aligned} T \left( A _ { 1 } , B \right) = 0.9863 , T \left( A _ { 2 } , B \right) = 0.9961 , T \left( A _ { 3 } , B \right) = 0.9860 \\ T \left( A _ { 4 } , B \right) = 0.9926 , T \left( A _ { 5 } , B \right) = 0.9920 \end{aligned}$$

Since T(A2,B) is the biggest, we conclude that the unknown material B belongs to A2.

Comparison and analysis: The result of the method of reference [14] is different from ours; Example 1 in Section 3 showed its drawbacks. The result of the method of reference [22] is the same as ours, but the values of T(Ai,B) are very close to each other, which suggests that the unknown material B is almost equally close to all Ai. The results of our formula show that our method is effective and reasonable.

5.2 Application to Multi-Criteria Decision-Making under IVN Environment

In this section, we apply the similarity measures to multi-criteria decision-making problems in IVNSs.

Let [TeX:] $$X = \left\{ a _ { 1 } , a _ { 2 } , \cdots , a _ { n } \right\}$$ be a set of alternatives and [TeX:] $$C = \left\{ c _ { 1 } , c _ { 2 } , \cdots , c _ { m } \right\}$$ a set of criteria. Assume that the weight of criterion [TeX:] $$c _ { j } \text { is } w _ { j } , w _ { j } \geq 0 , \sum _ { j = 1 } ^ { m } w _ { j } = 1$$. The characteristic of the alternative ai is represented by an IVNS:

[TeX:] $$a _ { i } = \left\{ \left\langle c _ { j } , T _ { a _ { i } } \left( c _ { j } \right) , I _ { a _ { i } } \left( c _ { j } \right) , F _ { a _ { i } } \left( c _ { j } \right) \right\rangle | c _ { j } \in C \right\} \\ T _ { a _ { i } } \left( c _ { j } \right) = \left[ \inf T _ { a _ { i } } \left( c _ { j } \right) , \sup T _ { a _ { i } } \left( c _ { j } \right) \right] \\ I _ { a _ { i } } \left( c _ { j } \right) = \left[ \inf I _ { a _ { i } } \left( c _ { j } \right) , \sup I _ { a _ { i } } \left( c _ { j } \right) \right] \\ F _ { a _ { i } } \left( c _ { j } \right) = \left[ \inf F _ { a _ { i } } \left( c _ { j } \right) , \sup F _ { a _ { i } } \left( c _ { j } \right) \right]$$

If the IVNS contains only one element, then for the sake of simplicity it is called an interval valued neutrosophic value (IVNV), denoted as:

[TeX:] $$\alpha _ { i j } = \left\langle \left[ a _ { i j } , b _ { i j } \right] , \left[ c _ { i j } , d _ { i j } \right] , \left[ e _ { i j } , f _ { i j } \right] \right\rangle$$

In multi-criteria decision-making problems, we suppose an ideal point exists and use it to help identify the best alternative in the decision set; we therefore construct the ideal point to evaluate the alternatives. Generally speaking, the evaluation criteria can be classified into benefit criteria and cost criteria. If a criterion is a benefit criterion, we set the ideal point as follows:

[TeX:] $$a ^ { * } = \left\langle \left[ \max _ { i } \left( a _ { i j } \right) , \max _ { i } \left( b _ { i j } \right) \right] , \left[ \min _ { i } \left( c _ { i j } \right) , \min _ { i } \left( d _ { i j } \right) \right] , \left[ \min _ { i } \left( e _ { i j } \right) , \min _ { i } \left( f _ { i j } \right) \right] \right\rangle$$

If a criterion is a cost criterion, we set the ideal point as follows:

[TeX:] $$a ^ { * } = \left\langle \left[ \min _ { i } \left( a _ { i j } \right) , \min _ { i } \left( b _ { i j } \right) \right] , \left[ \max _ { i } \left( c _ { i j } \right) , \max _ { i } \left( d _ { i j } \right) \right] , \left[ \max _ { i } \left( e _ { i j } \right) , \max _ { i } \left( f _ { i j } \right) \right] \right\rangle$$

We calculate the similarity measures S(ai, a*) by Eqs. (5) and (6); the ranking order of the alternatives can then be determined and the best one chosen.

Here an example adapted from [19] is utilized to illustrate the applicability and validity of the proposed similarity measure. A financial company wants to invest in one of four companies, considered as four potential alternatives: (1) a car company a1; (2) a food company a2; (3) a computer company a3; (4) an arms company a4. The investment company must make its decision according to the following three criteria: c1 (risk analysis), c2 (growth analysis), and c3 (environmental impact analysis). Among the three criteria, c1 and c2 are benefit criteria and c3 is a cost criterion. Assume that the criteria weight vector is [TeX:] $$w = [ 0.35,0.25,0.40 ] ^ { T }$$. The four possible alternatives are evaluated under these three criteria in the form of IVNSs, as shown in the following neutrosophic decision matrix D.

[TeX:] $$D = \left( \begin{array}{c} \langle [ 0.4,0.5 ] , [ 0.2,0.3 ] , [ 0.3,0.4 ] \rangle \langle [ 0.4,0.6 ] , [ 0.1,0.3 ] , [ 0.2,0.4 ] \rangle \langle [ 0.7,0.9 ] , [ 0.2,0.3 ] , [ 0.4,0.5 ] \rangle \\ \langle [ 0.6,0.7 ] , [ 0.1,0.2 ] , [ 0.2,0.3 ] \rangle \langle [ 0.6,0.7 ] , [ 0.1,0.2 ] , [ 0.2,0.3 ] \rangle \langle [ 0.3,0.6 ] , [ 0.3,0.5 ] , [ 0.8,0.9 ] \rangle\\ \langle [ 0.3,0.6 ] , [ 0.2,0.3 ] , [ 0.3,0.4 ] \rangle \langle [ 0.5,0.6 ] , [ 0.2,0.3 ] , [ 0.3,0.4 ] \rangle \langle [ 0.4,0.5 ] , [ 0.2,0.4 ] , [ 0.7,0.9 ] \rangle\\ \langle [ 0.7,0.8 ] , [ 0,0.1 ] , [ 0.1,0.2 ] \rangle \langle [ 0.6,0.7 ] , [ 0.1,0.2 ] , [ 0.1,0.3 ] \rangle \langle [ 0.6,0.7 ] , [ 0.3,0.4 ] , [ 0.8,0.9 ]\rangle \end{array}\right)$$

From the interval valued neutrosophic matrix, we get the ideal alternative as below:

[TeX:] $$a ^ { * } = \{ \langle [ 0.7,0.8 ] , [ 0,0.1 ] , [ 0.1,0.2 ] \rangle, \langle [ 0.6,0.7 ] , [ 0.1,0.2 ] , [ 0.1,0.3 ] \rangle,\\ \langle [ 0.3,0.5 ] , [ 0.3,0.5 ] , [ 0.8,0.9 ] \rangle \}$$

According to Eq. (6), we get [TeX:] $$S \left( a _ { 1 } , a ^ { * } \right) = 0.6000 , S \left( a _ { 2 } , a ^ { * } \right) = 0.8906 , S \left( a _ { 3 } , a ^ { * } \right) = 0.7016 , S \left( a _ { 4 } , a ^ { * } \right) = 0.8451$$.

Then we rank the alternatives as [TeX:] $$a _ { 2 } > a _ { 4 } > a _ { 3 } > a _ { 1 }$$. Obviously, a2 is the best alternative.
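The whole procedure, ideal-point construction by benefit/cost criterion followed by the weighted measure of Eq. (6), can be sketched as follows (Python; data layout and names are ours):

```python
from math import pi, tan

# Rows of D are alternatives a1..a4, columns are criteria c1..c3;
# each entry is ((infT,supT),(infI,supI),(infF,supF)).
D = [
    [((0.4,0.5),(0.2,0.3),(0.3,0.4)), ((0.4,0.6),(0.1,0.3),(0.2,0.4)), ((0.7,0.9),(0.2,0.3),(0.4,0.5))],
    [((0.6,0.7),(0.1,0.2),(0.2,0.3)), ((0.6,0.7),(0.1,0.2),(0.2,0.3)), ((0.3,0.6),(0.3,0.5),(0.8,0.9))],
    [((0.3,0.6),(0.2,0.3),(0.3,0.4)), ((0.5,0.6),(0.2,0.3),(0.3,0.4)), ((0.4,0.5),(0.2,0.4),(0.7,0.9))],
    [((0.7,0.8),(0.0,0.1),(0.1,0.2)), ((0.6,0.7),(0.1,0.2),(0.1,0.3)), ((0.6,0.7),(0.3,0.4),(0.8,0.9))],
]
benefit = [True, True, False]  # c1, c2 are benefit criteria; c3 is cost
w = [0.35, 0.25, 0.40]

def ideal(column, is_benefit):
    # benefit: max T endpoints, min I and F endpoints; cost: the reverse
    f_t = max if is_benefit else min
    f_if = min if is_benefit else max
    T = tuple(f_t(e[0][k] for e in column) for k in (0, 1))
    I = tuple(f_if(e[1][k] for e in column) for k in (0, 1))
    F = tuple(f_if(e[2][k] for e in column) for k in (0, 1))
    return (T, I, F)

a_star = [ideal([row[j] for row in D], benefit[j]) for j in range(3)]

def wis(A, B, weights):
    s = 0.0
    for (ta, ia, fa), (tb, ib, fb), wj in zip(A, B, weights):
        m = max(abs(x - y) for x, y in
                list(zip(ta, tb)) + list(zip(ia, ib)) + list(zip(fa, fb)))
        s += wj / tan(pi / 4 * (1 + m))  # cot = 1/tan
    return s

scores = [wis(row, a_star, w) for row in D]
ranking = sorted(range(4), key=lambda i: -scores[i])
print([f"a{i+1}" for i in ranking])  # ['a2', 'a4', 'a3', 'a1']
```

The computed ideal point coincides with a* above, and the ranking agrees with the one reported.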

Comparison and analysis: In this section, we have proposed a method to solve the multi-criteria decision-making problem expressed with IVNSs. From the above example and a comparison with the method in [19], we see that its result is the same as ours. In [19], two formulas based on the Hamming and Euclidean distance measures are used to calculate the similarity measure, while our method is based on cotangent functions that satisfy the definition of a similarity measure. The results show that our method is effective and reasonable.

6. Conclusion

In this paper, we commented on the similarity measures of SVNSs and IVNSs and gave new cotangent functions to calculate similarity measures, all of which satisfy the definition of a similarity measure. The weighted cotangent similarity measures were then introduced to account for the importance of each element. Finally, we applied the similarity measures to pattern recognition under the single valued neutrosophic environment and to multi-criteria decision-making under the interval valued neutrosophic environment. The results show that our methods are effective and reasonable.

Acknowledgement

This work was supported by the Fundamental Research Funds for the Central Universities (No. 2572017CB29) and Harbin Science Technology Innovation Talent Research Fund (No. 2016RQQXJ230).

Biography

Chunfang Liu
https://orcid.org/0000-0003-0396-4115

She received her M.S. degree from Harbin Institute of Technology, China, in 2003. She is studying for her doctorate at Harbin Engineering University and is currently working at the College of Science, Northeast Forestry University. Her research interests include system engineering, intelligent control, and fuzzy systems.

References

• 1 L. A. Zadeh, "Fuzzy sets," Information and Control, 1965, vol. 8, no. 3, pp. 338-353. doi:[[[10.1016/S0019-9958(65)90241-X]]]
• 2 L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning (I)," Information Science, 1975, vol. 8, no. 3, pp. 199-249. doi:[[[10.1016/0020-0255(75)90036-5]]]
• 3 L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning (II)," Information Science, 1975, vol. 8, no. 4, pp. 301-357. doi:[[[10.1016/0020-0255(75)90046-8]]]
• 4 L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning (III)," Information Science, 1975, vol. 9, no. 1, pp. 43-80. doi:[[[10.1016/0020-0255(75)90017-1]]]
• 5 L. A. Zadeh, "Toward a generalized theory of uncertainty (GTU): an outline," Information Sciences, 2005, vol. 172, no. 1-2, pp. 1-40. doi:[[[10.1016/j.ins.2005.01.017]]]
• 6 J. Ye, "A multicriteria decision-making method using aggregation operators for simplified neutrosophic set," Journal of Intelligent Fuzzy Systems, 2014, vol. 26, no. 5, pp. 2459-2466. doi:[[[10.3233/IFS-130916]]]
• 7 C. Liu, Y. Luo, "Correlated aggregation operators for simplified neutrosophic set and their application in multi-attribute group decision making," Journal of Intelligent Fuzzy Systems, 2016, vol. 30, no. 3, pp. 1755-1761. doi:[[[10.3233/IFS-151886]]]
• 8 K. Atanassov, "Intuitionistic fuzzy sets," Fuzzy Sets and Systems, 1986, vol. 20, no. 1, pp. 87-96. doi:[[[10.1007/978-3-7908-1870-3_1]]]
• 9 K. T. Atanassov, G. Gargov, "Interval valued intuitionistic fuzzy sets," Fuzzy Sets and Systems, 1989, vol. 31, no. 3, pp. 343-349. doi:[[[10.1007/978-3-7908-1870-3_2]]]
• 10 F. Smarandache, Neutrosophy: Neutrosophic Probability, Set and Logic. Rehoboth, DE: American Research Press, 1999.custom:[[[-]]]
• 11 F. Smarandache, A Unifying Field in Logics: Neutrosophic Logic: Neutrosophy, Neutrosophic Set and Neutrosophic Probability. Rehoboth, DE: American Research Press, 2003.custom:[[[-]]]
• 12 H. Wang, F. Smarandache, Y. Zhang, R. Sunderraman, "Single valued neutrosophic sets," Multispace and Multistructure, 2010, no. 4, pp. 410-413. custom:[[[https://www.researchgate.net/publication/262047656_Single_valued_neutrosophic_sets]]]
• 13 H. Wang, F. Smarandache, Y. Zhang, R. Sunderraman, Interval Neutrosophic Sets and Logic: Theory and Applications in Computing. Phoenix, AZ: Hexis, 2005.custom:[[[-]]]
• 14 J. Ye, "Vector similarity measures of simplified neutrosophic sets and their application in multicriteria decision making," International Journal of Fuzzy Systems, 2014, vol. 16, no. 2, pp. 204-211. custom:[[[-]]]
• 15 C. Li, S. Luo, "The weighted distance measure based method to neutrosophic multi-attribute group decision making," Mathematical Problems in Engineering, 2016, vol. 2016, article ID 3145341. doi:[[[10.1155/2016/3145341]]]
• 16 P. Liu, L. Shi, "The generalized hybrid weighted average operator based on interval neutrosophic hesitant set and its application to multiple attribute decision making," Neural Computing and Applications, 2015, vol. 26, no. 2, pp. 457-471. doi:[[[10.1007/s00521-014-1736-4]]]
• 17 S. Broumi, F. Smarandache, "Several similarity measures of neutrosophic sets," Neutrosophic Sets Systems, 2013, vol. 1, pp. 54-62. doi:[[[10.6084/M9.FIGSHARE.1502610]]]
• 18 P. Majumdar, S. K. Samanta, "On similarity and entropy of neutrosophic sets," Journal of Intelligent Fuzzy Systems, 2014, vol. 26, no. 3, pp. 1245-1252. doi:[[[10.3233/IFS-130810]]]
• 19 J. Ye, "Similarity measures between interval neutrosophic sets and their applications in multicriteria decision-making," Journal of Intelligent Fuzzy Systems, 2014, vol. 26, no. 1, pp. 165-172. doi:[[[10.3233/IFS-120724]]]
• 20 J. Ye, "Multiple attribute group decision-making method with completely unknown weights based on similarity measures under single valued neutrosophic environment," Journal of Intelligent Fuzzy Systems, 2014, vol. 27, no. 6, pp. 2927-2935. doi:[[[10.3233/IFS-141252]]]
• 21 J. Ye, "Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses," Artificial Intelligence in Medicine, 2015, vol. 63, no. 3, pp. 171-179. doi:[[[10.1016/j.artmed.2014.12.007]]]
• 22 J. Ye, J. Fu, "Multi-period medical diagnosis method using a single valued neutrosophic similarity measure based on tangent function," Computer Methods and Programs in Biomedicine, 2016, vol. 123, pp. 143-149. doi:[[[10.1016/j.cmpb.2015.10.002]]]
• 23 S. Pramanik, K. Mondal, "Weighted fuzzy similarity measure based on tangent function and its application to medical diagnosis," International Journal of Innovative Research in Science, Engineering and Technology, 2015, vol. 4, no. 2, pp. 158-164. doi:[[[10.15680/ijirset.2015.0402023]]]
• 24 W. Wang, X. Xin, "Distance measure between intuitionistic fuzzy sets," Pattern Recognition Letters, 2005, vol. 26, no. 13, pp. 2063-2069. doi:[[[10.1016/j.patrec.2005.03.018]]]