2. Related Research
Many researchers have continued to focus on protecting data in order to prevent malicious data leakage by members of an organization, or by insiders authenticated by the system. Typical data protection methods include intrusion detection systems and intrusion prevention systems. An intrusion detection system is an automated system that monitors events occurring in a computer or network in order to detect intrusions. An intrusion prevention system is a security solution that prevents intrusions in real time and uses various security technologies to block harmful network traffic [10].
These systems are divided into network-based intrusion detection systems (NIDS)/network intrusion prevention systems (NIPS) and host-based intrusion detection systems (HIDS)/host-based intrusion prevention systems (HIPS), depending on whether the system operates on the network or on the host. NIDS/NIPS detect abnormal traffic appearing on the network, then block or respond to attacks in advance. HIDS/HIPS monitor system logs, critical system files, and other events occurring on the host computer itself in order to block anomalous processes in advance.
IDS/IPS monitor the behavior of insiders using signature-based detection and anomaly-based detection. Signature-based detection records the characteristics of data leakage events that have already occurred in a database, and then blocks an intrusion when a pattern stored in the database is detected afterwards. This method shows a high detection rate when the intrusion pattern database is well maintained, but a low detection rate when intrusion patterns have not been properly entered into the database. Anomaly-based detection records normal activity, and when an action outside the reference range is detected, blocks the corresponding process and traffic. However, anomaly-based detection carries the risk that a normal process may be misclassified as abnormal.
Enterprises use data loss prevention (DLP) to prevent data leakage by insiders in advance. DLP is a data protection solution that manages security logs output from security devices such as firewalls and anti-virus programs, then blocks access to important information according to predefined rules. In order to protect critical information, the DLP solution applies digital rights management (DRM), an encryption technology, to each important document. After granting access rights according to the level of each insider, it protects the data by blocking any detected access by users without the corresponding rights. In addition, DLP defines usage rules for each important item of information, and blocks access to the data if a rule violation is detected.
Event data and logs output from each security device are all collected in a security information and event management (SIEM) system, an integrated log analysis and management system. The SIEM solution analyzes the security log history of insiders based on an input list of log analysis rules, and outputs the list of logs showing abnormal symptoms to the administrator, so that the administrator can analyze the logs and respond appropriately. To analyze the logs, the SIEM solution mainly uses a rule-based detection method, which determines whether an insider has violated the log analysis rule list, and a score-based detection method, in which a value derived from a previously defined formula is judged for each abnormal behavior item. Both methods share the disadvantage that they cannot respond properly to behavior patterns that have not previously been input to the system.
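As an illustration of these two SIEM detection styles, a minimal sketch follows; the rule thresholds, score weights, and counter names are hypothetical, not drawn from any real SIEM product:

```python
# Illustrative sketch (not a real SIEM API): rule-based and score-based
# checks over an insider's daily log counts. Thresholds and weights are
# hypothetical values chosen for the example.
RULES = {"usb_copies": 10, "print_jobs": 50}        # rule-based thresholds
WEIGHTS = {"usb_copies": 0.3, "print_jobs": 0.1}    # score-based weights

def rule_violations(log_counts):
    """Return every behavior whose daily count exceeds its rule threshold."""
    return [k for k, limit in RULES.items() if log_counts.get(k, 0) > limit]

def anomaly_score(log_counts):
    """Score-based method: weighted sum over abnormal behavior items."""
    return sum(WEIGHTS.get(k, 0) * v for k, v in log_counts.items())

log = {"usb_copies": 12, "print_jobs": 3}
print(rule_violations(log))   # only usb_copies exceeds its threshold
print(anomaly_score(log))
```

Both checks fail in the same way the text describes: a leakage behavior with no rule entry and no weight contributes nothing, so unseen patterns go undetected.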
Further, research is currently being conducted into methods for detecting not only existing intrusion patterns, but also related patterns, through the use of association analysis algorithms. Wuu et al. [11] carried out research to update the signature rules used by Snort, one of the NIDS tools, to monitor and analyze network traffic through the Apriori algorithm. The authors of [12] applied the Apriori algorithm to normal network packets so as to create a list of normal behavior patterns; if a traffic item showing a pattern different from the normal behavior patterns is detected, that traffic is blocked.
Such research proceeds in the following order. First, a transaction defining the network behavior is specified. In order to create a normal behavior pattern, a transaction consists of several packet groups. In this case, to prevent packets corresponding to different network actions from being assigned to the same transaction group, a transaction is generated by grouping packets for which the difference between the time when a packet is generated and the time when the previous packet is completed is within some predetermined value. Once the transactions have been defined, groups of packets of size 2 or more are extracted from the frequently occurring packet list using the Apriori algorithm, and these groups are used to define the normal behavior patterns. Then, when a new network action is input, it is compared with the normal action patterns in order to determine whether the action is abnormal.
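The transaction-building step above can be sketched as follows; the gap threshold, packet tuples, and action names are illustrative assumptions, not values from [11] or [12]:

```python
from typing import List, Tuple

# Hypothetical packet records: (start_time, end_time, action) tuples.
# GAP_THRESHOLD is an assumed value; the cited papers do not fix one here.
GAP_THRESHOLD = 2.0  # seconds

def group_into_transactions(packets: List[Tuple[float, float, str]]) -> List[List[str]]:
    """Group packets into transactions: a new transaction starts when the
    gap between a packet's start time and the previous packet's end time
    exceeds GAP_THRESHOLD."""
    transactions: List[List[str]] = []
    current: List[str] = []
    prev_end = None
    for start, end, action in sorted(packets):
        if prev_end is not None and start - prev_end > GAP_THRESHOLD:
            transactions.append(current)
            current = []
        current.append(action)
        prev_end = end
    if current:
        transactions.append(current)
    return transactions

packets = [(0.0, 0.1, "SYN"), (0.2, 0.3, "HTTP GET"),       # one burst
           (5.0, 5.1, "DNS query"), (5.2, 5.4, "HTTP GET")]  # next burst
print(group_into_transactions(packets))  # two transactions, split at the 4.7 s gap
```

Each resulting transaction would then be fed to the Apriori algorithm to mine frequent packet groups of size 2 or more.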
When abnormal behavior is judged against normal behavior patterns defined by applying the Apriori algorithm to the association among the packets constituting network behavior, there is a problem that normal behavior can still be mistaken for abnormal behavior, because the kinds of behavior patterns an insider can perform are highly varied.
As the amount of log data generated each day grows exponentially, research into detecting abnormal insiders by analyzing logs through machine learning algorithms is currently underway. Julisch [13] used a clustering algorithm, for which the unsupervised model showed better results than supervised learning. The authors of [14] present a new attempt to detect insider threats using graph learning. Das et al. [15] applied the principal component analysis algorithm to information data on the network in order to detect abnormal packets and prevent intrusions.
Recently, deep learning algorithms have been developed, and studies have been conducted to detect abnormal packets by learning normal network packets through a deep neural network [16]. However, these studies train the learning model on the normal behavior of insiders, and any pattern of behavior differing from the trained normal activity is regarded as abnormal. Therefore, there is a problem that the detection accuracy can vary greatly according to the training data.
In order to overcome these problems, we propose a new algorithm that can detect malicious insiders who leak data, using association analysis algorithms, which exhibit high performance in extracting related data, and deep learning algorithms, which exhibit high performance in processing large amounts of data.
In the proposed system, the association analysis algorithm is applied to the security log histories recorded in past data leakage incidents, and the series of behaviors that can appear when data is leaked are defined as leakage detection scenarios. Therefore, data leakage can be detected more flexibly using the proposed system than with existing intrusion detection systems. In addition, since the logs are rendered as bar graphs according to each scenario rule and analyzed through a convolutional neural network (CNN), it is possible to detect abnormalities with high accuracy, as well as to determine the type and time of the action.
3. A System for Improving Data Leakage Detection Based on Association Relationship between Data Leakage Patterns
At present, as data leakage accidents are frequently caused by insiders, companies are applying several security solutions that monitor activity logs and restrict the use of data as appropriate. However, these systems are dependent on existing data leakage patterns; when data is leaked in a manner that has not been defined by an administrator, data leakage cannot be properly determined. This study suggests a system that can detect data leakage by an insider, even if the data is leaked through a pattern that has not been input to the system.
Fig. 1 shows that the proposed system consists of a log collection unit, a leakage detection scenario establishment unit, and a data leakage definition unit. The log collection unit collects security logs that are meaningful in determining whether data has been leaked from the security solutions operated by the enterprise. The leakage detection scenario establishment unit defines the list of leakage detection scenarios used to analyze the log history and judge data leakage, and stores it in the database. The data leakage definition unit collects all log details that may appear during business activities on each security device. It judges data leakage by classifying the collected security logs into the behavior log groups that constitute each leakage detection scenario. The resulting graphs are then input into the CNN, which searches for insiders showing high similarity to the security log graphs of malicious insiders.
Fig. 1. Data leakage detecting system based on association relationship.
3.1 Log Collection Unit
The behavior of each insider in the enterprise is recorded and stored in the form of a security log in the security solution, and abnormal behavior can be detected by analyzing the history of the security log collected from each insider. Security logs can be collected from the security devices applied by the enterprise, and different types of security logs are collected depending on the type of security device.
Security solutions can largely be divided into network security solutions, system security solutions, and contents/information leakage prevention solutions, according to the security device environment. Network security solutions mainly block unauthorized network ports, abnormal network traffic, and attempts from an external network to break into the internal network used by an enterprise, and they save the traffic information and access times as logs. The system security solution stores and manages the events appearing on the host PCs used by insiders. The contents/information leakage prevention solution stores all records of access to information protected by DRM, an encryption technology. Specifically, if an attempt is made to access data beyond the insider's authority, or to access data through an action unrelated to the business, it blocks the access in advance and stores the information of the insider who attempted it.
The proposed data leakage detection system collects the logs output from each security device, and makes it possible to judge data leakage based on the data re-processed into graph images according to the leakage detection scenarios.
3.2 Leakage Detection Scenario Establishment Unit
3.2.1 Definition of basic behavior scenario
In order to determine whether data has been leaked based on collected logs, it is necessary to define specific criteria for analyzing the logs. Therefore, we define a scenario of a series of behaviors that can occur when data is leaked, classify the collected logs according to the leakage detection scenario, and determine the data leakage based on whether or not each log shows an abnormal pattern.
A leakage detection scenario is defined as a set of behaviors that can occur for each security solution. For example, in the case of the ‘DLP’ scenario, all event activities, such as ‘USB connection’ and ‘print output of protected document’, that can be collected by the DLP security solution are defined, and this enables the data leakage to be judged by classifying the security log based on the leakage detection scenario, when the security log is collected from insiders afterwards.
The leakage detection scenarios consist of basic behavior scenarios and associated behavior scenarios. A basic behavior scenario defines a series of actions that may occur in cases of data leakage that have already been analyzed by the administrator. An associated behavior scenario is defined by analyzing the security log history of those who leaked data using the Apriori algorithm, and finding sets of behaviors that show a high correlation.
The basic behavior scenario is defined as a scenario in which the behaviors involved in data leakage are classified by security solution, based on data leakage incidents that have already been analyzed by the administrator. For example, most insiders download or copy protected documents stored in the enterprise in order to leak the data, then export them via mail or portable devices. In this case, the basic behavior scenarios are created as a DRM document access scenario that manages important documents, a mail management scenario, and a portable device scenario, and the behaviors corresponding to the respective scenarios are stored as their components. The following are basic behavior scenarios that may occur in data leakage accidents. The system identifies an insider who, through analysis of the security log, is found to have performed the constituent behaviors of a basic behavior scenario as a malicious insider, and blocks access to the important data.
The scenario of accessing a harmful site: Even if an insider does not intend to deliberately leak data, data can be leaked by connecting to an illegal site, or by downloading a file that has no relationship with the work. As a result, the scenario of accessing harmful sites is organized by the number of blocked connections to harmful sites, the number of blocked malicious codes, the rate of traffic increase, and the number of unauthorized file downloads.
The scenario of access to shared folders: Corporations handle work by having staff members share folders and files with other staff members inside the corporation. However, a staff member who has authority over the management of shared folders can arbitrarily modify or delete data, and can also grant the right to access data to a different staff member. There is then the possibility of such an insider arbitrarily copying corporate data and leaking it to the outside, or deleting data, including his or her personal information. As a result, the scenario related to access to shared folders is organized by the number of accesses to shared folders, the number of files copied in shared folders, and the number of files deleted from shared folders.
The data upload scenario: An insider leaks data by attaching it to a messenger, or uploading it to an online storage service or a file-sharing site. Because data on a webmail or storage service can be leaked through the network protocol, in order to detect data leakage, it is necessary to monitor the amount of protocol usage by each insider. The data upload scenario is organized by the number of file uploads, the number of blocked uses of unauthorized ports and services, and the number of blocked installations of web-hard (online storage) programs.
The DRM document storing scenario: Data from a document protected by DRM can be either downloaded according to the usage authority or leaked through printing. An insider downloads a document protected by DRM, captures it as a screenshot, or creates a new document and copies the contents of the protected document into it. As a result, the DRM document storing scenario is organized by the number of downloads of security documents, the number of copies of DRM contents, and the number of screenshot captures.
The database access scenario: As important data, such as personal information, are stored in corporate databases, these data can be leaked by frequently accessing the database. There are many cases in which a staff member transmits an abnormally high number of search queries, or transmits modification queries to the database in order to alter his or her personal information. Therefore, the database access scenario is organized by the number of requests for the right to access a database, as well as the number of search queries and modification queries transmitted.
The portable device scenario: Most acquired data is leaked through a portable device, such as a USB drive, through e-mail, or through an output device, such as a printer. In cases of data leakage to the outside, the insider copies a larger volume of data to portable devices, or attaches more files to e-mails, than during normal work. In the case of an output device, such as a printer or fax, the volume of documents output is likewise greater than at ordinary times. Hence, in order to detect data leakage, the scenario is organized by the number of files copied to portable devices, the size of the copied files, the number of accesses to portable devices, the number of files attached to e-mails, the size of the attached files, the number of outputs from the output device, and the size of the files output.
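For illustration, the basic behavior scenarios above can be encoded as a simple mapping from scenario names to their constituent behavior counters; all identifiers here are paraphrases of the text, not a fixed schema used by the system:

```python
# Hypothetical encoding of the basic behavior scenarios: each scenario maps
# to the list of per-insider counters that compose it (names paraphrased).
BASIC_SCENARIOS = {
    "harmful_site_access": ["blocked_site_connections", "blocked_malicious_codes",
                            "traffic_increase_rate", "unauthorized_downloads"],
    "shared_folder_access": ["folder_accesses", "file_copies", "file_deletions"],
    "data_upload": ["file_uploads", "blocked_ports_services", "blocked_webhard_installs"],
    "drm_document_storing": ["document_downloads", "content_copies", "screen_captures"],
    "database_access": ["access_right_requests", "select_queries", "update_queries"],
    "portable_device": ["file_copies_to_device", "copied_file_size", "device_accesses",
                        "mail_attachments", "attachment_size", "print_count", "printed_size"],
}

def classify_log(security_log: str) -> list:
    """Return the scenarios whose component behaviors include the given log type."""
    return [name for name, behaviors in BASIC_SCENARIOS.items()
            if security_log in behaviors]

print(classify_log("file_copies"))  # belongs to the shared-folder scenario
```

Classifying each collected security log into its scenario group in this way is the precondition for the per-scenario graph generation described in Section 3.3.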
3.2.2 Basic definition for Apriori algorithm
When the insider's behavior is monitored only through the basic behavior scenarios, there is the problem that data leakage cannot be properly judged when data is leaked in a pattern that differs from previous data leakage accidents. Therefore, instead of analyzing insider behavior using only the existing leakage detection scenarios, the system analyzes the security log through the Apriori algorithm, defines each group of related behaviors as an associated behavior scenario, and updates the leakage detection scenarios, making it possible to judge data leakage more flexibly.
The Apriori algorithm is mathematically defined as follows.
Definition 1. A set [TeX:] $$L=\left\{l_{1}, l_{2}, l_{3} \dots l_{m}\right\}$$ is the security solution list of the extracted security logs, while a set [TeX:] $$A=\left\{a_{1}, a_{2}, a_{3} \dots a_{n}\right\}$$ is the list of behaviors performed to leak data during work hours. Given the two sets L and A, each subset of elements a and l is defined as an item set in the Apriori algorithm.
Definition 2. Let each of X, Y be an item set; an association rule is an implication of the form X→Y, where [TeX:] $$X, Y \subset(L \cup A), X \cap Y=\emptyset$$. It uses the standard measures of support and confidence.
Definition 3. A set [TeX:] $$d=\left\{l_{1}, l_{2}, l_{3}, a_{1}, a_{2}, a_{3} \dots l_{t}, a_{j}\right\}$$ is a transaction representing the data leakage behaviors collected by the security solutions in a data leakage accident. In addition, a set [TeX:] $$D=\left\{d_{1}, d_{2}, d_{3} \dots d_{m}\right\}$$ represents the list of all transactions.
Definition 4. Support is defined by the proportion of transactions in which an item set appears. Given item sets l, a, the support of the rule l→a is defined as [TeX:] $$\operatorname{support}(l \rightarrow a)=\frac{\left|\left\{d \in D :(l \cup a) \subseteq d\right\}\right|}{|D|}$$
Definition 5. Support can be used to filter association rules by stating a lower bound for the support of a rule, called the minimum support. If an item set [TeX:] $$<l_{i}, a_{j}>$$ has a support value equal to or higher than the minimum support threshold, it is defined as a frequent item set.
Definition 6. Confidence measures the quality of association rules between item sets. Given item sets l, a, the confidence of the rule l→a (the conditional probability that a transaction containing item set l also contains item set a) is defined as [TeX:] $$\operatorname{confidence}(l \rightarrow a)=\frac{\operatorname{support}(l \cup a)}{\operatorname{support}(l)}$$
3.2.3 Apriori algorithm discovering association rules about security log
The Apriori algorithm is an association analysis algorithm for analyzing the relationship between each set of items for a set of transactions composed of a set of items. Taking market research as an example, each purchased item is defined as an item set, and a purchased item list is defined as a transaction. By analyzing the relationship between items using the Apriori algorithm, it is possible to derive and recommend an item that is likely to be purchased together with an item x that is already purchased.
The Apriori algorithm consists of two major steps:
Step 1: Calculate the occurrence probability of each item set by computing its support. An item set whose support exceeds the minimum support value is defined as a frequent item set.
Step 2: For each subset of the frequent item sets derived in Step 1, if the confidence value is lower than the minimum confidence threshold, it is removed from the frequent item sets. By contrast, if the confidence satisfies the minimum confidence threshold, it is determined that there is an association relationship among the frequent item sets, and this is generated as one rule, itemset1→itemset2.
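A brute-force sketch of these two steps (not the optimized candidate-pruning Apriori, which avoids enumerating all item sets) might look like this; the transaction contents are hypothetical:

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.5, min_confidence=0.6):
    """Minimal sketch of the two Apriori steps described above:
    Step 1 keeps item sets whose support meets min_support;
    Step 2 keeps rules X -> Y whose confidence meets min_confidence."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if set(itemset) <= set(t)) / n

    # Step 1: frequent item sets of size >= 2 (enumerated exhaustively for clarity).
    frequent = [c for size in range(2, len(items) + 1)
                for c in combinations(items, size)
                if support(c) >= min_support]

    # Step 2: generate rules lhs -> rhs from each frequent item set.
    rules = []
    for itemset in frequent:
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                rhs = tuple(i for i in itemset if i not in lhs)
                conf = support(itemset) / support(lhs)
                if conf >= min_confidence:
                    rules.append((lhs, rhs, conf))
    return frequent, rules

transactions = [{"DRM", "auth_request", "download"},
                {"DRM", "auth_request"},
                {"mail", "attach_file"}]
frequent, rules = apriori_rules(transactions)
print(frequent, rules)
```

With these hypothetical transactions, only {DRM, auth_request} reaches the minimum support (2/3), and both directions of the rule reach confidence 1.0.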
We applied the Apriori algorithm to the log collection and security solution in order to determine the association relationship between abnormal behaviors in the data leakage incident history. Tables 1 and 2 show the notations, input, output, and procedure of the algorithm.
The information on the leakage behavior and the security equipment in each security log collected from an insider is defined as an item set. A transaction includes all of the leakage activities and security solution types collected in a given data leakage accident. Therefore, the types of leakage behavior and the types of security equipment collected by the system constitute the item sets. The item sets are used to generate a group of candidate item sets, each composed of combinations of distinct item sets. In this process, item sets not exceeding the minimum support threshold are removed from the candidate item sets. By contrast, when the minimum support value is satisfied, the candidate set is defined as a frequent item set. The frequent item set generation operation continues until the support values of all item sets have been calculated. Finally, when the frequent item sets have been generated, a confidence value is calculated for each subset of the frequent item sets, and the subsets that have an association relationship with abnormal behavior that leaks information are derived. Based on the rules generated by the system, we grouped the item sets constituting each association rule into one scenario, then defined the scenario as a data leakage detection scenario for creating a graph.
Notations of Apriori algorithm
Input, output, and pseudo code of Apriori algorithm
3.2.4 Applying Apriori algorithm to generate associated behavior scenario
In the proposed system, the Apriori algorithm is used to create leakage detection scenarios from the relationships between data leakage behaviors. Because the security administrator analyzes the correlation between the malicious behaviors that can be collected by each security solution and creates the leakage detection scenarios, which are the criteria for judging data leakage, the administrator can easily understand the intention of an insider through the relationships between the insider's behaviors expressed in the scenarios. Therefore, this study collected all security log histories of malicious insiders that occurred on the date of data leakage, extracted the sets of behaviors deemed to be highly relevant through the Apriori algorithm, and defined them as behavior scenarios. In this case, the item set used for the association analysis takes the structure of (security solution that collects the security log, security log), making it possible to analyze not only the relationships between data leakage behaviors, but also the relationships between security solutions and data leakage behaviors.
Assuming that the minimum support is 0.5 and the minimum confidence is 0.6, the procedure for applying the Apriori algorithm to the item sets ‘DRM’ and ‘DRM document authorization request’ in data such as those given in Table 3 is as follows. First, this study calculates the support, then determines the share of each item set in the data leakage. When security log data are given in the form of (security solution type, security log), the ‘security solution type’ and ‘security log’ data are separately defined as item sets. Therefore, the ‘DRM’ and ‘DRM document authorization request’ items in transaction T1 in Table 3 are each defined as an item set of size 1, the support is calculated for each item set, and it is judged whether the calculated support values satisfy the minimum support.
Set of frequent-1 security logs
Analyzing the correlation between the ‘DRM’ item set and the ‘DRM document authorization request’ item set, the transactions including both item sets appear twice among all transactions, so the support value is 2/3. Since the minimum support value is 0.5, both item sets, with a support value of 0.66, can be judged to be instances of data leakage behavior satisfying the minimum support. After that, each group of item sets satisfying the minimum support is defined as a frequent item set, such as {‘DRM’, ‘DRM document authorization request’}, and the item set size is increased. When a frequent item set has been generated by calculating the support, the association analysis algorithm calculates the confidence value between the two item sets in order to measure the degree of association between them. In Table 3, every transaction including ‘DRM’ also includes a ‘DRM document authorization request’, so in this case the confidence between the two item sets is calculated to be 100%, and it is thus judged that they have a large correlation with each other. These frequent item sets satisfy both the minimum confidence and the minimum support, and are thus not removed during the algorithm, but instead stored as candidate item sets for the next round of association analysis. This process of calculating support and confidence is repeated until all frequent item sets have been generated and analyzed, and finally frequent item sets such as those shown in Table 4 are produced.
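The support and confidence values in this worked example can be checked numerically; the three transactions below merely stand in for Table 3, whose full contents are not reproduced here:

```python
# Hypothetical stand-in for Table 3: three transactions of (solution, log) items.
transactions = [
    {"DRM", "DRM document authorization request", "DRM document download"},  # T1
    {"DRM", "DRM document authorization request"},                           # T2
    {"mail", "attach file"},                                                 # T3
]

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

s = support({"DRM", "DRM document authorization request"})
c = s / support({"DRM"})  # confidence of DRM -> authorization request
print(round(s, 2), c)     # support 2/3, confidence 1.0
```

The pair appears in two of three transactions (support 2/3 ≥ 0.5), and every transaction containing ‘DRM’ also contains the authorization request (confidence 1.0 ≥ 0.6), matching the judgment in the text.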
Frequent item sets to which Apriori is completely applied
When a frequent item set is finally determined to be highly related, it can be understood that, in the common data leakage cases, an attempt was made to access a DRM document, and the relevant document was downloaded after the DRM document's protection was released. A frequent item set satisfying both the minimum confidence and the minimum support is defined as an associated behavior scenario for determining an insider's data leakage, and since a scenario is composed of the behaviors that can be exhibited for each security solution, in this case the ‘DRM document authority request’ and ‘DRM document download’ security log information is defined as a ‘DRM’ associated behavior scenario; it then becomes the data set used to draw the graph input to the convolutional neural network afterwards.
3.3 Data Leakage Definition Unit
3.3.1 Generating scenario graphs for leakage detection
This section introduces the method of judging data leakage by analyzing the security logs collected from insiders based on the leakage detection scenarios defined in Section 3.2. The security logs collected from the insiders are visualized in the form of graphs. Each graph shows the hourly frequency of occurrence of the security logs indicating each data leakage behavior, and the graphs are collected for each leakage detection scenario in order to create the image to be input to the CNN.
For example, in the case of the basic behavior scenario of ‘harmful site access’, the rule is composed of the set of data leakage behaviors such as ‘the number of blocked connections to harmful sites’, ‘the number of blocked malicious codes’, ‘the rate of traffic increase’, and ‘the number of unauthorized file downloads’. In the system, a graph is created based on the number of occurrences of the security logs corresponding to each data leakage behavior, and the graph representing the ‘harmful site access’ leakage detection scenario is created by combining the occurrence frequency graph images of the security logs corresponding to these behaviors. At this time, in order to distinguish each security log graph, the graphs are created with different background colors.
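Before rendering, the per-scenario graph data reduces to a behavior-by-hour frequency matrix; below is a minimal sketch with hypothetical event and behavior names (an actual system would render each row as a bar chart with a distinct background color):

```python
from collections import Counter

# Hypothetical hourly security-log events for one insider: (hour, behavior).
events = [(9, "blocked_site_connection"), (9, "blocked_site_connection"),
          (10, "unauthorized_download"), (14, "blocked_malicious_code")]

# Behaviors composing the 'harmful site access' scenario (names paraphrased).
SCENARIO = ["blocked_site_connection", "blocked_malicious_code",
            "traffic_increase", "unauthorized_download"]

def hourly_matrix(events, behaviors, hours=range(24)):
    """Count occurrences per (behavior, hour); each row is the bar-graph data
    for one behavior, and the stacked rows form the scenario image for the CNN."""
    counts = Counter(events)
    return [[counts[(h, b)] for h in hours] for b in behaviors]

matrix = hourly_matrix(events, SCENARIO)
print(matrix[0][9], matrix[3][10])  # 2 blocked connections at 09:00, 1 download at 10:00
```

Rendering each row with a library such as matplotlib, using a different background color per behavior, would yield the composite scenario image described above.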
The leakage detection scenario that each graph represents becomes the class number used for discrimination in the CNN. In this case, different class numbers are assigned to normal and abnormal insiders, so that the CNN can distinguish between them. By judging whether the input graph image shows a certain behavior scenario, or a high degree of similarity with the graph of an abnormal insider, according to the classification result output from the CNN, the administrator can understand the type of security solution in which the behavior was detected, together with the behavior path and leakage behavior.
Fig. 2. Process of drawing a graph for the leakage detection scenario.
Fig. 2 shows the process of creating the graph for a DRM document access scenario. First, the DRM document scenario is extracted from the defined data leakage scenarios. After extracting the list of security logs according to the DRM document scenario and creating a graph for each behavior, the DRM document access scenario graph shown in Fig. 2 is created. Each data leakage scenario graph is input into the CNN and used to determine whether data is being leaked by each internal employee.
The graphs created based on the insiders' security logs are the image data input to the CNN, and the data leakage scenario represented by each graph is defined as the label for discriminating the data. Using the trained data and labels, it is possible to find insiders whose graphs show a trend similar to the graph of an abnormal employee. Using the discrimination result, together with the employee information attached to the determined labels, we detect the behavior path and the type of security solution through which the data was leaked.
3.3.2 Convolutional neural network
In this paper, our purpose is to identify insiders suspected of leaking data by analyzing the behavior scenario graphs generated from insiders' behaviors. Since an abnormal insider is determined in this system by analyzing the colors of the graph and the slopes indicating the trend of the security log, the system identifies insiders who show high similarity to the behavior log pattern of an insider attempting to leak data by using a CNN, which shows high performance in image analysis.
A CNN is a kind of deep learning algorithm that shows excellent performance in image recognition. In the past, feature extraction algorithms such as SIFT and HOG divided an image into pixel blocks of a certain size, and recognized images based on the features extracted from the gradient and brightness of each block. However, since such feature extraction algorithms only extract low-level features, the spatial information of three-dimensional image data can be lost. By contrast, a CNN learns an optimal feature map through its convolution and pooling layers, and learns images more efficiently through the high-level features extracted from the feature map.
A CNN is a feed-forward neural network that is mainly applied to image recognition. It is composed of convolution layers and pooling layers for extracting features, as well as a fully connected layer and a softmax classifier for learning from the extracted features. The CNN aims to use the spatial information represented by the pixels of the image, and learns and classifies images through the features extracted while moving a fixed feature mask across the pixel area.
The convolution layer performs feature extraction. It applies a kernel of a certain size to the input image. The kernel determines whether a particular feature is present in the input image, and generally has a small height/width, such as 3 or 5. Upon completion of the convolution operation, a feature map showing the characteristics of the input image is extracted.
For the convolution operation, we denote the feature maps of the original image as $x_1, \ldots, x_n$, where $x_{n-1}$ is the feature map derived from the previous layer, $w$ is a weight kernel for performing the convolution operation, and $b$ is a bias term. The convolution layer is computed as:

$$x_n = f(w * x_{n-1} + b)$$

where $f$ is an activation function; we apply the ReLU operation to $f$ in order to obtain a nonlinear feature map. The convolution operation shifts the kernel over the feature map $x_{n-1}$ derived from the previous layer, computing an inner product at each position, in order to obtain the feature map $x_n$ of the current layer.
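As a concrete illustration, the convolution-plus-ReLU step described above can be sketched in NumPy. The input values, kernel, and bias below are illustrative assumptions, not the system’s actual parameters:

```python
import numpy as np

def conv2d_relu(x_prev, w, b):
    """Valid convolution of a single-channel feature map with one kernel,
    followed by ReLU: x_n = f(w * x_{n-1} + b)."""
    kh, kw = w.shape
    oh = x_prev.shape[0] - kh + 1
    ow = x_prev.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # inner product of the kernel with the current pixel region
            out[i, j] = np.sum(x_prev[i:i + kh, j:j + kw] * w) + b
    return np.maximum(out, 0.0)  # ReLU activation f

x = np.arange(36, dtype=float).reshape(6, 6)  # hypothetical 6x6 input
w = np.ones((3, 3)) / 9.0                     # illustrative averaging kernel
fm = conv2d_relu(x, w, b=0.0)
print(fm.shape)  # (4, 4)
```

A 3×3 kernel over a 6×6 input yields a 4×4 feature map, matching the shrinking behavior of a valid convolution without padding.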
The pooling layer controls the amount of computation by reducing the size of the feature map output from the convolution layer. We use the max pooling operation, which selects the maximum value in each region of the feature map. Max pooling is formulated as follows:

$$x_n = \max_{(i,j) \in R} x_{n-1}(i,j)$$

where $R$ is a pooling region of the feature map.
That is, when the feature map $x_{n-1}$ output from the convolution layer is input, the max pooling layer extracts only the maximum value within each specific region, thereby reducing the map’s size.
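The max pooling step can be sketched as follows; the 4×4 feature map and 2×2 window are illustrative assumptions:

```python
import numpy as np

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the maximum of each size x size region."""
    h, w = x.shape
    h, w = h - h % size, w - w % size  # drop any ragged edge
    blocks = x[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fm = np.array([[1., 3., 2., 0.],
               [4., 2., 1., 5.],
               [0., 1., 7., 2.],
               [2., 6., 3., 4.]])
pooled = max_pool(fm)
print(pooled)
# [[4. 5.]
#  [6. 7.]]
```

Each 2×2 region collapses to its maximum, halving the feature map along both axes.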
The feature maps extracted by the convolution and pooling layers are input to the fully connected neural network, and the data are finally classified through the softmax classifier. During training, the CNN measures the difference between the distribution of the predicted labels and the distribution of the real data labels; the smaller this difference becomes, the higher the probability the classifier assigns to the correct label. The softmax classifier, which outputs the probability that the image belongs to the $j$th label, can be represented by the following equation:

$$p_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$$

where $z_j$ is the score for the $j$th label and $K$ is the number of labels.
As a result, an output vector whose size equals the number of image categories to be classified is generated, and the softmax classifier outputs the label with the highest probability.
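The softmax step can be sketched as follows; the class scores are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    """Probability that the input belongs to each label:
    p_j = exp(z_j) / sum_k exp(z_k)."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical scores for three labels
p = softmax(scores)
print(p.argmax())  # index of the most probable label: 0
```

The output vector sums to 1, and the classifier reports the index of the largest entry as the predicted label.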
3.3.3 Adapting convolutional neural network to leakage detection
In order to detect an abnormal insider, we analyze the security log as a trend graph. However, instead of analyzing only the occurrence counts of the security logs represented in the graph, we generate graphs per scenario that express the log generation trend, the insider behavior represented by each log, and the time of the abnormal behavior, so as to judge the insider’s behavior more effectively.
The graphs drawn according to the data leakage scenarios are input to the CNN for training. The graph of each insider’s security-log trend is defined as the training data, while the scenario type, which represents the category of the graph, is defined as the class label. When there are N types of detection scenarios defined in the system, each insider has graphs G1...GN to be learned by the CNN. Since the purpose of the system is to identify insiders who intend to leak data, the number of class labels is 2N: N labels for normal insiders’ graphs and N for abnormal insiders’ graphs. We set both the width and height of the kernel used to find image features to 5, and set both the stride of the kernel in the convolution layer and the size-reduction factor of the feature map in the pooling layer to 2.
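The 2N label scheme above can be sketched as follows; the label-naming function and string format are hypothetical, introduced only to make the counting concrete:

```python
def make_class_labels(n_scenarios):
    """Hypothetical label scheme: one (scenario, normal/abnormal) pair per
    scenario, giving 2N class labels in total."""
    labels = []
    for s in range(1, n_scenarios + 1):
        labels.append(f"G{s}-normal")
        labels.append(f"G{s}-abnormal")
    return labels

labels = make_class_labels(3)  # illustrative N = 3
print(len(labels))  # 6, i.e., 2N
```

With the 36 scenarios used later in this section, the scheme yields 72 class labels.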
Input, output, and procedure of CNN for leakage detection
We structure the CNN so that the convolution layer and the pooling layer alternate twice to extract the feature map, with the ReLU function used as the activation function. In addition, fully connected layers and a softmax classifier are placed after the last pooling layer in order to classify the images, judging which class label the input image most resembles. Letting the number of scenarios for normal and abnormal insiders be 36, the CNN algorithm can be applied as shown in Table 5, and the image feature selection and image classification layers have the structures shown in Tables 6 and 7, respectively.
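The shrinking of the feature map through the two alternating convolution/pooling stages can be traced with a small shape calculation. The input image side length of 64 is a hypothetical assumption (the excerpt defers the exact layer dimensions to Tables 6 and 7); the kernel size 5, stride 2, and pooling factor 2 follow the settings stated above:

```python
def conv_out(size, kernel=5, stride=2):
    """Output side length of a valid convolution with the given kernel and stride."""
    return (size - kernel) // stride + 1

def pool_out(size, window=2):
    """Output side length after non-overlapping pooling."""
    return size // window

size = 64  # hypothetical input graph-image side length
for _ in range(2):  # convolution and pooling alternate twice
    size = pool_out(conv_out(size))
print(size)  # 3
```

Each conv/pool pair roughly quarters the side length, so a 64-pixel image is reduced to a 3×3 feature map before the fully connected layers.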
According to the CNN identification results, the administrator can identify the behavioral path and the purpose of the actions through which the data was leaked, based on the time at which the abnormal behavior appears in the graph of an insider whose graph resembles that of a data-leaking insider.
Image feature extraction layer structure
Image classification layer structure