Software development has driven a period of significant social transformation. In the current era, both technology and the environment around it change constantly. The information technology (IT) community should therefore adopt research and implementation strategies that prepare for future changes, regardless of their cause or speed.
The recent changes in IT can be summarized by major trends such as Blockchain, machine learning, and big data. Blockchain is a distributed data-processing technology in which all data are replicated and stored across the users participating in the network, so that data formerly controlled by a central administrator are managed jointly by all users. Bitcoin is a typical application of Blockchain technology. In this issue, Blockchain technology is applied to patch management and electronic voting systems.
Machine learning is a field of artificial intelligence (AI) in which computers acquire new knowledge from data, learning in a manner analogous to humans. This technology has attracted renewed attention due to its recent development into deep learning, and machine learning-based applied technologies are also emerging. This issue features papers on the identification and analysis of tea diseases and on risk analysis based on fuzzy probability and Bayesian games.
Big data differs from conventional data in terms of volume, velocity, and variety and has recently drawn attention in various fields. Moreover, various techniques for collecting, storing, and analyzing structured and unstructured data are being developed. This issue features papers that apply such methods in different fields, such as topic extraction and classification, complex keyword extraction, offline-to-online (O2O) service analysis, and movie recommendation systems.
The Journal of Information Processing Systems (JIPS) is an official international journal published by the Korean Information Processing Society (KIPS) and indexed in ESCI, SCOPUS, Ei Compendex, DOI, DBLP, EBSCO, and Google Scholar. It has four divisions: Computer system and theory, Multimedia systems and graphics, Communication systems and security, and Information systems and application. This issue features 18 peer-reviewed papers selected through a rigorous review process.
2. Advanced Technologies in Blockchain, Machine Learning, and Big Data
Woo et al. conducted a study applying IT Service Management (ITSM), a framework for the integrated management of IT services, to the national defense acquisition system in Korea. By analyzing the existing, outdated national defense acquisition system and applying ITSM to it for faster processing and response, the authors explain the necessity of an ITSM system and investigate the level of service satisfaction with ITSM. In addition, the data-processing procedure that results from applying ITSM to the defense acquisition system is expressed using a UML diagram, and an efficient method of applying ITSM is described.
Khamis et al. proposed a new formal representation of the digital contents domain that uses an ontology-based model and semantic vectors to redefine digital content data, combined with media segmentation methods. They also suggested an ontology-based digital contents query solution to provide faster access to digital contents data stored in a persistent database. To do this, the authors classified digital contents data into different types and introduced a formal representation method by redefining the data based on OWL/RDF and combining it with media segmentation methods.
Knogwudhikunakorn and Waiyamai proposed a clustering technique to group short-text documents, such as news headlines, social media statuses, and instant messages, into multiple related clusters. According to them, the right combination of document representation, document distance, and document clustering must be identified to achieve the best clustering quality. To this end, k-means partitioning-based clustering was applied in the proposed scheme. To verify the efficiency of the proposed method, they presented experimental results on clustering quality in terms of accuracy, recall, F1-score, and adjusted Rand index.
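The pipeline combines a document representation, a document distance, and k-means partitioning. The pattern can be sketched minimally as follows, assuming a plain bag-of-words representation and cosine similarity with deterministic initialization; the paper's actual representation, distance, and initialization are more sophisticated:

```python
from collections import Counter

def vectorize(doc):
    """Bag-of-words term-frequency vector."""
    return Counter(doc.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    if num == 0:
        return 0.0
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return num / (norm_a * norm_b)

def kmeans(docs, k, iters=10):
    vecs = [vectorize(d) for d in docs]
    # deterministic init for illustration; real systems use k-means++ or restarts
    centroids = [dict(v) for v in vecs[:k]]
    labels = [0] * len(vecs)
    for _ in range(iters):
        # assignment step: nearest centroid by cosine similarity
        labels = [max(range(k), key=lambda c: cosine(v, centroids[c])) for v in vecs]
        # update step: centroid = mean term frequency of cluster members
        for c in range(k):
            members = [vecs[i] for i, l in enumerate(labels) if l == c]
            if members:
                merged = Counter()
                for m in members:
                    merged.update(m)
                centroids[c] = {w: n / len(members) for w, n in merged.items()}
    return labels
```

On a toy corpus with two clearly separated topics, the sketch groups the related headlines together.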
Song et al. proposed a Hyperledger Fabric Blockchain-based distributed patch management system that addresses the problems of centralized enterprise patch management systems and ensures the stability of enterprise systems through security, log management, and up-to-date status supervision and monitoring functions. The authors designed the system using the Blockchain's distributed database storage method and the practical Byzantine fault tolerance (PBFT) consensus algorithm and implemented a test environment for the patch management system. The technical validity of the proposed scheme was verified with test scenarios confirming that patches execute normally, and through this the integrity of the patch application status database was verified.
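The PBFT consensus the system relies on has a simple and well-known sizing rule. This helper (an illustration of the standard PBFT bounds, not part of the authors' implementation) computes how many Byzantine replicas a network of n replicas tolerates and the quorum size each phase requires:

```python
def pbft_quorum(n):
    """Fault tolerance of a PBFT network with n replicas.

    PBFT tolerates f Byzantine replicas when n >= 3f + 1; the prepare and
    commit phases each require 2f + 1 matching messages (a Byzantine quorum).
    """
    f = (n - 1) // 3
    return f, 2 * f + 1
```

For example, the smallest non-trivial network of 4 replicas tolerates 1 faulty node and needs 3 matching messages to commit.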
To control the risk and uncertainty of the letter of credit (L/C), a payment method mainly used in international trade, Cheng and Huang presented a risk assessment and decision-making method for the L/C settlement of listed companies based on fuzzy probability and Bayesian game theory. To address the problem of incomplete information related to L/C, the fuzzy analytic hierarchy process (FAHP) and KMV methods were used, and an analytical model was designed for import and export companies based on fuzzy probability and Bayesian game theory. The authors presented reasonable measures to aid L/C risk assessment and decision making through their own simulation and case study.
Tan presented a new, improved emotional text classification method, addressing one of the essential research topics in natural language processing. To overcome a weakness of the existing LDA topic model, the author proposed an improved weighted-LDA topic model that assigns weights so that topic-related words are not crowded out by high-frequency words when sampling and computing the word distributions. The experimental results showed improvements over existing algorithms in terms of subject classification, precision, and F1-measure.
Kanaan and Behrad presented a new algorithm for 3D shape recognition using local features of model views and their sparse representation. The algorithm normalizes the 3D models and extracts 2D views from uniformly distributed viewpoints. Support vector machine classifiers are then used to recognize the 3D models by applying Gabor filters for initial recognition, measuring similarity, and representing the intermediate feature vectors of the 3D models. An experiment on the Princeton Shape Benchmark databases yielded effective results, with an average recognition rate of 89.7%, compared to other known algorithms.
Li et al. presented a class-oriented attribute reduction (COAR) algorithm, an enhanced heuristic attribute reduction algorithm that better matches multiclass datasets, since existing heuristic algorithms are not ideal for them. The authors proposed a new ensemble construction algorithm based on class-oriented reducts with a customized weighted majority voting strategy that considers the strong dependence between a reduct and its target class. Experiments on five real multiclass datasets showed that the proposed algorithm performed better on four general evaluation metrics.
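The weighted majority voting step can be sketched generically as follows. In the paper the weights are derived from reduct-class dependence; here they are arbitrary inputs, so this is only the combination rule, not the authors' weighting scheme:

```python
def weighted_majority_vote(predictions, weights):
    """Combine ensemble members' class predictions by weighted vote.

    predictions: class label predicted by each ensemble member
    weights: voting weight assigned to each member
    """
    scores = {}
    for label, weight in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + weight
    # the class with the largest accumulated weight wins
    return max(scores, key=scores.get)
```

A single highly weighted member can thus outvote several weakly weighted ones.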
Selvaraj et al. presented a new system architecture for O2O services and extracted useful knowledge from past real-time freight data for further business development in the freight management vertical. According to the authors, a new business module is needed for rapid decision making, with the data analysis process performed offline. The proposed system architecture helps transport management companies dynamically request big data analysis results through O2O services for predicted customer expectations, pricing, and overhead reduction, growing profit margins and balancing load.
Sun et al. presented a mechanism for achieving online estimation of full transmission line parameters to predict the influence on the electromagnetic environment. The authors propose a method that uses phasor measurement units (PMU) and supervisory control and data acquisition (SCADA) and differs from the existing approach based on independent resistance estimation. The experimental results showed that the online estimation of full transmission line parameters was much more accurate.
Zhang et al. presented a k nearest neighbor query method for a line segment in obstacle space to make up for existing methods, which cannot handle such nearest neighbor queries effectively. The query proceeds in two steps: (1) a filtering process that applies the proposed pruning rules, and (2) a refining process that obtains the final result by comparing distances using the proposed distance expression method. The experimental results showed that the proposed algorithm can solve the k nearest neighbor query problem for a line segment in an obstacle environment.
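The filter-then-refine pattern behind such queries can be sketched generically. The skeleton below assumes only a cheap lower-bound distance and an expensive exact distance; the paper's pruning rules and obstacle-aware distances for line segments are more specific than this:

```python
import heapq

def knn_filter_refine(query, objects, k, lower_bound, exact):
    """Generic filter-and-refine k nearest neighbor query.

    Objects are scanned in order of a cheap lower-bound distance; the
    expensive exact distance is computed only until the k-th best exact
    distance is no worse than the next lower bound, pruning the rest.
    """
    order = sorted(objects, key=lambda o: lower_bound(query, o))
    best = []  # max-heap over exact distances: (-distance, object)
    for obj in order:
        if len(best) == k and lower_bound(query, obj) >= -best[0][0]:
            break  # no remaining candidate can improve the result
        d = exact(query, obj)
        if len(best) < k:
            heapq.heappush(best, (-d, obj))
        elif d < -best[0][0]:
            heapq.heapreplace(best, (-d, obj))
    return sorted((-nd, obj) for nd, obj in best)
```

With Euclidean distance used for both bounds (a degenerate but legal choice), the scan stops as soon as k candidates are confirmed.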
Roh and Lee used Blockchain technology to ensure transparency among the participants of an electronic voting system. Existing electronic voting systems work by applying various algorithms, but one disadvantage is that the content of a vote can be forged or changed by the administrator, to whom all rights are granted. The proposed electronic voting system therefore uses Blockchain technology, which provides stability and data integrity. It satisfies the security requirements of the system and uses a private Blockchain algorithm that performs about 50 times better than the existing public one.
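The integrity property at the heart of this design can be illustrated with a toy hash-chained vote ledger: each block commits to the previous block's hash, so an administrator who alters any recorded vote invalidates every later hash. This is only a minimal sketch of the tamper-evidence idea, not the authors' private Blockchain, which additionally involves consensus among participants:

```python
import hashlib
import json

def block_hash(body):
    """Deterministic SHA-256 hash of a block body."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_vote(chain, vote):
    """Append a vote; each block commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "vote": vote, "prev_hash": prev_hash}
    chain.append(dict(body, hash=block_hash(body)))

def verify(chain):
    """Recompute every hash; any altered vote breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body) or block["prev_hash"] != prev:
            return False
        prev = block["hash"]
    return True
```

Changing even one recorded vote after the fact makes verification fail for the whole ledger.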
Zou et al. identified tea diseases based on spectral reflectance and machine learning. Machine learning models can classify unknown objects, but applying them directly to high-dimensional hyperspectral data leads to overfitting. The authors therefore improved tea disease identification by combining feature selection over the spectral reflectance with decision tree and random forest models. The experimental results showed that recall, F1 score, and accuracy improved by average margins of 15%, 7%, and 11%, respectively.
When extracting keywords conceptually, people read a document, form a concept of its information, and set keywords representing the material by merging several words. Lee collected the titles and abstracts of journal papers on natural and auditory languages to verify and analyze the validity of extracted keywords. The author proposed a new method to determine the importance of each keyword while excluding unrelated keywords. In the experiments, the proposed system achieved up to 96% accuracy.
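The importance measure in the paper is the author's own; a standard stand-in that captures the same intuition, down-weighting words that appear everywhere and favoring document-specific terms, is TF-IDF:

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Score each word's importance per document via TF-IDF.

    docs: list of tokenized documents (lists of words). Words occurring
    in every document score zero and are effectively excluded.
    """
    n = len(docs)
    df = Counter()  # document frequency of each word
    for doc in docs:
        df.update(set(doc))
    return [
        {w: (c / len(doc)) * math.log(n / df[w]) for w, c in Counter(doc).items()}
        for doc in docs
    ]
```

A word shared by all abstracts (e.g., a field-wide term) gets zero weight, while rarer, topic-specific words rank higher.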
Ma et al.  proposed an energy-aware virtual data center embedding using an energy consumption model to solve the energy consumption problem of embedding a virtual data center. The model quantitatively measures the energy consumption of virtual machine and switch nodes and utilizes heuristic and particle cluster optimization techniques. Their results suggest that energy is effectively conserved, and that the embedding success rate is guaranteed.
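As a rough illustration of the particle-based optimization component, the following is a generic particle swarm optimizer minimizing a toy objective; it is not the authors' energy-aware embedding formulation, whose objective and constraints come from the energy consumption model:

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimize f over R^dim with a basic particle swarm optimizer."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Each particle is pulled toward its own best-seen position and the swarm's best, which is what drives the search toward low-energy embeddings in the authors' setting.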
Liu and Li developed a meta-heuristic algorithm called the photon search algorithm (PSA), with mathematical formulas and models based on physical principles, including the constancy of the speed of light, the uncertainty principle, and the Pauli exclusion principle. The algorithm was evaluated on 7 unimodal and 23 multimodal benchmark functions. The results suggest that PSA achieves high efficiency with excellent convergence and a robust global search capability. In finding the optimal solution of certain functions, however, it was slightly inferior to existing heuristic algorithms.
A movie recommendation system generally uses recommendation technology based on a user's personal information or best-selling products. Vilakone et al. proposed a movie recommendation system using the k-clique and normalized discounted cumulative gain methods to improve accuracy. They found that the most acceptable MAPE value was obtained at k = 11, which increased the accuracy to 87.28% and mitigated the cold-start problem.
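The normalized discounted cumulative gain (NDCG) used in the evaluation is a standard ranking metric; a minimal implementation of its textbook definition (the k-clique community detection part is not shown here):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance discounted by rank."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalize DCG by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

A recommendation list that puts the most relevant movies first scores 1.0; misplacing relevant items lowers the score.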
Zhang et al. proposed a quality index based on a new Lempel-Ziv complexity (ELZC) to evaluate the quality of multi-lead electrocardiograms (ECG) collected in real time. For the multi-lead ECG quality evaluation, three complexity algorithms were compared, and six artificial time series were computed under each algorithm to compare performance in terms of randomness and irregularity within the time series. The authors then analyzed the sensitivity of the algorithm to the noise content of the ECG and evaluated how it tracks changes in synthetic signals containing different kinds of noise. The results indicate that the proposed index is more suitable for quality evaluation.
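ELZC is a variant of the classic Lempel-Ziv complexity. The base measure it builds on, the LZ76 phrase count, can be computed as follows (this is the classic measure only, not the authors' ELZC modification):

```python
def lz_complexity(s):
    """Classic Lempel-Ziv (LZ76) complexity of a symbol string.

    Counts the phrases of an exhaustive left-to-right parsing, where each
    new phrase is the shortest substring not seen in the text before it.
    Regular signals yield few phrases; noisy signals yield many.
    """
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the phrase while it already occurs in the preceding text
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count
```

In ECG quality assessment the signal is first binarized (e.g., by thresholding), and a periodic, clean lead produces a much lower complexity than a noise-dominated one.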
This issue features 18 high-quality articles selected through a rigorous review process. This editorial has reviewed technologies developed in various research fields, such as data representation, Blockchain applications, 3D shape recognition and classification, query methods, classification methods, and search algorithms, to provide insights into the future paradigm. Articles on the following topics are also featured in this issue: contributions to theoretical research, including new techniques, concepts, or analyses; experience reports; experiments involving the implementation and application of new theories; and tutorials on state-of-the-art technologies related to Blockchain, machine learning, and big data.