1. Introduction
Future information and communication technology (ICT) encompasses all continuously evolving and converging information technologies, including digital convergence, multimedia convergence, intelligent applications, embedded systems, mobile and wireless communications, bio-inspired computing, grid and cloud computing, the semantic web, user experience and human-computer interaction (HCI), and security and trust computing, among others. These technologies satisfy our ever-changing needs. In the field of future ICT in particular, M.A.G.I.C. (coined from mobility, artificial intelligence, fifth-generation [5G] mobile communication, Internet of Things [IoT], and cloud) is considered a game changer, inducing diverse and, as its name suggests, almost magical changes in future ICT.
As one of the representative technologies of the future, quantum computing builds on quantum information theory (QIT), which draws on mathematical physics, statistical physics, and probability theory. Quantum computers are being developed at high speed; Google, for example, has built a processor that handles 53 qubits. In future ICT, communication and processing performance will be raised to the quantum level, and quantum channels will be regarded as a new class of communication channel. These technologies focus on the quantum Internet and quantum teleportation. In the current work, we provide an overview of the papers on future ICT in this issue.
This paper reviews the technologies developed in various research fields, such as data analysis, image generation systems, graph-embedding techniques, the Boundary-RRT* algorithm, plastic ball grid array (PBGA) manufacturing process and equipment analysis, quantum communication technology, zero-watermarking algorithms, structured overlay network techniques, a big IoT healthcare data analysis framework, adaptive residual interpolation, the stagewise weak orthogonal matching pursuit algorithm, and security performance analysis.
The Journal of Information Processing Systems (JIPS) is an official international journal indexed in ESCI, SCOPUS, Ei Compendex, DOI, DBLP, EBSCO, and Google Scholar, and is published by the Korean Information Processing Society (KIPS). It has four divisions: computer systems and theory, multimedia systems and graphics, communication systems and security, and information systems and applications. This issue features 18 peer-reviewed papers selected through a rigorous review process.
2. Algorithms, Processes, and Services for Future ICT
In [1], a novel algorithm called the Boundary-RRT* algorithm is proposed for aerial vehicles, supporting collision avoidance and path re-planning in a three-dimensional environment. This algorithm reduces the number of sampling nodes by bounding the exploration space and prevents unnecessary tree expansion by removing passed nodes. The half-torus region supports flexible expansion by enabling both expansion and contraction. Additionally, the proposed algorithm creates a path with natural curvature without defining a bias function. As a result, this algorithm proves suitable for the collision avoidance of aerial vehicles, re-planning of the local path, and generation of a stable waypoint list.
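As a rough illustration of the two ideas above, the sketch below samples only inside a bounded region and prunes nodes the vehicle has already passed. The box-shaped bound and the names `sample_bounded` and `prune_passed_nodes` are our own simplifications; the paper uses a half-torus region.

```python
import random

def sample_bounded(bounds):
    """Draw a random 3-D sample inside the bounded exploration space.
    `bounds` is ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    return tuple(random.uniform(lo, hi) for lo, hi in bounds)

def prune_passed_nodes(path, current_index):
    """Drop nodes the vehicle has already passed so the tree does not
    keep expanding behind it."""
    return path[current_index:]

bounds = ((0.0, 10.0), (0.0, 10.0), (0.0, 5.0))
sample = sample_bounded(bounds)            # always falls inside the bound
path = [(0, 0, 0), (1, 1, 1), (2, 2, 1), (3, 3, 2)]
remaining = prune_passed_nodes(path, 2)    # the first two nodes are removed
```

Bounding the sampler shrinks the effective search volume, which is where the reduction in sampling nodes comes from.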
With the development of IoT, big data, and machine learning, systems generate large amounts of data, which makes the efficient implementation of medical data analysis difficult. In [2], these problems are addressed through big data analysis: a framework for medical IoT big data analysis based on fog and cloud computing is presented. The analysis performed on the fog side helps manage very large medical IoT data streams from various sources and supports robust and secure extraction of knowledge from patient medical data.
Cloud computing services are applied in various fields, but it is difficult for users and cloud computing service platforms to build trust with each other. In [3], this problem is addressed by designing a weighting algorithm based on subjective preference. Using the subjective preference weight allocation (SPWA) algorithm, each evaluation result is integrated to obtain the trust evaluation value of the entire cloud service provider. Combined with the cloud model, the SPWA algorithm effectively turns qualitative trust evaluation into quantitative evaluation, and the results better reflect the vague and subjective characteristics of trust.
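A minimal sketch of the aggregation step, assuming (as our simplification, not the paper's exact formulation) that each evaluation attribute receives a subjective preference weight that is normalized before the scores are combined:

```python
def spwa_aggregate(scores, prefs):
    """Combine per-attribute trust scores into one trust value using
    normalized subjective preference weights (illustrative form only)."""
    total = sum(prefs.values())
    return sum(scores[k] * prefs[k] / total for k in scores)

# Hypothetical attribute scores and a user who cares most about security:
scores = {"availability": 0.9, "security": 0.7, "performance": 0.8}
prefs = {"availability": 2.0, "security": 5.0, "performance": 3.0}
trust = spwa_aggregate(scores, prefs)  # close to 0.77, pulled toward security
```

The weighting is what makes the result subjective: two users with the same raw scores but different preferences obtain different trust values.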
In [4], a tree segmentation algorithm is proposed based on a combination of image abstraction and an adaptive mean shift algorithm. The aim is to improve the segmentation of images of trees with complex backgrounds. Image data were collected in the fall of 2018 from the natural environment of Hangzhou, Zhejiang Province, located on the southeastern coast of China. As a result, the algorithm achieves an average segmentation accuracy of over 90%.
A watermarking algorithm needs to use image features while maintaining strong robustness, and the host image should be a color image rather than a grayscale one. In [5], a zero-watermarking algorithm in the transform domain is proposed based on the RGB channels and a voting strategy. For robustness, the algorithm takes full advantage of the image's RGB channels and applies voting decisions at a particular stage of the process. As a result, the normalized correlation (NC) of the extracted watermark is improved to over 0.99.
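The voting step can be sketched as a per-bit majority vote over the bits extracted from the three channels (a simplification; the paper's exact extraction process is not reproduced here):

```python
def vote_bits(channel_bits):
    """Majority-vote each watermark bit across the R, G, and B extractions,
    so a bit corrupted in one channel is outvoted by the other two."""
    return [1 if sum(bits) >= 2 else 0 for bits in zip(*channel_bits)]

# Hypothetical bit sequences recovered from each channel:
r = [1, 0, 1, 1]
g = [1, 1, 0, 1]
b = [0, 0, 1, 1]  # each channel has one corrupted bit
watermark = vote_bits([r, g, b])  # [1, 0, 1, 1]
```

This is why using all three channels improves robustness: an attack rarely corrupts the same bit position in a majority of the channels.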
In [6], a new method for identifying the processes and equipment that cause defects in the PBGA manufacturing process is proposed, using logistic regression and stepwise variable selection, to ensure high yield. To identify the major processes and equipment, the authors first select fault factors that influence the yield of the primary circuit process and measure the suitability of the equipment path analysis using logistic regression. Afterward, the effect of the selected major processes and equipment on the major fault factors is verified. Results show that, in real-world PBGA manufacturing, the yield generally improves by about 20%.
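The stepwise selection loop can be sketched as greedy forward selection with a pluggable model score. The toy score below stands in for a fitted logistic-regression criterion such as the log-likelihood; all variable names are illustrative.

```python
def forward_stepwise(candidates, score, tol=1e-3):
    """Greedy forward selection: repeatedly add the variable that most
    improves the model score, stopping when no addition helps by > tol."""
    selected, best = [], float("-inf")
    while True:
        gains = [(score(selected + [c]), c)
                 for c in candidates if c not in selected]
        if not gains:
            break
        top_score, top_var = max(gains)
        if top_score - best <= tol:
            break
        selected.append(top_var)
        best = top_score
    return selected

# Toy stand-in for a fitted-model score with a complexity penalty:
weights = {"temp": 0.6, "pressure": 0.3, "speed": 0.05}
score = lambda vars: sum(weights[v] for v in vars) - 0.1 * len(vars)
chosen = forward_stepwise(list(weights), score)  # ["temp", "pressure"]
```

The weakly informative variable (`speed`) never pays for its complexity penalty, so the loop stops after two additions.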
In [7], an approach to text-image matching and image generation using a deep convolutional generative adversarial network (DCGAN) is proposed to create images that are not represented in kids' books. Three steps are involved in creating new images. First, the model is trained using ImageNet, which has 11 titles and 1,164 images. Images are then extracted, and text is classified using Tesseract, an optical character recognition engine, and a morpheme analyzer. Finally, the trained DCGAN creates an image associated with the classified text.
In [8], an accelerated error correction code (ECC) decoding technique is proposed that parallelizes sparse matrix–vector multiplication using the GPU built into an embedded system when a large amount of data is received. The matrix is expressed in compressed sparse row (CSR) format, and the sparse matrix–vector product is computed in a GPU kernel so that the ECC operation runs in parallel. As a result, execution on the GPU is faster than on the CPU, and this structure can be used to detect and correct errors quickly.
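To make the CSR layout concrete, here is a minimal sequential sketch of the matrix–vector product; on a GPU, each row of the outer loop would typically be handled by its own thread.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a sparse matrix stored in CSR form by a dense vector x.
    Row r owns the entries values[row_ptr[r]:row_ptr[r+1]]."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# The 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] in CSR form:
values = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])  # [3.0, 3.0, 9.0]
```

Because each row's partial sum is independent, the row loop maps naturally onto parallel GPU threads, which is what makes CSR a common choice for such kernels.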
The weak threshold used to estimate sparsity is determined by the maximum number of iterations, but different settings yield different thresholds, which affects the performance of the stagewise arithmetic orthogonal matching pursuit (SAOMP) algorithm. In [9], an improved variable weak threshold based on the SAOMP algorithm is proposed. In this algorithm, as the residual decreases, the threshold continuously increases and approaches the actual sparsity, thereby improving reconstruction accuracy. Additionally, the generalized Jaccard coefficient is improved using covariance to capture the common expectation of two variables. The experimental results show that the execution times of the proposed method and SAOMP do not differ significantly and remain acceptable.
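The weak-threshold selection at the heart of such stagewise algorithms can be sketched as follows; the residual-dependent schedule in `variable_alpha` is our illustrative stand-in for the paper's variable threshold, not its exact rule.

```python
def weak_threshold_select(correlations, alpha):
    """Stagewise weak selection: keep every atom whose correlation with
    the residual is at least alpha times the largest correlation."""
    peak = max(abs(c) for c in correlations)
    return [i for i, c in enumerate(correlations) if abs(c) >= alpha * peak]

def variable_alpha(residual_norm, initial_norm, lo=0.5, hi=0.9):
    """Illustrative schedule: as the residual shrinks, raise the threshold
    so later stages select atoms more conservatively."""
    progress = 1.0 - residual_norm / initial_norm
    return lo + (hi - lo) * progress

# Correlations of four dictionary atoms with the current residual:
corr = [0.9, -0.5, 0.85, 0.1]
chosen = weak_threshold_select(corr, variable_alpha(4.0, 8.0))  # alpha = 0.7
```

With the residual halved, `alpha` rises to 0.7 and only the two strongly correlated atoms (indices 0 and 2) survive the stage.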
The social force model (SFM) is an important pedestrian movement model extensively used in many evacuation simulations. In scenes with multiple obstacles, however, the model can cause problems. In [10], the causes of these problems are analyzed, and an improved SFM for complex multi-obstacle scenes is proposed. The new model adds navigation points, chosen by the shortest-path principle, to the SFM. Experiments show that pedestrians in the proposed model can effectively bypass obstacles and plan rational evacuation routes.
In [11], an overlay network system structured on multiple different time intervals is described. A circular topology is assumed, and nodes are assigned to key spaces based on one-dimensional time information. Each node sets a shortcut link at a specific interval requested by the actual user. The contributions of this paper are as follows: (1) clarification of "interval queries" having specific time intervals; (2) establishment of a structured overlay network scheme based on multiple different time intervals; and (3) experimental results from the viewpoints of communication load, delay, and maintenance cost. The proposed method is confirmed to reduce the number of messages needed to process queries related to time intervals.
Interpolation is an important part of image processing, and an efficient interpolation algorithm with good visual quality and performance is needed. In [12], focusing on the adaptive residual interpolation (ARI) algorithm, the principles of four interpolation algorithms for image demosaicking are analyzed, and an improved algorithm (IARI) with enhanced interpolation performance is proposed. The method fully considers image brightness information and edge information. The experimental results show that the IARI algorithm outperforms the other four algorithms in both subjective and objective evaluations, especially in complex edge areas and color brightness recovery.
In [13], a user interface/user experience (UI/UX) development model based on collaboration with system integration (SI) practitioners is presented to handle six risks in SI projects. The six risks are derived from 13 UI/UX development risk factors among 113 IT project risk factors through a questionnaire and factor analysis. The UI/UX development stages are classified into planning, design, and implementation based on expert opinions and correlation analysis. Finally, the causal relationships between risks are verified through regression analysis.
In [14], a novel deep neural network model for the accurate scene graph generation of an image is presented. This model uses several multimodal context features to detect objects and relationships. Moreover, it uses a bidirectional recurrent neural network to generate linguistic context feature vectors. Lastly, the model conducts context feature embedding by using a graph neural network to identify dependencies between two related objects. This paper demonstrates the effectiveness and accuracy of the proposed model through comparative experiments using the Visual Genome benchmark dataset.
In [15], a mid-level feature extractor is proposed to reduce computing costs and increase efficiency by training only the mid-level convolutional layers. A transfer learning network is constructed as the feature extractor to prevent overfitting, and the convolutional layers to retrain and update are then selected. In experiments on small-scale medical imaging datasets, this method shows the lowest loss (between 0.02 and 0.4), the most stable training tendency, and the lowest computing cost for convergence.
In [16], the physical security layer of industrial wireless sensor networks is shown to be vulnerable to eavesdropping attacks. Accordingly, an optimal sensor selection method based on maximum channel capacity is proposed for a Nakagami fading transmission environment, and the system's security performance is analyzed by comparing the intercept probability of traditional round-robin scheduling with that of the optimal sensor selection method. Simulations show that the proposed optimal selection method achieves a faster convergence rate of the intercept probability.
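The selection criterion can be sketched as picking the sensor whose current channel offers the largest Shannon capacity (a simplification; the Nakagami fading statistics and the intercept-probability analysis are not reproduced here):

```python
import math

def best_sensor(snrs):
    """Return the index of the sensor with the largest channel capacity
    C = log2(1 + SNR) on its current channel realization."""
    capacities = [math.log2(1.0 + s) for s in snrs]
    return capacities.index(max(capacities))

# Instantaneous SNRs of four sensors (illustrative values):
chosen = best_sensor([2.0, 7.5, 1.2, 5.0])  # sensor 1 has the best channel
```

Because log2(1 + SNR) is monotonic in SNR, always transmitting on the strongest channel maximizes the capacity margin over the eavesdropper's channel, which is what drives the intercept probability down faster than round-robin scheduling.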
In [17], a quantum communication model system based on quantum machines, which uses single-qubit quantum machines as quantum chain repeaters, is proposed to create a scalable quantum Internet network. This work explains quantum key distribution, blockchain-based quantum cryptography, QIT, quantum computers, and the quantum Internet, and introduces recent state-of-the-art research and project trends in quantum computers and the quantum Internet.
In [18], a graph-embedding technique based on a long short-term memory (LSTM) autoencoder that considers both the structure and the weights of graphs to generate their embedding vectors is presented. The proposed technique involves three steps to generate embedding vectors from weighted graphs: node weight sequence extraction, node weight sequence encoding, and final embedding vector generation. The experimental results on synthetic and real datasets show that the technique is effective in determining the similarity between weighted graphs.
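The first step, node weight sequence extraction, might look like the following; the descending ordering is our guess at a plausible canonical form, since the paper's exact ordering rules are not reproduced here.

```python
def node_weight_sequences(edges):
    """For each node, collect the weights of its incident edges in
    descending order: the sequences an LSTM autoencoder would encode."""
    seqs = {}
    for u, v, w in edges:
        seqs.setdefault(u, []).append(w)
        seqs.setdefault(v, []).append(w)
    return {n: sorted(ws, reverse=True) for n, ws in seqs.items()}

# A small weighted triangle graph (illustrative):
edges = [("a", "b", 3.0), ("a", "c", 1.0), ("b", "c", 2.0)]
seqs = node_weight_sequences(edges)
# {'a': [3.0, 1.0], 'b': [3.0, 2.0], 'c': [2.0, 1.0]}
```

Turning each node's neighborhood into an ordered weight sequence is what lets a sequence model such as an LSTM autoencoder consume graph structure and weights together.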
3. Conclusion
This paper has introduced 18 high-quality articles selected through a rigorous review process. It reviewed the technologies developed in various research fields, such as data analysis, image generation systems, graph-embedding techniques, the Boundary-RRT* algorithm, PBGA manufacturing process and equipment analysis, quantum communication technology, zero-watermarking algorithms, structured overlay network techniques, a big IoT healthcare data analysis framework, adaptive residual interpolation, the stagewise weak orthogonal matching pursuit algorithm, and security performance analysis.