The key technologies of the fourth industrial revolution include the Internet of Things (IoT), 5G mobile networks, and artificial intelligence (AI). In a typical pipeline, IoT sensors collect data, 5G mobile networks transmit the collected data to servers, and AI processes the transmitted data in various ways. Each of these core technologies therefore requires detailed supporting functions.
IoT requires data transmission technology together with clustering technology that groups widely distributed sensors. 5G mobile networks require resource management technology for each network component. AI requires various models, such as convolutional neural networks (CNN) and k-nearest neighbors (kNN), across many fields. Security technologies, such as intrusion detection, variant malware detection, and blockchain, are also essential in each field.
This paper introduces 18 novel and enhanced studies spanning diverse research areas, including IoT, 5G mobile networks, and AI. The work discusses the following technologies in detail: stay-point spatial clustering; a digital evidence management model based on Hyperledger Fabric; aircraft recognition using machine vision; a CNN-based voting and ensemble system; hierarchical semantic clipping and sentence extraction; an N-step sliding recursion formula; a routing protocol for improving the lifetime of wireless sensor networks; fault diagnosis of wind power generator blades; variant malware detection techniques; resource management in 5G mobile networks; and applications of blockchain across multiple fields of financial services. Futuristic and hot-issue topics from academia and industry are described. The main aim is to bring current, trending research to researchers quickly.
2. Future Trends of IoT, 5G Mobile Networks, and AI
Liao proposed a hot spot analysis method for identifying popular tourist attractions from trajectory data based on an improved DBSCAN algorithm. The method uses the statistical distribution characteristics of the data to determine parameters, such as the neighborhood radius and density threshold, adaptively. The improved DBSCAN algorithm was compared with DBSCAN and k-means on three datasets to demonstrate its effectiveness and efficiency. In addition, Getis-Ord Gi* hotspot analysis and mapping were conducted in ArcGIS using the proposed method. The experimental results demonstrate that the proposed method outperforms traditional methods and identifies hotspots effectively.
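To make the clustering step concrete, the following is a minimal sketch of the core DBSCAN procedure that the improved algorithm builds on. The paper's contribution, determining the parameters adaptively from the data's statistical distribution, is not reproduced here; in this sketch, `eps` (neighborhood radius) and `min_pts` (density threshold) are fixed by hand.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: grow clusters from core points, label noise as -1."""
    n = len(points)
    # pairwise Euclidean distances; a point's neighborhood includes itself
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)  # -1 means noise / not yet assigned
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue  # already clustered, or not a core point
        labels[i] = cluster
        queue = list(neighbors[i])  # expand the cluster from core point i
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:  # j is also core: keep growing
                    queue.extend(neighbors[j])
        cluster += 1
    return labels
```

Run on two tight groups of points plus one isolated point, the function assigns the groups to clusters 0 and 1 and leaves the outlier labeled -1.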
Jeong et al. proposed a digital evidence management model based on blockchain, using Hyperledger Fabric and Docker, to achieve reliability and integrity. The model is more secure than traditional centralized management systems, which are vulnerable to risks such as damage and manipulation by malicious insiders, because users cannot modify or delete evidence data in the proposed model. Moreover, transparency and reliability are provided by taking advantage of blockchain techniques. The authors implemented the proposed method using Hyperledger Fabric and Docker and demonstrated the model's reliability by confirming that an external attacker cannot modify transactions or chaincode.
Liu et al. proposed a new localization approach based on symmetric sub-array multiple-input multiple-output (MIMO) radar that handles subsurface targets by reverse projection. The approach reconstructs signals using the symmetric sub-array to locate multiple objects at different distances accurately. It also introduces reverse projection to acquire distance-independent direction-of-arrival estimates and obtains the localizations of subsurface targets at different distances. Simulation results show that the proposed method is more efficient than existing methodologies and that the optimization method is effective.
Gong et al. proposed a new system for evaluating urban water security, built from 21 urban water security factors organized into five sub-systems based on water poverty index (WPI) theory, to assess urban water poverty. The system analyzed the contribution rates of the five WPI sub-systems (resource, access, capacity, use, and environment) in urban water poverty through the least-squares method. Moreover, the system classifies cities into four types: dual-factor-driven, three-factor-driven, four-factor-driven, and five-factor-driven. To analyze its effectiveness, the WPI scores of 14 cities were examined, and the results show that the proposed system is efficient and effective.
Chen et al. proposed a remote sensing image aircraft detection method based on an enhanced YOLOv3 to improve the resolution and target detection of the input image. In this method, the aircraft dataset was re-clustered using the k-means algorithm to improve aircraft detection, and the network structure was optimized by introducing the inception module and multiscale prediction. Experiments on the RSOD-Dataset demonstrated that the model's performance improved compared with that of existing methods.
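As an illustration of the re-clustering step, the sketch below runs k-means over the dataset's bounding-box sizes using 1 − IoU as the distance, the usual way anchor boxes are re-clustered for YOLO-style detectors. The deterministic initialization and helper names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def iou_wh(boxes, centers):
    """IoU between (w, h) box sizes, treating all boxes as sharing a corner."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] \
        + (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    """Recluster dataset box sizes into k anchors with 1 - IoU as distance."""
    # deterministic init: spread initial centers across boxes sorted by area
    order = np.argsort(boxes[:, 0] * boxes[:, 1])
    centers = boxes[order[np.linspace(0, len(boxes) - 1, k).astype(int)]]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)  # nearest = max IoU
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break  # converged: assignments no longer move the centers
        centers = new
    return centers
```

Given a dataset mixing small and large aircraft boxes, the returned anchors settle near the mean size of each group.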
Jhang conducted a study comparing majority voting, softmax-based voting, and an ensemble scheme for predicting gender from photographs. In the study, majority voting feeds the argmax of the final fully connected layer of each CNN model into the voter, whereas softmax-based voting uses the softmax outputs of the CNN models. The ensemble scheme combines the CNN models with one additional fully connected layer. The experimental results demonstrate that softmax-based voting outperformed majority voting. Fine-tuned ensemble models performed better still, but softmax-based voting is faster and more efficient.
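The difference between the two voting rules can be shown in a few lines. In this hedged sketch, each model contributes the raw outputs (logits) of its final fully connected layer; majority voting counts per-model argmax decisions, while softmax-based voting averages softmax probabilities, so one confident model can outweigh several barely decided ones.

```python
import numpy as np

def majority_vote(logits_per_model):
    """Each model casts one vote: the argmax of its final-layer outputs."""
    votes = [int(np.argmax(l)) for l in logits_per_model]
    return int(np.bincount(votes).argmax())

def softmax_vote(logits_per_model):
    """Average the models' softmax probabilities, then take the argmax."""
    def softmax(x):
        e = np.exp(x - np.max(x))  # shift for numerical stability
        return e / e.sum()
    mean_probs = np.mean([softmax(np.asarray(l, dtype=float))
                          for l in logits_per_model], axis=0)
    return int(np.argmax(mean_probs))
```

For instance, with two models barely preferring class 0 and one model strongly preferring class 1, majority voting returns class 0 while softmax-based voting returns class 1.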
Yan and Guo proposed a neural extractive summarization approach (JhscSe) with joint hierarchical sentence semantic clipping and selection to address the repetition problem in news document summarization. Specifically, for redundant information filtering, a hierarchical selective strategy using bi-layer selected extraction at the sentence and document levels was applied in the encoder sub-network. For sentence extraction, joint sentence scoring and redundant information clipping were performed on the document. The experimental results show that the proposed model outperforms the baseline models on Chinese and English datasets and mitigates the repeated information problem.
Yu et al. proposed a new multistep recursive algorithm based on the N-step sliding recursion formula for calculating the variance of time-varying data series in O(1) (constant) time, reducing computing time. First, they derived the one-step recursive algorithm for the variance of fixed-length sequences using sliding windows; second, they extended it to the N-step recursion. Numerical simulations of the one-step and N-step recursive algorithms were conducted to verify accuracy and efficiency, respectively. The simulation results show that the efficiency of variance calculation on time-varying data series is improved.
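The one-step case can be sketched directly: keep the window's running sum and running sum of squares, and on each slide retire one sample and admit one, so the variance update costs O(1) regardless of the window length L. This is a minimal illustration of the idea, not the paper's N-step formula (which retires and admits N samples per update); the class and variable names are my own.

```python
from collections import deque

class SlidingVariance:
    """O(1)-per-step variance of a fixed-length sliding window.

    Maintains a running sum and sum of squares, so each update costs
    constant time instead of O(L) recomputation over the whole window.
    """
    def __init__(self, window):
        self.L = window
        self.buf = deque()
        self.s = 0.0  # running sum of the window
        self.q = 0.0  # running sum of squares of the window

    def push(self, x):
        self.buf.append(x)
        self.s += x
        self.q += x * x
        if len(self.buf) > self.L:  # slide: retire the oldest sample
            old = self.buf.popleft()
            self.s -= old
            self.q -= old * old

    def variance(self):
        n = len(self.buf)
        m = self.s / n
        return self.q / n - m * m  # population variance of the current window
```

Pushing 1..5 through a window of length 3 leaves the window [3, 4, 5], whose population variance is 2/3.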
Abdurohman et al. proposed the modified end-to-end secure low-energy adaptive clustering hierarchy (ME-LEACH) to overcome the energy limitations of wireless sensor networks (WSNs). ME-LEACH aims to improve the performance of energy-aware LEACH (E-LEACH) by revising the transmission mechanism to reduce the high energy consumption caused by direct data transmission from the cluster head (CH) to the base station (BS).
Each CH in ME-LEACH finds the nearest CH and uses it as the next hop, forming a CH chain along which data travel to the BS. The authors demonstrated that the proposed method achieves more stable and higher throughput than SEEC and EERRCUR and a longer network lifetime than the E-LEACH algorithm.
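A hedged sketch of the chain-forming step: the text states only that each CH forwards to its nearest CH, so the rule below additionally restricts candidates to CHs closer to the BS, an assumption made here to guarantee progress and avoid routing loops. The coordinates, helper names, and the "BS" sentinel are illustrative, not the paper's implementation.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hops(chs, bs):
    """For each cluster head (CH), choose as next hop the nearest CH that is
    closer to the base station (BS) than itself; if none exists, send to BS."""
    hops = {}
    for i, ch in enumerate(chs):
        closer = [j for j, other in enumerate(chs)
                  if j != i and dist(other, bs) < dist(ch, bs)]
        hops[i] = min(closer, key=lambda j: dist(chs[j], ch)) if closer else "BS"
    return hops
```

With three CHs on a line at distances 10, 6, and 2 from the BS, the rule forms the chain CH0 → CH1 → CH2 → BS.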
Huang proposed page-level re-write interval prediction, a run-time system that records and analyzes memory access history at the page level to predict future accesses to all pages. He presented the problems of existing full-size and incremental checkpoint methods and proposed a new incremental checkpoint mechanism that predicts the application's memory access pattern. The experimental results show that the new mechanism achieves a speedup over existing incremental checkpoints.
Peng et al. proposed the safe circle synthetic minority over-sampling technique (SC-SMOTE) for icing fault diagnosis and prediction in wind turbines. The authors identify data imbalance and errors near decision boundaries as problems in existing methods and propose a dataset optimization algorithm to mitigate them. In simulations, SC-SMOTE was combined with a kNN-based fault diagnosis method, and its fault diagnosis accuracy proved superior to that of the existing SMOTE algorithm.
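For context, plain SMOTE balances a dataset by interpolating new minority samples between existing ones; the sketch below shows only that baseline idea. The paper's SC-SMOTE adds a "safe circle" constraint to avoid generating erroneous samples near the decision boundary, which is not reproduced here; the function name and parameters are illustrative.

```python
import numpy as np

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating each chosen
    sample toward one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        # k nearest neighbors within the minority class (excluding x itself)
        d = np.linalg.norm(minority - x, axis=1)
        nn = np.argsort(d)[1:k + 1]
        neighbor = minority[rng.choice(nn)]
        gap = rng.random()  # random position along the segment
        out.append(x + gap * (neighbor - x))
    return np.array(out)
```

Because every synthetic point lies on a segment between two real minority samples, the new points stay inside the region the minority class already occupies.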
The amount of malware is increasing exponentially, and most new malware is a variant of existing malware. Kang and Won presented the difficulties of detecting variant malware with existing signature-based detection methods and applied machine learning to variant malware detection. The authors performed static and dynamic analyses of malicious code datasets to extract features and construct machine learning models, and they confirmed through experiments higher accuracy than the existing method.
Chie et al. discussed the latest technologies and open tasks in wireless access network resource allocation for 5G networks and provided guidance for resource allocation development. The authors reviewed recent resource allocation developments in wireless access and core networks; classified them from a multidimensional perspective, including application goals, service types, and resource types; and discussed their advantages and disadvantages. Unresolved problems in 5G resource allocation and directions for future research were also presented.
Software refactoring is the process of restructuring existing code while preserving its external behavior. Agnihotri and Chug investigate the analysis of code smells, methods for applying specific refactoring techniques to remove bad smells from source code, and how refactoring affects software quality, with the aim of producing more readable and less complex code by improving the non-functional properties of software. To this end, the authors applied various analysis methods to 68 papers published between 2001 and 2019, and the results on software metrics, identification and correction of bad smells, and application of refactoring techniques are presented in detail according to the research questions.
Wang et al. focused on the various fields of the financial industry where blockchain technology is applied and on the core technologies and current practices required for developing blockchain financial services. To this end, they describe the fundamentals and structure of blockchain in the financial industry and investigate, according to its technical characteristics, what influence the technical factors of blockchain can exert on the financial services field.
In addition, the authors investigated in detail how blockchain technology is being used in various fields of financial services, such as crowdfunding, credit investigation, P2P lending, supply chain finance, cross-border remittance, and anti-money laundering. Finally, they presented technical issues, challenges, and opinions on applying blockchain technology in financial services and suggested ways to apply blockchain effectively in the future financial industry.
Wang et al. proposed an optimal fast coding unit (CU) size decision algorithm based on neighbor prediction to improve the performance of HEVC, one of the major video compression standards. HEVC uses a quad-tree coding tree unit (CTU) structure to increase coding efficiency, but the exhaustive search for the optimal CU partition incurs very high computational complexity.
Because embedded devices with power limitations or real-time requirements have very limited resources, the authors propose an effective algorithm that reduces this computational complexity. To quickly predict the optimal partitioning mode for the current CU, the proposed algorithm uses the partition information of the left, up, and up-left CUs, thereby reducing partitioning operations and unnecessary predictions in HEVC. As a result, the proposed algorithm reduces coding time by about 19.0% while increasing the BD-rate (Bjontegaard Delta rate) by only 0.102% compared with the HM16.1 reference software.
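A hedged sketch of the neighbor-prediction idea: if the left, up, and up-left CUs were coded at certain quad-tree depths, the current CU is likely to fall in a similar depth range, so the encoder can skip rate-distortion tests for depths outside that range. The paper's exact decision rule is not reproduced; the function below is an illustrative simplification.

```python
def predict_cu_depth(left, up, up_left, max_depth=3):
    """Predict a search range of quad-tree depths for the current CU from the
    depths of already-coded neighbor CUs (None = neighbor unavailable)."""
    depths = [d for d in (left, up, up_left) if d is not None]
    if not depths:
        return 0, max_depth  # no coded neighbors yet: search the full range
    return min(depths), max(depths)  # skip depths outside the neighbors' range
```

If the neighbors were coded at depths 1, 2, and 1, the encoder would test only depths 1–2 instead of all of 0–3, which is where the reduction in partitioning operations comes from.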
Sicato et al. proposed a software-defined IDS-based cloud architecture for providing a secure IoT environment. The authors describe several IDS detection methods for abnormal behavior in the IoT environment and, by comparing and analyzing several characteristics, present the differences between existing studies and the proposed method. After explaining the security issues that can occur at each layer of the IoT architecture, they propose a software-defined IDS-based cloud architecture that incorporates an IDS on an SDN OpenFlow switch to protect IoT systems against threats at each architecture layer. Experiments and analysis confirmed that the proposed technology provides better attack detection capability and accuracy than existing IDS systems.
Bu et al. proposed a content-based image retrieval system that combines complete local binary pattern (CLBP) texture features with color autocorrelogram features based on multi-resolution multi-direction (MRMD) filtering. The authors analyzed previous studies on the color histogram and color autocorrelogram. Based on this analysis, they describe a texture feature extraction method using MRMD filtering-based CLBP that considers the features of color and texture as well as the dimensionality of the feature vector. Experiments on six databases from Corel and VisTex show that the proposed scheme achieves better image retrieval performance than the CLBP method, the RGB color autocorrelogram method, and the multi-channel decoded LBP method.
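For background, the basic local binary pattern that CLBP extends can be computed in a few lines: each interior pixel's eight neighbors are thresholded against the center, and the resulting bits are packed into an 8-bit code whose histogram serves as a texture descriptor. CLBP itself additionally encodes sign, magnitude, and center components, and the paper applies it after MRMD filtering; neither extension is shown in this minimal sketch.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor local binary pattern for each interior pixel:
    threshold the neighbors against the center and pack the bits."""
    h, w = img.shape
    # neighbor offsets in clockwise order starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit  # set this bit
    return out
```

On a perfectly uniform patch, every neighbor ties with its center, so every bit is set and each interior pixel receives the code 255.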