JongBeom Lim*, DaeWon Lee**, Kwang-Sik Chung***, and HeonChang Yu****

Intelligent Resource Management Schemes for Systems, Services, and Applications of Cloud Computing Based on Artificial Intelligence

Abstract: Recently, artificial intelligence techniques have been widely used in computer science fields such as the Internet of Things, big data, cloud computing, and mobile computing. In particular, resource management is of utmost importance for maintaining the quality of services, service-level agreements, and the availability of the system. In this paper, we review and analyze various ways to meet the requirements of cloud resource management based on artificial intelligence. We divide cloud resource management techniques based on artificial intelligence into three categories: fog computing systems, edge-cloud systems, and intelligent cloud computing systems. The aim of the paper is to propose an intelligent resource management scheme that manages mobile resources by monitoring devices' statuses and predicting their future stability based on an artificial intelligence technique. We explore how the proposed resource management scheme can be extended to various cloud-based systems.

Keywords: Artificial Intelligence, Cloud Computing, Edge-Cloud Systems, Fog Computing, Resource Management

1. Introduction

Recent advances in artificial intelligence and related techniques have been introduced [1,2], with emphasis on how these techniques affect resource management in various computing environments: the Internet of Things [3], fog computing systems [4], edge-cloud systems [5], and intelligent cloud systems [6].
One of the benefits of using artificial intelligence techniques in these computing environments is that no human intervention is required for managing computing resources (resource monitoring, task assignment, virtual machine scheduling in virtualized computing, and task and virtual machine migration), while resource consolidation is optimized by running iterations and multiplexing many logical components in the data center or the system [7,8]. Among these systems, we consider cloud-based computing systems and architectures in fog computing, edge-cloud systems, and intelligent cloud systems. We introduce recent studies on these computing systems based on artificial intelligence techniques and propose an intelligent resource management scheme that manages mobile resources by monitoring devices' statuses and predicting their stability based on an artificial intelligence technique, namely the hidden Markov model. Although the hidden Markov model was developed within statistics and pattern theory, it has received significant attention in the artificial intelligence field, where an inference model is used for estimating future states [9-11]. Notable applications of the hidden Markov model are temporal pattern recognition and reinforcement learning, including voice-to-text and text-to-speech applications, handwritten text recognition, gesture recognition, grammatical tagging, word-category disambiguation, score following for music, and bioinformatics. Recently, hidden Markov models have been generalized for complex data structures and nonstationary data with pairwise/triplet Markov models.

The remainder of the paper is organized as follows: after reviewing artificial-intelligence-related techniques in the Internet of Things and fog computing in Section 2, we describe edge-cloud systems and examine recent results on resource management schemes in Section 3.
The proposed intelligent resource management scheme based on the hidden Markov model for mobile devices is presented in Section 4. Finally, Section 5 concludes the paper with future research directions.

2. Fog Computing Systems

Fog computing is a decentralized computing infrastructure in which resource-intensive functionalities are located between the cloud and the data source to reduce resource burdens (computation, network bandwidth, and storage capacity). Fig. 1 shows the fog computing architecture with virtualization enabled. The architecture has two features: quality-of-service management and energy-aware deployment in virtualized computing environments. For the quality of services, the central cloud center interacts with the edge-cloud servers by retrieving resource monitoring information and service requirements. The edge-cloud servers deploy real-time mobile services to the virtual machines for the Internet of Things devices. At the same time, the deployed virtual machines transfer real-time service information, including device types. The Internet of Things devices (laptops, smartphones, and sensor devices) interact directly with the deployed virtual machines by delivering service and quality-of-service information. According to the received data, the virtual machines are scheduled in real time. In the energy-aware panel of Fig. 1, the architecture is similar to the quality-of-service panel, but the resource management scheme is different: the edge-cloud servers consider the trade-off between the energy consumption and the performance of the virtual machines and the Internet of Things devices. Therefore, the virtual machines send energy and resource usage information to the edge-cloud servers, which then optimize the deployment of virtual machines in terms of energy consumption.
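The energy-aware deployment decision described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the server records, the linear power model, and the weighting factor alpha are assumptions made only for this example.

```python
# Minimal sketch of an energy-aware VM placement decision (illustrative only).
# Each edge-cloud server reports its current load, capacity, and a linear
# power coefficient; a new virtual machine goes to the server that minimizes
# a weighted sum of an energy term and a performance (utilization) penalty.
# The field names and the weight alpha are assumptions for this example.

def place_vm(servers, vm_load, alpha=0.5):
    """Return the name of the server with the lowest weighted cost."""
    def cost(server):
        load_after = server["load"] + vm_load
        energy = server["power_per_load"] * load_after      # energy term
        perf_penalty = load_after / server["capacity"]      # utilization term
        return alpha * energy + (1 - alpha) * perf_penalty
    return min(servers, key=cost)["name"]

servers = [
    {"name": "edge-A", "load": 0.6, "capacity": 1.0, "power_per_load": 50.0},
    {"name": "edge-B", "load": 0.2, "capacity": 1.0, "power_per_load": 120.0},
]
print(place_vm(servers, 0.3))             # energy-efficient edge-A wins
print(place_vm(servers, 0.3, alpha=0.0))  # performance only: lightly loaded edge-B wins
```

Varying alpha moves the decision along the energy/performance trade-off that the edge-cloud servers are said to consider.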
In the fog computing context, the authors of [12] proposed an intelligent algorithm for offloading decisions when multiple Internet of Things devices are present nearby. The focus of the proposed algorithm is twofold: device-driven intelligence and human-driven intelligence for network objectives (energy consumption, latency, network bandwidth, network availability, and security/privacy preservation). Another research direction for fog computing is healthcare [13-15]. Healthcare applications with fog computing technologies are promising because such technologies enable the development of prediction techniques for our daily lives, for which real-time processing and low latency are of importance.

3. Edge-Cloud Systems

In edge-cloud systems, resource capabilities such as computing, network, and storage are distributed throughout the system, bringing them closer to where traffic originates. Fig. 2 shows the edge-cloud architecture with mobile devices. Assume that a mobile device is associated with edge-cloud server A in the figure. The tasks of the mobile device (the partial service) from the central cloud server are copied to edge-cloud server A. At this stage, if the mobile device moves to another location closest to edge-cloud server B, the associated tasks (the partial service) are scheduled for migration from edge-cloud server A to B. Note that the user of the mobile device is unaware of these processes in the central cloud server and the edge-cloud servers. Again, if the mobile device moves to another location closest to edge-cloud server C, the associated tasks (the partial service) are scheduled for migration from edge-cloud server B to C. In this manner, mobile and real-time applications can benefit from edge-cloud systems with reduced latency. For intelligent edge-cloud systems, the authors of [16] proposed a stochastic online machine learning technique that learns from the dynamism of the system.
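The follow-the-device migration just described can be sketched as follows. The server coordinates and the Euclidean nearest-server association rule are assumptions for illustration; the paper does not specify how devices are associated with servers.

```python
# Illustrative sketch of follow-the-device migration in an edge-cloud system.
# When a mobile device moves, its partial service is scheduled for migration
# to the edge-cloud server now closest to the device. Server names,
# coordinates, and the distance-based association rule are assumptions.
import math

SERVERS = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (20.0, 0.0)}

def nearest_server(device_pos):
    """Associate the device with the closest edge-cloud server."""
    return min(SERVERS, key=lambda name: math.dist(device_pos, SERVERS[name]))

def on_device_move(current_server, device_pos):
    """Schedule a migration of the partial service if the association changes."""
    target = nearest_server(device_pos)
    if target != current_server:
        return ("migrate", current_server, target)   # e.g., A -> B
    return ("stay", current_server, current_server)

print(on_device_move("A", (8.0, 1.0)))   # device moved toward B: schedule A -> B
print(on_device_move("B", (11.0, 0.5)))  # still closest to B: no migration
```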
The study summarizes applications to wireless communications (traffic classification, channel coding/decoding, channel estimation, scheduling, and cognitive radio) and suggests an online learning framework for mobile edge computing systems for big data analytics. The offloading technique is widely used since it significantly reduces the computation and communication burdens of mobile devices. To this end, the authors of [17] proposed a two-way initiative scheme for offloading decision making; it detects network congestion and solves the offloading problem by integrating the random early detection algorithm. For resource management with respect to energy consumption in edge cloud computing environments, Liu et al. [18] proposed an energy management system for Internet of Things devices based on deep reinforcement learning. The proposed framework allows agents to schedule tasks while considering energy consumption. Unlike the basic method, Liu et al.'s approach considers the capacity limitation of edge servers.

4. Intelligent Cloud Computing Systems

Cloud computing is a pay-as-you-go, on-demand model for delivering computing resources (CPU, memory, storage, and network) based on virtualization technology. When a user requests a certain amount of computing resources, the cloud data center schedules provisioning as requested, and the requested virtual machine (or container) can be used within a minute. To provide computing resources, the cloud data center pools a significant number of physical machines. Hence, the resource consolidation of the cloud data center affects performance and management costs. A well-managed cloud data center can reduce energy consumption and carbon dioxide emissions. For managing computing resources, artificial intelligence techniques can be integrated into cloud computing systems. The authors of [19] proposed dynamic resource prediction and allocation techniques in the cloud radio access network for 5G.
The method uses long short-term memory networks for predicting throughput and employs a genetic algorithm for allocating resources. Chien et al. [20] proposed an intelligent architecture for beyond-5G heterogeneous networks. The aim of the architecture is to improve network performance in edge cloud computing environments based on artificial intelligence techniques. For maintaining the quality of services, the authors use a packet forwarding technique and recommend appropriate deep learning methods for different network themes. Zhang et al. [21] proposed a multiple-algorithm service model to support heterogeneous services and applications. The model is designed to reduce energy consumption and network latency/delay by consolidating virtual machines in the cloud computing system. To solve the optimization problem, the authors use the tide ebb algorithm, which finds robust results by assessing the relationship between computation speeds and energy costs. For managing mobile devices in cloud computing environments, we propose an intelligent resource monitoring scheme that predicts their future stability based on the hidden Markov model. Fig. 3 shows the proposed workflow of artificial intelligence applications with the hidden Markov model. The workflow in the figure is based on the iterative model; note that other task models can also be applied in the proposed model. When a user submits one or more artificial intelligence tasks, computing resources for the tasks are allocated via the cloud portal system. The allocated resources can be virtual machines, containers, or edge cloud servers, according to the cloud computing environment. Then, the artificial intelligence tasks are performed by iterating the feedback control loop. After one round finishes, the (partial) results are forwarded as the input for the next round. In the feedback control loop stage, the hidden Markov model is applied.
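The iterative feedback control loop of the workflow can be sketched as below. The step function, round count, and per-round hook are illustrative assumptions; the hook marks the stage where the hidden Markov prediction would run.

```python
# Minimal sketch of the iterative feedback control loop in Fig. 3: each
# round's (partial) result is forwarded as the input of the next round, and
# a hook fires at the end of every round, which is where the hidden Markov
# prediction would be applied. Step function and round count are illustrative.

def feedback_loop(step, initial_input, rounds, on_round_end=None):
    """Iterate an AI task, feeding each round's result into the next round."""
    state = initial_input
    for round_no in range(rounds):
        state = step(state)                # perform one round of the task
        if on_round_end is not None:
            on_round_end(round_no, state)  # e.g., predict device stability here
    return state

# Toy task: each round refines a running estimate (Newton's method for sqrt(2)).
result = feedback_loop(lambda x: (x + 2.0 / x) / 2.0, 1.0, rounds=5)
print(round(result, 6))  # converges toward 1.414214
```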
The hidden Markov model uses mobile devices' current and past stability information and predicts their future stability. More specifically, we regard the monitoring information as observable states and calculate a probability distribution over hidden states. By computing the probabilities of the hidden states, we predict the future stability of mobile devices. The predicted stability information can be used for cloud consolidation and cloud resource scheduling. Table 1 shows the comparison and summary of resource management schemes based on artificial intelligence techniques in cloud computing environments. With respect to the categories of cloud-based systems, our scheme is most closely related to intelligent cloud computing systems. In its characteristics, however, our scheme is differentiated from other studies: mobile devices in the cloud-based system (including fog computing and edge-cloud) are periodically monitored, and the monitored information is used for predicting future stability and mobility based on the hidden Markov model. Thus, our scheme can be used for general cloud applications such as task scheduling, resource consolidation, and computation offloading while optimizing overall system performance.

Table 1. Comparison and summary of resource management schemes based on artificial intelligence techniques
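The stability prediction described in this section can be sketched with the forward algorithm of the hidden Markov model: monitoring readings act as observable states, the algorithm filters a belief over hidden stability states, and the transition matrix yields a one-step-ahead prediction. All probabilities below are illustrative assumptions, not values from the paper.

```python
# Sketch of the proposed stability prediction: monitoring readings ("ok",
# "lag") are observable states; "stable"/"unstable" are hidden states. The
# forward algorithm filters a belief from the observations, and the
# transition matrix gives a one-step-ahead prediction of stability.
# All probabilities here are illustrative assumptions.

STATES = ("stable", "unstable")
START = {"stable": 0.7, "unstable": 0.3}
TRANS = {"stable": {"stable": 0.9, "unstable": 0.1},
         "unstable": {"stable": 0.3, "unstable": 0.7}}
EMIT = {"stable": {"ok": 0.8, "lag": 0.2},
        "unstable": {"ok": 0.3, "lag": 0.7}}

def predict_stability(observations):
    """Forward algorithm, then a one-step-ahead hidden-state prediction."""
    belief = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    total = sum(belief.values())
    belief = {s: p / total for s, p in belief.items()}
    for obs in observations[1:]:
        belief = {s: EMIT[s][obs] * sum(belief[p] * TRANS[p][s] for p in STATES)
                  for s in STATES}
        total = sum(belief.values())
        belief = {s: p / total for s, p in belief.items()}
    # Propagate the filtered belief one step through the transition matrix.
    return {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}

print(predict_stability(["ok", "ok", "ok"]))  # high probability of "stable"
print(predict_stability(["lag", "lag"]))      # belief shifts toward "unstable"
```

A scheduler could then, for example, avoid placing tasks on devices whose predicted probability of "stable" falls below a threshold.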
5. Conclusions

Management of mobile devices is not a trivial task since it is challenging to predict their movement and faults. The proposed intelligent resource management scheme predicts mobile devices' stability based on the hidden Markov model; with monitoring information, the future stability of mobile devices can be estimated. We divided cloud resource management techniques into three categories (i.e., fog computing systems, edge-cloud systems, and intelligent cloud computing systems) and analyzed various schemes for cloud resource management and its requirements (quality of services, service-level agreements, and availability of the system) based on artificial intelligence techniques. Future work includes further improvement of the algorithms and performance evaluation with various cloud services (e.g., backup, replication, checkpointing, and task and virtual machine migration).

Biography

JongBeom Lim
https://orcid.org/0000-0001-8954-2903

He received his B.S. degree in information and communication from Baekseok University, Korea, in 2009. In 2011 and 2014, he received his M.S. and Ph.D. degrees in computer science and education from Korea University, Korea, respectively. From 2015 to 2017, he was a visiting professor with the IT Convergence Education Center, Dongguk University, Korea. Since March 2017, he has been an assistant professor with the Department of Game and Multimedia Engineering, Korea Polytechnic University, Korea. His research interests fall within the general fields of computer science and its applications, including distributed computing and algorithms, cloud computing and virtualization, artificial intelligence and big data analytics, mobile and sensor networks, and fault-tolerant and resilient techniques.

Biography

DaeWon Lee
https://orcid.org/0000-0001-7089-8205

He received his B.S. in the Division of Electricity and Electronic Engineering from Soonchunhyang University, Asan, Korea, in 2001. He received his M.E. and Ph.D.
degrees in computer science education from Korea University, Seoul, Korea, in 2003 and 2009, respectively. He is currently an assistant professor in the Department of Computer Engineering at Seokyeong University, Korea. His research interests are in IoT, mobile computing, distributed computing, cloud computing, and fault-tolerant systems.

Biography

HeonChang Yu
https://orcid.org/0000-0003-2216-595X

He received the B.S., M.S., and Ph.D. degrees in computer science and engineering from Korea University, Seoul, Korea, in 1989, 1991, and 1994, respectively. He has been a professor of computer science and engineering with Korea University since 1998. From February 2011 to January 2012, he was a visiting professor of electrical and computer engineering at Virginia Tech. Since 2015, he has been the Vice President of the Korea Information Processing Society, Korea. He was awarded the Okawa Foundation Research Grant of Japan in 2008. His research interests include cloud computing, virtualization, distributed computing, and fault-tolerant systems.

References