## Zhiqiang Ma

Full name | Abbreviation |
---|---|
Internet of Vehicles | IoV |
Mobile edge computing | MEC |
Road side units | RSU |
Genetic algorithm | GA |

The system scenario is shown in Fig. 1.

We can see from Fig. 1 that in this model, L roadside units (RSUs) are deployed along the road, denoted as [TeX:] $$R=\left\{r_{1}, r_{2}, \cdots, r_{L}\right\}$$. An MEC server is placed next to each RSU. There are N vehicles, with vehicle identifiers [TeX:] $$n=\{1,2, \cdots, N\}$$; thus, the vehicles are denoted as [TeX:] $$V=\left\{v_{1}, v_{2}, \cdots, v_{N}\right\}$$. The MEC servers and neighboring vehicles are collectively referred to as service nodes [TeX:] $$\psi=\left\{\varphi_{1}, \varphi_{2}, \cdots, \varphi_{M}\right\}$$.

The vehicles are randomly distributed, and the collection of vehicles in cell j is [TeX:] $$\phi_{j}=\left\{v_{1}, v_{2}, \cdots, v_{n}\right\}$$. The spectrum is equally divided into K sub-channels, denoted as [TeX:] $$k=\{1,2, \cdots, K\}$$, each with bandwidth B. The set of vehicle offloading strategies is denoted as [TeX:] $$A=\left\{a_{1}, a_{2}, \cdots, a_{N}\right\}$$. If [TeX:] $$a_{i}=1$$, vehicle [TeX:] $$v_{i}$$ offloads its task to the service node [TeX:] $$\varphi_{m}$$ for calculation; if [TeX:] $$a_{i}=0$$, [TeX:] $$v_{i}$$ performs the calculation task locally.

Communication consumption occurs when a task is offloaded to the server and when the server downloads the calculation content required to complete the task. It is related not only to the amount of data transmitted but also to the bandwidth and the actual transmission rate of the wireless network. It is assumed that the communication link between the vehicle and the MEC node is a frequency-flat, fast-fading Rayleigh channel. The rate [TeX:] $$v_{V 2 I}$$ at which the vehicle offloads the task to the MEC node (and the MEC node returns the calculation result) can be expressed as:
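Assuming the standard Shannon-capacity form over a Rayleigh fading link, and using the symbol definitions that follow, this rate would read:

[TeX:] $$v_{V 2 I}=B \log _{2}\left(1+\frac{P_{s} \tau^{2} d^{-\delta}}{N_{0}}\right)$$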

where B represents the bandwidth of the sub-channel between the vehicle and the MEC node, [TeX:] $$P_{s}$$ represents the transmit power of the transmitting device, [TeX:] $$d^{-\delta}$$ represents the path loss between the vehicle and the server node, [TeX:] $$\delta$$ represents the path loss factor, [TeX:] $$\tau^{2}$$ represents the channel fading factor of the upload link, and [TeX:] $$N_{0}$$ represents the Gaussian white noise power. Therefore, the upload delay [TeX:] $$T_{i}^{u p}$$ required when offloading the task to the server is:
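Assuming the upload delay is simply the task size divided by the transmission rate, it would take the form:

[TeX:] $$T_{i}^{u p}=\frac{s_{i}}{v_{V 2 I}}$$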

where [TeX:] $$s_{i}$$ represents the size of the task.

Assume that the distance between MEC server nodes is [TeX:] $$d_{M E C 2}$$ and the distance between the edge server and the cloud server is [TeX:] $$d_{C M E C}$$; likewise, the transmission speed between MEC server nodes is [TeX:] $$v_{M E C 2}$$ and the transmission speed from the cloud server to the edge server is [TeX:] $$v_{C M E C}$$. When task [TeX:] $$\Omega_{i}$$ is sent to the MEC server numbered [TeX:] $$l$$, which already holds the cached content [TeX:] $$G_{s}^{i}$$, the edge server needs to request the missing computing content from the cloud. Therefore, the time [TeX:] $$T_{i}^{\text {down }}$$ required for the server to download the missing content can be approximately expressed as:
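Assuming the missing content is the difference between the total calculation content [TeX:] $$\vartheta_{\Sigma}^{i}$$ and the content [TeX:] $$\vartheta_{m}^{i}$$ already on the MEC server, transferred at the combined speed [TeX:] $$v_{\Sigma}$$, the download time would read:

[TeX:] $$T_{i}^{\text {down }} \approx \frac{\vartheta_{\Sigma}^{i}-\vartheta_{m}^{i}}{v_{\Sigma}}$$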

In the formula, [TeX:] $$\vartheta_{\Sigma}^{i}$$ represents the calculation content involved in the calculation process, [TeX:] $$\vartheta_{m}^{i}$$ represents the calculation content already offloaded to the MEC server, and [TeX:] $$v_{\Sigma}$$ is the sum of the transmission speed between MEC server nodes and that from the cloud server to the edge server.

Assuming that the energy consumption due to the transmission per unit time is [TeX:] $$E_{\text {tran }}$$, the total energy consumption [TeX:] $$E_{\text {tran }}^{\Sigma}$$ due to the transmission is:
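Assuming the transmission occupies the upload and download periods derived above, the total would take the form:

[TeX:] $$E_{\text {tran }}^{\Sigma}=E_{\text {tran }}\left(T_{i}^{u p}+T_{i}^{\text {down }}\right)$$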

Assume that each vehicle has a calculation task [TeX:] $$\Omega_{i}=\left\{s_{i}, c_{i}, t_{i}^{\max }\right\}, i \in N$$ that needs to be processed, where [TeX:] $$s_{i}$$ is the input size of task [TeX:] $$\Omega_{i}$$, [TeX:] $$c_{i}$$ is the number of CPU cycles required to calculate task [TeX:] $$\Omega_{i}$$, and [TeX:] $$t_{i}^{\max }$$ is the maximum delay that calculation task [TeX:] $$\Omega_{i}$$ can tolerate. The task can be offloaded to the MEC server or to neighboring vehicles through the RSU, or executed on the local vehicle.

**Offload calculation**

Define the cost of the calculation process in which the task-requesting vehicle [TeX:] $$v_{i}$$ offloads its task to the service node [TeX:] $$\varphi_{m}$$ as the weighted combination of delay and energy consumption, expressed as:
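Writing [TeX:] $$Z_{i}^{o f f}$$ as a placeholder symbol for this cost, the weighted combination would read:

[TeX:] $$Z_{i}^{o f f}=\alpha t_{i}^{o f f}+\beta e_{i}^{o f f}$$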

where [TeX:] $$\alpha$$ and [TeX:] $$\beta$$ represent the weighting factors of delay and energy consumption, respectively, and satisfy [TeX:] $$\alpha+\beta \leq 1$$. [TeX:] $$t_{i}^{o f f}=\frac{s_{i}}{v_{V 2 I}}+\frac{c_{i}}{f_{m}^{i}}$$ is the sum of the offloading delay and the calculation delay, where [TeX:] $$f_{m}^{i}$$ is the computing resource allocated by the service node [TeX:] $$\varphi_{m}$$ to the vehicle [TeX:] $$v_{i}$$. [TeX:] $$e_{i}^{o f f}=P_{s} \frac{s_{i}}{v_{V 2 I}}$$ is the energy consumption of the transmission process.

**Local calculation**

Assume that the computing power of vehicle [TeX:] $$v_{i}$$ is [TeX:] $$F_{i}^{1}$$; different vehicles have different computing power. When [TeX:] $$\Omega_{i}$$ is calculated locally, the delay is [TeX:] $$t_{i}^{1}=c_{i} / F_{i}^{1}$$; the cost that vehicle [TeX:] $$v_{i}$$ needs to bear is:
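By analogy with the offloading cost, and writing [TeX:] $$e_{i}^{1}$$ for the local computing energy (both [TeX:] $$Z_{i}^{1}$$ and [TeX:] $$e_{i}^{1}$$ are placeholder symbols), the local cost would read:

[TeX:] $$Z_{i}^{1}=\alpha t_{i}^{1}+\beta e_{i}^{1}$$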

The binary task offloading scheme is adopted; that is, tasks are only allowed to be processed locally or offloaded to a server. In the network, the role of the vehicle is only to collect data or offload tasks to the server; therefore, all tasks can only be processed in cloud servers or edge servers. If the total number of tasks requested by vehicles is [TeX:] $$N$$, there is a task set [TeX:] $$\Omega=\left\{\Omega_{i} \mid i=1,2, \cdots, N\right\}$$. Each task is described by [TeX:] $$\Omega_{i}=\left\{a_{i}, Q_{c}^{i}, \vartheta_{\Sigma}^{i}, t_{\max }^{i}, s_{i}\right\}$$, where [TeX:] $$a_{i}$$ is the offloading decision of the task, [TeX:] $$Q_{c}^{i}$$ is the CPU resource required for execution, [TeX:] $$\vartheta_{\Sigma}^{i}$$ is the calculation content involved in the calculation process, [TeX:] $$t_{\max }^{i}$$ is the maximum execution time of the task, and [TeX:] $$s_{i}$$ represents the size of the task.

The main concern of this paper is how to jointly optimize the delay and the energy consumption required by the offloading task. For any task [TeX:] $$\Omega_{i}$$, its energy consumption [TeX:] $$E_{\Sigma}$$ is composed of transmission energy consumption [TeX:] $$E_{\text {tran }}^{\Sigma}$$ and calculation energy consumption [TeX:] $$E_{\text {cal }}$$. The delay consumption [TeX:] $$T_{\Sigma}$$ is composed of calculation delay [TeX:] $$T_{cal}$$ and transmission delay [TeX:] $$T_{tran}$$. Then the optimization objective function can be expressed as:
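Assuming [TeX:] $$\lambda$$ weights energy against delay as described below, and writing [TeX:] $$Q_{\max }$$ as a placeholder for the maximum computing resources of the MEC server, the objective and constraints would take the form:

[TeX:] $$\begin{aligned} \min \ & \lambda E_{\Sigma}+(1-\lambda) T_{\Sigma} \\ \text { s.t. } \ & C 1: T_{\Sigma} \leq t_{\max }^{i} \\ & C 2: a_{i} \in\{0,1\} \\ & C 3: Q_{c}^{i} \leq Q_{\max } \end{aligned}$$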

where, [TeX:] $$\lambda$$ represents the weight of energy consumption and delay consumption. Constraint C1 represents that the delay consumption of the task cannot be greater than the maximum tolerable delay of the task. Constraint C2 means that tasks can only be offloaded to cloud servers or MEC servers. Constraint C3 represents that the computing resources required by the task cannot exceed the maximum computing resources of the MEC server.

The optimization problem is transformed into a knapsack problem, and a GA is used to solve it to meet the computation offloading requirements. Different MEC servers have different computing capabilities. When an MEC server with insufficient computing resources undertakes a large number of computing tasks, it cannot guarantee their completion and will become overloaded, affecting the processing of other services. At the same time, randomly offloading computing tasks leads to a large number of iterations when searching for the optimal computation offloading strategy. Therefore, a GA-based task offloading strategy is used to reserve suitable initial chromosomes. We combine the greedy algorithm with the GA to find the optimal offloading strategy for each MEC server, thus speeding up the iterative process.

Using binary coding, each chromosome encodes an offloading strategy [TeX:] $$A$$. [TeX:] $$A_{(q)}$$ represents the value of the [TeX:] $$q$$-th bit in strategy [TeX:] $$A$$, and [TeX:] $$A_{(q)}$$ takes the value 0 or 1. Let [TeX:] $$\left[X_{i M}, X_{i M+1}, \cdots, X_{(i+1) M}\right]$$ denote the offloading result of computing task [TeX:] $$\Omega_{i}$$; the number [TeX:] $$a_{i}$$ of the MEC server to which the task is offloaded is:
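Assuming the [TeX:] $$\kappa$$ code bits are read as a binary number, the server number would be decoded as:

[TeX:] $$a_{i}=\sum_{q=0}^{\kappa-1} 2^{q} X_{i M+q}$$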

where, [TeX:] $$\kappa$$ represents the number of code bits.

Fig. 2 shows the coding process of computing task strategy for no more than 8 MEC servers.

Initialize the relevant parameters of the GA, including the maximum number of iterations [TeX:] $$I$$, chromosome length [TeX:] $$N \times M$$, mutation probability [TeX:] $$P_{m}$$, crossover probability [TeX:] $$P_{c}$$, population size [TeX:] $$Z$$ and reserved population [TeX:] $$Z^{\prime}$$. Most GAs select the initial population randomly. The proposed strategy combines pre-defined chromosomes with randomly generated chromosomes for population initialization. When pre-defining chromosomes, the chromosomes with the highest weight are classified first; that is, the on-board computing task encoding is set as follows:

The fitness [TeX:] $$f_{i}$$ of each offloading scheme in the proposed strategy is the reciprocal of the objective function. The higher the fitness value, the smaller the value of the objective function; that is, the lower the overall cost of the computation offloading scheme, the better its performance. The specific calculation formula of [TeX:] $$f_{i}$$ is as follows:
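Taking the reciprocal of the objective, and assuming its λ-weighted form, the fitness would read:

[TeX:] $$f_{i}=\frac{1}{\lambda E_{\Sigma}+(1-\lambda) T_{\Sigma}}$$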

After the fitness function is calculated, the chromosomes (i.e., offloading plans) with poor fitness are eliminated according to the calculated value, and the chromosomes with higher fitness are retained. In this way, the quality of the chromosomes becomes increasingly higher over the iterations; in other words, the overall cost of the offloading scheme keeps decreasing.

(1) Selection operation: Chromosomes are selected according to the selection probability, and the selected individuals are regarded as the first generation. Using the roulette-wheel selection method, in which selection is proportional to fitness, and assuming the fitness of individual [TeX:] $$i$$ is [TeX:] $$f_{i}$$, the probability of [TeX:] $$i$$ being selected is:
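The standard fitness-proportional form, for a population of size [TeX:] $$Z$$, is:

[TeX:] $$p_{i}=\frac{f_{i}}{\sum_{j=1}^{Z} f_{j}}$$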

For the initialized population, calculate the fitness value of each chromosome and its selection probability, and eliminate the chromosome with the lowest probability. The chromosome with the highest probability is copied to replace the eliminated chromosome.

(2) Crossover operation: A one-point crossover method is used, with crossover probability [TeX:] $$P_{c}$$.

(3) Mutation operation: For the proposed optimization problem, the mutation operation flips the selected mutation bit of the chromosome (1 to 0, or 0 to 1), leaving the other bits unchanged; the mutation probability is [TeX:] $$P_{m}$$. A mutation position is therefore selected and mutated, and the resulting fitness is compared with the original fitness. If it is not greater than or equal to the original fitness, a new mutation position is selected for mutation.

The GA requires the setting of termination conditions. Therefore, the fitness function value needs to be calculated after each chromosome mutation. If the set maximum number of iterations has been reached or the fitness value has remained unchanged for a long time, the entire GA process is terminated. That is, the global optimal solution that meets the conditions is obtained; otherwise, the iteration continues.
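The selection, crossover, mutation, and termination steps above can be sketched in Python. This is a minimal illustration, not the paper's implementation: all function and parameter names are made up, the toy cost function stands in for the weighted delay/energy objective, and standard per-bit mutation is used instead of the fitness-checked single-bit mutation described above.

```python
import random

def fitness(chromosome, cost):
    # Fitness is the reciprocal of the weighted delay/energy cost.
    return 1.0 / (cost(chromosome) + 1e-9)

def roulette_select(population, fits):
    # Roulette-wheel selection: probability proportional to fitness.
    total = sum(fits)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, f in zip(population, fits):
        acc += f
        if acc >= r:
            return chrom
    return population[-1]

def one_point_crossover(a, b, pc):
    # One-point crossover with probability pc; otherwise copy parents.
    if random.random() < pc:
        point = random.randrange(1, len(a))
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(chrom, pm):
    # Flip each bit with probability pm (1 -> 0, 0 -> 1).
    return [bit ^ 1 if random.random() < pm else bit for bit in chrom]

def genetic_offloading(cost, n_bits, pop_size=40, pc=0.8, pm=0.02, max_iter=200):
    # Random initial population of binary offloading strategies.
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    best = min(population, key=cost)
    for _ in range(max_iter):  # terminate at the maximum iteration count
        fits = [fitness(c, cost) for c in population]
        next_pop = []
        while len(next_pop) < pop_size:
            p1 = roulette_select(population, fits)
            p2 = roulette_select(population, fits)
            c1, c2 = one_point_crossover(p1, p2, pc)
            next_pop.extend([mutate(c1, pm), mutate(c2, pm)])
        population = next_pop[:pop_size]
        cand = min(population, key=cost)
        if cost(cand) < cost(best):
            best = cand  # keep the best strategy found so far
    return best

if __name__ == "__main__":
    # Toy objective: cost is the number of tasks NOT offloaded,
    # so the all-ones strategy is optimal.
    best = genetic_offloading(lambda c: sum(1 - b for b in c), n_bits=16)
    print(sum(best))
```

In a faithful implementation, `cost` would evaluate the weighted sum of delay and energy consumption for a candidate strategy, and the pre-defined high-weight chromosomes described above would seed part of the initial population.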

The simulation experiment of the proposed strategy is carried out on the MATLAB platform (MATLAB 2012a). There are 5 cells on the roadside, and each cell is configured with an RSU and an MEC server. The specific simulation parameters are shown in Table 2.

Table 2.

Parameter | Value |
---|---|
Maximum transmission power of vehicle | 30 dBm |
System bandwidth | 30 dBm |
Task calculation size | 15-30 MB |
CPU cycles required per task | 1500-3000 |
Gaussian white noise power | -75 dBm |
Weight factor [TeX:] $$\alpha=\beta$$ | 0.45 |
Coverage diameter of RSU | 1 km |
Vehicle computing power | 2-4 GHz |
Computing power of MEC server | 15 GHz |
Number of uplink transmission channels | 15 |
Number of vehicles | 100 |
Vehicle moving speed | 40-70 km/hr |
Vehicle buffer capacity | 100 MB |
Cache capacity of MEC server | 500 MB |

First, the impact of the number of iterations of the proposed strategy on the task completion rate is verified. The task completion rate is the proportion of successfully completed computing tasks among all computing tasks. Through repeated experiments, the effect of the number of iterations on the task completion rate under different numbers of vehicle-mounted terminals is studied. The result is shown in Fig. 3.

It can be seen from Fig. 3 that the proposed strategy reaches a plateau after a number of iterations, converging at about 250 iterations. The larger the number [TeX:] $$N$$ of vehicle-mounted terminals, the lower the task completion rate: when [TeX:] $$N$$ increases from 10 to 50, the task completion rate drops from 100% to 73%. As the number of vehicle-mounted terminals increases, the amount of computing tasks grows; with limited computing resources, data processing capability declines, and the task completion rate therefore drops.

In order to demonstrate the performance of the proposed strategy, it is compared with [11,12,17]. The relationship between the number of vehicles and the total cost is shown in Fig. 4.

It can be seen from Fig. 4 that as the number of vehicles increases, the total cost increases. The proposed strategy makes full use of the surrounding idle resources and reasonably allocates computing and communication resources, and its cloud-edge collaborative offloading effectively improves resource utilization. The caching mechanism avoids duplicate calculations and reduces network congestion, thereby reducing system overhead. When the number of vehicles is 60, the total calculation cost is only about 180 J.

Similarly, the relationship between the amount of task data and the total cost of different strategies is shown in Fig. 5.

As can be seen from Fig. 5, the total overhead increases with the amount of task data. Among the compared strategies, the total cost of the proposed strategy is the smallest, and its cost grows relatively slowly: when the task volume is 30 MB, the total overhead is about 300 J. Because it adopts cloud-edge collaborative offloading, the GA selects the best resource allocation mode, which improves resource utilization; even with a large number of tasks, low computational overhead can be guaranteed. Wei et al. [17] use a deep reinforcement learning algorithm to solve the communication resource allocation problem of mobile users in a wireless cellular network with MEC; the learning algorithm is relatively complicated, and when the number of tasks increases, the computational overhead increases significantly, exceeding 600 J. Chen et al. [12] only use the asynchronous advantage actor-critic learning algorithm for communication resource allocation, but rely too much on the processing power of the vehicle itself; therefore, the total overhead is higher when the number of tasks increases. Jiang et al. [11] allocate resources through the cooperation of the mobile cellular network and dedicated short-range communication, lacking optimization of the allocation scheme; therefore, the overall cost remains high, exceeding 800 J.

In order to verify the delay and energy consumption performance of the proposed model, an experiment is designed to verify the performance of different strategies with the number of tasks as the variable. The experimental results are shown in Fig. 6.

It can be seen from Fig. 6 that the delay of the strategy in [11] is higher than that of the other strategies. It relies on a dedicated short-range communication network, but vehicles move over a wide range, and this network architecture cannot meet the communication needs of the IoV; therefore, the delay is as high as 280 ms. The strategies in [12,17] both use learning algorithms to optimize the communication resource allocation plan, but the strategy in [17] is based on MEC deployment, offloading tasks to the MEC server with stronger processing capability, which reduces the delay by about 20 ms. The proposed strategy is based on cloud-edge coordinated vehicle networking and uses the GA to solve for the optimal resource allocation plan. It not only makes full use of the powerful processing capability of MEC but also benefits from the optimization capability of the GA; therefore, the average delay is relatively small, at most 210 ms.

Similarly, for different number of tasks, the average energy consumption of the four strategies is shown in Fig. 7.

It can be seen from Fig. 7 that, compared with the other strategies, the average energy consumption of the proposed strategy is the smallest; when the number of tasks is 60, its average energy consumption is 300 J. Its cloud-edge collaborative offloading effectively improves MEC resource utilization, and the caching mechanism avoids repeated calculations and reduces network congestion, thereby reducing system energy consumption. The strategy in [11] only uses network variations to optimize resource allocation and cannot adapt to an IoV environment with a large amount of data; therefore, its energy loss is large. The strategies in [12,17] both use learning algorithms to achieve communication resource allocation; however, the learning algorithms are complicated and their computational energy loss is large.

The IoV is facing the impact of massive data traffic. In order to realize real-time response of IoV services and to reduce energy consumption, a strategy for IoV communication resource allocation based on edge computing is proposed. Based on the cloud-edge collaborative cache and the resource allocation system of the IoV, the GA is used to solve the optimization objective function that minimizes delay and energy consumption, so as to find the best resource allocation plan. Experimental analysis based on the built IoV model shows that under different system parameters, the system overhead and the average delay of the proposed strategy are 300 J and 210 ms, respectively. They are superior to those of the other strategies and provide a certain theoretical basis for practical applications. In future work, we will continue to study the MEC-based V2X (vehicle-to-everything) communication resource allocation strategy. At the same time, the situation in which the link is disconnected due to the high-speed movement of the vehicle and the task backhaul fails should also be addressed; we suggest building a V2X resource allocation framework based on vehicle location prediction.

- 1 B. He, T. Li, "An offloading scheduling strategy with minimized power overhead for Internet of vehicles based on mobile edge computing," *Journal of Information Processing Systems*, vol. 17, no. 3, pp. 489-504, 2021. http://jips-k.org/digital-library/2021/17/3/489
- 2 S. Ullah, K. Kim, A. Manzoor, L. U. Khan, S. A. Kazmi, C. S. Hong, "Quality adaptation and resource allocation for scalable video in D2D communication networks," *IEEE Access*, vol. 8, pp. 48060-48073, 2020. doi: 10.1109/access.2020.2978544
- 3 T. Park, K. I. Hwang, "Receiver protection from electrical shock in vehicle wireless charging environments," *Journal of Information Processing Systems*, vol. 16, no. 3, pp. 677-687, 2020. doi: 10.3745/JIPS.03.0139
- 4 M. Wen, J. Park, K. Cho, "A scenario generation pipeline for autonomous vehicle simulators," *Human-centric Computing and Information Sciences*, vol. 10, no. 24, 2020. doi: 10.1186/s13673-020-00231-z
- 5 L. Ferdouse, A. Anpalagan, S. Erkucuk, "Joint communication and computing resource allocation in 5G cloud radio access networks," *IEEE Transactions on Vehicular Technology*, vol. 68, no. 9, pp. 9122-9135, 2019. doi: 10.1109/tvt.2019.2927904
- 6 S. Kim, Y. Yoon, "ACP model for vehicle monitoring based on CPS," *Human-centric Computing and Information Sciences*, vol. 11, no. 5, 2021. doi: 10.22967/HCIS..11.005
- 7 L. Yang, C. Zhong, Q. Yang, W. Zou, A. Fathalla, "Task offloading for directed acyclic graph applications based on edge computing in industrial Internet," *Information Sciences*, vol. 540, pp. 51-68, 2020. doi: 10.1016/j.ins.2020.06.001
- 8 S. K. Singh, P. K. Sharma, Y. Pan, J. H. Park, "BIIoVT: blockchain-based secure storage architecture for intelligent Internet of vehicular things," *IEEE Consumer Electronics Magazine*, 2021. doi: 10.1109/MCE..3089992
- 9 Y. Chen, N. Zhang, Y. Zhang, X. Chen, "Dynamic computation offloading in edge computing for Internet of Things," *IEEE Internet of Things Journal*, vol. 6, no. 3, pp. 4242-4251, 2019. doi: 10.1109/jiot.2018.2875715
- 10 X. Zhang, J. Zhang, Z. Liu, Q. Cui, X. Tao, S. Wang, "MDP-based task offloading for vehicular edge computing under certain and uncertain transition probabilities," *IEEE Transactions on Vehicular Technology*, vol. 69, no. 3, pp. 3296-3309, 2020. doi: 10.1109/tvt.2020.2965159
- 11 X. Jiang, K. Li, H. Jiang, N. Zhu, X. Tong, "A bandwidth-link resources cooperative allocation strategy of data communication in intelligent transportation systems," *China Communications*, vol. 16, no. 4, pp. 234-249, 2019. doi: 10.12676/j.cc.2019.04.018
- 12 M. Chen, T. Wang, K. Ota, M. Dong, M. Zhao, A. Liu, "Intelligent resource allocation management for vehicles network: an A3C learning approach," *Computer Communications*, vol. 151, pp. 485-494, 2020. doi: 10.1016/j.comcom.2019.12.054
- 13 H. Wang, H. Ke, G. Liu, W. Sun, "Computation migration and resource allocation in heterogeneous vehicular networks: a deep reinforcement learning approach," *IEEE Access*, vol. 8, pp. 171140-171153, 2020. doi: 10.1109/access.2020.3024683
- 14 M. Tang, L. Gao, J. Huang, "Communication, computation, and caching resource sharing for the Internet of Things," *IEEE Communications Magazine*, vol. 58, no. 4, pp. 75-80, 2020. doi: 10.1109/mcom.001.1900354
- 15 K. Guan, D. He, B. Ai, D. W. Matolak, Q. Wang, Z. Zhong, T. Kurner, "5-GHz obstructed vehicle-to-vehicle channel characterization for Internet of intelligent vehicles," *IEEE Internet of Things Journal*, vol. 6, no. 1, pp. 100-110, 2019. doi: 10.1109/jiot.2018.2872437
- 16 R. Li, P. Hong, K. Xue, M. Zhang, T. Yang, "Energy-efficient resource allocation for high-rate underlay D2D communications with statistical CSI: a one-to-many strategy," *IEEE Transactions on Vehicular Technology*, vol. 69, no. 4, pp. 4006-4018, 2020. doi: 10.1109/tvt.2020.2973228
- 17 Y. Wei, Z. Wang, D. Guo, F. R. Yu, "Deep Q-learning based computation offloading strategy for mobile edge computing," *Computers, Materials & Continua*, vol. 59, no. 1, pp. 89-104, 2019. doi: 10.32604/cmc.2019.04836
- 18 X. Wang, C. Wang, X. Li, V. C. Leung, T. Taleb, "Federated deep reinforcement learning for Internet of Things with decentralized cooperative edge caching," *IEEE Internet of Things Journal*, vol. 7, no. 10, pp. 9441-9455, 2020. doi: 10.1109/jiot.2020.2986803