He* and Li: An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

# An Offloading Scheduling Strategy with Minimized Power Overhead for Internet of Vehicles Based on Mobile Edge Computing

## 1. Introduction

The automotive industry worldwide broadly agrees that low carbonization, informatization, and intelligence are the future directions of automobile development [1-3]. Following this trend, the Internet of Vehicles has emerged, providing the basic communication technology and platform for in-vehicle applications. Introducing mobile edge computing into the Internet of Vehicles can effectively relieve the shortage of computing and storage resources on vehicles [4]. On this basis, research on computation-offloading algorithms for the Internet of Vehicles can effectively relieve the computing pressure of in-vehicle applications and services, keeping the service delay as short as possible.

In current research, whether from the perspective of MEC service providers or of vehicle terminals, most existing works consider only the computation delay and the communication delay, then define utility functions, design a distributed or centralized offloading algorithm, and obtain an optimal solution. Hardly any work takes energy consumption into account or explores the multi-hop transmission cost between RSUs and V2V links. In 5G communications, energy efficiency is a key concern [12]: owing to the high device-access density and data density of the 5G era, energy consumption will also increase greatly.

This paper proposes an offloading scheduling strategy that minimizes power overhead for the Internet of Vehicles based on mobile edge computing. The strategy explicitly accounts for energy consumption in its design and analyzes the multi-hop transmission costs between RSUs and V2V links. Finally, Monte Carlo simulations with appropriate parameters and conditions show that the proposed strategy not only meets the delay and energy-consumption requirements at high speeds and high vehicle densities, but also achieves the lowest cost.

## 2. System Model and Problem Modeling

##### 2.1 System Model

The system model is shown in Fig. 1. RSUs are distributed equidistantly with spacing L, and the access coverage diameter of each RSU is also L. A road can therefore be divided into sections of length L according to RSU coverage. Each vehicle communicates only with the RSU it currently has access to, including both uplink data transmission and downlink data download. In this model, each RSU is connected to a MEC server by a wired cable; the communication bandwidth can be considered sufficiently large, so the transmission time between RSU and MEC server is negligible.

Fig. 1.

System model.

The distribution of vehicles follows a one-dimensional Poisson point process with density λ along the road. In other words, the distances between adjacent vehicles are independent and follow the exponential distribution with parameter λ. The probability density function of the distance l between any two adjacent vehicles is:

##### (1)
[TeX:] $$f(l)=\lambda \cdot e^{-\lambda l} \quad(l>0)$$
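As a sketch (not the paper's simulation code), vehicle positions under this one-dimensional Poisson point process can be generated by accumulating exponentially distributed gaps; the density and road length in the usage line are illustrative values only:

```python
import random

def sample_vehicle_positions(lam, road_length, seed=None):
    """One-dimensional Poisson point process of density lam:
    inter-vehicle gaps are i.i.d. exponential per Eq. (1), so
    vehicle positions are cumulative sums of exponential gaps."""
    rng = random.Random(seed)
    positions, x = [], 0.0
    while True:
        x += rng.expovariate(lam)  # gap l has density lam * exp(-lam * l), l > 0
        if x > road_length:
            return positions
        positions.append(x)

# e.g., density 0.05 vehicles/m over a 1 km road (illustrative)
positions = sample_vehicle_positions(lam=0.05, road_length=1000.0, seed=1)
```

On average the road carries λ·road_length vehicles (about 50 here), with exponentially distributed gaps of mean 1/λ.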

X denotes the vehicle's distance from the rear boundary of the current RSU's coverage when it initiates computation offloading to the MEC server. After the offload is initiated, the remaining stay time t in the current RSU's coverage is:

##### (2)
[TeX:] $$t=\frac{L-X}{v}$$

Next, the description of a computing task is considered. A given type of on-board computing task i is denoted by [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, \max }\right)$$ , where [TeX:] $$c_{i}$$ is the amount of computation (specifically, the number of CPU cycles the task requires), [TeX:] $$d_{i}$$ is the amount of input data required for the computation, and [TeX:] $$t_{i, \max }$$ is the task's maximum tolerable delay. Assuming there are I task types, and the proportion of task i is [TeX:] $$\varepsilon_{i}$$ , then:

##### (3)
[TeX:] $$\sum_{i=1}^{I} \varepsilon_{i}=1$$

The vehicle that initiates computing task i is referred to as a type i vehicle, and its speed is v. Let [TeX:] $$c_{v}$$ denote the computing power of vehicles (specifically, the CPU frequency, i.e., the number of cycles executed per second), and [TeX:] $$c_{m}$$ denote the computing power of MEC servers. [TeX:] $$R_{i}$$ is the uplink communication rate of type i vehicles, i.e., the rate at which the vehicle uploads data to its connected RSU. Generally, the input data of a computing task are much larger than the output data, as in virtual reality and augmented reality applications. Thus, this paper ignores the download delay of the computation output and considers only the multi-hop transmission delay of the results.

##### 2.2 Model for Calculation of Offload Energy Consumption

From the latency perspective, this paper analyzes both the local execution time of computing tasks and their execution time when offloaded to the MEC server. When the vehicle speed is high, or the computing task or the MEC server load is heavy, the computation output must be transmitted back over multiple RSU hops; a predictive offloading scheduling algorithm based on the MEC server load status is therefore designed. [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, \max }\right)$$ defines an on-board computing task. In fact, [TeX:] $$t_{i, \max }$$ is not used in the subsequent derivation and analysis: it represents the maximum tolerable delay, and as long as the service completion time is below this value, the service quality is acceptable to vehicle users. On this basis, by further considering the energy consumption of executing tasks locally or offloading them to the MEC server, delay and energy consumption can be better balanced. According to [13], the computation power P and the computing power c (CPU frequency) satisfy the following relationship:

##### (4)
[TeX:] $$P=k \cdot c^{\alpha}$$

where k and α are parameters; following [14], α = 3 is used. When a computing task is executed locally there is no communication process, so only the computation power consumption is considered. When task [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, \max }\right)$$ is executed locally, its energy consumption [TeX:] $$W_{i, \text { local }}$$ is:

##### (5)
[TeX:] $$W_{i, \text { local }}=P_{\text {local }} \cdot t_{i, \text { local }}=k \cdot c_{v}^{\alpha} \cdot \frac{c_{i}}{c_{v}}=k \cdot c_{v}^{\alpha-1} \cdot c_{i}=k c_{i} c_{v}^{2}$$

where [TeX:] $$P_{\text {local }}$$ is the power of the local CPU. When the vehicle needs to offload a computing task to the MEC server, it uploads the input data directly to the currently connected RSU via V2I; in other words, the task is offloaded to the MEC server to which that RSU is connected. In this case, if the vehicle speed is very high or the task requires much computation time, the result must be transmitted back to the vehicle through multiple RSU hops, and the communication between RSUs goes through the wireless backhaul link, which is relatively unstable, with large fluctuations and high latency. The entire process is shown in Fig. 2.

For computing tasks performed on the MEC server, the energy consumption is mainly composed of three parts: the communication energy consumption [TeX:] $$W_{i, \text { upload }}$$ of uploading data to the RSU connected to the MEC server, the computation energy consumption [TeX:] $$W_{i, \text { compute }}$$ on the MEC server, and the energy consumption [TeX:] $$W_{i, \text { download }}$$ of transmitting the computation output back over multiple hops. The total system energy consumption when computing task [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, \max }\right)$$ is offloaded to the MEC server is recorded as [TeX:] $$W_{i, M E C, S Y S}$$ , then:

Fig. 2.

##### (6)
[TeX:] $$W_{i, M E C, S Y S}=W_{i, \text { upload }}+W_{i, \text { compute }}+W_{i, \text { download }}=p_{i} \cdot \frac{d_{i}}{R_{i}}+k c_{i} c_{m}^{2}+W_{0} \cdot x_{i}$$

where [TeX:] $$W_{0}$$ is the energy consumed per RSU segment, either by I2I data transmission between RSUs or by V2V data transmission within the segment, corresponding to the different strategies in Section 3. [TeX:] $$p_{i}$$ is the communication power for uploading data, i.e., the transmission power of the vehicle antenna, and [TeX:] $$R_{i}$$ is the data uploading rate. According to Shannon’s formula, the relationship between [TeX:] $$p_{i}$$ and [TeX:] $$R_{i}$$ is:

##### (7)
[TeX:] $$R_{i}=B \log _{2}\left(1+\frac{p_{i} H_{i}}{\sigma^{2}}\right)$$

where B is the channel bandwidth for uploading data to the RSU, [TeX:] $$H_{i}$$ is the channel gain from vehicle i to the RSU, and [TeX:] $$\sigma^{2}$$ is the channel noise power. In fact, from the vehicle's perspective there is no need to consider the computation energy [TeX:] $$W_{i, \text { compute }}$$ of the MEC server or the energy of returning the result. Thus, only the uploading energy consumption is considered in the next step, namely:

##### (8)
[TeX:] $$W_{i, M E C}=W_{i, u p l o a d}=p_{i} \cdot \frac{d_{i}}{R_{i}}$$
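Eqs. (7) and (8) translate directly into code. The sketch below mirrors the notation above; the numeric values in the usage lines are illustrative only, not taken from the paper's parameter table:

```python
import math

def uplink_rate(p_i, H_i, sigma2, B):
    """Shannon capacity of the V2I uplink, Eq. (7):
    R_i = B * log2(1 + p_i * H_i / sigma^2)."""
    return B * math.log2(1.0 + p_i * H_i / sigma2)

def upload_energy(p_i, d_i, R_i):
    """Vehicle-side upload energy, Eq. (8):
    W_{i,MEC} = p_i * d_i / R_i (transmit power times upload time)."""
    return p_i * d_i / R_i

# Illustrative values: 0.1 W transmit power, 10 MHz bandwidth, 1 MB upload
R = uplink_rate(p_i=0.1, H_i=1e-6, sigma2=1e-9, B=10e6)  # bits/s
W = upload_energy(p_i=0.1, d_i=8e6, R_i=R)               # joules
```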

We define the cost of executing computing task [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, \max }\right)$$ as [TeX:] $$\operatorname{COST}_{i}$$ , and define [TeX:] $$\delta_{i, t}$$ and [TeX:] $$\delta_{i, W}$$ as the time preference factor and the energy-consumption preference factor of the task, respectively; then:

##### (9)
[TeX:] $$\operatorname{COST}_{i}=\delta_{i t} t_{i}+\delta_{i, W} W_{i}$$

Usually, [TeX:] $$0 \leq \delta_{i, t} \leq 1,0 \leq \delta_{i, W} \leq 1$$ . Then, for computing task [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, m a x}\right)$$ , the cost of local execution is:

##### (10)
[TeX:] $$\operatorname{COST}_{i, \text { local }}=\delta_{i, t} t_{i, l o c a l}+\delta_{i, W} W_{i, l o c a l}=\delta_{i, t} \frac{c_{i}}{c_{v}}+\delta_{i, W} k c_{i} c_{v}^{2}$$

The cost of execution on the MEC server is:

##### (11)
[TeX:] $$\operatorname{COST}_{i, M E C}=\delta_{i, t} t_{i, M E C}+\delta_{i, W} W_{i, M E C}=\delta_{i, t}\left[\frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}+\left(t_{0} \times x_{i}\right)\right]+\delta_{i, W} \cdot p_{i} \cdot \frac{d_{i}}{R_{i}}$$

For computing task [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, m a x}\right)$$ , we define [TeX:] $$\xi_{i}$$ as the offloading decision: [TeX:] $$\xi_{i}=1$$ means the task is executed locally, and [TeX:] $$\xi_{i}=0$$ means it is offloaded to the MEC server. The cost of the task can therefore be expressed as:

##### (12)
[TeX:] $$\operatorname{COST}_{i}=\xi_{i} C O S T_{i, \text { local }}+\left(1-\xi_{i}\right) \operatorname{COST}_{i, M E C}$$

For the entire Internet of Vehicles system with mobile edge computing, the total cost can be written as:

##### (13)
[TeX:] $$\operatorname{COST}_{A L L}=\sum_{i=1}^{I} \varepsilon_{i} \operatorname{COST}_{i}$$

where [TeX:] $$\varepsilon_{i}$$ is the proportion of computing task [TeX:] $$F_{i}\left(c_{i}, d_{i}, t_{i, \max }\right)$$ in all tasks. Next, we will optimize the computing task and the entire system based on these expressions.
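The cost expressions (9)-(13) can be sketched as follows; the parameter names mirror the notation above, and the numbers in the usage lines (including the value of k) are illustrative assumptions:

```python
def cost_local(ci, cv, d_t, d_w, k):
    """Eq. (10): weighted local delay c_i/c_v plus local energy k*c_i*c_v^2."""
    return d_t * ci / cv + d_w * k * ci * cv ** 2

def cost_mec(ci, di, Ri, cm, pi, hops, t0, d_t, d_w):
    """Eq. (11): weighted MEC delay (upload + compute + multi-hop return)
    plus the vehicle-side upload energy p_i * d_i / R_i."""
    delay = di / Ri + ci / cm + t0 * hops
    return d_t * delay + d_w * pi * di / Ri

def total_cost(tasks):
    """Eqs. (12)-(13): tasks is a list of (eps_i, xi_i, COST_local, COST_MEC);
    xi_i = 1 selects local execution, xi_i = 0 selects MEC offloading."""
    return sum(eps * (xi * c_loc + (1 - xi) * c_mec)
               for eps, xi, c_loc, c_mec in tasks)

# One task executed locally, one offloaded (equal proportions, toy numbers)
example = [(0.5, 1, cost_local(10, 15, 1.0, 1.0, 1e-4), 0.9),
           (0.5, 0, 1.2, cost_mec(10, 50, 100, 50, 0.1, 0, 0.5, 1.0, 1.0))]
cost_all = total_cost(example)
```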

##### 2.3 Analysis for Calculating Offload Delay

For vehicles, the execution of computing tasks falls into two cases: local execution and offloading to the MEC server. When a task is executed locally, the input data are already in the vehicle's storage, no additional communication is required, and only the computation time needs to be considered; the completion delay depends only on the vehicle's computing power [TeX:] $$c_{v}$$ . For type i vehicles performing type i tasks, the local execution time [TeX:] $$t_{i, \text { local }}$$ is:

##### (14)
[TeX:] $$t_{i, \text { local }}=\frac{c_{i}}{c_{v}}$$

The execution time on the MEC server mainly consists of three parts: the time to upload the input data, the computation time on the MEC server, and the time to return the result, denoted [TeX:] $$t_{i, \text { upload }}, t_{i, \text { compute }}$$ and [TeX:] $$t_{\text {i,download }}$$ , respectively. In most computing applications the output data are much smaller than the input data, so the transmission time from the RSU to the vehicle is not considered; [TeX:] $$t_{\text {i,download }}$$ consists only of the multi-hop transmission time incurred when the vehicle crosses multiple RSU coverage areas because of a high speed or a long computation time. For type i vehicles performing type i tasks, the execution time when offloaded to the MEC server is [TeX:] $$t_{i, M E C}$$ :

##### (15)
[TeX:] $$t_{i, M E C}=t_{i, u p l o a d}+t_{i, \text { compute }}+t_{i, \text { download }}=\frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}+\left(t_{0} \times x_{i}\right)$$

When the vehicle speed is very high or the task requires a long computation time, the vehicle may have left the access range of the RSU connected to the computing MEC server by the time the offloaded task completes, and multi-hop transmission is then required to transfer the result back to the vehicle. Here [TeX:] $$x_{i}$$ is the number of hops and [TeX:] $$t_{0}$$ is the transmission time of a single hop. Combining this with the vehicle's residence time in the coverage of the RSU to which the task was offloaded, the expression for [TeX:] $$x_{i}$$ can be derived as follows, where ⌈ ⌉ denotes rounding up:

##### (16)
[TeX:] $$x_{i}=\left\{\begin{array}{cl} \left\lceil\frac{v_{i}}{L} \times\left(\frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}-\frac{L-X}{v_{i}}\right)\right\rceil & \frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}-\frac{L-X}{v_{i}}>0 \\ 0 & \frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}-\frac{L-X}{v_{i}} \leq 0 \end{array}\right.$$
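A direct implementation of the hop-count expression (16) (a sketch; the arguments must use consistent units, e.g., metres and seconds, and the usage values are illustrative):

```python
import math

def hop_count(v, L, X, di, Ri, ci, cm):
    """Eq. (16): number of RSU hops needed to return the result.
    The vehicle has (L - X)/v seconds left under the current RSU;
    any overrun of the upload + compute time (d_i/R_i + c_i/c_m)
    is converted into RSU segments crossed and rounded up."""
    overrun = di / Ri + ci / cm - (L - X) / v
    return math.ceil(v / L * overrun) if overrun > 0 else 0

# A vehicle at 30 m/s, 50 m into a 100 m segment: 5.33 s overrun -> 2 hops
hops = hop_count(v=30.0, L=100.0, X=50.0, di=500.0, Ri=100.0, ci=100.0, cm=50.0)
```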

In order to minimize the cost of local execution, we can choose an optimal CPU operating frequency within the actual range, so that both the delay and the energy consumption meet requirements, and the cost is thus minimized. The optimization problem can be expressed as:

##### (17)
[TeX:] $$\begin{array}{l} \min _{c_{v}} g\left(c_{v}\right)=\delta_{i, t} \frac{c_{i}}{c_{v}}+\delta_{i, W} k c_{i} c_{v}^{2} \\ \text { s.t. } C 1: t_{i, \text { local }}=\frac{c_{i}}{c_{v}} \leq t_{i, \max } \\ C 2: W_{i, \text { local }}=k c_{i} c_{v}^{2} \leq W_{i, \max } \\ C 3: C_{v, \min } \leq c_{v} \leq C_{v, \max } \end{array}$$

where [TeX:] $$W_{i, \max }$$ is the maximum tolerable energy consumption of type i tasks, and [TeX:] $$C_{v, \min }$$ and [TeX:] $$C_{v, \max }$$ are the minimum and maximum operating frequencies of the vehicle’s local CPU, respectively.
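Because g(c_v) is convex, problem (17) has a closed-form solution: setting g'(c_v) = 0 gives c_v* = (δ_{i,t} / (2 δ_{i,W} k))^{1/3}, which is then clipped to the interval allowed by C1-C3. A sketch (the parameter values in the usage line are illustrative):

```python
def optimal_local_frequency(ci, d_t, d_w, k, t_max, w_max, c_min, c_max):
    """Solve problem (17). g(c) = d_t*ci/c + d_w*k*ci*c**2 is convex, so the
    stationary point c* = (d_t / (2*d_w*k))**(1/3), clipped to the feasible
    interval, is optimal.  C1 (delay) requires c >= ci/t_max; C2 (energy)
    requires c <= sqrt(w_max/(k*ci)); C3 bounds c to [c_min, c_max]."""
    lo = max(c_min, ci / t_max)
    hi = min(c_max, (w_max / (k * ci)) ** 0.5)
    if lo > hi:
        raise ValueError("constraints C1-C3 admit no feasible c_v")
    c_star = (d_t / (2.0 * d_w * k)) ** (1.0 / 3.0)
    return min(max(c_star, lo), hi)

# Illustrative parameters: unconstrained optimum is (1/(2*1*0.5))**(1/3) = 1
cv = optimal_local_frequency(ci=1.0, d_t=1.0, d_w=1.0, k=0.5,
                             t_max=10.0, w_max=100.0, c_min=0.01, c_max=100.0)
```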

## 3. Simulated Annealing Algorithm for Solving the Optimization Model

Due to the nonlinear relationship between the optimization goal and the constraints above, the optimization model is difficult to solve directly. Therefore, the simulated annealing algorithm is used to solve it.

##### 3.1 Establishment of the Metropolis Criterion

Suppose that in the current state i the solution of the system is [TeX:] $$W_{i}$$, and the solution in the next state j is [TeX:] $$W_{j}$$. According to the Metropolis criterion, the following holds:

##### (18)
[TeX:] $$W_{i}-W_{j}=\left\{\begin{array}{ll} >0 & \begin{array}{l} \text { Accept the current solution as the optimal } \\ \text { solution } \end{array} \\ <0 & \begin{array}{l} \text { Accept the current solution as the optimal } \\ \text { solution with probability } p \end{array} \end{array}\right.$$

where p is the probability of accepting the current solution as the optimal solution, which can be expressed as:

##### (19)
[TeX:] $$p=\left\{\begin{array}{cc} 1 & W_{i}>W_{j} \\ \exp \left(\frac{-\left(W_{j}-W_{i}\right)}{\varphi T}\right) & W_{i}<W_{j} \end{array}\right.$$

where φ is a constant playing the role of the Boltzmann constant, and T is the current temperature.
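The acceptance rule of Eqs. (18)-(19), for a minimization objective, can be sketched as:

```python
import math
import random

def metropolis_accept(w_curr, w_next, T, phi=1.0, rng=random.random):
    """Metropolis criterion, Eqs. (18)-(19): a lower-cost solution is
    always accepted; a worse one is accepted with probability
    exp(-(w_next - w_curr) / (phi * T)), which shrinks as T cools."""
    if w_next < w_curr:
        return True
    return rng() < math.exp(-(w_next - w_curr) / (phi * T))
```

At high temperature almost every move is accepted (broad exploration); as T decreases, worsening moves become exponentially unlikely and the search settles into a minimum.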

##### 3.2 Algorithm Process

The algorithm process is shown in Algorithm 1.

##### Algorithm 1
Simulated annealing algorithm
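The paper does not reproduce the algorithm body; the sketch below shows a generic simulated-annealing loop consistent with Section 3.1. The geometric cooling schedule, initial temperature, and iteration counts are illustrative assumptions, not the paper's settings:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, T0=100.0, T_min=1e-3,
                        alpha=0.95, iters_per_T=50, phi=1.0, seed=None):
    """Generic simulated-annealing loop: `cost` maps a solution to its
    COST value, `neighbor` proposes a perturbed solution. Returns the
    best solution found and its cost."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = T0
    while T > T_min:
        for _ in range(iters_per_T):
            y = neighbor(x, rng)
            fy = cost(y)
            # Metropolis criterion: always accept improvements, accept
            # degradations with probability exp(-dE / (phi * T))
            if fy < fx or rng.random() < math.exp(-(fy - fx) / (phi * T)):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        T *= alpha  # geometric cooling
    return best, fbest

# Usage: minimize g(c) = 1/c + 0.5*c**2 over c > 0 (optimum at c = 1)
g = lambda c: 1.0 / c + 0.5 * c * c
step = lambda c, rng: max(1e-3, c + rng.uniform(-0.1, 0.1))
c_star, g_star = simulated_annealing(g, step, x0=5.0, seed=0)
```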

## 4. Simulation Experiment and Result Analysis

In this section, multiple types of computing tasks are selected, including computation-intensive and delay-sensitive ones, and the proposed strategy is compared with the two methods of [9] and [11]. Taking the vehicle speed and the vehicle density as variables, the simulation computes the delay and the offloading energy consumption. The system model is built on the MATLAB simulation platform in accordance with the relevant provisions of the MEC white paper. The hardware environment is a Samsung laptop with an Intel Core i5-7300HQ CPU @ 2.5 GHz, 8 GB of memory, a 500 GB hard drive, and the Windows 10 operating system.

##### 4.1 Parameter Settings

Five types of computing tasks are selected, representing ordinary applications, resource-intensive applications (both the amount of computation and the amount of data are large), delay-sensitive applications (a short maximum tolerable time), computation-intensive applications (a large amount of computation), and data-intensive applications (a large amount of data). The parameters and the proportion of each task are shown in Table 1.

The parameters of these tasks have no actual units, and are only used to reflect the characteristics of tasks. On this basis, the simulation experiment sets the following parameters, as shown in Table 2.

Table 1.

Computing task parameters and proportions

| [TeX:] $$\boldsymbol{F}_{i}\left(\boldsymbol{c}_{\boldsymbol{i}}, \boldsymbol{d}_{\boldsymbol{i}}, \boldsymbol{t}_{\boldsymbol{i}, \max }\right)$$ | Task type | Ratio [TeX:] $$\varepsilon_{i}$$ |
|---|---|---|
| F1(10, 50, 1) | Normal | 0.2 |
| F2(200, 200, 5) | Resource-intensive | 0.2 |
| F3(5, 5, 0.5) | Delay-sensitive | 0.2 |
| F4(500, 100, 50) | Calculation-intensive | 0.2 |
| F5(100, 500, 10) | Data-intensive | 0.2 |

Table 2.

Simulation parameters

| Parameter | Value |
|---|---|
| Computing power of vehicles [TeX:] $$c_{v}$$ | 15 |
| Uplink communication rate of vehicles [TeX:] $$R_{i}$$ | 100 |
| Computing power of MEC server [TeX:] $$c_{m}$$ | 50 |
| RSU spacing L | 100 m |
| Vehicle density per unit road section [TeX:] $$\lambda_{u}$$ | 0.2-10 |
| Vehicle speed v | 40-120 km/hr |
| RSU communication delay per road section [TeX:] $$t_{0}$$ | 0.5 s |
| Monte Carlo simulation runs per parameter set | 1,000,000 |
| V2V communication delay per road section [TeX:] $$r_{V 2 V} \cdot t_{0}$$ | 0.1 s |

The unit of the vehicle density [TeX:] $$\lambda_{u}$$ per road section is not [TeX:] $$m^{-1}$$ but the average number of vehicles per RSU segment. By definition, when the RSU spacing is 100 m, the actual density is [TeX:] $$\frac{\lambda_{u}}{100} m^{-1}$$.

The simulation method for the queuing time of a single server is explained below. For [TeX:] $$T_{W 1}$$, the queuing process can be regarded as an M/M/1 queue with a service (task completion) rate of [TeX:] $$\mu=1 / \sum_{i=1}^{I} \varepsilon_{i}\left(\frac{c_{i}}{c_{m}}\right)$$ . During the simulation, a random number from 0 to 1 is generated, and the server state is sampled according to the discrete probability distribution of states. Taking λ = 0.2 and μ = 0.5 as an example, the probability of each server state is shown in Table 3.

Table 3.

Server state probability (λ = 0.2, μ = 0.5)

| State | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Probability | 0.6 | 0.24 | 0.096 | 0.0384 | 0.01536 | ... |
| Random number | 0-0.6 | 0.6-0.84 | 0.84-0.936 | 0.936-0.9744 | 0.9744-0.98976 | ... |

If the server is in state 0, then [TeX:] $$T_{W 1}$$ = 0. If the server is in state k (k ≥ 1), there are k – 1 tasks waiting for computing service and one task receiving it. First, a random number from 0 to 1 is generated, and the types of the queued tasks are simulated according to the task proportions. Then another random number from 0 to 1 is generated to simulate the completion progress of the task currently being computed. [TeX:] $$T_{W 1}$$ can be expressed as follows:

##### (20)
[TeX:] $$T_{W 1}(\text { state }=k)=\theta \cdot T_{w 1}+T_{w 2}+\cdots+T_{w(k-1)}+T_{w k}$$

where [TeX:] $$T_{w j}(j=1,2, \ldots, k)$$ are independent discrete random variables representing the computation time of each task in the queue. Their probability distribution is shown in Table 4.

Table 4.

Probability distribution of [TeX:] $$T_{w k}$$

| | F1(10,50,1) | F2(200,200,5) | F3(5,5,0.5) | F4(500,100,50) | F5(100,500,10) |
|---|---|---|---|---|---|
| Value [TeX:] $$c_{i} / c_{m}$$ | 0.2 | 4 | 0.1 | 10 | 2 |
| Probability | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |

Moreover, θ is a random variable following the uniform distribution U(0, 1), representing the completion progress of the task currently being computed on the MEC server. When the vehicle initiates a computing task, its distance X from the rear coverage boundary of the currently connected RSU follows the uniform distribution U(0, L).
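The queue-state and waiting-time sampling described above can be sketched as follows. The service times are the ratios c_i/c_m from Table 4, and the task proportions ε_i follow Table 1; this is an illustrative reconstruction, not the paper's MATLAB code:

```python
import random

SERVICE_TIMES = [0.2, 4.0, 0.1, 10.0, 2.0]  # c_i / c_m for F1..F5 (Table 4)
EPS = [0.2, 0.2, 0.2, 0.2, 0.2]             # task proportions (Table 1)

def sample_mm1_state(lam, mu, rng):
    """Stationary M/M/1 state via inverse-CDF sampling (as in Table 3):
    P(state = n) = (1 - rho) * rho**n with rho = lam / mu."""
    rho = lam / mu
    u, n, cdf = rng.random(), 0, 1.0 - rho
    while u > cdf:
        n += 1
        cdf += (1.0 - rho) * rho ** n
    return n

def sample_waiting_time(lam, mu, rng):
    """Eq. (20): T_W1 for a server found in state k. The task in service
    contributes theta * T_w1 with theta ~ U(0, 1); the other k - 1 queued
    tasks contribute full service times drawn by task proportion."""
    k = sample_mm1_state(lam, mu, rng)
    if k == 0:
        return 0.0
    draw = lambda: rng.choices(SERVICE_TIMES, weights=EPS)[0]
    return rng.random() * draw() + sum(draw() for _ in range(k - 1))

rng = random.Random(0)
t_wait = sample_waiting_time(lam=0.2, mu=0.5, rng=rng)  # one example draw
```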

##### 4.2 Comparative Analysis of Delay and Energy Consumption with Vehicle Density Changes

The following analyzes how the average time of the three offloading strategies varies with the vehicle density and speed per unit road section. Fig. 3 shows the average time of the five types of computing tasks as a function of vehicle density; the vehicle speed is fixed at 70 km/hr.

It can be seen that, for all five task types and all vehicle densities, the proposed predictive V2V offloading strategy based on the MEC load status yields a lower average computation time than the two existing strategies. For normal tasks, the proposed strategy does not differ much from the strategy in [9]: because the computation time is short, the MEC queuing time saved by the proposed strategy is not significant. When the vehicle density is low, the difference in result-return time between the strategy in [9] and the proposed strategy becomes evident. For resource-intensive computing tasks, the time saved by the proposed strategy is most prominent when the vehicle density is between 5 and 7. As the vehicle density continues to increase, the total time of executing at the MEC may exceed that of local execution; the offloading ratio then approaches zero, and the times of the three strategies converge, since local execution is chosen in most cases.

Fig. 3.

Average time with vehicle density changes for five computing tasks in three offloading strategies: (a) Type1, (b) Type2, (c) Type3, (d) Type4, and (e) Type5.
##### 4.2.2 Average energy consumption with vehicle density changes for five computing tasks in three offloading strategies

Table 5 shows the average energy consumption of the three methods at different vehicle densities.

Table 5.

Average energy consumption of three methods under different vehicle densities (unit: J)

| Group | Method | Vehicle density (number per unit) | | | | |
|---|---|---|---|---|---|---|
| | | 2 | 4 | 6 | 8 | 10 |
| Type1 | Proposed | 6.54 | 10.45 | 14.68 | 18.80 | 23.67 |
| | Zhang et al. [9] | 7.65 | 12.43 | 16.87 | 20.45 | 27.48 |
| | Du et al. [11] | 8.21 | 13.56 | 17.80 | 22.42 | 28.94 |
| Type2 | Proposed | 4.42 | 6.04 | 8.68 | 12.64 | 16.82 |
| | Zhang et al. [9] | 4.87 | 6.98 | 9.76 | 13.90 | 18.03 |
| | Du et al. [11] | 5.04 | 7.33 | 10.34 | 14.63 | 19.25 |
| Type3 | Proposed | 9.45 | 13.55 | 19.84 | 25.80 | 31.64 |
| | Zhang et al. [9] | 12.53 | 15.76 | 23.89 | 27.69 | 34.73 |
| | Du et al. [11] | 12.23 | 16.88 | 26.86 | 28.36 | 35.67 |
| Type4 | Proposed | 0.85 | 1.56 | 2.03 | 2.66 | 3.02 |
| | Zhang et al. [9] | 0.89 | 1.90 | 2.12 | 2.87 | 3.31 |
| | Du et al. [11] | 1.02 | 1.95 | 2.21 | 2.97 | 3.45 |
| Type5 | Proposed | 1.23 | 2.32 | 3.34 | 4.98 | 6.32 |
| | Zhang et al. [9] | 1.36 | 3.03 | 3.90 | 5.38 | 7.76 |
| | Du et al. [11] | 1.31 | 2.88 | 4.02 | 5.66 | 8.03 |

It can be seen from Table 5 that the total system energy consumption decreases as the delay-constraint range increases. The reason is that, as the delay constraint is relaxed, more and more users choose local computing, and the looser the delay constraint, the more energy local computing saves. In addition, when the delay-constraint range is small there are many delay-sensitive users; for these users, the large amount of wireless resources occupied during offloading leads to frequent frequency reuse and more severe co-channel interference, which increases the system energy consumption. Thus, the total energy consumption of the system decreases as the delay-constraint range increases.

##### 4.3 Comparative Analysis of Delay and Energy Consumption with Vehicle Speed Changes

Fig. 4 illustrates the change of average time with vehicle speed for the five computing tasks. The selected vehicle density is 4 (per RSU segment).

For resource-intensive computing tasks, when the vehicle speed is between 80 and 120 km/hr, the proposed strategy shows a greatly improved performance compared with the other two strategies. When the vehicle speed is high, [TeX:] $$x_{i}=\left\lceil\frac{v_{i}}{L} \times\left(\frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}-\frac{L-X}{v_{i}}\right)\right\rceil$$ increases, so more MEC servers are available for offloading, and a large amount of MEC server queuing time is saved. For delay-sensitive computing tasks, the three strategies show hardly any time difference, since local execution is chosen in most cases. For computation-intensive tasks, the proposed offloading strategy reduces the average time compared with the other two strategies; moreover, as the vehicle speed increases, the growth in time becomes less apparent. For data-intensive tasks, the average time is also greatly reduced at high vehicle speeds.

##### 4.3.2 Average energy consumption with vehicle speed changes for five computing tasks in three offloading strategies

Table 6 shows the average energy consumption of the three methods at different vehicle speeds.

As can be seen from Table 6, the total system energy consumption decreases as the delay-constraint range increases. The reason is that, as the delay constraint is relaxed, more and more users choose local computing, and the looser the delay constraint, the more energy local computing saves. In addition, when the delay-constraint range is small there are many delay-sensitive users; for these users, the large amount of wireless resources occupied during offloading leads to frequent frequency reuse and more severe co-channel interference, which increases the system energy consumption. Therefore, the total system energy consumption is reduced as the delay-constraint range increases.

Table 6.

Average energy consumption of three methods at different vehicle speeds (unit: J)

| Group | Method | Vehicle speed (km/hr) | | | | |
|---|---|---|---|---|---|---|
| | | 2 | 4 | 6 | 8 | 10 |
| Type1 | Proposed | 6.03 | 8.21 | 10.68 | 13.87 | 17.84 |
| | Zhang et al. [9] | 7.12 | 9.74 | 11.89 | 16.48 | 18.95 |
| | Du et al. [11] | 7.25 | 9.52 | 14.56 | 17.64 | 20.45 |
| Type2 | Proposed | 3.89 | 5.35 | 8.95 | 12.84 | 17.48 |
| | Zhang et al. [9] | 4.31 | 6.78 | 9.78 | 14.78 | 19.48 |
| | Du et al. [11] | 5.08 | 7.02 | 10.37 | 16.37 | 21.05 |
| Type3 | Proposed | 8.43 | 13.56 | 17.89 | 24.23 | 30.63 |
| | Zhang et al. [9] | 9.84 | 15.32 | 20.85 | 26.68 | 33.79 |
| | Du et al. [11] | 10.28 | 16.08 | 21.36 | 28.45 | 35.02 |
| Type4 | Proposed | 1.35 | 2.53 | 3.13 | 4.66 | 6.26 |
| | Zhang et al. [9] | 1.66 | 3.08 | 3.85 | 5.67 | 7.06 |
| | Du et al. [11] | 1.91 | 3.23 | 4.09 | 5.94 | 7.83 |
| Type5 | Proposed | 1.53 | 2.26 | 3.23 | 3.96 | 4.82 |
| | Zhang et al. [9] | 1.88 | 2.99 | 1.32 | 4.87 | 5.81 |
| | Du et al. [11] | 2.04 | 3.25 | 4.27 | 5.67 | 6.35 |

Fig. 4.

Average time change with vehicle speed for five computing tasks in three offloading strategies: (a) Type1, (b) Type2, (c) Type3, (d) Type4, and (e) Type5.

## Acknowledgement

This work was supported by Characteristic Innovation Project from Guangdong Provincial Department of Education in 2019 (No. 2019gktscx073).

## Biography

##### Bo He
https://orcid.org/0000-0002-0077-9312

She holds a master's degree in Software Engineering and is a senior engineer. She graduated from Jinan University in 2001 and works at the Guangzhou Institute of Technology. Her research interests include software engineering and graphic images.

## Biography

##### Tianzhang Li
https://orcid.org/0000-0002-6594-9431

He holds a master's degree in Computer Science and is a senior engineer. He graduated from Jinan University in 2013 and works at Jinan University. His research interests include big data and digital libraries.

## References

• 1 Q. Liu, D. Xu, B. Jiang, Y. Ren, "Prescribed-performance-based adaptive control for hybrid energy storage systems of battery and supercapacitor in electric vehicles," International Journal of Innovative Computing, Information and Control, vol. 16, no. 2, pp. 571-584, 2020.
• 2 W. Xu, S. Wang, S. Yan, J. He, "An efficient wideband spectrum sensing algorithm for unmanned aerial vehicle communication networks," IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1768-1780, 2019.
• 3 R. Hu, H. M. Zhao, Y. Wu, "The methods of big data fusion and semantic collision detection in Internet of Thing," Cluster Computing, vol. 22, no. 4, pp. 8007-8015, 2019.
• 4 K. Zhang, S. Leng, Y. He, S. Maharjan, Y. Zhang, "Mobile edge computing and networking for green and low-latency Internet of Things," IEEE Communications Magazine, vol. 56, no. 5, pp. 39-45, 2018. doi: 10.1109/MCOM.2018.1700882
• 5 K. Zhang, Y. Mao, S. Leng, S. Maharjan, Y. Zhang, "Optimal delay constrained offloading for vehicular edge computing networks," in Proceedings of 2017 IEEE International Conference on Communications (ICC), Paris, France, 2017, pp. 1-6.
• 7 Q. Liu, Z. Su, Y. Hui, "Computation offloading scheme to improve QoE in vehicular networks with mobile edge computing," in Proceedings of 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP), Hangzhou, China, 2018, pp. 1-5.
• 8 C. M. Huang, M. S. Chiang, D. T. Dao, W. L. Su, S. Xu, H. Zhou, "V2V data offloading for cellular network based on the software defined network (SDN) inside mobile edge computing (MEC) architecture," IEEE Access, vol. 6, pp. 17741-17755, 2018.
• 9 H. Zhang, Q. Luan, J. Zhu, F. Li, "Task offloading and resource allocation in vehicle heterogeneous networks with MEC," Chinese Journal on Internet of Things, vol. 2, no. 3, pp. 36-43, 2018.
• 10 G. Qiao, S. Leng, K. Zhang, Y. He, "Collaborative task offloading in vehicular edge multi-access networks," IEEE Communications Magazine, vol. 56, no. 8, pp. 48-54, 2018. doi: 10.1109/MCOM.2018.1701130
• 11 J. Du, F. R. Yu, X. Chu, J. Feng, G. Lu, "Computation offloading and resource allocation in vehicular networks based on dual-side cost minimization," IEEE Transactions on Vehicular Technology, vol. 68, no. 2, pp. 1079-1092, 2019.
• 12 K. Zhang, Y. Mao, S. Leng, Y. He, Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36-44, 2017.
• 13 A. P. Miettinen, J. K. Nurminen, "Energy efficiency of mobile clients in cloud computing," in Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud), Boston, MA, 2010.
• 14 J. Zhang, X. Hu, Z. Ning, E. C. H. Ngai, L. Zhou, J. Wei, J. Cheng, B. Hu, "Energy-latency tradeoff for energy-aware offloading in mobile edge computing networks," IEEE Internet of Things Journal, vol. 5, no. 4, pp. 2633-2645, 2018. doi: 10.1109/JIOT.2017.2786343