## Bo He* and Tianzhang Li

Table 1.

[TeX:] $$\boldsymbol{F}_{i}\left(\boldsymbol{c}_{\boldsymbol{i}}, \boldsymbol{d}_{\boldsymbol{i}}, \boldsymbol{t}_{\boldsymbol{i}, \max }\right)$$ | Task type | Ratio [TeX:] $$\varepsilon_{i}$$ |
---|---|---|
F1(10,50,1) | Normal | 0.2 |
F2(200,200,5) | Resource-intensive | 0.2 |
F3(5,5,0.5) | Delay-sensitive | 0.2 |
F4(500,100,50) | Calculation-intensive | 0.2 |
F5(100,500,10) | Data-intensive | 0.2 |

Table 2.

Parameter | Value |
---|---|
Computing power of vehicles [TeX:] $$c_{v}$$ | 15 |
Uplink communication rate of vehicles [TeX:] $$R_{i}$$ | 100 |
Computing power of MEC server [TeX:] $$c_{m}$$ | 50 |
RSU spacing L | 100 m |
Vehicle density per unit road section [TeX:] $$\lambda_{u}$$ | 0.2-10 |
Vehicle speed v | 40-120 km/hr |
RSU communication delay of unit road section [TeX:] $$t_{o}$$ | 0.5 s |
V2V communication delay of unit road section [TeX:] $$r_{V 2 V} \cdot t_{o}$$ | 0.1 s |
Monte Carlo simulation runs per parameter set | 1,000,000 |

The unit of the vehicle density [TeX:] $$\lambda_{u}$$ per road section is not [TeX:] $$m^{-1}$$; it is the average number of vehicles per RSU coverage section. By this definition, when the RSU spacing is 100 m, the actual density is [TeX:] $$\frac{\lambda_{u}}{100} m^{-1}$$.

The simulation method for the queuing time of a single server is explained below. For [TeX:] $$T_{W 1}$$, the queuing process can be regarded as an M/M/1 birth-death process with a death rate, i.e., a task completion rate, of [TeX:] $$\mu=1 / \sum_{i=1}^{I} \varepsilon_{i}\left(\frac{c_{i}}{c_{m}}\right)$$. During the simulation, a random number between 0 and 1 is generated, and the state of the server is sampled from the discrete stationary probability distribution of states. Taking λ = 0.2 and μ = 0.5 as an example, the probability that the server is in each state is shown in Table 3.

Table 3.

State | 0 | 1 | 2 | 3 | 4 | ... |
---|---|---|---|---|---|---|
Probability | 0.6 | 0.24 | 0.096 | 0.0384 | 0.01536 | ... |
Random number | 0-0.6 | 0.6-0.84 | 0.84-0.936 | 0.936-0.9744 | 0.9744-0.98976 | ... |
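The state sampling described above can be sketched in code. This is a minimal illustration of the inverse-CDF draw from the M/M/1 stationary distribution [TeX:] $$P(N=n)=(1-\rho) \rho^{n}$$ with ρ = λ/μ, not the authors' implementation; the function name is ours.

```python
import random

def sample_mm1_state(lam: float, mu: float, rng: random.Random) -> int:
    """Sample the stationary queue length of an M/M/1 queue by inverse CDF.

    P(N = n) = (1 - rho) * rho**n with rho = lam / mu (requires rho < 1).
    """
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    u = rng.random()        # one uniform random number in [0, 1)
    n = 0
    p = 1 - rho             # P(N = 0)
    cdf = p
    while u >= cdf:         # walk the CDF until the interval covering u is found
        p *= rho
        cdf += p
        n += 1
    return n

# With lam = 0.2 and mu = 0.5 (rho = 0.4), state 0 has probability 0.6,
# matching the first column of Table 3.
rng = random.Random(42)
states = [sample_mm1_state(0.2, 0.5, rng) for _ in range(100_000)]
print(states.count(0) / len(states))  # close to 0.6
```

A draw of u = 0.5 falls in the interval [0, 0.6) and therefore yields state 0, exactly as the random-number row of Table 3 indicates.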

If the server is in state 0, then [TeX:] $$T_{W 1}$$ = 0. If the server is in state k (k ≥ 1), one task is receiving computing service and k – 1 tasks are waiting for it. First, we simulate the remaining service time of the task in service: a random number between 0 and 1 is generated and, according to the proportions of the computing task types, the task's type is sampled. Then another random number between 0 and 1 is generated to simulate the completion progress of the task in service. [TeX:] $$T_{W 1}$$ can then be expressed as follows:

[TeX:] $$T_{W 1}=(1-\theta) T_{w 1}+\sum_{x=2}^{k} T_{w x}$$

where [TeX:] $$T_{w x}(x=1,2, \ldots, k)$$ are independent discrete random variables representing the computation time of each task in the queue. Their probability distribution is shown in Table 4.

Table 4.

Task type | F1(10,50,1) | F2(200,200,5) | F3(5,5,0.5) | F4(500,100,50) | F5(100,500,10) |
---|---|---|---|---|---|
Value | 0.2 | 4 | 0.1 | 10 | 2 |
Probability | 0.3 | 0.1 | 0.2 | 0.2 | 0.2 |

Moreover, θ is a random variable subject to the uniform distribution U(0,1) and represents the completion progress of the task currently being computed on the MEC server. When a vehicle initiates a computing task, its distance X from the rear coverage boundary of the currently connected RSU follows a uniform distribution U(0,D).
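The simulation of [TeX:] $$T_{W 1}$$ for a server found in state k can be sketched as follows. This is an illustrative sketch, not the authors' code: it assumes the type-sampling probabilities are the ratios [TeX:] $$\varepsilon_{i}$$ from Table 1 (all 0.2) and that each task's MEC computation time is [TeX:] $$c_{i} / c_{m}$$ with [TeX:] $$c_{m}$$ = 50, as in Tables 1 and 2.

```python
import random

# Assumed per-type MEC service times c_i / c_m (Tables 1 and 2, c_m = 50)
# and type ratios eps_i (all 0.2 in Table 1).
SERVICE_TIME = [10 / 50, 200 / 50, 5 / 50, 500 / 50, 100 / 50]
TYPE_RATIO = [0.2, 0.2, 0.2, 0.2, 0.2]

def sample_task_type(rng: random.Random) -> int:
    """Sample a task-type index by inverse CDF over the ratios eps_i."""
    u, cdf = rng.random(), 0.0
    for i, eps in enumerate(TYPE_RATIO):
        cdf += eps
        if u < cdf:
            return i
    return len(TYPE_RATIO) - 1

def waiting_time(k: int, rng: random.Random) -> float:
    """T_W1 when the server is found in state k: the remaining time
    (1 - theta) * T_w1 of the task in service plus the full service
    times of the k - 1 queued tasks."""
    if k == 0:
        return 0.0
    theta = rng.random()  # completion progress of the task in service, U(0,1)
    t = (1 - theta) * SERVICE_TIME[sample_task_type(rng)]
    for _ in range(k - 1):
        t += SERVICE_TIME[sample_task_type(rng)]
    return t
```

With these parameters the mean service time is 0.2 × (0.2 + 4 + 0.1 + 10 + 2) = 3.26, so the average of `waiting_time(2, …)` over many draws is roughly 0.5 × 3.26 + 3.26 ≈ 4.89.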

The following analyzes how the average time of the three offloading strategies changes with the vehicle density and speed per unit road section. Fig. 3 shows the average time of the five types of computing tasks as a function of vehicle density; the selected vehicle speed is 70 km/hr.

It can be seen that, for all five task types and all vehicle densities, our proposed predictive V2V offloading strategy based on the MEC load status yields a lower average computation time than the two existing strategies. For normal tasks, the proposed strategy does not perform much differently from the strategy in [9]: because the calculation time is short, the MEC queuing time saved by the proposed strategy is not substantial. When the vehicle density is low, the difference in the return time of calculation results between the strategy in [9] and the proposed strategy becomes evident. For resource-intensive computing tasks, the time saved by our proposed strategy is most prominent when the vehicle density is between 5 and 7. As the vehicle density continues to increase, the total time spent computing at the MEC may exceed the time spent computing locally; the offloading ratio then approaches zero, and the times of the three strategies converge. Since local execution is chosen in most cases, the times are the same.

For delay-sensitive tasks, there is little difference among the three strategies. Because this task type involves a small amount of calculation, the local execution time is short, whereas executing on the MEC server introduces the data uploading time and possibly the multi-hop return time of the calculation results. For calculation-intensive tasks, the proposed strategy predictably saves a significant amount of MEC server queuing time; even when the vehicle density is extremely high, much time is still saved. For data-intensive tasks, when the vehicle density is low, the strategy in [9] and the proposed strategy have a clear advantage over the strategy in [11], and as the vehicle density increases, the advantage of our proposed offloading strategy becomes more apparent.

The results in Table 5 show the average energy consumption of the three methods at different vehicle densities. The following analyzes how the average energy consumption of the three offloading strategies changes with the vehicle density per unit road segment.

Table 5. Average energy consumption at different vehicle densities (number per unit road section)

Group | Method | 2 | 4 | 6 | 8 | 10 |
---|---|---|---|---|---|---|
Type1 | Proposed | 6.54 | 10.45 | 14.68 | 18.80 | 23.67 |
 | Zhang et al. [9] | 7.65 | 12.43 | 16.87 | 20.45 | 27.48 |
 | Du et al. [11] | 8.21 | 13.56 | 17.80 | 22.42 | 28.94 |
Type2 | Proposed | 4.42 | 6.04 | 8.68 | 12.64 | 16.82 |
 | Zhang et al. [9] | 4.87 | 6.98 | 9.76 | 13.90 | 18.03 |
 | Du et al. [11] | 5.04 | 7.33 | 10.34 | 14.63 | 19.25 |
Type3 | Proposed | 9.45 | 13.55 | 19.84 | 25.80 | 31.64 |
 | Zhang et al. [9] | 12.53 | 15.76 | 23.89 | 27.69 | 34.73 |
 | Du et al. [11] | 12.23 | 16.88 | 26.86 | 28.36 | 35.67 |
Type4 | Proposed | 0.85 | 1.56 | 2.03 | 2.66 | 3.02 |
 | Zhang et al. [9] | 0.89 | 1.90 | 2.12 | 2.87 | 3.31 |
 | Du et al. [11] | 1.02 | 1.95 | 2.21 | 2.97 | 3.45 |
Type5 | Proposed | 1.23 | 2.32 | 3.34 | 4.98 | 6.32 |
 | Zhang et al. [9] | 1.36 | 3.03 | 3.90 | 5.38 | 7.76 |
 | Du et al. [11] | 1.31 | 2.88 | 4.02 | 5.66 | 8.03 |

It can be seen from Table 5 that the total system energy consumption decreases as the delay constraint range increases. This is because, as the delay constraint range increases, more and more users choose local computing, and the greater the delay constraint, the more energy local computing saves. In addition, when the delay constraint range is small, there are many delay-sensitive users; for these users, more wireless resources are occupied during offloading, so the co-channel interference caused by frequent reuse becomes more severe, which increases the system energy consumption. Thus, the total energy consumption of the system decreases as the delay constraint range increases.

Fig. 4 illustrates the change of average time with vehicle speed for the five computing tasks. The selected vehicle density is 4 (per RSU segment).

For normal computing tasks, the proposed predictive offloading algorithm based on the MEC load status is not significantly different from the offloading strategy in [9], and both are superior to the offloading strategy in [11]. Because this task type is computationally light, our proposed strategy outperforms the offloading strategy in [9] mainly when the vehicle speed is high and many RSU coverage sections are passed during this time.

For resource-intensive computing tasks, when the vehicle speed is between 80 and 120 km/hr, the proposed strategy shows greatly improved performance over the first two strategies. When the vehicle speed is high, [TeX:] $$x_{i}=\left[\frac{v_{i}}{D} \times\left(\frac{d_{i}}{R_{i}}+\frac{c_{i}}{c_{m}}-\frac{D-X}{v_{i}}\right)\right]$$ increases, so more MEC servers are available for offloading, and a large amount of MEC server queuing time is saved. For delay-sensitive computing tasks, as shown in the figure, the three strategies show little time difference, and most tasks are executed locally. For calculation-intensive computing tasks, our proposed offloading strategy reduces the average time compared to the other two strategies; furthermore, as the vehicle speed increases, the growth in time becomes less apparent. For data-intensive computing tasks, when the vehicle speed is high, the average time is also greatly reduced.
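The evaluation of [TeX:] $$x_{i}$$ can be sketched as follows. This is an illustrative sketch under two assumptions not stated explicitly in the text: the outer bracket denotes a ceiling (rounding up to a whole number of RSU sections, floored at zero), and the speed [TeX:] $$v_{i}$$ is supplied in m/s. The function name is ours.

```python
import math

def rsu_sections_passed(v_i: float, D: float, d_i: float, R_i: float,
                        c_i: float, c_m: float, X: float) -> int:
    """Number of RSU coverage sections x_i the vehicle crosses before the
    result returns: ceil( v_i/D * (d_i/R_i + c_i/c_m - (D - X)/v_i) ),
    floored at 0 when the result is ready within the current section.
    v_i in m/s, D and X in meters, the remaining terms are times in seconds.
    """
    t = d_i / R_i + c_i / c_m - (D - X) / v_i  # upload + compute - time left in section
    return max(0, math.ceil(v_i / D * t))

# Hypothetical example: a calculation-intensive F4 task (d_i = 100, c_i = 500)
# with R_i = 100, c_m = 50, D = 100 m, X = 50 m, at 100 km/hr ≈ 27.78 m/s.
print(rsu_sections_passed(27.78, 100, 100, 100, 500, 50, 50))  # -> 3
```

Consistent with the discussion above, a higher `v_i` enlarges the factor `v_i / D` and hence the number of candidate MEC servers ahead of the vehicle.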

The results in Table 6 show the average energy consumption of the three methods at different vehicle speeds. The following analyzes how the average energy consumption of the three offloading strategies changes with the vehicle speed per unit road segment.

As can be seen from Table 6, the total energy consumption of the system decreases as the delay constraint range increases. The reason is that, as the delay constraint range increases, more and more users choose local computing, and the greater the delay constraint, the more energy local computing saves. In addition, when the delay constraint range is small, there are many delay-sensitive users; for these users, the large amount of wireless resources allocated during offloading makes co-channel interference from frequent reuse more severe, which increases the system energy consumption. Therefore, the total system energy consumption decreases with the increase in the delay constraint range.

Table 6. Average energy consumption at different vehicle speeds (km/hr)

Group | Method | 40 | 60 | 80 | 100 | 120 |
---|---|---|---|---|---|---|
Type1 | Proposed | 6.03 | 8.21 | 10.68 | 13.87 | 17.84 |
 | Zhang et al. [9] | 7.12 | 9.74 | 11.89 | 16.48 | 18.95 |
 | Du et al. [11] | 7.25 | 9.52 | 14.56 | 17.64 | 20.45 |
Type2 | Proposed | 3.89 | 5.35 | 8.95 | 12.84 | 17.48 |
 | Zhang et al. [9] | 4.31 | 6.78 | 9.78 | 14.78 | 19.48 |
 | Du et al. [11] | 5.08 | 7.02 | 10.37 | 16.37 | 21.05 |
Type3 | Proposed | 8.43 | 13.56 | 17.89 | 24.23 | 30.63 |
 | Zhang et al. [9] | 9.84 | 15.32 | 20.85 | 26.68 | 33.79 |
 | Du et al. [11] | 10.28 | 16.08 | 21.36 | 28.45 | 35.02 |
Type4 | Proposed | 1.35 | 2.53 | 3.13 | 4.66 | 6.26 |
 | Zhang et al. [9] | 1.66 | 3.08 | 3.85 | 5.67 | 7.06 |
 | Du et al. [11] | 1.91 | 3.23 | 4.09 | 5.94 | 7.83 |
Type5 | Proposed | 1.53 | 2.26 | 3.23 | 3.96 | 4.82 |
 | Zhang et al. [9] | 1.88 | 2.99 | 1.32 | 4.87 | 5.81 |
 | Du et al. [11] | 2.04 | 3.25 | 4.27 | 5.67 | 6.35 |

Building on two existing offloading strategies, this paper proposes a predictive offloading algorithm based on the MEC load status. Predictability in our study refers to estimating the task uploading and calculation time, uploading the calculation input data via V2V communication, and returning the calculation results. When the vehicle speed is high or computing tasks are heavy, multiple RSU coverage sections are passed during this period; by selecting, according to the MEC status information, the MEC server with the least waiting time for offloading, the data backhaul time of the multi-hop RSU backhaul link is saved. When the vehicle density is high or the vehicle speed is fast, the waiting time on the MEC server is also saved. In this paper, a simple linear model is employed to discuss delay and energy consumption, whether in a simple analysis of delay alone or a comprehensive analysis of delay and energy efficiency. In fact, user satisfaction, being a subjective feeling, cannot be described by such a simple linear model. User satisfaction models of delay and energy consumption can be introduced in future work; such research is more in line with users' subjective feelings and is therefore more applicable.

- 1 Q. Liu, D. Xu, B. Jiang, and Y. Ren, "Prescribed-performance-based adaptive control for hybrid energy storage systems of battery and supercapacitor in electric vehicles," *International Journal of Innovative Computing, Information and Control*, vol. 16, no. 2, pp. 571-584, 2020.
- 2 W. Xu, S. Wang, S. Yan, and J. He, "An efficient wideband spectrum sensing algorithm for unmanned aerial vehicle communication networks," *IEEE Internet of Things Journal*, vol. 6, no. 2, pp. 1768-1780, 2019.
- 3 R. Hu, H. M. Zhao, and Y. Wu, "The methods of big data fusion and semantic collision detection in Internet of Thing," *Cluster Computing*, vol. 22, no. 4, pp. 8007-8015, 2019.
- 4 K. Zhang, S. Leng, Y. He, S. Maharjan, and Y. Zhang, "Mobile edge computing and networking for green and low-latency Internet of Things," *IEEE Communications Magazine*, vol. 56, no. 5, pp. 39-45, 2018. doi: 10.1109/MCOM.2018.1700882
- 5 K. Zhang, Y. Mao, S. Leng, S. Maharjan, and Y. Zhang, "Optimal delay constrained offloading for vehicular edge computing networks," in *Proceedings of 2017 IEEE International Conference on Communications (ICC)*, Paris, France, 2017, pp. 1-6.
- 6 K. Zhang, Y. Mao, S. Leng, S. Maharjan, and Y. Zhang, "Optimal delay constrained offloading for vehicular edge computing networks," in *Proceedings of 2017 IEEE International Conference on Communications (ICC)*, Paris, France, 2017, pp. 1-6.
- 7 Q. Liu, Z. Su, and Y. Hui, "Computation offloading scheme to improve QoE in vehicular networks with mobile edge computing," in *Proceedings of 2018 10th International Conference on Wireless Communications and Signal Processing (WCSP)*, Hangzhou, China, 2018, pp. 1-5.
- 8 C. M. Huang, M. S. Chiang, D. T. Dao, W. L. Su, S. Xu, and H. Zhou, "V2V data offloading for cellular network based on the software defined network (SDN) inside mobile edge computing (MEC) architecture," *IEEE Access*, vol. 6, pp. 17741-17755, 2018.
- 9 H. Zhang, Q. Luan, J. Zhu, and F. Li, "Task offloading and resource allocation in vehicle heterogeneous networks with MEC," *Chinese Journal on Internet of Things*, vol. 2, no. 3, pp. 36-43, 2018.
- 10 G. Qiao, S. Leng, K. Zhang, and Y. He, "Collaborative task offloading in vehicular edge multi-access networks," *IEEE Communications Magazine*, vol. 56, no. 8, pp. 48-54, 2018. doi: 10.1109/MCOM.2018.1701130
- 11 J. Du, F. R. Yu, X. Chu, J. Feng, and G. Lu, "Computation offloading and resource allocation in vehicular networks based on dual-side cost minimization," *IEEE Transactions on Vehicular Technology*, vol. 68, no. 2, pp. 1079-1092, 2018.
- 12 K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-edge computing for vehicular networks: a promising network paradigm with predictive off-loading," *IEEE Vehicular Technology Magazine*, vol. 12, no. 2, pp. 36-44, 2017.
- 13 A. P. Miettinen and J. K. Nurminen, "Energy efficiency of mobile clients in cloud computing," in *Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud)*, Boston, MA, 2010.
- 14 J. Zhang, X. Hu, Z. Ning, E. C. H. Ngai, L. Zhou, J. Wei, J. Cheng, and B. Hu, "Energy-latency tradeoff for energy-aware offloading in mobile edge computing networks," *IEEE Internet of Things Journal*, vol. 5, no. 4, pp. 2633-2645, 2018. doi: 10.1109/JIOT.2017.2786343