Implementation of Image Transmission Based on Vehicle-to-Vehicle Communication

Changhao Piao, Xiaoyue Ding, Jia He, Soohyun Jang and Mingjie Liu

Abstract: Weak over-the-horizon perception and blind spots are the main problems of intelligent connected vehicles (ICVs). In this paper, a road condition warning method based on V2V image transmission is proposed to solve them. The encoded road emergency images collected by the ICV are transmitted to the on-board unit (OBU) through Ethernet. The OBU broadcasts the fragmented image information, including the location and clock of the vehicle, to other OBUs. To adapt to the channel quality of V2X communication at different times, the OBU selects the optimal fragment length for processing the image information. Then, according to the position and clock information of the remote vehicles, the OBU of the receiver selects valid messages and decodes the image information, which helps the receiver extend its perceptual field. The experimental results show that our method has an average packet loss rate of 0.5%. The transmission delay is about 51.59 ms in low-speed driving scenarios, which can provide drivers with timely and reliable warnings of road conditions.

Keywords: Internet of Vehicles, Real-Time Image Transmission, Road Condition Warning, V2X

1. Introduction

In recent years, the rapid growth of national car ownership has made travel more convenient, but it has also brought problems such as traffic congestion and traffic accidents. In particular, the casualties and economic losses caused by traffic accidents are severe. Data from the National Bureau of Statistics of the People's Republic of China show that about 62,000 deaths were caused by traffic accidents in 2020, and the direct property losses were as high as 1.346 billion yuan [1]. The development of technologies such as wireless communication and sensor networks has provided solutions to these problems. These technologies are gradually being applied to intelligent transportation systems (ITS), laying the foundation for the development of Internet of Vehicles (IoV) technology [2].

IoV is similar to the Internet of Things (IoT) [3]. The communication modes of IoV include vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P), and its communication equipment consists mainly of on-board units (OBUs) and road side units (RSUs). The data exchanged in IoV mainly include vehicle speed, heading angle, longitude, latitude, traffic light information, road sign information, etc. This wireless communication capability of IoV improves the perception of ICVs: it avoids sensor detection failures caused by external factors such as bad weather or a view blocked by the preceding vehicle. Relevant studies have shown that IoV, which supports information interaction between vehicles, can provide real-time traffic information, entertainment information, and other applications for ICVs and drivers [4-6], effectively safeguarding road safety and traffic efficiency. According to the "Cooperative intelligent transportation system; vehicular communication; application layer specification and data exchange standard" [7], detection methods used by ICVs for road emergencies in over-the-horizon scenarios mainly include the following applications: hazardous location warning (HLW), blind spot warning (BSW), vulnerable road user collision warning (VRUCW), etc.
The warning messages of these application scenarios are all delivered as text.

(1) Hazardous location warning: HLW warns the vehicle or driver when the ICV is traveling on a potentially dangerous road section. It improves the vehicle's ability to perceive dangerous road conditions beyond the horizon and reduces the risk of accidents. The application adopts V2I communication and gives a warning by judging the positional relationship between the vehicle and the dangerous road section from the roadside information (RSI) broadcast by the RSU.

(2) Blind spot warning: BSW alerts the vehicle or driver to a possible collision risk when a remote vehicle (RV) is traveling in the blind spot of the host vehicle. The application adopts V2V communication and gives a warning by judging the relative position of the two ICVs from the longitude, latitude, heading angle, vehicle size, acceleration, and other information of the RVs.

(3) Vulnerable road user collision warning: VRUCW warns the vehicle or driver when there is a risk of collision with surrounding pedestrians, bicycles, and other small vehicles while driving. It helps vehicles or drivers reduce the risk of collision with surrounding pedestrians, improving the driving safety of vehicles on urban roads and the safety of pedestrians. The application adopts V2I or V2P communication and predicts dangerous collisions from the vehicle's own data and the status of surrounding pedestrians.

In all of the above methods, traditional warning messages are delivered in the form of text. After the vehicle recognizes the road condition through its sensors, it broadcasts warning messages to the vehicles behind it as text messages [8]. The messages received by the rear vehicle are thus already-processed warnings, which carries a risk of unreliability. Images can convey more detail than text: through image information, ICVs and drivers can obtain accurate information about the road conditions ahead, including the type of traffic accident, its severity, casualties, and the surrounding environment, and can then take appropriate measures on their own to avoid secondary accidents or aggravated congestion [9,10]. Therefore, this paper proposes a forward road condition detection method based on V2V image transmission. First, the method introduces image information into the traditional road warning message to present the road condition ahead in more detail. Second, the geographic location and clock information are integrated into the warning message, so that a vehicle accepts only the road condition images sent from vehicles ahead of it, filtering out redundant information and improving the efficiency of data reception. This also enables the ICV to obtain the time and location of the road condition and issue a more accurate warning.

2. Image Transmission based on V2V

2.1 Overall Structure

In the autonomous driving scenario, ICVs mainly rely on various sensors for road condition detection, such as millimeter-wave radar, cameras, and LIDAR [11]. Although sensor detection is accurate and well developed, it often cannot give reliable results in extreme weather or when visual information is limited [12].
In these cases, it is necessary to combine the detection results of the sensors with V2X technology to ensure the safe driving of ICVs.

The overall framework of the V2V image transmission scheme proposed in this paper is shown in Fig. 1. It is divided into two parts: the image processing module and the communication module. Fig. 1 depicts a situation in which a sudden dangerous condition occurs on the road, but the view of the rear vehicle is blocked by the vehicle in front, so the rear vehicle cannot issue a danger warning through its sensors in time. In this situation, the RV encodes the collected images and transfers the information through OBU1. After receiving the image coding information from OBU1, OBU2 of the HV decodes it to recover the original image. The HV analyzes the acquired image immediately and gives the corresponding warning. Since the communication method adopted in this paper is PC5, which transmits messages by broadcast, the method in Fig. 1 is also applicable to one-to-many, many-to-one, and many-to-many modes.

In Fig. 2, the flow chart on the left depicts the behavior of the sender. For image segmentation, the sender first sets U bytes as the segmentation unit and computes the total length of the encoded image, denoted P. It then performs division and remainder operations on P to obtain the total number of segments and the length of the last segment; the two cases, depending on whether the remainder is zero, are handled separately. Finally, the sender generates the JSON packets and sends them. The flow chart on the right depicts the behavior of the receiver. First, the JSON string is parsed and the extracted information is stored in variables. Second, the receiver filters out invalid image information using the positional relationship and the timestamps of the HV and the RV. In the decoding process, reassembly of the segmented image starts when the flag equals "Start", at which point the flag bit p is set to 1. A counter n is compared with the seq field in the JSON string to avoid reassembly failures caused by packet loss; a fragment is appended only when p = 1 and n = seq. When a fragment with the flag "End" is received, both p and n are reset to 0, indicating that the last packet has been received and the image can be decoded. Finally, the HV gives the corresponding warning. A minimal sketch of this fragmentation and reassembly logic is given below.
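The following C sketch illustrates the fragmentation and reassembly state machine described above. It is an illustration only: the Fragment structure, the "Mid" flag value, and the helper names are our assumptions and do not reproduce the exact JSON schema used in the experiments (which also carries position and clock fields, see Section 2.2).

```c
#include <stddef.h>
#include <string.h>

#define U 1030  /* fragment length in bytes (the optimal size found in Section 3.1) */

/* Hypothetical parsed form of one JSON packet. */
typedef struct {
    const char *flag;          /* "Start", "Mid", or "End" ("Mid" is an assumed value) */
    int seq;                   /* fragment sequence number */
    const unsigned char *data; /* fragment payload */
    size_t len;                /* payload length, <= U */
} Fragment;

/* Sender side: split an encoded image of P bytes into ceil(P/U) fragments.
 * Assumes P > U, so "Start" and "End" land on different fragments. */
void send_image(const unsigned char *img, size_t P,
                void (*broadcast)(const Fragment *))
{
    size_t full = P / U, last = P % U;     /* division and remainder on P */
    size_t total = full + (last ? 1 : 0);  /* case split on whether last == 0 */
    for (size_t i = 0; i < total; i++) {
        Fragment f;
        f.seq  = (int)i;
        f.data = img + i * U;
        f.len  = (i == total - 1 && last) ? last : U;
        f.flag = (i == 0) ? "Start" : (i == total - 1) ? "End" : "Mid";
        broadcast(&f);                     /* serialize to JSON and send via the OBU */
    }
}

/* Receiver side: append a fragment only when p == 1 and n == seq, as in Fig. 2. */
static int p = 0, n = 0;                   /* in-progress flag and expected seq */
static unsigned char buf[1 << 20];
static size_t used = 0;

void on_fragment(const Fragment *f)
{
    if (strcmp(f->flag, "Start") == 0) { p = 1; n = 0; used = 0; }
    if (p != 1 || n != f->seq) { p = 0; return; }  /* packet lost: abort this image */
    memcpy(buf + used, f->data, f->len);
    used += f->len;
    n++;
    if (strcmp(f->flag, "End") == 0) {
        p = 0; n = 0;
        /* hand (buf, used) to the image decoder and trigger the warning */
    }
}
```

The reset to p = 0 on a sequence mismatch is the fault-tolerance mechanism mentioned in the conclusion: a partially received image is discarded rather than reassembled incorrectly.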
2.2 Location Algorithm

The location information obtained by the RTK in this experiment uses the WGS84 coordinate system, a geocentric space Cartesian coordinate system. The coordinates (B, L, H) under WGS84 can be obtained directly from GPS, where B is the latitude, L is the longitude, and H is the geodetic height, i.e., the height above the WGS84 ellipsoid. The Mercator Projection assumes that the earth is wrapped in a cylinder touching it along the equator; imagining a lamp at the center of the sphere, the figures on the sphere are projected onto the cylinder, which is then unrolled to form the Mercator world map. The positioning algorithms in this paper all operate on coordinates converted from WGS84 to the Mercator Projection. Let (x, y) be the coordinates of a point in the Mercator Projection. The conversion formula is as follows:

(1)
$$x=\frac{L \times 20037508.342789}{180}, \qquad y=\frac{\ln\left(\tan\left(\frac{(90+B)\,\pi}{360}\right)\right)}{\pi / 180} \times \frac{20037508.342789}{180}$$

The positioning idea of this paper is as follows. First, to determine whether the vehicle is on the road and driving forward, the relationship between the vehicle and the points of the road centerline is calculated. Second, two adjacent points are selected from the road centerline point set and converted to the Mercator Projection together with the vehicle's latitude and longitude. The triangle formed by the three converted points is then examined: if it is an acute triangle, the vehicle lies between the two centerline points, and its specific position can be determined. Finally, after performing this calculation for both the HV and the RV, the HV keeps only the messages of RVs that are ahead of it on the same road, and performs the image decoding operation on those messages. A sketch of the coordinate conversion and the acute-triangle test is given below.
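The following C sketch implements formula (1) and the acute-triangle test. It is a minimal illustration assuming the centerline points are already available as WGS84 latitude/longitude pairs; the function names are our own.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define K 20037508.342789  /* scale constant from formula (1), in meters */

typedef struct { double x, y; } Pt;

/* Formula (1): convert WGS84 (B = latitude, L = longitude, in degrees)
 * to Mercator Projection coordinates in meters. */
Pt wgs84_to_mercator(double B, double L)
{
    Pt p;
    p.x = L * K / 180.0;
    p.y = log(tan((90.0 + B) * M_PI / 360.0)) / (M_PI / 180.0) * K / 180.0;
    return p;
}

static double d2(Pt a, Pt b)  /* squared distance between two points */
{
    return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
}

/* Return 1 if triangle (a, b, v) is acute, i.e., the vehicle v lies between
 * the two adjacent centerline points a and b. By the law of cosines, a
 * triangle is acute when every squared side is smaller than the sum of the
 * other two squared sides. */
int between_centerline_points(Pt a, Pt b, Pt v)
{
    double ab = d2(a, b), av = d2(a, v), bv = d2(b, v);
    return (ab < av + bv) && (av < ab + bv) && (bv < ab + av);
}
```

In particular, the angles at a and b being acute places the foot of the perpendicular from the vehicle onto the centerline segment strictly between the two points, which is the geometric property the positioning step relies on.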
2.3 Experimental Equipment Construction

2.3.1 Hardware platform

As shown in Fig. 3, the hardware platform chosen for this experiment is an embedded development board based on the ZTE ZM8350 C-V2X module, an industrial-grade C-V2X wireless communication module in an LGA package based on the LTE-V protocol. The module supports data links in the B46D/B47 bands and runs an embedded Linux operating system. The verification platform is a connected unmanned vehicle designed by us, equipped with CTI inertial navigation. The experimental site is a straight road section on the southern campus of Chongqing University of Posts and Telecommunications, with a lane width of 4 m.

2.3.2 Software platform

The kernel of the embedded development board is Linux 4.14.78, the cross-compilation toolchain is arm-poky-linux-gnueabi-gcc, and the development language is C/C++.

3. Performance Analysis

The experimental results of image transmission are analyzed in terms of transmission delay, packet loss rate, and the accuracy of the positioning algorithm. First, the transmission efficiency of different image fragment sizes is compared and the optimal fragment size is selected. Then, the packet loss rate of image transmission is analyzed at a given distance between the two vehicles. Finally, the experimental vehicle passes the test points at 150 m, 100 m, 50 m, and 0 m at speeds of 10 km/hr, 20 km/hr, and 30 km/hr, and the distance computed by the algorithm is recorded together with the actual distance to obtain the precision of the positioning algorithm.

3.1 Image Transmission Delay

We assume that one frame of image occupies P bytes, that the fragment length is U bytes, and that the transmission delay of a fragment of U bytes is Delay_U. This paper uses formula (2) to measure the transmission efficiency η of a fragment:

(2)
$$\frac{1}{\eta}=\frac{P}{U} \times \text{Delay}_U=\frac{\text{Delay}_U}{U} \times P$$

Formula (2) shows that the smaller the ratio of Delay_U to U, the higher the transmission efficiency of the fragment. In this experiment, by comparing multiple sets of data, five fragment lengths with small transmission delay and large η were selected for analysis: 1500, 1200, 1050, 1030, and 1000 bytes. Their η at each moment was obtained with formula (2); the distribution of Delay_U/U is shown in Fig. 4 and the distribution of the transmission delay in Fig. 5. As Fig. 4 shows, the ratio is randomly distributed, high in the middle and low on both sides, which matches a normal distribution. When the fragment length is 1030 bytes, the normal distribution curve is thin and narrow and its mean μ is the smallest, indicating the highest transmission efficiency; the average ratio is 0.050085 ms/byte. According to normal distribution theory, data points far from the mean are small-probability events, so a more concentrated curve represents a more stable transmission. Therefore, this experiment selects 1030 bytes as the image fragment length to obtain the best transmission efficiency. As Fig. 5 shows, the V2V image transmission delay also exhibits a roughly symmetric distribution, high in the middle and low at both ends, conforming to the normal distribution law. When the fragment size is 1030 bytes, the average transmission delay μ is 51.59 ms; by normal distribution theory, delays far from this mean are small-probability events and almost never occur.

3.2 Image Transmission Packet Loss Rate

To test the packet loss rate of V2V communication, a field "Count" is added to the JSON message of the above experiment. "Count" cycles from 0 to 99 and is incremented by 1 after each message is sent. A total of 3,000 data samples were collected for this experiment. The collated experimental data are plotted in Fig. 6; the packet loss rate of image transmission is 0.5%. As shown in Fig. 6, when there is no packet loss, the points of the heartbeat code form a completely straight line; when a packet is lost, a gap appears. A sketch of this heartbeat check is given below.
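The following C sketch shows how such a cyclic "Count" field can be checked on the receiver side: each jump in the 0–99 cycle counts the packets lost in between, and the ratio of lost to expected packets gives the loss rate. The function and variable names are our own illustration, not the experiment's actual code.

```c
/* Receiver-side packet-loss counter for a heartbeat field cycling 0-99. */
static int expected = -1;            /* -1: no packet seen yet */
static long received = 0, lost = 0;

void on_heartbeat(int count)
{
    if (expected >= 0)
        lost += (count - expected + 100) % 100;  /* steps skipped in the cycle */
    expected = (count + 1) % 100;
    received++;
}

double loss_rate(void)
{
    long total = received + lost;    /* packets that should have arrived */
    return total ? (double)lost / (double)total : 0.0;
}
```

For example, receiving Count values 5, 6, 8 registers one lost packet (the missing 7), and loss_rate() returns 1/3 of a percent of the three expected arrivals after the first.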
3.3 Positioning Algorithm Accuracy

In this experiment, four test points were set at 0 m, 50 m, 100 m, and 150 m from the test starting point. The connected unmanned vehicle passed the test points at 150 m, 100 m, 50 m, and 0 m at speeds of 10 km/hr, 20 km/hr, and 30 km/hr, respectively. The recorded data are shown in Table 1. The analysis shows that the average positioning error is 1.19 m, which meets the requirements of cooperative communication standards and ensures good positioning accuracy at low speed or when stationary. Although positioning accuracy may degrade in high-speed scenarios, it remains within the allowable error range, so the positioning algorithm used in this paper can be applied to various driving scenarios.

4. Conclusion

The forward road condition detection scheme based on V2V image transmission proposed in this paper provides ICVs and drivers with more detailed and accurate road condition information and improves the accuracy of road condition warning. The method sends road condition images through the OBU to the vehicles behind with a low packet loss rate and low transmission delay. When the size of the transferred image exceeds the number of bytes the OBU can send in a single transfer, an optimal segment length is chosen to fragment the image, and a fault-tolerance mechanism ensures that the received fragments are accurately reconstructed into the original image. The vehicle status information integrated into the image message filters out invalid messages for the ICVs and improves the communication efficiency of the IoV.

Biography

Changhao Piao
https://orcid.org/0000-0002-0576-5032
He received his B.E. degree in electrical engineering and automation from Xi'an Jiaotong University in 2001, and his M.S. and Ph.D. degrees from Inha University, South Korea, in 2007. He is currently a professor in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. His research interests include automobile electronics and energy electronics.

Biography

Xiaoyue Ding
https://orcid.org/0000-0001-6358-9594
She received her B.E. degree in Internet of Things engineering from Chongqing University of Posts and Telecommunications in 2020. Since September 2020, she has been an M.S. candidate in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. Her current research interests include the Internet of Vehicles and video transmission.

Biography

Jia He
https://orcid.org/0000-0002-0612-1984
He received his B.E. degree in petroleum engineering from Southwest Petroleum University in 2020. Since September 2020, he has been an M.S. candidate in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. His current research interests include smart transportation.

Biography

Soohyun Jang
https://orcid.org/0000-0003-2852-0318
He received his B.S., M.S., and Ph.D. degrees from the School of Electronics and Information Engineering, Korea Aerospace University, Goyang, Korea, in 2009, 2011, and 2015, respectively. He is currently a principal researcher in the Mobility Platform Research Center, Korea Electronics Technology Institute, Seongnam, Korea. His research interests include signal processing algorithms and VLSI implementation for wireless communication systems.

Biography

Mingjie Liu
https://orcid.org/0000-0003-0464-675X
He received his M.S. degree from Chongqing University of Posts and Telecommunications in 2012 and his Ph.D. degree from Inha University, South Korea, in 2019. He is currently a lecturer in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. His research interests include computer vision and automobile electronics.

References