1. Introduction
Satellite communication has the advantages of abundant radio frequency (RF) resources, a large coverage area, and low ground interference, and it plays an important role in communication and television broadcasting in remote areas [1]. The sixth-generation (6G) mobile communication network will integrate terrestrial and satellite communication and build an integrated air–ground–space network to achieve truly full coverage over a wider space. The integrated air–ground–space network connects users, aircraft, and various communication platforms on the ground, at sea, in the air, and in deep space through inter-satellite links (ISLs) and satellite–ground links, realizing a high-capacity information network that acquires information accurately, processes it quickly, and transmits it efficiently [2]. As the backbone of this integrated network, satellite networks have advantages over traditional ground networks, such as global coverage, simple access, support for various services, and on-demand bandwidth allocation.
They also play an increasingly important role in global communication, navigation, and positioning; environmental and disaster monitoring; and military applications [3]. Although China’s ground communication network is developing rapidly, it is difficult to cover special environments such as deserts, forests, and oceans. Owing to their seamless global coverage capabilities, satellite networks can provide continuous communication connections in areas where ground networks cannot provide coverage and can provide reliable redundant connections in the event of wireless infrastructure damage in areas already covered by terrestrial networks.
In the construction of satellite communication systems, low-Earth orbit (LEO) satellites have significant advantages over geostationary Earth orbit (GEO) satellites. The round-trip delay of LEO satellites is less than 100 ms, while that of GEO satellites is approximately 600 ms. Thanks to the relatively short propagation distance of LEO satellites, the signal propagation loss is smaller, which is particularly beneficial for the low-power design of small devices, such as Internet of Things (IoT) terminals [4]. In addition, because GEO satellites are synchronized with the Earth's rotation, the relative position between terminals and GEO satellites is fixed, and the satellite–ground communication links are severely restricted by terrain. When there are obstacles in the line of sight between the terminal and satellite, the communication quality drops sharply. By contrast, LEO satellites and terminals are in relative motion, so a communication window exists even if the terminal is near an obstacle [5]. Owing to their late start, satellite systems in China operate under significant frequency resource constraints and often need to support applications such as broadband Internet access and IoT services. Therefore, the coexistence of low- and high-speed services and of delay-sensitive and delay-insensitive services is common. To ensure timely data transmission, ISLs are required to enable inter-satellite data routing. Therefore, designing reasonable routing strategies based on the connectivity provided by ISLs has become a key issue for enabling satellite systems to transmit data efficiently and in a timely manner.
This paper first proposes a traffic prediction method for LEO satellite systems, followed by a dynamic cache allocation routing (DCAR) strategy based on traffic prediction. Currently, ground network traffic prediction models, such as time-series analysis, nonlinear analysis, and artificial intelligence, are widely used. For example, support vector machines (SVMs) [6] have been applied in actual ground network traffic prediction, and the prediction results are applicable to traffic allocation strategies. However, compared with ground networks, satellite networks have limited available resources and time-varying topologies. Thus, traditional ground network traffic prediction algorithms inevitably face challenges in prediction accuracy and operational efficiency if directly applied to satellite networks. Combination models based on neural networks [7–9] have achieved good prediction results owing to their strong fault tolerance, fast parallel computing, and powerful learning ability. However, parameter selection can only rely on experience, and such algorithms have high time complexity, long training times, and slow convergence. Therefore, this paper proposes a gradient boosting regression tree (GBRT)-based method. First, the spatio-temporal correlation of satellite network traffic is identified through correlation coefficients to determine the correlated traffic. Then, the GBRT is trained to obtain smooth parameters, and relatively accurate prediction values are obtained through testing. Second, a DCAR strategy based on traffic prediction is introduced, which can be adaptively adjusted during network operation. This strategy dynamically adjusts the cache length of ISLs based on traffic prediction results and selects the optimal path with the least end-to-end delay and highest cache hit rate.
2. Related Work
The following factors must be considered when designing routing strategies for satellite networks: first, satellite nodes have limited cache and processing capabilities, so routing strategies should not be too complex. Second, the state of ISLs changes frequently. Whether it is link interruption caused by a high bit error rate (BER), link interruption and reconnection caused by relative movement between satellites, or node failure, the network topology will be in an unstable state. Routing strategies must have adaptive rerouting capabilities under changing topologies while considering the signaling and processing overhead required for rebuilding routes [10].
In a collaborative satellite–ground network, both the satellite and the ground can cache content to serve requests more efficiently, reducing the communication load of repeated content transmission over round-trip links. Wu et al. [11] proposed a dual-layer satellite–ground caching scenario in which the ground segment used an information-centric network (ICN) to decouple data from the location where the service was actually provided through the joint optimization of satellite–ground content cache placement, reducing uplink and downlink bandwidth consumption. An ICN decouples data using a location-independent naming method, which can mitigate the inconvenience caused by the mobility of the data source and improve the efficiency of content search and transmission by collecting information on the same file stored on different devices. There have also been studies on multi-satellite distributed caching strategies. Liu et al. [12] considered an LEO satellite constellation network in which multiple satellites jointly serve multiple ground terminals and solved the cache placement problem using a stable matching algorithm. Zhong et al. [13] designed a multi-layer caching network for user–base station communication based on a ground social network, with remote scheduling of uncached content accomplished via satellites.
In research on routing strategies for LEO networks, the initial routing strategies were connection-oriented. With the development of TCP/IP and networking, connectionless routing has attracted more attention [14]. Many satellite network routing strategies focus on masking the dynamic nature of LEO networks and determining the optimal path with minimal end-to-end delay, such as data packet routing based on virtual nodes and shortest path routing based on virtual topology [15]. To achieve more even utilization of ISLs, the idea of alternate backup links was adopted, with source-based and all-node-based alternate link forwarding. To further account for the effect of traffic on routing strategies, routing selection was optimized based on the traffic flow category, thereby satisfying the requirements of different traffic classes [16]. However, both of these strategies require the entire network topology state to be updated in advance every time the routing path is calculated, leading to high signaling overhead. In addition, obtaining an accurate network topology state in a time-varying LEO network is difficult. Inspired by ground ad hoc routing strategies, researchers proposed location-assisted on-demand routing (LAOR) [17]. The goal of LAOR is to reduce the end-to-end delay with minimal signaling overhead. To reduce the signaling overhead of the LAOR strategy under high service request frequencies, an agent-based load balancing routing (ALBR) strategy was proposed, which defined two types of agents: mobile and fixed [18]. Each satellite node deploys both types of agents during its operating period. However, when the mobile agent explores the available path between the source and destination nodes, it introduces additional traffic load and signaling overhead into the network.
Furthermore, although LAOR and ALBR can collect network state information, due to the long propagation delay between satellite nodes, neither strategy can accurately obtain the actual network state.
Li et al. [19] proposed a state-aware and load-balanced (SALB) routing strategy for LEO networks to address load imbalance and node congestion. Their strategy employed queue occupancy grading based on link state estimation and delay weight adjustment, taking into account different scenarios, including load changes and node failures. The routing overhead was significantly reduced through route table resetting and dynamic updates based on the shortest path tree algorithm. Li et al. [20] proposed a semidistributed load balancing routing (SDLBR) algorithm that considers node congestion and link states. Based on the current congestion information, priority was set to respond to congestion, the correct forwarding direction was selected, and routing decisions were made based on real-time node status through a semi-distributed forwarding mechanism.
From the above analysis, it can be seen that existing routing strategies generally select paths based on the local traffic of the current and neighboring satellite nodes, and the scope of signaling interaction is limited. Although this reduces the signaling overhead, the lack of consideration of global state makes it difficult to ensure the global optimality of the routing strategy results. In addition, the cache length of ISLs is fixed and is not dynamically adjusted based on the traffic state.
3. System Modeling
The satellite-to-ground terminal communication process in an LEO satellite communication system is illustrated in Fig. 1. First, the satellite broadcasts to terminals within a certain area on the ground. The terminal initiates a registration request to the satellite using a public channel. After receiving a response from the satellite, the terminal feeds back the registration completion information through a dedicated channel. A communication link is then established through the application and response of the terminal’s location information. Subsequently, based on a series of interactive information, the satellite and terminal select an appropriate transmission mode for information transmission and release the communication link after the transmission is completed.
Fig. 1. LEO satellite communication process.
Significant differences exist between LEO satellite mobile and terrestrial cellular communication systems. In a ground cellular system, handover is caused by a user moving out of a location area. In an LEO satellite mobile communication system, handover is caused not only by the user's movement but also by the high-speed movement of the satellite itself; in particular, the high-speed motion of the LEO satellite relative to the Earth's surface is the main factor affecting handover.
Therefore, users frequently initiate handovers during the communication process. The LEO satellite mobile communication system performs inter-satellite routing, which is jointly completed by ground stations and satellites. Designing routing algorithms, reducing interaction signaling between the ground and satellites, and reducing communication delays are the keys to ISL design; the call access control function is mainly completed on board the satellites. Owing to the limited processing capacity of onboard equipment, strategies related to call access control must be simple and practical.
Fig. 2 shows a system diagram of LEO satellite-to-ground terminal communication. Due to the proximity of LEO satellites to the ground, the Earth cannot be treated as a point.
The communication distance d between the satellite and ground terminal varies with the motion of the satellite and is calculated as follows [21]:

[TeX:] $$d=\sqrt{R_e^2 \sin ^2 \alpha+H_s^2+2 R_e H_s}-R_e \sin \alpha,$$

where [TeX:] $$R_e$$ denotes the radius of the Earth, [TeX:] $$H_s$$ represents the altitude of the satellite orbit, and α is the elevation angle between the satellite and ground terminal. It can be observed that the minimum value of d is achieved when [TeX:] $$\alpha=90^{\circ}$$, and the maximum value is achieved when [TeX:] $$\alpha=0^{\circ}$$. It should be noted that at very low values of α, owing to the obstruction of tall buildings or mountains, line-of-sight propagation between the terminal and satellite is not possible, and multipath transmission dominates, further degrading the performance of the communication link.
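As a quick numerical check, the slant-range geometry described above (minimum distance at α = 90°, maximum at α = 0°) can be sketched as follows under the standard spherical-Earth assumption. The function name and the 670 km altitude (matching the simulation settings later in the paper) are illustrative:

```python
import math

# Slant range d between a ground terminal and an LEO satellite for a
# spherical Earth: R_e is the Earth radius (km), H_s the orbit altitude (km),
# alpha the elevation angle (degrees).
def slant_range_km(R_e=6371.0, H_s=670.0, alpha_deg=30.0):
    a = math.radians(alpha_deg)
    return (math.sqrt(R_e**2 * math.sin(a)**2 + H_s**2 + 2 * R_e * H_s)
            - R_e * math.sin(a))
```

At α = 90° the expression collapses to H_s (the satellite is directly overhead), and it grows monotonically as the elevation angle falls toward 0°.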
In the communication process, the spatio-temporal correlation of the satellite network traffic is very important. We must update the output traffic of each inter-satellite link based on the actual traffic situation and perform reasonable cache allocation.
Fig. 2. LEO satellite communication system.
4. Dynamic Cache Allocation based on GBRT
The proposed method first determines the spatio-temporal correlation of the LEO satellite network traffic using the correlation coefficient to identify the correlated traffic, and then GBRT is employed to train and smooth the parameters and obtain relatively accurate prediction values through testing. Finally, the predicted traffic of each ISL is updated based on the actual traffic conditions, and cache allocation is adjusted based on the updated traffic ratio.
4.1 Spatio-temporal Correlation Traffic
The spatio-temporal correlation of the satellite network traffic is analyzed using the time-delayed Pearson correlation coefficient [22]. Assuming that the traffic load for the i-th satellite is represented as a time series, we have the following:
[TeX:] $$X_i=\left\{x_i(t-n+1), x_i(t-n+2), \ldots, x_i(t)\right\},$$

where [TeX:] $$x_i(t)$$ denotes the latest traffic observation value for the i-th satellite. The time-delayed Pearson correlation between two traffic time series [TeX:] $$X_i \text { and } X_j$$ with a delay of [TeX:] $$d_l$$ is defined as follows:

[TeX:] $$\rho_{i j}\left(d_l\right)=\frac{\sum_{t=1}^n\left(x_i(t)-x_i^{\prime}\right)\left(x_j\left(t-d_l\right)-x_j^{\prime}\right)}{\sqrt{\sum_{t=1}^n\left(x_i(t)-x_i^{\prime}\right)^2 \sum_{t=1}^n\left(x_j\left(t-d_l\right)-x_j^{\prime}\right)^2}},$$
where [TeX:] $$\rho_{i j}\left(d_l\right)$$ is the Pearson correlation coefficient for [TeX:] $$X_j$$ in advance of [TeX:] $$X_i$$ by [TeX:] $$d_l$$ time units, which measures the degree of correlation between the current traffic of the i-th satellite and the historical traffic of the j-th satellite. n is the length of the traffic time series used for comparison, and [TeX:] $$x_i^{\prime} \text { and } x_j^{\prime}$$ are the means of [TeX:] $$X_i \text { and } X_j$$, respectively. When [TeX:] $$\rho_{i j}\left(d_l\right)$$ approaches +1 or -1, there is a strong spatio-temporal correlation between the two time series, whereas when [TeX:] $$\rho_{i j}\left(d_l\right)$$ is approximately 0, there is no spatio-temporal correlation between the two time series.
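The time-delayed Pearson coefficient described above can be computed directly by aligning x_i(t) with x_j(t − d_l) over their overlapping window. A minimal sketch (function and series names are illustrative):

```python
# Time-delayed Pearson correlation rho_ij(d_l) between two traffic series:
# the current traffic of satellite i is compared with the traffic of
# satellite j observed d_l time units earlier.
def delayed_pearson(x_i, x_j, d_l):
    n = len(x_i) - d_l
    a = x_i[d_l:]   # current traffic of satellite i
    b = x_j[:n]     # historical traffic of satellite j, shifted by d_l
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5
```

A series compared with a delayed copy of itself yields a coefficient of 1, and two series moving in opposite directions yield a coefficient near −1, matching the interpretation in the text.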
Using spatio-temporally-related traffic as predictive input eliminates subjectivity and randomness in the selection of input variables while simultaneously providing a more accurate description of traffic patterns, eliminating unnecessary interference, and ultimately enhancing training speed and prediction accuracy.
4.2 Traffic Prediction based on GBRT
The concept of gradient boosting [23] can be regarded as an optimization algorithm based on an error function. GBRT is a machine learning technique for solving classification and regression problems, generating a strong predictive model in the form of an ensemble of weak predictive models, such as decision trees. The core of GBRT lies in computing a basic model in each iteration, with the next computation aimed at reducing the residual of the previous model and building a new model in the direction of the residual reduction gradient. Therefore, by continuously adjusting and optimizing the weights of weak classifiers to form a strong classifier, the loss function can be minimized. To improve the prediction accuracy for the LEO network, a traffic prediction model was designed based on GBRT. Fig. 3 shows a flowchart of the prediction model.
Each basic model is a regression tree (RT) used to correct the residual generated in the previous iteration of calculation. The residual is the difference between the predicted and observed values; the smaller its value, the better the predictive performance of the model. To improve the performance of the model and reduce the residual value, improvements are made in two aspects: the loss function and the input variables. The Huber loss function is introduced to reduce the residual loss of the samples. This is a loss function for robust regression [24], where variable a is expressed as the residual a=y-f(x) as follows:

[TeX:] $$L_\delta(a)= \begin{cases}\frac{1}{2} a^2, & |a| \leq \delta, \\ \delta\left(|a|-\frac{1}{2} \delta\right), & |a|\gt\delta,\end{cases}$$

where δ is a threshold that controls the transition from the quadratic to the linear regime.
Fig. 3. Flowchart of the prediction model.
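For reference, the Huber loss on a residual a = y − f(x) can be written as a small function (a standard formulation; δ marks the switch from the quadratic to the linear regime):

```python
# Huber loss for a residual a = y - f(x): quadratic for |a| <= delta,
# linear beyond, which limits the influence of large outlier residuals.
def huber(a, delta=1.0):
    if abs(a) <= delta:
        return 0.5 * a * a
    return delta * (abs(a) - 0.5 * delta)
```

For small residuals it behaves like squared error, while large residuals are penalized only linearly, which is what makes the fit robust to traffic outliers.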
The spatio-temporally correlated traffic in the LEO system exhibits temporal and spatial characteristics. In the determined area, traffic with a high correlation coefficient with the target satellite is selected as the independent variable, and the traffic of the target satellite is selected as the dependent variable to establish a GBRT-based prediction model with spatio-temporally correlated traffic as the input variable.
Consider a training dataset [TeX:] $$(x, y)=\left\{\left(x_1, y_1\right), \ldots,\left(x_n, y_n\right)\right\},$$ where x is the input variable and y is the corresponding output variable. The number of leaves of each RT is denoted by [TeX:] $$J_m,$$ so the input space is divided into [TeX:] $$J_m$$ disjoint regions [TeX:] $$\left(R_{1 m}, R_{2 m}, \ldots, R_{J_m m}\right),$$ which can be expressed as follows:

[TeX:] $$h_m(x)=\sum_{j=1}^{J_m} b_{j m} I\left(x \in R_{j m}\right),$$

where [TeX:] $$b_{j m}$$ is a constant value for region [TeX:] $$R_{j m},$$ and I(x) is an indicator function that takes the value 1 if [TeX:] $$x \in R_{j m}$$ and 0 otherwise.
The objective function of the prediction model is defined as follows:
[TeX:] $$\tilde{F}(x)=\underset{F(x)}{\arg \min }\, E_{x, y}[L(y, F(x))],$$

where [TeX:] $$\tilde{F}(x)$$ minimizes the expected value of the given loss function [TeX:] $$L(y, F(x))$$ over the dataset.
Let the number of trees be M. The Huber loss function is introduced for fitting, and the model is initialized as a constant function:

[TeX:] $$F_0(x)=\underset{\rho}{\arg \min } \sum_{i=1}^n L\left(y_i, \rho\right).$$
The model iterates in the direction of gradient descent to reduce the residuals. The GBRT model fits the residual as an RT [TeX:] $$h_m(x).$$ To correct the residual of the previous RT, a new RT is added at each step, and the model is updated as follows:
[TeX:] $$F_m(x)=F_{m-1}(x)+\rho_m h_m(x),$$

where [TeX:] $$\rho_m$$ denotes the step size of gradient descent. The GBRT constructs a new model in the direction of gradient descent and continuously updates the model by minimizing the expected value of the loss function until the model eventually stabilizes.
The model has three key parameters: number of trees M, learning efficiency μ, and depth of trees D. In general, as the number of trees increases, training errors can be reduced. However, if the number of trees is too large, the generalization ability of the model will be significantly weakened, and the predictive performance of the model will decrease due to overfitting. Therefore, it is necessary to optimize the number of model trees to minimize prediction errors; overfitting can also be avoided by controlling the number of trees. In addition, the gradient boosting algorithm introduces a shrinkage factor, which gradually approaches the optimal result in small steps to avoid overfitting caused by approaching the result too quickly. The model is represented as follows:
[TeX:] $$F_m(x)=F_{m-1}(x)+\mu \rho_m h_m(x),$$

where the parameter [TeX:] $$0\lt \mu \leq 1$$ is the weight reduction coefficient of each weak classifier. GBRT with a low learning rate can effectively improve the generalization ability of the model; however, a very low learning rate requires more trees for the model to converge, which may reduce the prediction accuracy. The depth of the trees, i.e., the complexity of the basic model trees, is also a factor that affects prediction accuracy. The proposed model improves the prediction accuracy by continuously optimizing the combination of M, μ, and D.
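The boosting loop described in this section can be sketched in miniature. The following toy implementation is our own simplification, not the paper's code: it uses one-split regression stumps as the weak learners and squared-error residuals instead of the Huber loss, but it shows the roles of the tree count M and the learning rate μ:

```python
# Minimal gradient boosting for 1-D regression with single-split stumps.
def fit_stump(x, r):
    """Best single-threshold stump for inputs x and residuals r."""
    best = None
    for s in sorted(set(x)):
        left = [r[i] for i in range(len(x)) if x[i] <= s]
        right = [r[i] for i in range(len(x)) if x[i] > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((v - lm) ** 2 for v in left)
               + sum((v - rm) ** 2 for v in right))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda t, s=s, lm=lm, rm=rm: lm if t <= s else rm

def gbrt_fit(x, y, M=50, mu=0.1):
    f0 = sum(y) / len(y)            # initialize with a constant model
    trees, pred = [], [f0] * len(y)
    for _ in range(M):
        r = [y[i] - pred[i] for i in range(len(y))]  # residuals
        h = fit_stump(x, r)                          # fit residuals
        trees.append(h)
        pred = [pred[i] + mu * h(x[i]) for i in range(len(y))]
    return lambda t: f0 + mu * sum(h(t) for h in trees)
```

Each iteration fits a new weak learner to the current residuals and adds it with weight μ, so the ensemble descends the residual gradient step by step, mirroring the update rule in the text.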
4.3 Cache Allocation based on Traffic Prediction
The basic idea of the DCAR strategy is to perform cache allocation based on the predicted traffic results during the network initialization phase according to the traffic ratio of each satellite link. During the system operation phase, the output traffic of each satellite link is updated according to the actual traffic situation, and the cache allocation is adjusted based on the updated traffic ratio.
4.3.1 Initialization
During the initialization phase, each satellite node is in a light-load state, and congestion does not occur. Therefore, each satellite node calculates the routing path based on the traffic prediction results when forwarding data packets. Taking satellite node [TeX:] $$V_i$$ as an example, the service traffic from [TeX:] $$V_i$$ to another node [TeX:] $$V_j$$, represented by [TeX:] $$J_{i j},$$ is composed of two parts: traffic from ground nodes and traffic flowing in from neighboring satellite nodes. Therefore, the traffic [TeX:] $$J_{i j}$$ from [TeX:] $$V_i$$ to all other [TeX:] $$V_j$$ can be obtained, and this traffic is forwarded to neighboring nodes through the four satellite links of [TeX:] $$V_i$$, with a forwarding traffic size of [TeX:] $$J_{i j}^k$$, where k = 1,2 represent the upper and lower ISLs in the same orbit plane, respectively, and k = 3,4 represent the left and right ISLs between different orbit planes, respectively. After obtaining [TeX:] $$J_{i j}^k$$, the total traffic [TeX:] $$J_i^k$$ on the four satellite links of [TeX:] $$V_i$$ can be represented as follows:

[TeX:] $$J_i^k=\sum_j J_{i j}^k, \quad k=1,2,3,4.$$
If the total cache resources of [TeX:] $$V_i$$ are defined as Q, then the cache resources [TeX:] $$Q_k$$ allocated to each of the four satellite links can be represented as follows:

[TeX:] $$Q_k=Q \cdot \frac{J_i^k}{\sum_{k=1}^4 J_i^k}.$$
It can be seen that [TeX:] $$\sum Q_k=Q,$$ which means that the total cache resources allocated to the four satellite links [TeX:] $$Q_k$$ sum up to the total cache resources Q.
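The proportional allocation at initialization can be sketched as follows (function name and traffic values are illustrative):

```python
# Proportional cache allocation at initialization: each of the four ISLs of
# a node receives a share of the total cache Q equal to its share of the
# predicted outgoing traffic.
def allocate_cache(Q, link_traffic):
    total = sum(link_traffic)
    return [Q * j / total for j in link_traffic]
```

By construction the per-link allocations sum back to Q, matching the conservation property noted above.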
4.3.2 Dynamic cache allocation
During the system operation phase, [TeX:] $$V_i$$ periodically monitors the output traffic of the four satellite links, with a monitoring cycle of [TeX:] $$t_\delta$$. Assuming that in the last monitoring cycle, i.e., [TeX:] $$\left(t-t_\delta, t\right),$$ the output traffic of the ISL is [TeX:] $$\hat{J}_i^k$$ and the average output traffic in the time interval [TeX:] $$\left(0, t-t_\delta\right)$$ is [TeX:] $$\bar{J}_i^k,$$ then the output traffic can be updated as follows:

[TeX:] $$J_i^k=(1-\lambda) \bar{J}_i^k+\lambda \hat{J}_i^k,$$
where [TeX:] $$0 \lt \lambda \lt 1$$ denotes the forgetting factor. The output traffic should be a long-term average value, and a sharp drop should be avoided. Therefore, when λ is too small, the change in output traffic will be very slow, making it difficult to represent the long-term average value and affecting the efficiency of cache resource allocation. When λ is too large, the change in output traffic will follow short-term traffic fluctuations, making it difficult to achieve accurate traffic prediction.
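One plausible form of the forgetting-factor update consistent with the description above is an exponential moving average; the exact weighting below is our assumption:

```python
# Exponential update of the per-link output traffic with forgetting factor
# lam: a small lam tracks the long-term average slowly, while a large lam
# follows short-term fluctuations.
def update_traffic(avg_prev, observed_last_cycle, lam=0.2):
    return (1 - lam) * avg_prev + lam * observed_last_cycle
```

With λ = 0.2, a previous average of 10 and a last-cycle observation of 20 yield an updated value of 12, i.e., the estimate moves only part of the way toward the new observation.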
In addition to traffic monitoring, [TeX:] $$V_i$$ periodically monitors the queue usage of the four satellite links, with a monitoring cycle of [TeX:] $$t_\delta$$. The cache queue occupancy rate of each satellite link obtained in each monitoring cycle is defined as [TeX:] $$Q_k^r,$$ and two threshold values are set: [TeX:] $$Q_k^a \text { and } Q_k^b.$$ When [TeX:] $$Q_k^r \leq Q_k^a,$$ the state of the satellite link is defined as idle. When [TeX:] $$Q_k^a \lt Q_k^r \leq Q_k^b,$$ the state of the satellite link is defined as normal. When [TeX:] $$Q_k^r\gt Q_k^b,$$ the state of the satellite link is defined as congested. To avoid packet loss in the cache queue, the value of [TeX:] $$Q_k^b$$ must satisfy the following:

[TeX:] $$\left(1-Q_k^b\right) Q_k \tilde{P} \geq B t_d,$$
where B is the capacity of the ISL, [TeX:] $$t_d$$ is the one-hop propagation delay of the ISL, and [TeX:] $$\tilde{P}$$ is the packet size. Therefore, [TeX:] $$Q_k^b$$ can be obtained as follows:

[TeX:] $$Q_k^b=1-\frac{B t_d}{Q_k \tilde{P}}.$$
The value of [TeX:] $$Q_k^a$$ should ensure sufficient time to recalculate the routing path before the link enters the congested state. If the time consumed to calculate the routing path is [TeX:] $$t_{RP},$$ then the value of [TeX:] $$Q_k^a$$ should satisfy the following:

[TeX:] $$Q_k^a \leq Q_k^b-\frac{\hat{J}_i^k t_{R P}}{Q_k \tilde{P}}.$$
It can be seen that [TeX:] $$Q_k^a \text { and } Q_k^b$$ can be calculated once [TeX:] $$\hat{J}_i^k$$ is determined. In the proposed method, the satellite node [TeX:] $$V_i$$ dynamically adjusts the cache allocation based on the status of the four cache queues in each monitoring cycle [TeX:] $$t_\delta.$$ Taking link k as an example, let its cache queue occupancy rate in the current monitoring cycle be [TeX:] $$Q_k^r$$ and the total length of its queue be [TeX:] $$Q_k$$. If the link is in the idle state, the length of its cache queue is adjusted to [TeX:] $$\tilde{Q}_k=Q_k-\left(Q_k-Q_k^r Q_k\right) / 2,$$ and the cache resources released by the link are [TeX:] $$Q_k^{\prime}=Q_k-\tilde{Q}_k=\left(Q_k-Q_k^r Q_k\right) / 2$$; if the link is in the normal or congested state, no cache resources are released, i.e., [TeX:] $$Q_k^{\prime}=0 .$$ By traversing all satellite links, the total released cache resources are [TeX:] $$Q_{\text {total }}=\sum Q_k^{\prime},$$ and [TeX:] $$V_i$$ allocates [TeX:] $$Q_{\text {total }}$$ to each congested link in proportion to its cache resource demand. Thus, a dynamic cache allocation strategy based on traffic prediction is realized.
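A sketch of one monitoring cycle of this reallocation rule is given below. The redistribution weight for congested links (queue length times occupancy) is our assumed proxy for "cache resource demand," which the text does not pin down; thresholds and numbers are illustrative:

```python
# One monitoring cycle of the DCAR reallocation rule: idle links (occupancy
# r <= q_a) release half of their unused queue space; the released total is
# redistributed to congested links (r > q_b) in proportion to an assumed
# demand proxy (queue length * occupancy).
def reallocate(queues, occupancy, q_a, q_b):
    congested = [i for i, r in enumerate(occupancy) if r > q_b]
    if not congested:
        return list(queues)          # nothing to redistribute
    released = [(Qk - r * Qk) / 2 if r <= q_a else 0.0
                for Qk, r in zip(queues, occupancy)]
    total = sum(released)
    new_q = [Qk - rel for Qk, rel in zip(queues, released)]
    demand = sum(queues[i] * occupancy[i] for i in congested)
    for i in congested:
        new_q[i] += total * queues[i] * occupancy[i] / demand
    return new_q
```

Because released space is only moved, never created, the total cache across the four links is conserved across cycles.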
5. Experiment and Analysis
An Iridium satellite constellation was constructed using STK, as shown in Fig. 4, and the proposed routing strategy was simulated using the OPNET simulation software [25]. OPNET provides rich statistical parameter collection and analysis functions: it can directly collect commonly used statistical performance parameters at each network level, offers a variety of statistical collection and processing methods, supports the collection of special network parameters through low-level network programming, and can prepare and output simulation reports.

The simulation adopted the network structure of the Iridium system, with an LEO altitude of 670 km, 6 orbit planes, and 11 satellites per orbit plane. Each satellite established ISLs with two neighboring satellites in the same orbit plane (upper and lower) and two neighboring satellites in adjacent orbit planes (left and right), except across the seam between counter-rotating orbit planes. The capacity of the ISLs was set to 50 Mbit/s, and that of the satellite-to-ground links was set to 100 Mbit/s. The one-hop propagation delay [TeX:] $$t_d$$ of the ISLs was set to 20 ms. In the satellite network, data were transmitted over IP, and the data packet size was set to 10 KB. The monitoring period [TeX:] $$t_\delta$$ was set to 30 ms, and the calculation time [TeX:] $$t_{RP}$$ for the routing strategy was set to 5 ms. The total cache queue length Q of each satellite was set to 600 data packets. The simulation used the orbit agent strategy [26], in which each agent collected link status and queuing delay information every 50 seconds, i.e., each satellite node refreshed its routing table every 50 seconds. A total of 165 terminals were used in the simulation. Each terminal's traffic was modeled as an on-off flow with a Pareto distribution, with a shape parameter of 1.2 and an average activation and waiting time of 200 ms.
The simulation lasted for 48 hours and was repeated 10 times, with the average of the 10 runs taken as the final simulation result.
5.1 Evaluation Metrics
Mean absolute percentage error (MAPE) and mean absolute error (MAE) were used to evaluate the performance of the traffic prediction model, with smaller values indicating more accurate predictions:

[TeX:] $$\mathrm{MAPE}=\frac{1}{m} \sum_{i=1}^m\left|\frac{y(i)-y^{\prime}(i)}{y(i)}\right| \times 100 \%,$$

[TeX:] $$\mathrm{MAE}=\frac{1}{m} \sum_{i=1}^m\left|y(i)-y^{\prime}(i)\right|,$$
where [TeX:] $$y(i)$$ denotes the actual traffic value, [TeX:] $$y^{\prime}(i)$$ is the predicted traffic value, and m is the number of test samples.
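Using the standard definitions of these two metrics, a minimal implementation is:

```python
# MAPE (percent) and MAE over m test samples: y holds the actual traffic
# values, y_pred the predicted ones.
def mape(y, y_pred):
    return 100.0 / len(y) * sum(abs(a - b) / a for a, b in zip(y, y_pred))

def mae(y, y_pred):
    return sum(abs(a - b) for a, b in zip(y, y_pred)) / len(y)
```

Note that MAPE divides by the actual value, so it is undefined when any y(i) is zero; traffic loads in this setting are strictly positive, so that case does not arise.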
In addition, we compared the performances of different routing strategies in terms of packet loss rate and average end-to-end delay.
5.2 Results of Traffic Prediction
At time t, the LEO_2_3 satellite was selected as the target satellite [TeX:] $$X_1$$ for traffic prediction. The load of [TeX:] $$X_1$$ at time t is denoted as [TeX:] $$X_1(t)$$. By observing the time series of [TeX:] $$X_1$$'s traffic load during this period, we found that the traffic was relatively high during the daytime and low in the early morning, indicating a periodic variation in satellite network traffic. We selected the area covered by the nine Iridium satellites as the study area. In this area, if the angle between the line connecting a satellite and the ground point directly below [TeX:] $$X_1$$ and the line connecting [TeX:] $$X_1$$ and its own ground point was less than or equal to [TeX:] $$60^{\circ}$$, we collected the satellite network traffic of that satellite over a certain period as the condition for analyzing the spatio-temporal correlation of the satellite network traffic. We first collected the traffic time series of [TeX:] $$X_1$$ in the last 400 minutes and obtained measurements every 60 minutes.
Let S be the satellite sequence number and C be the correlation coefficient with the target satellite. Table 1 lists the Pearson correlation coefficients between [TeX:] $$X_1$$ and the traffic of other satellites. As shown in Table 1, [TeX:] $$X_1$$ has a high spatio-temporal correlation with the traffic of these 32 groups of satellites (correlation coefficient greater than 0.80); among them, 14 groups have correlation coefficients above 0.85, so 0.85 was selected as the threshold. These 14 groups of spatio-temporally correlated traffic were used as input for predicting the traffic of [TeX:] $$X_1$$. The spatio-temporally correlated traffic reflects the cross-correlation and autocorrelation of the satellite network. Using spatio-temporally correlated traffic for prediction can eliminate interference from irrelevant traffic and improve the accuracy of the final results.
Table 2 provides a comparison of the prediction performance of different traffic prediction methods. The SVM-based method used in [6] cannot handle the temporal dynamics and nonlinearity present in LEO traffic data, resulting in poor performance when applied to LEO networks. The methods in [7] and [9] use deep learning models for traffic prediction, which are highly complex and dependent on hyperparameter selection. In addition, the limited amount of training data severely limits the performance of these methods. The proposed method considers the spatio-temporal correlation of satellite network traffic when predicting traffic load. By continuously adjusting the weights of the basic model, the proposed method achieved significantly improved prediction accuracy compared to traditional methods while reducing the training time.
Fig. 5 presents a comparison of the prediction errors of the proposed method and [9] for different numbers of testing samples. The proposed method continuously optimizes the weights of weak learners during the training process of the decision tree, turning them into strong learners and thereby improving the accuracy of the model’s predictions. The prediction error of the proposed method was significantly lower than that of the method in [9], demonstrating the effectiveness of the proposed method.
Table 1. Pearson correlation coefficients between the target satellite and other satellites
Table 2. Traffic prediction comparison
Fig. 5. Prediction error comparison.
5.3 Performance Analysis of Dynamic Cache Routing Strategy
Table 3 lists the packet loss rates of the different strategies under various traffic loads. The table shows that the proposed DCAR strategy successfully reduced the packet loss probability. The methods in [17] and [18] cannot obtain an accurate state of the LEO network because they introduce additional signaling overhead, and their performance deteriorated rapidly as the traffic load increased. The methods in [19] and [20] achieved significant performance improvements through load balancing and node congestion processing. By contrast, the proposed strategy adjusts the buffer queue length according to the traffic load ratio, so that an ISL can provide more space to cache data packets under a high traffic load. As the results in Table 3 show, when the total traffic is high, the packet loss rate increases significantly; this is because the data packets to be transmitted exceed the capacity of the ISL, resulting in congestion at multiple nodes in the network.
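The load-proportional cache adjustment described above can be illustrated with a small sketch; the function below is an assumption of ours, not the paper's implementation, and simply splits a node's total cache among its four ISLs in proportion to the predicted traffic on each link.

```python
def allocate_isl_buffers(total_cache, predicted_loads):
    """Split `total_cache` (in packets) among a node's ISLs in proportion
    to the predicted per-link traffic loads (illustrative sketch)."""
    n = len(predicted_loads)
    total_load = sum(predicted_loads)
    if total_load == 0:
        return [total_cache // n] * n  # no prediction: split evenly
    alloc = [int(total_cache * load / total_load) for load in predicted_loads]
    # rounding down leaves a few packets unassigned; give them to the
    # most heavily loaded link, which benefits most from extra cache
    alloc[predicted_loads.index(max(predicted_loads))] += total_cache - sum(alloc)
    return alloc
```

Under a skewed prediction, the busiest ISL receives the largest queue, which is exactly the behavior credited above with reducing packet loss at high load.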
Table 3. Packet loss rate comparison (unit: %)
Table 4 lists the average end-to-end delays of the different strategies under various total traffic loads. It can be seen that the proposed DCAR strategy improves the average end-to-end delay of data packets. The core idea shared by [19], [20], and the proposed strategy is to reroute data packets when congestion occurs, i.e., to divert them to paths that do not include congested nodes. The advantage of the proposed DCAR strategy is that it provides sufficient cache space through cache queue allocation when the delay of rerouting a data packet exceeds the queuing delay, and it avoids serious deterioration of the queuing delay by changing the routing path when the rerouting delay is less than the queuing delay. Therefore, although the end-to-end delay increases with increasing traffic load, the proposed DCAR strategy achieves the best average end-to-end delay performance.
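The per-packet trade-off just described reduces to a simple comparison; the sketch below is our own hedged rendering of that decision rule, with illustrative names and millisecond units assumed.

```python
def forwarding_decision(queuing_delay_ms, reroute_delay_ms):
    """DCAR-style choice sketched from the text above: hold the packet in
    the (dynamically sized) cache queue when waiting is cheaper than a
    detour, and reroute around the congested node otherwise."""
    if queuing_delay_ms <= reroute_delay_ms:
        return "queue"    # cache space absorbs the burst
    return "reroute"      # a detour path beats waiting in the queue
```

Because the queue option is backed by the dynamically enlarged cache, the strategy avoids both unnecessary detours at moderate load and unbounded queuing delay at high load.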
Table 4. Average end-to-end delay comparison (unit: ms)
As shown in Fig. 6, we compared the total system throughput of the proposed method and the methods in [19] and [20] under different numbers of terminal devices. Fig. 6 plots the number of terminal devices on the horizontal axis and the system throughput on the vertical axis. The system throughput decreased as the number of terminal devices increased. The system throughputs obtained by the methods in [19] and [20] are very close because the two methods have the same total resources and each terminal device occupies independent resources, resulting in small differences in the final results. The proposed method achieved a higher system throughput, demonstrating the effectiveness of using a gradient boosting tree algorithm for traffic prediction and dynamically adjusting the cache queues based on the prediction results.
Fig. 7 shows the business distribution coefficient (BDC) performance of the three strategies under different total traffic loads. A higher BDC indicates a more uniform traffic distribution over the entire system. It can be seen from Fig. 7 that the traffic distribution of the proposed dynamic cache allocation strategy is initially only slightly better than that of [19] and [20]; however, as the traffic load increases, the improvement in the business distribution grows. This is because, when the load is high, the proposed strategy can better balance the queuing and switching of data packets to obtain a higher BDC.
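The exact BDC formula is not reproduced in this section; as an assumed stand-in that captures the same idea of distribution uniformity, Jain's fairness index over per-link loads can be computed as follows (the function name is ours).

```python
def distribution_uniformity(loads):
    """Jain's fairness index over per-link traffic loads: equals 1.0 when
    traffic is spread perfectly evenly across the n links and 1/n when a
    single link carries all traffic. Used here only as an illustrative
    stand-in for the BDC, whose formula is not given in this section."""
    n = len(loads)
    s = sum(loads)
    sq = sum(load * load for load in loads)
    return (s * s) / (n * sq) if sq else 1.0
```

Any metric of this shape rewards spreading traffic away from hot links, which is why load-balancing strategies score higher under heavy load.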
Fig. 6. System throughput comparison.
Fig. 7. Business distribution coefficient.
Now, we discuss practical applications. LEO satellites orbit approximately 750–1,500 km above Earth's surface, just above the atmosphere and below the inner Van Allen radiation belt. LEO satellite networks can reuse the wireless spectrum more efficiently, thereby increasing the communication bandwidth. The round-trip delay between satellite and ground nodes is approximately 15 ms, which meets the network delay requirements of most real-time applications. At the same time, the terminal of a ground user node is comparable in size to an ordinary handheld terminal and consumes less power. For user nodes, the same LEO satellite can provide service for approximately ten minutes at most. The most notable feature of LEO systems is the provision of voice and low-speed data services, with transmission rates of 2.4 kbit/s and 7.2 kbit/s, respectively.
6. Conclusion
To address the data packet routing problem in LEO satellite systems, the spatio-temporal characteristics of the traffic load of each satellite node in an LEO system were analyzed, and a traffic prediction method based on the GBRT algorithm was proposed. Building on this prediction method, a DCAR strategy driven by actual traffic loads was proposed, dividing operation into two stages: initialization and system operation. The cache resources occupied by the four ISLs of each satellite node are dynamically allocated based on the traffic prediction results. Simulation results show that the proposed method can reduce the packet loss rate and end-to-end delay, and it has theoretical value and application prospects in LEO satellite systems.
The round-trip delay between medium-Earth orbit (MEO) satellites and the ground is long: whereas the round-trip delay between ground user nodes and LEO satellites is only several milliseconds, the round-trip delay from ground user nodes to MEO satellite nodes is approximately 100 ms, even without forwarding by LEO satellites. Therefore, the round-trip delay introduced by MEO satellites is greater than that introduced by LEO satellites. As in traditional terrestrial public mobile communication networks, the network needs to obtain the current location and access information of the target user node through the mobility management system only before the communication process between user nodes is established. From the perspective of routing strategies, the dynamic cache routing strategy used in this study is also applicable to MEO satellites.