The continuous development of the Internet has led to an increase in Internet users. Although network services have enhanced efficiency and convenience in many areas of management, various challenges persist. With the rapid growth in the number of Internet users, website traffic has increased exponentially. Moreover, certain events and periods are associated with high website traffic, and some business features cause users to access the business servers simultaneously, often creating a performance bottleneck in server response.
Network service providers typically adopt a cluster strategy based on multiple servers to improve the capacity of web servers. Numerous concurrent access requests from users are first distributed to different servers through load balancers, which ensure that user requests are processed consistently. This method effectively reduces the computing load and data flow of any single server and solves the problem of a single-machine architecture being unable to meet high-concurrency requirements. The choice of load balancer depends on the specific usage scenario. Load balancers differ in working principles and functionality; however, they all fall into hardware and software categories. Hardware equipment includes products from enterprises such as F5, Cisco, and Radware, which offer high-performance load-balancing capabilities and also provide solutions and optimization methods for security, disaster recovery, network monitoring, and other aspects. Hardware equipment exhibits satisfactory load-balancing performance, high efficiency, and good stability; its main deficiency is its high cost. Software equipment includes the Linux Virtual Server (LVS), Nginx, and HAProxy, with which users can configure load-balancing policies according to their requirements at low cost. Compared with hardware equipment, software equipment is both economical and practical; its disadvantage is that it requires self-optimized configuration, and there are relatively few practical application cases at present. Therefore, the representative Nginx software is selected as the research object to explore strategy selection and optimization in a specific application process and to provide further experience for the practical application of software equipment.
This study introduces Nginx into the educational administration system, compares and selects the best Nginx technology strategy, and investigates the optimal configuration of server cluster load balancing at the peak of user access according to the performance of Nginx and the network architecture of the educational administration system.
Nginx is an open-source, lightweight web server that provides high-performance HTTP/HTTPS service. Because of its stability, low resource consumption, high performance, and other characteristics, it has been widely favored by mainstream network service providers in recent years.
Nginx can solve the problem of server load balancing under highly concurrent user requests. Its advantages include high concurrency support, open-source code, and very efficient static-content processing; therefore, it is often used as a load balancer for external website services. Moreover, Nginx consumes few resources when running on a server. After optimization, the number of concurrent connection responses can reach 30,000, which effectively addresses the C10K problem.
As a server, Nginx’s value lies not only in load balancing but also in its reverse-proxy function. Servers in a cluster are prone to load imbalance while responding to requests, which not only underutilizes server resources but also degrades the overall performance of the cluster system. In such cases, Nginx can provide load-allocation strategies and feed web-server responses back to the requesting users, serving as a reverse proxy. A reverse-proxy load-balancing diagram is shown in Fig. 1.
When a user initiates an access request, Nginx allocates the web servers and forwards their responses. In this arrangement, the server IPs are hidden, and the client cannot obtain specific web-server information. This effectively prevents an external network from maliciously attacking the cluster servers, guarantees the normal service function of the system, adds security protection at the front end of the system, and provides a barrier for system security.
Architecture of the Nginx reverse proxy.
2.1 Nginx Working Strategy
To maximize the performance of the cluster system, when users send requests to the web servers, Nginx is typically configured with an appropriate balancing strategy to forward the user requests to the back-end node servers. Multiple servers in the cluster then share the network requests, and the server nodes work evenly, which expands the server bandwidth, reduces the average network response time, and improves network flexibility. Redundancy is also provided to ensure that the business can still run normally even when some of its servers are unavailable. When a server node in the cluster fails, Nginx can take the failed node offline by detecting the node's status, ensuring high availability of the server nodes in the cluster system.
The Nginx load-balancing function is mainly realized through its built-in load-balancing strategies, which include the following.
1) Polling strategy
The node servers in the cluster have the same response level, and the system assigns client requests to different servers in the cluster in chronological order. If a server fails, the Nginx service automatically takes that server node offline. This is the default policy and is applicable when the configurations of the servers in the cluster differ little and the service requests are stateless.
2) Weight strategy
Nginx assigns tasks to the node servers in the cluster according to their weights. The default weight is 1, and Nginx assigns user requests to the corresponding node servers in proportion to the weight values: a larger weight indicates a higher allocation probability. This policy is suitable when the hardware configurations of the node servers in the cluster differ considerably.
3) IP-hash strategy
Nginx hashes the client IP address and, according to the hash value, assigns the corresponding web server in the cluster to serve the user. This approach ensures that the same IP address is always assigned to the same web server, guaranteeing session continuity and resolving the problem of sessions spanning multiple servers. This strategy is therefore suitable for stateful services.
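As a sketch of the three strategies above, the corresponding upstream blocks in nginx.conf might look as follows (the upstream names and server addresses are hypothetical):

```nginx
# Polling (round-robin): the default strategy
upstream backend_rr {
    server 192.168.1.11;
    server 192.168.1.12;
}

# Weight strategy: a larger weight receives proportionally more requests
upstream backend_weight {
    server 192.168.1.11 weight=3;  # receives roughly 3x the requests
    server 192.168.1.12 weight=1;
}

# IP-hash strategy: the same client IP always reaches the same server
upstream backend_iphash {
    ip_hash;
    server 192.168.1.11;
    server 192.168.1.12;
}
```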
Nginx can act as a front-end cache server, caching responses to front-end requests and improving the performance of the web servers. It keeps a local copy of content that users have recently accessed; when the data are accessed again within a period of time, Nginx does not need to contact the backend, thereby reducing network congestion and data-transmission delay and improving user access speed. Nginx can also be used as a forward proxy server: the client sends a request destined for the target server through Nginx, and after the target server responds, the response is returned to the client through the Nginx forward proxy, which helps protect the client's data security and personal privacy.
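A minimal sketch of the caching behavior described above (the cache path, zone name, and upstream name are hypothetical):

```nginx
# Cache zone on disk: 10 MB of keys in memory, entries evicted after 60 min of inactivity
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache static_cache;           # serve the local copy if one exists
        proxy_cache_valid 200 302 10m;      # keep successful responses for 10 minutes
        proxy_pass http://backend_servers;  # only contact the backend on a cache miss
    }
}
```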
3. Characteristics of the Educational Administration System and Analysis of Nginx Deployment
3.1 Characteristics of the Educational Administration System
The system network architecture operates in B/S mode. Once users are assigned roles by the system, they can log into the system through authentication and initiate requests to the web server through a browser. After authentication, the server generates a session ID for the client and returns it; the client receives the session ID and saves it in a cookie. On subsequent visits, the client presents the session ID to the server. When the server receives a request from a client, it first checks for a session ID. If none exists, the server creates a new session ID and repeats the above process; otherwise, the server looks up the session ID in its session store and retrieves the associated information. A client session flowchart is shown in Fig. 2.
Client session flowchart.
3.2 Analysis of Nginx Deployment
Establishing a server cluster ensures the service stability of the system during peak hours and satisfies the system's session requirements. If the IP-hash strategy is used, a hash of the client IP address binds users with the same IP address to a fixed node server in the cluster, which meets the session requirements of the educational administration system. However, this method routes all users from the same LAN (sharing one public IP address) to a single server, and access aggregates during peak hours, which seriously degrades the server response time and system throughput under highly concurrent requests.
The server cluster introduces Nginx to achieve load balancing, and the node servers in the cluster must synchronize session content, that is, share session content among the node servers in the cluster.
The Remote Dictionary Server (Redis) service mode is introduced to facilitate session-content sharing. Redis is a distributed cache system that stores key-value objects. During operation, all stored data are loaded into memory, where operations on the cached data are completed with extremely high response speed and throughput. Redis saves the sessions, and the node servers in the cluster share sessions through Redis distributed storage to ensure that user requests complete normally. The system topology is shown in Fig. 3.
Nginx plus the Redis load balancing mode.
4. Application of Nginx Deployment and Configuration
4.1 Web Server Configuration and Load Balancing Policy Settings
1) Web server configuration
In the Tomcat software of the web server, the following configuration is added to the context.xml file in the conf directory.
Here, host is the Redis server address, port is the Redis service port number, and database is the index of the Redis database in which the sessions are stored. maxInactiveInterval indicates that the session expiration time is 1,800 s.
The three files, commons-pool2-2.2.jar, jedis-2.7.2.jar, and tomcat-redis-session-manage-tomcat7.jar, are added to the lib directory in the Tomcat software.
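The configuration itself is not reproduced in the text. A sketch of the context.xml fragment described above, following the conventions of the tomcat-redis-session-manager project (the Redis address is hypothetical; the attribute values mirror those stated in the text):

```xml
<!-- conf/context.xml: store Tomcat sessions in Redis -->
<Context>
  <Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
  <Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
           host="192.168.1.20"
           port="6379"
           database="0"
           maxInactiveInterval="1800" />
</Context>
```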
2) The Nginx load-balancing policy settings
In the Nginx server, an upstream block named jwServer is defined in the nginx.conf file; the default service port of the web servers in the cluster is 80, the maximum number of failures (max_fails) is set to 3, and the failure timeout (fail_timeout) is set to 15 seconds. If requests to a server fail three times, that server is paused for 15 seconds.
A reverse proxy is implemented with proxy_pass in the location block, which forwards client requests to the upstream server group defined above.
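A sketch of the nginx.conf settings described above (the server addresses are hypothetical; the upstream name and the max_fails/fail_timeout values are those stated in the text):

```nginx
# Upstream group jwServer: cluster web servers on port 80
upstream jwServer {
    server 192.168.1.11:80 max_fails=3 fail_timeout=15s;
    server 192.168.1.12:80 max_fails=3 fail_timeout=15s;
    server 192.168.1.13:80 max_fails=3 fail_timeout=15s;
}

server {
    listen 80;
    location / {
        proxy_pass http://jwServer;  # forward client requests to the upstream group
    }
}
```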
In the configuration of the print server, according to the deployment characteristics of the educational administration system, a fixed web server is set up to process the print service. The print server location is set to cjPrint in Nginx, and its external access address is http://(webip).
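The print-service routing might be sketched as follows (the print server's address is hypothetical; the location name is from the text):

```nginx
# Route print requests to a single fixed web server
location /cjPrint {
    proxy_pass http://192.168.1.14;  # dedicated print server
}
```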
4.2 Nginx Security Policy Settings
Users make access requests through the browser, and data transmission is completed over the HTTP protocol. Because HTTP has no encryption mechanism, it has deficiencies such as no verification of the communicating parties and an inability to detect impersonation. Nginx can use its rewrite module to redirect HTTP URLs to the HTTPS protocol (HTTP combined with SSL), which has the following advantages: (1) the security features of HTTPS encrypt the data, preventing the leakage of sensitive information; (2) third-party identity authentication prevents the system from being accessed by unauthorized personnel; and (3) malicious tampering of user data during network transmission is prevented, ensuring data integrity. After a secure communication channel is established with SSL, HTTP communication is conducted over it, increasing the security of data transmission.
The server module is configured in nginx.conf. The Nginx listening port is 80, server_name indicates the external access domain name, and rewrite is used to redirect the URL to the HTTPS transport protocol.
The SSL protocol and its configuration fields are shown in Table 1.
SSL protocol configuration fields and definitions
To prevent HTTP access from returning error 400, port 443 is configured for SSL, and the simultaneous use of both the HTTP and HTTPS protocols is enabled.
The configuration content is as follows:
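A sketch of the configuration described above, assuming a hypothetical external domain name and certificate paths (the jwServer upstream is the group defined in Section 4.1):

```nginx
server {
    listen 80;
    server_name jw.example.edu;                # hypothetical external domain
    rewrite ^(.*)$ https://$host$1 permanent;  # redirect HTTP requests to HTTPS
}

server {
    listen 443 ssl;                            # HTTPS with SSL on port 443
    server_name jw.example.edu;
    ssl_certificate     /etc/nginx/ssl/server.crt;  # hypothetical certificate paths
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        proxy_pass http://jwServer;            # forward decrypted requests to the cluster
    }
}
```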
5. Experimental Verification
5.1 Web Client Virtual Machine Configuration
The experiment utilizes a system consisting of three servers running Linux, each with the following attributes: CPU, Intel Xeon E5-2630 v3; clock frequency, 2.40 GHz; memory, 16 GB; and hard disk, 300 GB. One Redis server is also used.
5.2 Test Process
In the experimental setup, the operating system is Windows 10 Professional Edition, Apache 2.4 is employed as the test software, and the simulated load ranges from 1,000 to 10,000 concurrent users in increments of 1,000. The objective of the experiment is to assess the performance variation of Nginx under highly concurrent requests.
The command line is entered in the Windows PowerShell as an administrator:
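The exact command is not reproduced in the text. Assuming the ApacheBench (ab) tool bundled with Apache 2.4, one round at the lowest concurrency level might look like this (the target address is hypothetical):

```shell
# 1,000 requests issued by 1,000 concurrent clients against the Nginx front end
.\ab.exe -n 1000 -c 1000 http://192.168.1.10/
```

The -n and -c values would then be increased in steps of 1,000 up to 10,000 for the remaining rounds.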
5.3 Statistical Data
The results obtained from the tests are shown in Tables 2 and 3. The comparison results are shown in Fig. 4.
5.4 Experimental Results
The pressure-test experiment reveals that, under high user concurrency and without an Nginx proxy, the performance of the web server in the educational administration system fluctuates significantly. During the stress test, within the range of 1,000–10,000 user requests, the error message “The timeout specified has expired” appears three times, at concurrency levels of 4,000, 7,000, and 10,000, indicating that a single web server cannot fully guarantee system access in a high-concurrency state. With the Nginx proxy, the system balances user requests across the web servers in the cluster; the entire educational administration system produces no errors when 1,000–10,000 users access it, and the system remains stable in the high-concurrency access test.
The test experiment also reveals that Nginx and the node servers form a server cluster, and both the IP-hash policy and the Nginx plus Redis mode can meet the system's client-access requirements, that is, maintain the session state. However, IP hashing suffers from aggregation when clients access through a LAN, so it is not suitable for LAN proxy access. Although the Nginx + Redis mode adds a Redis hardware server, the configuration requirements are not high. The experimental comparison shows that the mode with the added Redis server completes requests in less time than the IP-hash strategy. With high concurrency of up to 10,000 requests, the client request time fluctuates only slightly; therefore, owing to its relatively stable performance, this mode can be applied to educational administration systems.
System response statistics table (IP hash strategy)
System response statistics table (session sharing strategy)
Response time comparison of different Nginx strategies.
In big-data applications, the functions of information management systems are becoming increasingly comprehensive. With the increase in the number of users, the problem of system stability during peak access must be addressed. A load-balancing approach should leverage the technology's advantages and analyze the characteristics of the system to meet its specific needs, ensuring a rational allocation of system resources, appropriate application, and successful integration of the technology with the system requirements. Combining Nginx with web servers to build a server cluster can further optimize the service functions, ensure fast and stable access to the educational administration system, and solve the problem of uneven system load under highly concurrent requests, thereby improving system stability. Simultaneously, the Nginx reverse-proxy function conceals the node servers, preventing network attacks from reaching target information and improving the security of system services. This study focuses on the characteristics of the system, compares and analyzes different load-balancing strategies, and achieves an optimal configuration and reasonable application.