Why does my nginx reverse proxy have poor concurrency performance?

Problem description:

The front-end (proxy) server and the origin server have the following hardware configuration; both run CentOS 7:

CPU: 2 × Intel Xeon E5-2696 v4 (88 threads)
Memory: 64 GB DDR4
Disk: 500 GB SSD (about 450 MB/s sequential read/write)

Both the front-end and back-end servers run nginx, and each performs well in standalone concurrency tests. After the front-end reverse proxy is connected to the back end, a normal request completes in about 200 ms end to end. However, once concurrency exceeds 20, at least one or two requests take more than 1.5 seconds, and the higher the concurrency, the larger the proportion of slow requests.

The network quality between the front-end server and the origin server is very good, and the origin itself can basically be ruled out: when the site is served through a CDN provider and a single CDN node fetches from the origin, response times do not increase even under high concurrency.

I have tried optimizing both the nginx configuration and the Linux kernel parameters, making many attempts, but with no effect. Has anyone run into a similar problem? Can you give me some advice?

Solution A:

The back end must use long (keep-alive) connections, and you must use a named upstream: define the upstream in the http block, reference it with proxy_pass, and enable keepalive on the upstream (by default nginx speaks HTTP/1.0 to the upstream and sends `Connection: close`). Otherwise every request to the internal application creates a new TCP connection, which consumes a large number of connections and ephemeral ports.

# Decide what Connection header to send upstream: keep the connection
# alive for ordinary requests, but pass the client's value through for
# protocol upgrades (e.g. WebSocket).
map $http_upgrade $fwd_http_connection {
    "" "keep-alive";
    "h2c" "keep-alive";
    default $http_connection;
}

# Forward the Upgrade header, except for h2c, which is not forwarded.
map $http_upgrade $fwd_http_upgrade {
    "h2c" "";
    default $http_upgrade;
}

upstream local-apps {
    keepalive 128;            # idle keep-alive connections cached per worker
    keepalive_requests 1000;  # recycle a connection after this many requests
    keepalive_time 3600s;     # maximum total lifetime of one connection
    keepalive_timeout 180s;   # close idle connections after this long

    server 127.0.0.1:xxxx;
}

server {
    xxxx...
    location / {
        xxxx...
        proxy_pass http://local-apps;
        proxy_http_version 1.1;  # upstream keepalive requires HTTP/1.1
        proxy_set_header Upgrade $fwd_http_upgrade;
        proxy_set_header Connection $fwd_http_connection;
    }
}
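To check whether the upstream connections are actually being reused, one option is to log the upstream timing variables. This is a sketch under the assumption that you can add a log_format to the http block; the log path is hypothetical. When a keep-alive connection is reused, $upstream_connect_time is reported as 0.000, while a fresh TCP handshake shows a non-zero connect time.

```nginx
# Verification sketch: log upstream timings to see connection reuse.
log_format upstream_timing '$remote_addr "$request" $status '
                           'req=$request_time '
                           'up_connect=$upstream_connect_time '
                           'up_resp=$upstream_response_time';

access_log /var/log/nginx/upstream_timing.log upstream_timing;
```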

Solution B:

HTTP/1.0, HTTP/1.1, and HTTP/2 have different keep-alive rules. I chose to clear the Connection header sent to the upstream, and the problem was solved.
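A minimal sketch of this approach, assuming a hypothetical backend on 127.0.0.1:8080: clearing the Connection header (together with HTTP/1.1 to the upstream) lets nginx keep upstream connections alive regardless of what the client sent.

```nginx
upstream backend {
    server 127.0.0.1:8080;  # hypothetical backend address
    keepalive 64;           # cache idle upstream connections per worker
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # clear the client's Connection header
    }
}
```

Note that this simpler form drops any Upgrade handling, so it is only suitable when the back end does not serve WebSocket or other upgraded protocols.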

