A Distributed Denial‑of‑Service (DDoS) attack is an attempt to make a service, usually a website, unavailable by bombarding it with so much traffic from multiple machines that the server providing the service can no longer function correctly because its resources are exhausted.
Typically, the attacker tries to saturate a system with so many connections and requests that it is no longer able to accept new traffic, or becomes so slow that it is effectively unusable.
Application‑Layer DDoS Attack Characteristics
Application‑layer (Layer 7/HTTP) DDoS attacks are carried out by software programs (bots) that can be tailored to best exploit the vulnerabilities of specific systems. For example, for systems that don’t handle large numbers of concurrent connections well, merely opening a large number of connections and keeping them active by periodically sending a small amount of traffic can exhaust the system’s capacity for new connections. Other attacks can take the form of sending a large number of requests or very large requests. Because these attacks are carried out by bots rather than actual users, the attacker can easily open large numbers of connections and send large numbers of requests very rapidly.
Characteristics of DDoS attacks that can be used to help mitigate them include the following (this is not meant to be an exhaustive list):
- The traffic normally originates from a fixed set of IP addresses, belonging to the machines used to carry out the attack. As a result, each IP address is responsible for many more connections and requests than you would expect from a real user.
Note: It’s important not to assume that this traffic pattern always represents a DDoS attack. The use of forward proxies can also create this pattern, because the forward proxy server’s IP address is used as the client address for requests from all the real clients it serves. However, the number of connections and requests from a forward proxy is typically much lower than in a DDoS attack.
- Because the traffic is generated by bots and is meant to overwhelm the server, the rate of traffic is much higher than a human user can generate.
- The User-Agent header is sometimes set to a non‑standard value.
- The Referer header is sometimes set to a value you can associate with the attack (a logging sketch for capturing these fields follows this list).
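To spot these patterns in practice, it helps to record the relevant fields for every request. The following is a minimal sketch, assuming a custom log format; the format name ddos_watch and the log path are illustrative, not standard:

# Hypothetical log format for spotting per-IP request rates and suspicious
# User-Agent or Referer values; analyze the resulting log offline.
log_format ddos_watch '$remote_addr [$time_local] "$request" $status '
                      '"$http_referer" "$http_user_agent"';

server {
    # Write the extra log alongside the normal access log.
    access_log /var/log/nginx/ddos_watch.log ddos_watch;
    # …
}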
Using NGINX to Fight DDoS Attacks
NGINX and NGINX Plus have a number of features that – in conjunction with the characteristics of a DDoS attack mentioned above – can make them a valuable part of a DDoS attack mitigation solution. These features address a DDoS attack both by regulating the incoming traffic and by controlling the traffic as it is proxied to backend servers.
Inherent Protection of the NGINX Event‑Driven Architecture
NGINX is designed to be a “shock absorber” for your site or application. It has a non‑blocking, event‑driven architecture that copes with huge numbers of requests without a noticeable increase in resource utilization.
New requests from the network do not interrupt NGINX from processing ongoing requests, which means that NGINX has capacity available to apply the techniques described below that protect your site or application from attack.
More information about the underlying architecture is available at Inside NGINX: How We Designed for Performance & Scale.
Limiting the Rate of Requests
You can limit the rate at which NGINX accepts incoming requests to a value typical for real users. For example, you might decide that a real user accessing a login page can only make a request every 2 seconds. You can configure NGINX to allow a single client IP address to attempt to log in only every 2 seconds (equivalent to 30 requests per minute):
limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

server {
    # …
    location /login.html {
        limit_req zone=one;
        # …
    }
}
The limit_req_zone directive configures a shared memory zone called one to store the state of requests for the specified key, in this case the client IP address ($binary_remote_addr). The limit_req directive in the location block for /login.html references the shared memory zone.
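If rejecting every request above the rate is too strict for your traffic, the limit_req directive also accepts burst and nodelay parameters. This is a minimal sketch; the burst value of 5 is illustrative:

location /login.html {
    # Allow short bursts of up to 5 requests above the 30r/m rate;
    # requests beyond the burst are rejected (with status 503 by default).
    limit_req zone=one burst=5 nodelay;
    # …
}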
For a detailed discussion of rate limiting, see Rate Limiting with NGINX on our blog.
Limiting the Number of Connections
You can limit the number of connections that can be opened by a single client IP address, again to a value appropriate for real users. For example, you can allow each client IP address to open no more than 10 connections to the /store area of your website:
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # …
    location /store/ {
        limit_conn addr 10;
        # …
    }
}
The limit_conn_zone directive configures a shared memory zone called addr to store the connection state for the specified key, in this case (as in the previous example) the client IP address, $binary_remote_addr. The limit_conn directive in the location block for /store/ references the shared memory zone and sets a maximum of 10 connections from each client IP address.
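If you want more visibility into rejected connections, the same module also provides the limit_conn_log_level and limit_conn_status directives. A small sketch follows; the chosen log level and status code are illustrative:

location /store/ {
    limit_conn addr 10;
    # Log rejected connections at warn level and return 429 instead of
    # the default 503.
    limit_conn_log_level warn;
    limit_conn_status 429;
    # …
}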
Closing Slow Connections
You can close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible (thus reducing the server’s ability to accept new connections). Slowloris is an example of this type of attack. The client_body_timeout directive controls how long NGINX waits between writes of the client body, and the client_header_timeout directive controls how long NGINX waits between writes of client headers. The default for both directives is 60 seconds. This example configures NGINX to wait no more than 5 seconds between writes from the client for either headers or body:
server {
    client_body_timeout 5s;
    client_header_timeout 5s;
    # …
}
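Related timeouts can also help against clients that read responses slowly. The following sketch uses the standard send_timeout and reset_timedout_connection directives; the values are illustrative assumptions, not recommendations:

server {
    # Close the connection if the client does not accept response data for
    # 10 seconds (measured between successive writes), and free the
    # resources held by timed-out connections immediately.
    send_timeout 10s;
    reset_timedout_connection on;
    # …
}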
Denylisting IP Addresses
If you can identify the client IP addresses being used for an attack, you can denylist them with the deny directive so that NGINX does not accept their connections or requests. For example, if you have determined that the attacks are coming from the address range 123.123.123.0 through 123.123.123.15 (the 123.123.123.0/28 block):
location / {
    deny 123.123.123.0/28;
    # …
}
Or if you have determined that an attack is coming from client IP addresses 123.123.123.3, 123.123.123.5, and 123.123.123.7:
location / {
    deny 123.123.123.3;
    deny 123.123.123.5;
    deny 123.123.123.7;
    # …
}
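For larger or frequently changing denylists, one alternative is to use the standard geo module to map client addresses to a flag and act on it. This is a sketch rather than part of the original examples; the address block repeats the one used above:

geo $blocked {
    default           0;
    123.123.123.0/28  1;
}

server {
    # …
    location / {
        # Reject any client the geo block has flagged.
        if ($blocked) {
            return 403;
        }
        # …
    }
}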
Allowlisting IP Addresses
If access to your website or application is allowed only from one or more specific sets or ranges of client IP addresses, you can use the allow and deny directives together to allow only those addresses to access the site or application. For example, you can restrict access to only addresses in a specific local network:
location / {
    allow 192.168.1.0/24;
    deny all;
    # …
}
Here, the deny all directive blocks all client IP addresses that are not in the range specified by the allow directive.
Using Caching to Smooth Traffic Spikes
You can configure NGINX to absorb much of the traffic spike that results from an attack, by enabling caching and setting certain caching parameters to offload requests from the backend. Some of the helpful settings are listed here, followed by a combined configuration sketch:

- The updating parameter to the proxy_cache_use_stale directive tells NGINX that when it needs to fetch an update of a stale cached object, it should send just one request for the update, and continue to serve the stale object to clients who request it during the time it takes to receive the update from the backend server. When repeated requests for a certain file are part of an attack, this dramatically reduces the number of requests to the backend servers.
- The key defined by the proxy_cache_key directive usually consists of embedded variables (the default key, $scheme$proxy_host$request_uri, has three variables). If the value includes the $query_string variable, then an attack that sends random query strings can cause excessive caching. We recommend that you don’t include the $query_string variable in the key unless you have a particular reason to do so.
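Putting these together, a minimal caching sketch might look like the following. The cache path, the zone name cache_one, and proxying to the website upstream are illustrative assumptions:

proxy_cache_path /var/cache/nginx keys_zone=cache_one:10m;

server {
    # …
    location / {
        proxy_cache cache_one;
        # Send a single refresh request for a stale object and keep serving
        # the stale copy to other clients in the meantime.
        proxy_cache_use_stale updating;
        # Keep $query_string out of the key (this is in fact the default key).
        proxy_cache_key $scheme$proxy_host$request_uri;
        proxy_pass http://website;
    }
}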
Blocking Requests
You can configure NGINX to block several kinds of requests:
- Requests to a specific URL that seems to be targeted
- Requests in which the User-Agent header is set to a value that does not correspond to normal client traffic
- Requests in which the Referer header is set to a value that can be associated with an attack
- Requests in which other headers have values that can be associated with an attack
For example, if you determine that a DDoS attack is targeting the URL /foo.php, you can block all requests for the page:
location /foo.php {
    deny all;
}
Or if you discover that DDoS attack requests have a User-Agent header value of foo or bar, you can block those requests:
location / {
    if ($http_user_agent ~* foo|bar) {
        return 403;
    }
    # …
}
The $http_name variable references an arbitrary request header; the header name is lowercased and any dashes are converted to underscores, so $http_user_agent in the example above corresponds to the User-Agent header. A similar approach can be used with other headers that have values that can be used to identify an attack.
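When you need to match several header values, a map block can be cleaner than a chain of if directives. This is an alternative sketch, not part of the original example; the patterns foo and bar are carried over from above:

map $http_user_agent $blocked_agent {
    default  0;
    ~*foo    1;
    ~*bar    1;
}

server {
    # …
    location / {
        # Reject requests whose User-Agent matched one of the patterns above.
        if ($blocked_agent) {
            return 403;
        }
        # …
    }
}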
Limiting the Connections to Backend Servers
An NGINX instance can usually handle many more simultaneous connections than the backend servers it is load balancing. With NGINX Plus, you can limit the number of connections to each backend server. For example, if you want to limit NGINX Plus to establishing no more than 200 connections to each of the two backend servers in the website upstream group:
upstream website {
    server 192.168.100.1:80 max_conns=200;
    server 192.168.100.2:80 max_conns=200;
    queue 10 timeout=30s;
}
The max_conns parameter applied to each server specifies the maximum number of connections that NGINX Plus opens to it. The queue directive limits the number of requests queued when all the servers in the upstream group have reached their connection limit, and the timeout parameter specifies how long to retain a request in the queue.
Dealing with Range‑Based Attacks
One method of attack is to send a Range header with a very large value, which can cause a buffer overflow. For a discussion of how to use NGINX to mitigate this type of attack in a sample case, see Using NGINX and NGINX Plus to Protect Against CVE‑2015‑1635.
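The linked article walks through the specific mitigation. As a general, hedged sketch, you can inspect the Range header yourself and drop requests that carry implausibly large byte offsets; the pattern and the use of status 444 (which closes the connection without sending a response) are illustrative choices:

server {
    # …
    # Reject requests whose Range header contains a byte offset of 10 or
    # more digits.
    if ($http_range ~ "\d{10,}") {
        return 444;
    }
}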
Handling High Loads
DDoS attacks usually result in a high traffic load. For tips on tuning NGINX and the operating system to allow the system to handle higher loads, see Tuning NGINX for Performance.
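The tuning article covers the details. As a rough, illustrative sketch, the main NGINX knobs for handling more concurrent connections look like this; the values are examples only, not recommendations:

worker_processes auto;          # one worker process per CPU core
worker_rlimit_nofile 65535;     # raise the per-worker file descriptor limit

events {
    worker_connections 10240;   # connections each worker can handle
    multi_accept on;            # accept multiple new connections at once
}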
Identifying a DDoS Attack
So far we have focused on what you can use NGINX to help alleviate the effects of a DDoS attack. But how can NGINX help you spot a DDoS attack? The status module provides detailed metrics about the traffic that is being load balanced to backend servers, which you can use to spot unusual traffic patterns. NGINX Plus comes with a status dashboard web page that graphically depicts the current state of the NGINX Plus system (see the example at demo.nginx.com). The same metrics are also available through an API, which you can use to feed the metrics into custom or third‑party monitoring systems where you can do historical trend analysis to spot abnormal patterns and enable alerting.
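As an illustrative sketch for recent NGINX Plus releases (the port, paths, and allowed network are assumptions), the live activity monitoring API and dashboard can be exposed on a separate, restricted port:

server {
    listen 8080;

    location /api {
        api write=off;          # read-only metrics API
        allow 192.168.1.0/24;   # restrict who can read the metrics
        deny all;
    }

    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}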
Summary
NGINX can be used as a valuable part of a DDoS mitigation solution, and NGINX Plus provides additional features for protecting against DDoS attacks and helping to identify when they are occurring.