Web applications behind nginx: signing trusted headers with HMACs


In your typical web server setup, it is not unusual to find this pattern: a front server (e.g. nginx, Apache) takes in and routes all the HTTP traffic, while a bunch of more specific web applications actually generate the responses. In Apache, this is usually achieved using mod_proxy or mod_proxy_uwsgi, while nginx offers ngx_http_proxy_module or ngx_http_uwsgi_module, to cite only a couple.

One common pitfall in this scenario is that, by detaching the app from the original HTTP/TCP transaction, a lot of metadata about the request is lost. The typical example is the client's IP address, which will be replaced by that of the front server (when proxying). Obviously, each of the modules cited above comes with settings that allow request headers and TCP connection information to be passed over.

A problematic scenario

Let's say you've gone with nginx and its HTTP proxy module. Sweet. You quickly work around the client IP issue with the usual bit of configuration:

proxy_set_header X-Real-IP $remote_addr;

This sets the X-Real-IP header for your backend application, which can now perform all sorts of operations with the client's IP. Let's say one of these is an access restriction on your app's admin panel: you only want to allow certain subnets to connect, say 172.16.2.0/24. In your app's logic, you'll enforce restrictions based on the X-Real-IP header.
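By way of illustration, here is a minimal sketch of what that check might look like on the app side, using Python's standard ipaddress module (the subnet and the header-handling details are assumptions for the example, not part of the original setup):

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list for the admin panel
ADMIN_SUBNET = ip_network("172.16.2.0/24")

def is_admin_allowed(headers: dict) -> bool:
    """Return True if the X-Real-IP header falls inside the admin subnet."""
    real_ip = headers.get("X-Real-IP", "")
    try:
        return ip_address(real_ip) in ADMIN_SUBNET
    except ValueError:  # missing or malformed header
        return False
```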

Now, what happens if a client sends a request and explicitly sets X-Real-IP?

curl -H "X-Real-IP: 172.16.2.1" http://example.com/admin

... well, nothing: nginx will overwrite that header. But what if...

  • The nginx configuration gets messed up?
  • You switch to Apache and forget all about this?
  • Your backend application binds to 0.0.0.0 instead of 127.0.0.1 and can therefore be reached directly, without going through nginx?

That last one is arguably much easier to overlook, even though cloud firewalls will usually catch it if you don't. Nonetheless, while those scenarios are common and "easily" avoided, they do raise one concern: the backend application is trusting that header blindly. It has no certainty that it was indeed set by the trusted nginx instance, and is basically offloading (part of) its authentication process.

Signing request headers

Usually, when two applications need a way to trust each other, the solution boils down to signing. In our scenario, if nginx and the app had a shared symmetric key lying around, the former could sign the contents of the X-Real-IP header and set the signature in another header, say X-Real-IP-Signature. The backend app could then verify the signature and reject the request if things don't add up.
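As a minimal sketch of the idea, assuming Python on the app side and a hypothetical hex-encoded shared key, signing and verifying a header value boils down to a few lines with the standard hmac module:

```python
import hmac

# Hypothetical shared secret; in practice, load it from a protected file or env var
SHARED_KEY = bytes.fromhex("b46fcc41635e6b0c8f134a0c427cea91")

def sign_header(value: str) -> str:
    """Compute the hex-encoded HMAC-SHA256 of a header value."""
    return hmac.new(SHARED_KEY, value.encode("utf-8"), "sha256").hexdigest()

def verify_header(value: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_header(value), signature)
```

Note the constant-time comparison: a plain == on signatures can leak timing information.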

Turns out: this can be done! Well... it can with nginx. A quick read suggests that it is also possible with HAProxy using its hmac converter, but I was not able to find an easy approach for Apache.

Because IP addresses are boring (who does IP-based authentication these days?) this next example will be about passing TLS client certificates instead. Since nginx usually takes care of TLS termination, it is the only component which can verify the client's claim over a certificate. The backend app, which receives the certificate afterwards, has no way to tell, since it did not handle the TLS handshake. Similarly to our earlier scenario, clients could therefore pass a public certificate in the trusted header, without ever having to prove that they own the associated key.

Here's the plan:

  • nginx will receive the client certificate and quite conveniently make it available in configuration files via the $ssl_client_escaped_cert variable. As the name suggests, it will have been deprived of all possibly pesky characters (URL encoded).
  • This certificate will be written to the trusted X-Client-Cert header.
  • An HMAC-SHA256 signature of that value will also be computed using our secret key and set into the X-Client-Cert-Signature header.
  • The app will receive both headers, recompute the HMAC for X-Client-Cert and match it against the value in X-Client-Cert-Signature. If the values differ, it will immediately reply with HTTP 400.

Adding signing capabilities to nginx

Now, I said it was possible; I didn't say it was easy and straightforward. As it so happens, your standard nginx distribution does not come with the tools required to compute an HMAC. It is, however, available as part of a module: set-misc-nginx-module. As you can see from that webpage, the module is developed by the people over at OpenResty, so if you're already using their bundled nginx, you're golden: everything's already in the box!

For the rest of us peasants, it's time to compile. You will need:

  • A copy of the nginx source code. You won't need to run that recompiled nginx, but if you want to use your current install, make sure to grab exactly the same version. Let's call the path to the extracted source code $NGINX_PATH.
  • The nginx development kit, simply extract the latest release somewhere. Let's call that $NDK_PATH.
  • The set-misc-nginx-module; again, grab the latest release. Let's call its path $MOD_PATH.

If you want to keep running your existing nginx binary (say, the one from your distribution's repositories), log into your server and fetch the command line that was originally used to compile your nginx: nginx -V. Remove all occurrences of --add-module and --add-dynamic-module, as you probably don't have the paths they refer to, then append the result to the ./configure line below.

Next: configure and build nginx along with its modules:

$ cd $NGINX_PATH
$ ./configure --with-http_ssl_module --with-compat --add-dynamic-module=$NDK_PATH --add-dynamic-module=$MOD_PATH
$ make

When you're done (nginx should compile pretty quickly), you'll find the interesting .so files in the objs directory:

  • objs/ndk_http_module.so
  • objs/ngx_http_set_misc_module.so

Copy those files onto your server, for example under a newly-created /usr/local/lib/nginx directory. Next, edit your nginx configuration (usually /etc/nginx/nginx.conf) and add the following lines at the top:

load_module /usr/local/lib/nginx/ndk_http_module.so;
load_module /usr/local/lib/nginx/ngx_http_set_misc_module.so;

Finally, restart your nginx server. At this stage, two issues can occur:

  • You get "module [...] is not binary compatible" and nginx doesn't restart: you have compiled the module using the wrong nginx version (nginx -v) or without the original command line parameters (nginx -V).
  • You get "Exec format error" instead: the machine you compiled on and the one running nginx do not run on the same processor architecture. A typical example is compiling on Intel for a Raspberry Pi (which is ARM). You should recompile on the target server, or try your hands at cross-compiling (good luck with that).

Signing a trusted header

Now that we can sign, let's actually do it. First: the key. There's a lot one could write about how to generate a secret key; let's not. A good size for HMAC-SHA256 is 64 bytes, and openssl rand can get you cryptographically secure pseudorandom bytes:

$ openssl rand -hex 64
b46fcc41635e6b0c8f134a0c427cea9186ccb6260a85167225a451fda93f3c2766f99394c0d438d7e750162e67e64231f3277d89b3b0c4851e2bf5381126667c
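If you'd rather stay in Python, the standard secrets module gives you an equivalent one-liner (just an alternative; the openssl command above works fine):

```python
import secrets

# Equivalent of `openssl rand -hex 64`: 64 cryptographically secure
# random bytes, hex-encoded into a 128-character string
key_hex = secrets.token_hex(64)
print(key_hex)
```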

Store that key in a root-owned, chmod 600 file, say /etc/nginx/signing_key.conf:

set $signing_key_hex "b46fcc41635e6b0c8f134a0c427cea9186ccb6260a85167225a451fda93f3c2766f99394c0d438d7e750162e67e64231f3277d89b3b0c4851e2bf5381126667c";

Next, in your main location configuration block, add the signing logic:

location / {
    [...]

    include /etc/nginx/signing_key.conf;
    set_decode_hex $signing_key $signing_key_hex;

    set_formatted_local_time $signing_nonce_ts "%s";
    set $signing_nonce "$request_id:$signing_nonce_ts";
    set $nonce_and_cert "$signing_nonce:$ssl_client_escaped_cert";

    set_hmac_sha256 $cert_signature $signing_key $nonce_and_cert;
    set_encode_hex $cert_signature $cert_signature;

    proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
    proxy_set_header X-Client-Cert-Nonce $signing_nonce;
    proxy_set_header X-Client-Cert-Signature $cert_signature;

    [...]
}

Here's a breakdown:

  1. We include the key variable from the file created earlier
  2. The key being hex-encoded (2 chars per byte), we start by decoding it to get the actual key bytes
  3. A nonce is generated using the current UNIX timestamp and the unique request ID
  4. The HMAC is computed over the nonce and the certificate, using the secret key
  5. The certificate, nonce and signature are sent over to the backend app
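To sanity-check the configuration, the whole pipeline above can be reproduced in a few lines of Python; every input value below is a placeholder standing in for the corresponding nginx variable:

```python
import hmac

# All values below are placeholders standing in for nginx variables
signing_key_hex = "b46fcc41635e6b0c8f134a0c427cea9186ccb6260a85167225a451fda93f3c27"
signing_key = bytes.fromhex(signing_key_hex)          # set_decode_hex
request_id = "444535f9378a3dfa1b8604bc9e05a303"       # $request_id
timestamp = "1700000000"                              # set_formatted_local_time "%s"
escaped_cert = "-----BEGIN%20CERTIFICATE-----%0A..."  # $ssl_client_escaped_cert

nonce = f"{request_id}:{timestamp}"                   # $signing_nonce
message = f"{nonce}:{escaped_cert}".encode("utf-8")   # $nonce_and_cert
signature = hmac.new(signing_key, message, "sha256").hexdigest()  # set_hmac_sha256 + set_encode_hex
print(signature)
```

The resulting hex string is exactly what the backend app should recompute when verifying X-Client-Cert-Signature.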

Verifying signatures

Now that the headers are set, the application must verify them. Of course, depending on your stack, this can be done in a myriad of different ways. The flow should be the following:

  1. The app fetches the headers, which have yet to be trusted
  2. It computes and verifies the expected signature from the nonce and certificate headers
  3. If the request ID has been seen before, or the timestamp is too old, the request is discarded
  4. The certificate is then parsed and can be used by the app

For minimal working examples, I tend to go for Python's Flask, which keeps the line count low. The example below also requires the expiringdict and cryptography modules. It defines a single route, /secret, which performs its own authentication logic. Ideally, this process would be inserted as middleware in your application, most likely as a view decorator if we're sticking to Flask.

from flask import Flask, request, abort
from expiringdict import ExpiringDict
from urllib.parse import unquote

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.backends import default_backend

import hmac
import time
import os


# The secret key shared with nginx is in the NGINX_SIGNING_KEY environment variable
SECRET_KEY = bytes.fromhex(os.environ["NGINX_SIGNING_KEY"])

# Requests older than 10s should be dropped
# This limits the history size and therefore the amount of RAM you might use for it
# If you have ~100 requests per second, that's ~1000 entries max, which is fine
# If you have tens of thousands of requests per second... you might want to lower this
# The value only needs to cover the time it takes for your request to go from nginx to the app
# In most environments, that's way below 1s
RID_LIFETIME = 10
RID_MAXSIZE = 2000

# Prepare the Flask app and the request history
app = Flask(__name__)
rid_history = ExpiringDict(max_len=RID_MAXSIZE, max_age_seconds=RID_LIFETIME)


@app.route("/secret")
def secret_page():
    global rid_history

    # Get the to-be-trusted headers from nginx
    encoded_certificate = request.headers.get("X-Client-Cert", "")
    nonce = request.headers.get("X-Client-Cert-Nonce", "")
    signature = request.headers.get("X-Client-Cert-Signature", "")
    if len(encoded_certificate) < 1:
        return "No certificate provided", 496

    # Compute the expected signature from the nonce, certificate and shared secret key
    signature_body = ("%s:%s" % (nonce, encoded_certificate)).encode("utf-8")
    expected_signature = hmac.new(SECRET_KEY, signature_body, "sha256").hexdigest()
    # Compare in constant time to avoid leaking timing information
    if not hmac.compare_digest(signature, expected_signature):
        return "Invalid signature", 400

    # The headers can now be trusted, let's make sure the request is new
    request_id, ts = nonce.split(":")
    if request_id in rid_history or int(ts) < (time.time() - RID_LIFETIME):
        return "Expired or replayed request", 400
    rid_history[request_id] = request_id

    # All good! Let's parse the certificate and say hello!
    certificate = x509.load_pem_x509_certificate(
        unquote(encoded_certificate).encode("utf-8"),
        default_backend()
    )

    # This is where you might want to add actual authentication logic
    # You could for example verify certificate.issuer to make sure it matches a specific CA
    # This is also a good place to look up the certificate in your revocation lists

    cn = certificate.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    return f"Hello, {cn}!", 200

Minimal working example

A MWE can be found as a snippet on my Gitlab account. You'll find the nginx configuration files I've used to prototype this, as well as the small Flask app above. To see it at work, run nginx (port 443) and make the voluntary mistake of exposing your app's port (5000) so that it can be reached directly. Aside from that, you'll need:

  • A key/certificate TLS pair for your server, self signed is fine for testing (in the nginx configuration)
  • The same for the client, to be used in curl

If we talk to either component without a certificate, we get rejected:

$ curl -k https://localhost/secret
No certificate provided
$ curl http://localhost:5000/secret
No certificate provided

Now with a client certificate in the handshake. I've set client.localhost as the CN in my client certificate.

$ curl -k -E clientcert.pem --key clientkey.pem https://localhost/secret
Hello, client.localhost!

Now let's be cheeky... Sending the certificate directly to the app as a header, no handshake:

$ encoded_cert="$(perl -MURI::Escape -e 'print uri_escape(do { local $/; <>});' < clientcert.pem)"
$ curl -H "X-Client-Cert: $encoded_cert" http://localhost:5000/secret
Invalid signature

Okay okay, what if we somehow intercept a nonce and a signature? Let's say we add some extra print calls in the example above and get the values for a successful request through nginx.

$ curl -H "X-Client-Cert: $encoded_cert" \
    -H "X-Client-Cert-Nonce: $nonce" \
    -H "X-Client-Cert-Signature: $signature" \
    http://localhost:5000/secret
Expired or replayed request

Sweet. But what if the attacker gets their hands on the secret key? ... well, then you have bigger issues to deal with right now.

Admittedly, this is a lot of tweaking to assert a level of trust which should already be largely guaranteed by the network configuration. Nonetheless, if you can afford the extra computation on every request, why not? :-)