Nginx, Let's Encrypt, Kong and how they make life easy

Feb 16, 2017 14:53 · 1094 words · 6 minute read

This blog post addresses several topics that have been bothering me over the last days and weeks:

  • How do I add new SSL certificates to my server as comfortably as possible, without causing downtime for the blog?
  • How do I add new services to my server, expose them to the internet with automatic HTTPS enabled, and protect them as easily as possible?

My Setup

The Nginx Setup

So, talking about the first point, the downtime was really troubling me. Every time I wanted to add a new subdomain to my Nginx, I had to stop it, start the Certbot container, wait for the certificate and restart Nginx when this was done. I thought I could do better, so I started building a setup with docker-compose. The nginx image is pretty standard: it mounts a custom configuration and two shared volumes, and is connected to a dedicated network that I will explain later on:

nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf/:/etc/nginx/conf.d/:ro
      - www_letsencrypt:/var/www/letsencrypt:ro
      - etc_letsencrypt:/etc/letsencrypt:ro
    networks:
      - outside

So my idea here was that Nginx and Certbot share the volume etc_letsencrypt for the certificates and www_letsencrypt for the ACME challenge (more on that below). As you can see, all volume mounts of the Nginx are readonly, so no attacker will be able to modify any of my certificates or configurations in case they gain access to Nginx.

Now to the Certbot part:

  letsencrypt:
    image: quay.io/letsencrypt/letsencrypt:latest
    command: certonly --agree-tos -n -m $LE_MAIL --webroot -w /www --expand $DOMAIN_ARGS
    volumes:
      - etc_letsencrypt:/etc/letsencrypt
      - www_letsencrypt:/www

To automate the certificate generation, you have to add several parameters in the command section:

  • "certonly" - generate or renew any certificate that is given as a parameter
  • "--agree-tos" - so that you don't have to confirm the ToS manually
  • "-n" - non-interactive mode
  • "-m" - the registration mail address; in my case I store it in an environment variable in the .env file
  • "--webroot -w /www" - use the Certbot webroot technique, i.e. generate the challenge tokens in /www, which is mounted to /var/www/letsencrypt in the nginx container
  • "--expand" - expand already existing certificates
  • "$DOMAIN_ARGS" - an environment variable that I store in .env next to the docker-compose file. For example:
    DOMAIN_ARGS=-d domain.com -d sub.domain.com

The last step is the Nginx configuration. Certbot verifies that you are the owner of a domain via a token that it places in the location /.well-known/acme-challenge/ of your webserver. Since I want to redirect everything else to HTTPS anyway, the Nginx configuration looks pretty simple:

server {
    listen 80;

    server_name  _ ;

    location /.well-known/acme-challenge/ {
        root  /var/www/letsencrypt;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

In my case, Nginx is only used for HTTPS termination. It is the reverse proxy for Kong, which is running in a separate container. For sure, Kong could do that job as well, but I like the separation of concerns.
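The HTTPS side is not shown above; as a minimal sketch, such a server block could look like the following (the domain, the certificate paths and the upstream name `kong` are assumptions based on my setup, not verbatim from my config):

```nginx
server {
    listen 443 ssl;
    server_name domain.com;

    # Certificates generated by Certbot, mounted read-only into the container
    ssl_certificate     /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location / {
        # Forward everything to Kong's proxy port over the shared network
        proxy_pass http://kong:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```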

Like that, all certificates for domains in the DOMAIN_ARGS variable will be generated or renewed automatically once I start the letsencrypt container with "docker-compose up letsencrypt". I set up a cronjob that does that every night, so the certificates are renewed automatically before they expire. You can find all of the sources in my Github repository as well.
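As a sketch, such a nightly cronjob could look like this (the directory /opt/stack is a placeholder for wherever the docker-compose file lives, and reloading Nginx afterwards makes it pick up the renewed certificates):

```shell
# Run the renewal every night at 3:00 and reload Nginx afterwards
# (-T disables TTY allocation, which cron does not provide)
0 3 * * * cd /opt/stack && docker-compose up letsencrypt && docker-compose exec -T nginx nginx -s reload
```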

One remark regarding the docker-compose version: you will see in my examples that all my compose files are already in version 3, except the one for nginx. Why is that? Because there is a bug in Docker's Swarm mode which prevents me from retrieving a user's IP address (see here). As I'm aggregating all logs in Elasticsearch and visualizing them with Kibana, this bug meant I could no longer see where my visitors came from. That's why the nginx service is still on compose version 2.

So this is all nice, but how do I actually serve content, like this blog for example?

Playing with the Monkey

As described initially, my goal was to find a way to easily expose new services to the internet while keeping access safe and secure at the same time. That meant:

  • I want HTTPS by default, which is already managed by Nginx above
  • I want to be able to easily add access control
  • Additionally, I want a way to easily aggregate access logs in Elasticsearch, so I can see if this Blog is visited at all

For some time now, people have been talking about Kong. The idea of a central API gateway sounded interesting to me, and the available plugins convinced me to give it a try.

The setup of Kong with Docker is pretty straightforward. Kong allows you to use two databases: Postgres and Cassandra. I tried both, but for my use case Postgres seems more appropriate, as it consumes fewer resources, which is critical on my 4GB VM. The Postgres setup is pretty standard. I'm persisting the Postgres data as I don't want to recreate my APIs with each DB restart.

  db:
    image: postgres:9.4
    environment:
      - "POSTGRES_USER=kong"
      - "POSTGRES_DB=kong"
    volumes:
      - postgres:/var/lib/postgresql/data
    networks:
      - default

Adding Kong is simple as well:

  kong:
    image: kong
    environment:
      - "KONG_DATABASE=postgres"
      - "KONG_PG_HOST=db"
    ports:
      - "127.0.0.1:8001:8001"
    depends_on:
      - db
    networks:
      - outside
      - default

I expose port 8001 (the Kong Admin Port) to localhost, that way I can easily add new APIs or Plugins with curl.

As you can see, Kong is connected to two networks: default and outside. The default network is the one over which Kong communicates with Postgres. The outside network is shared between Kong and Nginx:

networks:
  default:
    driver: overlay
  outside:
    external:
      name: rp
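Since rp is declared as an external network, it has to exist before the stacks are started. Assuming an overlay network in Swarm mode (as the overlay driver above suggests), it could be created once up front like this; the --attachable flag is my assumption for letting standalone containers join it:

```shell
# Create the shared reverse-proxy network once, before starting the stacks
docker network create --driver overlay --attachable rp
```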

Additionally, I added a very nice dashboard to manage Kong:

  dashboard:
    image: pgbi/kong-dashboard
    networks:
      - default

As you might already guess, I will expose this dashboard through Kong itself, so I won't be able to access it before I make the following curl request:

curl -i -X POST \
  --url http://localhost:8001/apis/ \
  --data 'request_host=dashboard.mydomain.com' \
  --data 'upstream_url=http://dashboard:8080'

Once everything went well, I could access my beautiful, HTTPS-encrypted dashboard under the URL of the request_host parameter!

With Kong, I can now protect all my non-public services like Kibana and Grafana with HTTP Basic Auth, OAuth2 or LDAP. I use the TCP Log plugin to send access logs directly to Logstash (which I will explain in a later blog post). It gives me an incredibly easy way to manage my services and add new functionality without causing any downtime. You can find my Kong docker-compose file on Github as well: https://github.com/3r1co/kong .
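As a sketch of how such protection is wired up, enabling Basic Auth on the dashboard API from above could look like this via the admin port (the consumer name and password are placeholders, and I'm assuming the API name defaults to its request_host, as in the Kong versions of that time):

```shell
# Enable the basic-auth plugin on the dashboard API
curl -i -X POST \
  --url http://localhost:8001/apis/dashboard.mydomain.com/plugins \
  --data 'name=basic-auth'

# Create a consumer and give it credentials
curl -i -X POST \
  --url http://localhost:8001/consumers/ \
  --data 'username=admin'

curl -i -X POST \
  --url http://localhost:8001/consumers/admin/basic-auth \
  --data 'username=admin' \
  --data 'password=changeme'
```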

I hope this was interesting for you, see you next time!