Using the UniFi controller on Docker

Following a post on Reddit, I decided to share the config I use to run the UniFi controller. This way, you won't have to purchase a UniFi Cloud Key or hosted controller.

Spoiler alert… I no longer use UniFi hardware, and I’ll share why in a future post.

System Overview

I have two servers:

  • a local server at home, which runs the UniFi controller in a Docker swarm setup;
  • a remote server online, which runs the nginx reverse proxy.

The local server uses firewall rules (via pfSense) to filter incoming requests, so that it only accepts requests from the remote server.

Home server

As explained above, my local server runs the UniFi controller in Docker swarm “mode”. Here is the config:

version: '3'

services:
  unifi:
    image: linuxserver/unifi
    deploy:
      replicas: 1
    hostname: unifi.domain.com
    ports:
      - "3478:3478/udp"   # STUN
      - "10001:10001/udp" # device discovery
      - "8080:8080"       # device/controller communication (inform)
      - "8081:8081"
      - "8443:8443"       # web admin UI
      - "8843:8843"       # guest portal HTTPS
      - "8880:8880"       # guest portal HTTP
      - "6789:6789"       # mobile speed test
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config:/config
    networks:
      default:
        aliases:
          - unifi

As you can see, I expose quite a lot of ports; however, almost all of them are only reachable from my local network.

The only exception is port 8443, which is open to my remote server (via a pfSense firewall rule).
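
The actual rule lives in the pfSense UI, but conceptually it boils down to something like the following iptables sketch, where 203.0.113.10 is a placeholder for the remote server's public IP (the real setup also involves a port forward down to the Docker host):

# allow only the remote server to reach the controller UI, drop everyone else
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 8443 -j ACCEPT
iptables -A INPUT -p tcp --dport 8443 -j DROP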

For more information about the Docker image, please check their official page.
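
For completeness, this is roughly how such a stack gets deployed; it assumes swarm mode is already active on the host and uses "unifi" as an arbitrary stack name:

# enable swarm mode once, if it is not already active
docker swarm init

# deploy (or update) the stack from the compose file above
docker stack deploy -c docker-compose.yml unifi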

Remote server

For the nginx part, I do not use the jwilder image because, as far as I know, it requires ports to be exposed.

I’ll start by showing my container config. This is a docker-compose.yml file:

version: '2'

services:
  nginx:
    image: nginx
    container_name: nginx
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./conf.d:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/nginx/ssl
      - /srv/data/ssl/dhparam.pem:/etc/nginx/cert/dhparam.pem
    restart: unless-stopped
    networks:
      default:
        aliases:
          - nginx

networks:
  default:
    external:
      name: br0

Let’s explain a little bit what it does:

  • I only expose ports 80 and 443;
  • The container (based on the official nginx image) mounts the host’s /etc/letsencrypt directory and DH param file, as well as the conf.d directory (which holds my virtual hosts); generating the DH param file is shown right after this list;
  • I create network aliases and separate networks for my “groups of apps”.
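
The dhparam.pem file mounted above has to exist on the host before the container starts. If you have never generated one, a one-off command like this does it (2048 bits here; pick the size you prefer):

# generate the Diffie-Hellman parameters referenced by the nginx config
openssl dhparam -out /srv/data/ssl/dhparam.pem 2048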

The network part is very important. Remember when I said that I do not want to expose too many ports? Creating separate networks allows me to do exactly that.

I have all my “web” apps on the same network (br0, in this example), which means the reverse proxy can reach any other container by using its alias (even when a container’s IP changes). This relies on Docker’s internal DNS system.
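
Since br0 is declared as an external network in the compose file, it has to be created once on the host before the stack comes up. The second command below is a quick way to confirm that alias resolution works; it assumes a backend container with the alias unifi is attached to the same network:

# create the shared bridge network referenced as "external" in docker-compose.yml
docker network create br0

# from inside the proxy container, aliases resolve through Docker's internal DNS
docker exec nginx getent hosts unifi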

I see two main benefits in not exposing ports when my “backend” containers and my reverse proxy container both run on the same host (see the sketch after this list):

  1. I can use the same “internal” ports on all my backend containers (often 80 or 443);
  2. I only expose 80 and 443 from my remote server to the rest of the world!
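
As an illustration of both points, here is a hedged example of attaching an extra backend by hand; traefik/whoami is just a tiny demo web server and “myapp” is a made-up alias. It listens on its internal port 80, publishes nothing to the outside, and the proxy could still reach it as http://myapp:80:

# start a demo backend on the shared network, reachable only through the proxy
docker run -d --network br0 --network-alias myapp traefik/whoami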

Setting up the nginx virtual host

Now that I have shared the reverse proxy container, let’s see how it connects to my home server.

This is an nginx vhost config which resides in ./conf.d (symlinked from ./conf.d/vhost):

server {
    listen 80;
    server_name unifi.domain.com;
    # redirect all plain HTTP traffic to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/domain.com/privkey.pem;

    ssl_dhparam /etc/nginx/cert/dhparam.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    server_name unifi.domain.com;
    access_log /var/log/nginx/access.log;  # "access_log on;" is not valid nginx syntax; use an explicit path (this is the image's default, sent to stdout)

    location / {
        # the controller serves HTTPS itself on 8443 (self-signed cert)
        proxy_pass https://domain.com:8443;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        port_in_redirect off;
        proxy_connect_timeout 300;
    }
}

As you can see, the proxy_pass directive uses https://domain.com:8443. As I explained previously, domain.com points to my home server, and port 8443 is only reachable from my remote server.

If the UniFi controller were running on the same host, then proxy_pass would point to the network alias of that container, without exposing any port. It would look something like this:

   proxy_pass https://unifi:8443;

Finally, unifi.domain.com points to my remote server, which force-redirects to the httpS URL, using a wildcard cert from Let’s Encrypt.
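
To sanity-check the whole chain from the outside, a couple of requests like these are enough: the first one shows the redirect, and the second one should be answered by the controller through the proxy (typically with a redirect to its /manage interface):

# expect a 301 to the https:// URL
curl -I http://unifi.domain.com

# expect a response coming from the UniFi controller, through the proxy
curl -I https://unifi.domain.com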

Getting it all httpS!

I generated my wildcard certificate thanks to the Let’s Encrypt (certbot) Docker image.
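
The exact command I originally ran is not shown here, but with the same certbot/dns-cloudflare image the initial issuance would look roughly like this (domain.com and the credentials path are placeholders matching the rest of the post):

docker run -it --rm \
  -v /etc/letsencrypt/:/etc/letsencrypt/ \
  certbot/dns-cloudflare certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  --email [email protected] --agree-tos \
  -d 'domain.com' -d '*.domain.com'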

Finally, SSL certs are automagically renewed via a systemd “job” that runs the official Let’s Encrypt container and renews my (wildcard) cert, like below:

docker run -v /etc/letsencrypt/:/etc/letsencrypt/ \
  certbot/dns-cloudflare renew \
  --dns-cloudflare \
  --email [email protected] --agree-tos \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
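
One thing worth noting: the renewal itself does not tell nginx anything. Since the nginx container mounts /etc/letsencrypt straight from the host, the renewed files are immediately visible inside it, but nginx still has to be reloaded to pick them up; the systemd job just needs one extra step after the renew, for example:

# reload nginx inside the container so it re-reads the renewed certificate
docker exec nginx nginx -s reload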