Adding SSL Certs via CertBot
A few weeks ago I posted about my website configuration using docker and nginx. The one piece I needed to complete for this was adding SSL certificates.
This process proved to be quite a feat during my capstone project in the master's program I was in last year. We had nginx templates to serve different ports depending on whether cert files existed (start_nginx.sh), and another script that would generate a cert for you.
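I don't have those capstone scripts in front of me anymore, but start_nginx.sh did something along these lines. This is a rough sketch from memory; the template names, cert path, and domain are illustrative, not the exact ones we used:

#!/bin/sh
# Sketch of the start_nginx.sh idea: pick an nginx config based on
# whether cert files already exist, then start nginx in the foreground.
# Paths, template names, and the domain below are illustrative.

CERT_DIR=/etc/letsencrypt/live/example.com

if [ -f "$CERT_DIR/fullchain.pem" ] && [ -f "$CERT_DIR/privkey.pem" ]; then
    # Certs exist: use the HTTPS template (443 plus the port 80 redirect)
    cp /templates/https.conf.template /etc/nginx/conf.d/default.conf
else
    # No certs yet: serve plain HTTP on 80 so the ACME challenge can succeed
    cp /templates/http.conf.template /etc/nginx/conf.d/default.conf
fi

exec nginx -g "daemon off;"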
This whole process used a certbot docker container, and long story short, we may have put the cart before the horse with this approach. It seems overly complicated in hindsight, but it did teach me a lot about how docker and certbot work.
Moving back to the original subject, I began working on adding certs to my portfolio and a few other web applications I host, with nginx acting as a reverse proxy to direct the requests:
┌───────────────────────────────────────────────┐
│                      VPS                      │
│  ┌─────────────────────────────────────────┐  │
│  │                 Docker                  │  │
│  │   ┌─────────┐        ┌────────┐         │  │
│  │   │  Nginx  │───▶    │  App1  │         │  │
│  │   │         │───▶    │  App2  │         │  │
│  │   │         │───▶    │  App3  │         │  │
│  │   └─────────┘        └────────┘         │  │
│  └─────────────────────────────────────────┘  │
└───────────────────────────────────────────────┘
                        ▲
                        │
                       User
This diagram from my earlier post shows the initial setup, and it would ultimately be my final setup after much confusion. I will briefly cover a few of my approaches and my failures with them. I decided I would explore options other than certbot to install the certificates; this turned out to be a mistake.
My first approach was to use SWAG. My understanding was that I needed to put a SWAG container between nginx and each application, one per domain.
┌──────────────────────────────────────────────┐
│                     VPS                      │
│  ┌────────────────────────────────────────┐  │
│  │                 Docker                 │  │
│  │  ┌─────────┐     ┌────────────┐        │  │
│  │  │  Nginx  │─▶   │   SWAG1    │─▶ App1 │  │
│  │  │         │     └────────────┘        │  │
│  │  │         │     ┌────────────┐        │  │
│  │  │         │─▶   │   SWAG2    │─▶ App2 │  │
│  │  │         │     └────────────┘        │  │
│  │  │         │     ┌────────────┐        │  │
│  │  │         │─▶   │   SWAG3    │─▶ App3 │  │
│  │  └─────────┘     └────────────┘        │  │
│  └────────────────────────────────────────┘  │
└──────────────────────────────────────────────┘
                       ▲
                       │
                      User
These SWAG containers stand up an nginx instance on port 80 and use it to prove to Let's Encrypt that you actually own the domain. After much back and forth, I got SWAG to generate some certs, but when going to the sites I was put in a redirect loop that would eventually time out.
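For reference, a SWAG service in docker compose looks roughly like the sketch below. This is based on the linuxserver/swag image's documented options rather than my exact compose file, and the domain, email, and paths are placeholders:

swag:
  image: lscr.io/linuxserver/swag:latest
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Etc/UTC
    - URL=example.com          # placeholder domain
    - VALIDATION=http          # HTTP-01 challenge served over port 80
    - EMAIL=you@example.com    # placeholder email for Let's Encrypt
  volumes:
    - ./swag-config:/config    # certs and nginx configs end up in here
  ports:
    - "443:443"
    - "80:80"                  # needed for the HTTP-01 challenge
  restart: unless-stopped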
To this day I am not entirely sure what benefit SWAG provides over the regular certbot container, and nothing in my use case has really justified it either.
Anyways, I eventually grew frustrated troubleshooting the redirect loop. The certs seemed good, and my nginx routing seemed to make sense. I think the certs SWAG generated were not entirely aligned with my domain names, so requests would bounce from http to https, then back to http, and eventually time out.
My next attempt was Cloudflare. Cloudflare always seems to be on BleepingComputer for some kind of vulnerability or problem, but I am interested in using their tunnels, so I figured I'd dip my toes into the shallow end with some SSL certs.
Cloudflare is pretty slick and offers a lot of analytics. I was surprised my portfolio page seems to get 100+ “unique visitors” a day; I am not sure how this data is aggregated, or what the privacy trade-off is.

Anyways, it's extremely easy to add Cloudflare via a DNS record and have them sit in the middle as a proxy. Since Cloudflare receives the initial request, you can simply click a radio button to enable a partial SSL connection (their “Flexible” mode). This encrypts the request from the end user to Cloudflare, but not the request from Cloudflare to my VPS. It seemed pretty nice, but it also felt like a hack and not what I wanted. I toyed around with it for a while, and could also not get “Cloudflare Origin” certs to work; those would have encrypted the last leg of the request.
Anyways, after many failed Cloudflare attempts I decided to go back to certbot. This time I did not use a certbot container; I installed it on the host machine instead. This was much simpler than the approach we used for the capstone project. I downloaded certbot and ran:
certbot certonly --standalone -d bee.engineer
That made a cert in /etc/letsencrypt, and I then mounted the Let's Encrypt folder into the nginx container like so:
volumes:
  - /etc/letsencrypt:/config:ro
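In context, the nginx service in the compose file ends up looking something like this. It's a sketch: the image tag, config path, and build context are examples, though the portfolio service name matches the proxy_pass target in the nginx config below:

services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - /etc/letsencrypt:/config:ro   # certs issued by certbot on the host
    depends_on:
      - portfolio

  portfolio:
    build: ./portfolio                # the Next.js app nginx proxies to
    expose:
      - "3000"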
I then modified nginx's default.conf so that it knew to serve the cert and redirect requests from port 80 to 443:
server {
    listen 80;
    server_name bee.engineer www.bee.engineer;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name bee.engineer www.bee.engineer;

    ssl_certificate /config/live/bee.engineer/fullchain.pem;
    ssl_certificate_key /config/live/bee.engineer/privkey.pem;

    location / {
        proxy_pass http://portfolio:3000;  # Forward requests to the Next.js server
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This approach worked perfectly once I changed the DNS records to stop directing traffic through Cloudflare.
Now all the requests to my sites get that spiffy lock.
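If you want to sanity check the redirect and the cert from the command line instead of the browser, a couple of curl calls will do it (using my domain as the example):

# Should return a 301 pointing at https://bee.engineer/
curl -I http://bee.engineer

# -v prints the TLS handshake, including the certificate issuer and expiry
curl -vI https://bee.engineer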
Renewing the certs is quite easy too: since the standalone authenticator needs port 80 free, you just stop the nginx container and run
certbot renew
The ultimate test will be in November when the certs expire. I think the final solution will be a cron job that stops the docker containers, runs certbot renew, and brings the containers back up.
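Something like the sketch below is what I have in mind; I have not wired it up yet, and the container name, script path, and schedule are placeholders:

#!/bin/sh
# renew_certs.sh - planned renewal flow (not in place yet).
# The container name and log path are placeholders.

docker stop nginx        # free up port 80 for the standalone challenge
certbot renew --quiet    # only renews certs that are close to expiry
docker start nginx

# Example crontab entry: try twice a month at 3am
# 0 3 1,15 * * /home/me/renew_certs.sh >> /var/log/cert-renew.log 2>&1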
I will also say that it is critical to remember to clear your cookies and cache when working with certs. Even though I work with certs professionally, I always seem to forget when I am working on my own projects.
Until next time.