For a long time, I was happy with my DigitalOcean droplet. It had 1 GB of RAM, 1 vCPU and cost 6 USD per month. Sure, that wasn't a lot of power, but it was always enough. But then I built this very portfolio website. And more so than usual, I paid a lot of attention to speed. I might write another blog post about this sometime, but once I deployed the project to the modest droplet that had served me so well for so long, all of my hopes and dreams came crashing down.
It was slow. Like "images take multiple seconds to load" slow. The main problem was the image optimization Next.js does under the hood. Any time I accessed an image that wasn't already cached, the server resources would spike. Not good.
So I upgraded the droplet to 2 GB of RAM. The droplet now cost 12 USD per month. And things improved. The website got a little bit faster, but it still wasn't what I had envisioned.
So I looked at my options. I could upgrade the droplet even more, to 4 GB of RAM and 2 vCPUs, but this would double my costs again to 24 USD per month. Not ideal.
Then, a friend recommended Hetzner, and I decided to move away from DigitalOcean. With Hetzner, for roughly 4 EUR, I get the same specs DigitalOcean offers for 24 USD.
This is the story of my move to the new server, and how I finally fixed my problems with image optimization and page load speeds. I write this for you, but also as a guide for myself in the future.
First of all, I had to do some work just to get the new server itself working. As a friend recommended to me, I decided to go all in on containerization this time around. The server itself would only handle incoming requests and route them to the correct docker container.
I installed docker using this guide, then I created a docker-compose.yml. This file acts as the root of all of my projects hosted on this server. It does not contain any configuration itself; it just includes the configuration of each project, like this:
include:
  - docker-compose-portfolio.yml
  - docker-compose-other-project.yml
  ...
This way, each project can have its own docker-compose.yml with its own set of services, and the main compose file won't grow to untenable lengths. For my portfolio site, I already had a docker-compose.yml from my old server. I tweaked it slightly to pull the latest version of directus:
services:
  portfolio-db:
    container_name: portfolio-db
    image: postgis/postgis:16-master
    ports:
      - 5061:5432
    platform: linux/amd64
    volumes:
      - ./portfolio/database:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: "directus"
      POSTGRES_PASSWORD: "directus"
      POSTGRES_DB: "directus"
    healthcheck:
      test: ["CMD", "pg_isready", "--host=localhost", "--username=directus"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_interval: 5s
      start_period: 30s
  portfolio-directus:
    container_name: portfolio-directus
    image: directus/directus:11.9.0
    ports:
      - 8061:8055
    volumes:
      - ./portfolio/uploads:/directus/uploads
      - ./portfolio/extensions:/directus/extensions
    depends_on:
      portfolio-db:
        condition: service_healthy
    environment:
      SECRET: "secret"
      DB_CLIENT: "pg"
      DB_HOST: "portfolio-db"
      DB_PORT: "5432"
      DB_DATABASE: "directus"
      DB_USER: "directus" # must match POSTGRES_USER on the db service
      DB_PASSWORD: "directus"
      ADMIN_EMAIL: "my@email.com"
      ADMIN_PASSWORD: "admin"
      PUBLIC_URL: "/"
With this in place, I could run docker compose up and boom, there's my backend. And if this were a new project, that'd be the end of it, but I was migrating my site with an existing database and existing assets, so I had to move those over separately.
Basically, I had to create a backup and then restore it on the new server.
I created a backup with pg_dump:
pg_dump -U directus -h localhost -p 5432 directus > backup.sql
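Before copying the dump anywhere, it's worth a quick sanity check that it isn't empty or truncated. A sketch, using a stand-in file here in place of the real backup.sql:

```shell
# Stand-in for the real dump; actual pg_dump output starts with a header like this
printf -- '--\n-- PostgreSQL database dump\n--\n' > backup.sql

# Fail if the file is missing or zero bytes
test -s backup.sql && echo "dump is non-empty"

# Eyeball the header; a truncated or failed dump is obvious here
head -n 2 backup.sql
```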
Then I copied it to my local machine with scp:
scp [user]@[old_ip]:/[path]/backup.sql ./backup.sql
... and from the local machine to the new server:
scp ./backup.sql [user]@[new_ip]:/[path]/backup.sql
I could probably have done this slightly more efficiently by copying it directly from one server to the other, but then I'd have to worry about SSH keys again, and nobody wants that. So this is what I did.
Once the backup was on the new server, I could restore it to the new db container. There was a slight hiccup, though: directus automatically creates a database when it first starts, which makes a lot of sense for new projects but got in the way now. So I had to delete it first. And there was a second-level hiccup: the database was already in use. I didn't care about any potentially catastrophic termination of user sessions, because the only user of my directus instance is me, so I just ran:
docker exec -i portfolio-db psql -U directus -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'directus';"
And then, to drop the database:
docker exec -i portfolio-db psql -U directus -d postgres -c "DROP DATABASE IF EXISTS directus;"
Make sure, with both of those commands, that you connect to the postgres database, not the database you are trying to manipulate. We are destroying a database here, and you can't do that while connected to the very database you're destroying.
But I still couldn't use my backup: before, I had a database with existing data, which got in the way; now I had no database at all. So I had to create a new, empty one:
docker exec -i portfolio-db psql -U directus -d postgres -c "CREATE DATABASE directus;"
Finally, I could restore the backup with this command:
docker exec -i portfolio-db psql -U directus -d directus < ./backup.sql
This got me most of the way there, but directus also manages assets like images for me. Those were still missing. Luckily, directus just tosses any uploads into an /uploads directory, so I could largely follow the same process as with the ./backup.sql: I copied the assets to my local machine, then copied them from there to the new server. I used scp -r for this, though it would probably have been faster to first zip the assets and copy the zip.
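The zip-first approach would look something like this; a quick sketch on a throwaway directory (substitute the real ./portfolio/uploads path from the compose file):

```shell
# Stand-in uploads directory with one fake asset
mkdir -p uploads
echo "fake image data" > uploads/example.jpg

# Bundle everything into one compressed archive: a single scp transfer
# instead of one connection per file
tar czf uploads.tar.gz uploads/

# List the archive contents to verify it before copying it anywhere
tar tzf uploads.tar.gz

# On the new server, after copying uploads.tar.gz over, unpack with:
tar xzf uploads.tar.gz
```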
One more thing though: I had to run those two commands to make sure the containerized directus could access, transform, and write the uploads:
sudo chown -R 1000:1000 .
sudo chmod -R 775 .
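The 1000:1000 owner matches the user the containerized directus runs as (an assumption worth double-checking with `docker exec portfolio-directus id`). A quick way to verify the result of those two commands, sketched here on a throwaway directory:

```shell
# Throwaway directory standing in for ./portfolio/uploads
mkdir -p uploads-check
touch uploads-check/example.jpg
chmod -R 775 uploads-check

# Print the octal mode; after chmod -R 775 this should read 775
stat -c '%a' uploads-check

# Print owner uid:gid; on the real server this should be 1000:1000 after the chown
stat -c '%u:%g' uploads-check
```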
With this done, my backend was ready. Now for the frontend.
Dockerizing my Next.js frontend was surprisingly easy. Essentially, I only needed three steps:
- Install dependencies
- Build the application
- Start the application
I wrote a multi-stage Dockerfile to do just that:
FROM node:22-alpine AS base
WORKDIR /app
# build tools needed for native npm dependencies
RUN apk add --no-cache g++ make py3-pip libc6-compat
COPY package*.json ./
EXPOSE 3000

FROM base AS builder
WORKDIR /app
# node_modules is installed in the pipeline before this image is built,
# so it comes in with this COPY
COPY . .
RUN npm run build

FROM base AS production
WORKDIR /app
ENV NODE_ENV=production
RUN npm ci
# run as an unprivileged user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
USER nextjs
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/public ./public
COPY --from=builder /app/next.config.ts ./next.config.ts
CMD ["npm", "start"]
Note that the next.config.ts can be copied to production as-is; no compilation required.
This could probably be optimized further, but it works for me.
This Dockerfile works locally by running docker build -t portfolio . However, I needed it to run in the GitHub pipeline: every time I push to the main branch, I want the application to build and deploy itself to my server automatically. To do that, I used GitHub Actions:
name: Build and Deploy
on:
  push:
    branches:
      - main
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Main Branch
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          ref: main
      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22.x"
      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.ENV_FILE }}" >> .env
      - name: Install Dependencies
        run: npm i
      - name: Log in to Docker Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: stefanbaumeler
          password: ${{ secrets.GHCR_PAT }}
      - name: Build and Push Docker Image
        run: |
          docker build -t portfolio-frontend:latest .
          docker tag portfolio-frontend:latest ghcr.io/stefanbaumeler/portfolio-frontend:latest
          docker push ghcr.io/stefanbaumeler/portfolio-frontend:latest
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd ~
            docker compose pull portfolio-frontend
            docker compose down portfolio-frontend
            docker compose up -d portfolio-frontend
            docker images ghcr.io/stefanbaumeler/portfolio-frontend --format '{{.ID}} {{.CreatedAt}}' | sort -k2 -r | tail -n +2 | awk '{print $1}' | xargs -r docker rmi
This does the following steps:
- It checks out the code from the repository
- It makes sure Node.js is available for the following steps
- It creates a .env file and populates it with the environment variables I have deposited in the repository's secrets (see below)
- It installs the node dependencies
- It logs into the docker registry using my credentials
- It builds the image and pushes it to the docker registry
- It logs into my server, pulls the image, stops the running container and restarts it with the new image
The last line, which I definitely did not generate with AI, makes sure previous images with the same name are deleted. This is important because every time you run this deployment process, a new image is created. The size of this image depends on your application and how much you optimize it, but my projects are usually around 2GB. If you keep deploying, eventually your disk will be full, the deployment will fail and your server will panic. Which definitely did not happen to me. Nope.
So make sure to include that last line so this does not happen to you either.
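That cleanup pipeline can be sanity-checked on fake `docker images` output: sort newest-first by the date column, drop the first line (the image you keep), and print the IDs of everything older. The three lines below are made-up stand-ins for real `ID CreatedAt` pairs:

```shell
# Fake "ID CreatedAt" lines standing in for `docker images --format` output
printf 'aaa 2024-03-01\nbbb 2024-01-01\nccc 2024-02-01\n' \
  | sort -k2 -r \
  | tail -n +2 \
  | awk '{print $1}'
# prints ccc then bbb: the two older images, keeping the newest (aaa)
```

In the real deployment script, that ID list is then piped to `xargs -r docker rmi`, where `-r` skips the removal entirely when there is nothing to delete.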
You can create secrets for your repository by going to your repository > Settings > Secrets and variables > Actions. Click on "New repository secret". Note that you will no longer be able to see the secret once you have created it.
| Key | Value |
|---|---|
| ENV_FILE | Literally the entire .env file of the project. There are probably better ways to handle this, but this works for me. |
| GHCR_PAT | The Personal Access Token for the GitHub Container Registry. You can generate it by going to your account settings > Developer settings > Personal access tokens > Tokens (classic) > Generate new token > Generate new token (classic) |
| HOST | The IP of the server I want to deploy my frontend to. |
| USERNAME | The username used to sign into the server. |
| SSH_KEY | The private SSH key used to sign into the server. |
This almost worked. One thing I had to do was log in to my docker container repository from the new server, so it would be authorized to pull the image I created. I did so using this command:
echo "[GHCR_PAT]" | docker login ghcr.io -u stefanbaumeler --password-stdin
Now the frontend deployed without issues.
At this point, my backend was ready, and the frontend too. But I could only access either via IP. For the backend, that's fine. But if people are going to find my website, it obviously needs a domain name.
Luckily, I already had one. I just had to point it at the new IP. To do so, I had to log in to my old hosting and update the NS records there so they would point to the nameservers of my new host. Then, in the configuration panel of my new host, I had to add A records pointing to my IP.
This switch can be a pain in the ass because of DNS propagation, which might take... a while, depending on the configured TTL of the records, but this time I had to wait only a couple minutes for the switch to take place.
While I was waiting, I set up my nginx, so it could handle the incoming requests and point them to the right docker container.
First, I had to install it with apt install nginx and start it with systemctl start nginx. Then, for each of my projects, I created a file named after its domain under /etc/nginx/sites-available.
I largely copied the config over from my previous setup on DigitalOcean.
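For reference, the general shape of such a site file is a plain reverse proxy. This is a minimal sketch, not my actual config (which carries more directives); it assumes the frontend container listens on port 3061, the same port the caching blocks later in this post proxy to:

```nginx
# /etc/nginx/sites-available/stefan-baumeler.com (minimal sketch)
server {
    listen 80;
    server_name stefan-baumeler.com www.stefan-baumeler.com;

    location / {
        # forward everything to the frontend container
        proxy_pass http://localhost:3061;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```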
I symlinked those files to /etc/nginx/sites-enabled with:
sudo ln -s /etc/nginx/sites-available/stefan-baumeler.com /etc/nginx/sites-enabled/stefan-baumeler.com
Note the argument order: the target in sites-available comes first, the link in sites-enabled second.
To set up https, I used certbot:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d stefan-baumeler.com -d www.stefan-baumeler.com
This configures nginx with all the required certificates and ports.
Now everything was served over https, but my content was still served over HTTP/1.1, which is a performance concern. Upgrading to HTTP/2 was rather simple; I just had to change the listen line in my nginx config to contain http2:
listen 443 ssl http2;
At this point, everything looked pretty good, but while the site definitely ran faster than on DigitalOcean, images were still a bit janky. They just took a moment to load, even though I had done everything possible in my codebase. Ultimately I realized that nginx caching could help here, so I added these three blocks inside the server block of my config, specifically to optimize image loading:
location ~* ^/_next/image {
    proxy_cache static_cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    add_header X-Cache-Status $upstream_cache_status;
    add_header Cache-Control "public, max-age=31536000, immutable";
    proxy_pass http://localhost:3061;
}

location ~* ^/api/prefetch-images {
    proxy_cache static_cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    add_header X-Cache-Status $upstream_cache_status;
    add_header Cache-Control "public, max-age=31536000, immutable";
    proxy_pass http://localhost:3061;
}

location /assets/ {
    proxy_cache static_cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    add_header X-Cache-Status $upstream_cache_status;
    add_header Cache-Control "public, max-age=31536000, immutable";
    proxy_pass http://[Directus Server IP]:8061;
}
The first block caches requests to the Next.js image API. This API gets called if a user arrives on my site for the first time and wants to load one of the images, but internally, it calls the Directus assets API. This is why the third block exists. That block caches the responses from the Directus API.
As for the second block: there's a whole frontend part to this that I won't get into here, but basically /api/prefetch-images does not return images, but rather the URLs of images that I know the user will need soon. As those responses are also the same every time (per page), I can cache them too.
I also had to add this to the http block of my nginx.conf (not the site-specific configuration):
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m inactive=10d max_size=1g;
While the three blocks above tell nginx what to cache, this line tells nginx that it should enable caching in the first place, and how that cache should behave in general.
This greatly increased the speed at which my images get loaded, and finally solved my image problems for good.
If you are new to DevOps, you might find yourself somewhat overwhelmed. But the beauty of this new setup is that it scales really well. Deployments are now super simple, and to add entire new projects, big or small, only a couple of lines have to change on the server, most of them copy-pasted from another project.
On the old server, I had some stuff containerized, other things were running on the root server. Deployment of the frontend worked with rsync, and I started the individual servers with pm2. This meant that each of my projects had a slightly different DevOps setup, and sometimes there were conflicting packages or versions of packages on the server.
Now everything is standardized and isolated. If I set up a new project, I can largely copy-paste the DevOps stuff, like the nginx config, the docker-compose.yml
and the GitHub Actions file, just tweaking everything a little to match the needs of the new project.
Finally, I'm happy I moved away from DigitalOcean. I'm sure they have advantages too, but none that I currently require for my projects, and with the move I save roughly 20 CHF a month, so that's a win in my book.