Ubuntu 24.04 LTS Server with Nuxt 4 + NGINX + PM2
Tutorial for Windows (WSL2) / Mac / Linux

The Perfect Ubuntu 24.04 LTS Server

NGINX reverse proxy + Node 24 LTS (NVM) + PM2 + Docker

This guide shows you how to deploy a production-ready Nuxt 4 application with Prisma + SQLite on Ubuntu 24.04 LTS. Perfect for developers who want a secure, fast, and scalable web server with modern tooling.

💡 Recommended Hosting Providers:

  • 🇪🇺 Hetzner Cloud - Best value in Europe (from €3.49/month, EU-based, GDPR-compliant, L3/L4 DDoS protection, ISO 27001 certified) (Recommended)
  • 🇸🇪 HostUp.se - Swedish unmanaged VPS with L3/L4 DDoS protection up to 5 Tbps via CDN77 (75 SEK/month ≈ €6.50)
  • 🌍 DigitalOcean - Global reach with $200 free credit for 60 days (basic DDoS protection)
  • Linode - High-performance VPS with excellent support (basic DDoS protection)

DDoS Protection Note: All listed EU providers include network-level (L3/L4) DDoS protection for volumetric attacks. HostUp.se absorbs up to 5 Tbps via CDN77 scrubbing in Stockholm. Note that L3/L4 protection does not cover application-layer (L7) attacks on HTTPS websites — for L7 protection, consider adding a reverse proxy WAF such as Hetzner's built-in firewall rules or a self-managed solution (e.g., Fail2Ban + NGINX rate limiting).

Recommended VPS Specs for Nuxt 4 Apps: 2 vCPU, 4-8 GB RAM, 50+ GB SSD, 3+ TB bandwidth — handles moderate traffic well. The Hetzner CX33 (4 vCPU, 8 GB RAM, 80 GB NVMe, 20 TB bandwidth, €8.11/month incl. 25% VAT + backup + IPv4) or HostUp.se VPS SM (2 vCPU, 8 GB RAM, 100 GB NVMe, 5 TB bandwidth, 75 SEK/month ≈ €6.50 incl. VAT, backup & IPv4 included) are both solid choices.

⭐ What You'll Learn:

  • Install and configure Ubuntu 24.04 LTS for production
  • Setup NGINX as a reverse proxy with SSL/TLS (Let's Encrypt)
  • Install Node.js 24 with NVM for multi-version support
  • Deploy Nuxt 4 applications with PM2 process manager
  • Optional: Docker setup for containerized deployments
  • Automated backups, monitoring, and maintenance
  • Upgrade path to Ubuntu 26.04 LTS (April 2026)

Quick checklist

  1. Secure access first: create a non-root sudo user, set up SSH keys, install Tailscale VPN, and verify you can SSH over Tailscale before tightening firewalls.
  2. Lock down SSH: disable password auth + root login, and restrict port 22 to tailscale0 only (public SSH closed).
  3. Harden the base OS: update packages, enable unattended security upgrades, set timezone, and install baseline tooling.
  4. Install NGINX (Ubuntu repo is fine).
  5. Install Node 24 with NVM; enable pnpm via corepack; install PM2 (+ log rotation).
  6. NGINX config (in order!): create prerequisite files first (connection_upgrade.conf, admin-tailscale-only.conf), create app directory, add HTTP-only config, run Certbot for SSL, then upgrade to full HTTPS config.
  7. Deploy: upload the deploy folder, install prod deps with a frozen lockfile, generate Prisma client, run migrations/preflight checks.
  8. Run: start via PM2, enable startup, verify health endpoint and logs.
  9. Optional: Docker for Plausible and other services (bind published ports to 127.0.0.1 to avoid Docker+UFW surprises).

🎯 Personalize This Tutorial

Enter your information below to customize all commands throughout this tutorial. Commands will update automatically with your values shown in green.

Enter application name without spaces for PM2 process
Enter your primary domain name for NGINX configuration
Optional secondary domain for www redirect
Enter IPv4 address of your Ubuntu server
Enter username for SSH access
Enter deployment folder name
Enter port number for Node.js application
Your laptop's Tailscale IP for ACL policies (run: tailscale ip -4)

Why Ubuntu 24.04 LTS?

Ubuntu 24.04 LTS is recommended over 25.10:
  • 5 years support until 2029 (vs 9 months for 25.10)
  • Stable and battle-tested for production
  • Direct upgrade path to Ubuntu 26.04 LTS (April 2026)
  • Industry standard for production servers

Ubuntu 25.10 ends support in July 2026, forcing an upgrade at an inconvenient time. With 24.04 LTS, you can upgrade to 26.04 LTS at your convenience after April 2026.

Before You Begin: Prepare Your Local Machine (Recommended: Ubuntu Desktop)

Important: Before setting up the VPS, you need to prepare your local deploy machine (laptop/desktop). This section covers Ubuntu Desktop — if you're on macOS or Windows (WSL2), the steps are similar but package managers differ.

A) Install Required Tools on Your Laptop

# Ubuntu Desktop may not have curl pre-installed
sudo apt update
sudo apt install -y curl openssh-client

# Verify SSH is available
ssh -V

B) Generate an SSH Key Pair (Ed25519)

If you don't already have an SSH key, generate one now. Ed25519 is recommended for its security and performance.

# Generate a new SSH key pair
# The -C flag adds a comment (use your email or identifier)
ssh-keygen -t ed25519 -C "{{tutorial.sudouser}}@laptop"

# When prompted:
#   - Press Enter to accept default location (~/.ssh/id_ed25519)
#   - Enter a strong passphrase (recommended) or press Enter for no passphrase

# View your public key (you'll copy this to the VPS later)
cat ~/.ssh/id_ed25519.pub
Why a passphrase? A passphrase encrypts your private key on disk. If your laptop is hacked or stolen, attackers can't use your key without the passphrase. You can use ssh-agent to avoid typing it repeatedly.
# Optional: Start ssh-agent and add your key (so you don't type passphrase every time)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Verify the key was added
ssh-add -l

C) Install Tailscale VPN on Your Laptop

Why Tailscale?

Tailscale creates an encrypted mesh VPN between your devices using WireGuard. Once configured, you'll SSH to your server over Tailscale's private network instead of the public internet.

Security benefits:

  • No public SSH port — After setup, we close port 22 to the internet entirely. Attackers can't even attempt to connect.
  • Fail2Ban becomes optional — With no public SSH, there are no brute-force attempts to block. Fail2Ban is still useful for NGINX rate limiting, but SSH protection is unnecessary.
  • Zero-trust access — Only devices authenticated to your Tailnet can reach the server's SSH port.
  • No port forwarding or firewall holes — Tailscale punches through NAT automatically.
  • Easy multi-device access — Add your phone, tablet, or other laptops to your Tailnet for secure access from anywhere.
  • Audit trail — Tailscale dashboard shows which devices connected and when.
Prerequisites: First, create a Tailscale account at tailscale.com. Your Tailnet is created automatically. Then add your laptop as the first device below — this is the machine you'll SSH from. The VPS will be added as the second device later in the server setup.

Which email to sign up with? Use an email you check regularly (e.g., your primary Gmail). Tailscale's ACL rules reference your login email, so use one you'll remember. Avoid creating a throwaway account — you need it for ongoing admin access.

Already have Tailscale installed from a different account? If you previously used Tailscale with a different account, you must log out first before connecting to your new Tailnet:
# Log out of the old Tailscale account
sudo tailscale logout

# Then connect to your new account
sudo tailscale up
# This will print a new auth URL — open it and log in with your new account
# Install Tailscale on your laptop (first device on your Tailnet)
curl -fsSL https://tailscale.com/install.sh | sh
sudo systemctl enable --now tailscaled

# Connect to your Tailnet
sudo tailscale up

# This prints an auth URL — open it in your browser to link this device.
# You must be logged in to tailscale.com for the link to work.

# Verify connection
tailscale status
tailscale ip -4   # Shows your laptop's Tailscale IP (e.g., {{tutorial.tailscaleip}})
Tip: Copy your Tailscale IP and enter it in the personalization form above — it will be used in NGINX ACL snippets later.

0) Secure Access First (SSH keys + Tailscale + lock down SSH)

Recommended order: create your non-root sudo user and SSH keys, install and test Tailscale first, then disable password login and close public SSH. You don't "disable the root user" on Ubuntu — you disable root SSH login. Keep a recovery console available before tightening rules.

A) First Login and Create a Non-Root Sudo User

Initial connection: Most VPS providers give you root access or an initial user. SSH into your server using the credentials from your provider:

# Connect to VPS using public IP (first time only)
# Use root or the initial user your provider created
ssh root@{{tutorial.ipaddress}}

# Or if your provider created an initial user:
# ssh ubuntu@{{tutorial.ipaddress}}

Create your sudo user: We create a dedicated user for daily administration. adduser will prompt for a password — set a strong password even though we disable SSH password login later.

Why set a password if we disable password authentication?
  • Console access: VPS provider consoles (emergency/VNC/KVM) use password login
  • sudo commands: sudo prompts for your user password by default
  • Recovery: If SSH breaks, password is your fallback via console
  • Local login: Physical/console access always uses password auth

We only disable SSH password authentication — the password still works for sudo and console.

# Create the user (you'll be prompted for password and user info)
sudo adduser {{tutorial.sudouser}}
# → Enter a STRONG password (you'll need it for sudo and console access)
# → Press Enter to skip Full Name, Room Number, etc. (or fill in)

# Add user to sudo group
sudo usermod -aG sudo {{tutorial.sudouser}}

# Switch to the new user
su - {{tutorial.sudouser}}
# → Enter the password you just created

# Verify sudo works (enter your password when prompted)
sudo whoami  # Should output: root

B) Add Your SSH Public Key to the VPS

Now copy your public key from your laptop to the VPS. You have two options:

Open a new terminal tab on your laptop for this step (Ctrl+Shift+T in GNOME Terminal, or click the + button top-left). Keep your VPS session in the other tab — you need a local terminal to run ssh-copy-id, which reads your local key and pushes it to the VPS.
# Option 1: If ssh-copy-id works (need password auth still enabled)
# Run this in a LOCAL terminal on your laptop (not on the VPS):
ssh-copy-id {{tutorial.sudouser}}@{{tutorial.ipaddress}}
# When prompted for a password, enter your VPS user password (not the SSH key passphrase)

# Option 2: Manually paste the key (on the VPS as your sudo user):
mkdir -p ~/.ssh
chmod 700 ~/.ssh
nano ~/.ssh/authorized_keys
# Paste your public key (from: cat ~/.ssh/id_ed25519.pub on your laptop)
# Save and exit (Ctrl+X, Y, Enter)
chmod 600 ~/.ssh/authorized_keys
Test SSH key login before proceeding! Open a NEW terminal on your laptop and verify:
ssh {{tutorial.sudouser}}@{{tutorial.ipaddress}}

You should connect without being asked for a password (or only your SSH key passphrase if you set one). If it asks for the user's password, your key wasn't added correctly.
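If key login still fails, a few quick checks (a minimal sketch using standard OpenSSH tooling) usually reveal the cause:

# On your laptop: verbose mode shows which key is offered and why it is (or isn't) accepted
ssh -v {{tutorial.sudouser}}@{{tutorial.ipaddress}}

# On the VPS: sshd silently ignores authorized_keys if permissions are too open
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

# On the VPS: watch the auth log while retrying from the laptop
sudo journalctl -u ssh -f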

C) Install and Bring Up Tailscale (on VPS)

# Install Tailscale on the VPS
curl -fsSL https://tailscale.com/install.sh | sh
sudo systemctl enable --now tailscaled

# Bring the VPS into your tailnet (and enable Tailscale SSH)
# IMPORTANT: Hostname must use hyphens, NOT dots!
#   ✅ example-com-vps  (correct)
#   ❌ example.com-vps  (invalid - dots not allowed)
sudo tailscale up --ssh --advertise-tags=tag:web --hostname {{tutorial.tsHostname()}}

# Show your Tailscale IPv4 (100.x by default)
tailscale ip -4
tailscale status
Tailscale hostname rules:
  • Use lowercase letters, numbers, and hyphens only
  • No dots (.) — replace with hyphens (-)
  • Example: example.com → example-com-vps
Changing Tailscale settings later? If you need to change flags (like adding --ssh), use --reset:
# If you forgot --ssh or need to change settings:
sudo tailscale up --ssh --advertise-tags=tag:web --hostname {{tutorial.tsHostname()}} --reset

Without --reset, Tailscale requires you to specify ALL non-default flags you previously used.

D) Set Tailscale ACL Policy for SSH (Admin Console)

Go to Tailscale Admin → Access controls. You need both an acls rule (network access) and an ssh rule (Tailscale SSH).

Important: Use autogroup:admin instead of your email address in ACL rules. Using raw email addresses (e.g., "src": ["you@gmail.com"]) often causes "tailnet policy does not permit" errors. autogroup:admin automatically includes all Tailnet admins and works reliably.

In the Tailscale admin console, go to Access Controls → click "Edit access controls" (JSON editor). Replace the entire default policy with this — do not try to add rules via the UI, paste the full JSON:

{
  "tagOwners": {
    "tag:web": ["autogroup:admin"]
  },
  "acls": [
    {
      "action": "accept",
      "src":    ["autogroup:admin"],
      "dst":    ["tag:web:22", "tag:web:80", "tag:web:443"]
    }
  ],
  "ssh": [
    {
      "action": "accept",
      "src":    ["autogroup:admin"],
      "dst":    ["tag:web"],
      "users":  ["{{tutorial.sudouser}}", "root"]
    }
  ]
}
What this policy does:
  • tagOwners — defines a tag:web tag owned by all admins (you)
  • acls — allows admin devices to reach tag:web servers on ports 22, 80, 443
  • ssh — allows Tailscale SSH as your sudo user (and root for emergencies)

The VPS must be started with --advertise-tags=tag:web (shown in step C above) to match this policy.

Common pitfalls:
  • Don't use the "Add rules" UI — it can produce invalid JSON with duplicate values. Always use the JSON editor and paste the full policy.
  • Don't use raw email in src (e.g., "src": ["you@gmail.com"]) — this often fails silently. Use autogroup:admin instead.
  • Don't use dots in hostnames: example.com-vps is invalid, use example-com-vps
  • Don't use IP addresses in dst — use hostnames or tags

After saving ACLs, wait ~30 seconds for propagation. Check tailscale status on both machines.

E) Test SSH over Tailscale (before locking down public SSH)

From your laptop, test Tailscale SSH connection. Use tailscale ssh (not plain ssh) — this uses Tailscale's built-in SSH authenticated via your Tailscale identity, not SSH keys.

# From your laptop — use "tailscale ssh" (not plain "ssh"):
tailscale ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

# You should see the Ubuntu welcome banner and a shell prompt.
# If this works, Tailscale SSH is fully functional!
First connection — host key verification:

On first SSH via Tailscale, you'll see a host key verification prompt. Type yes to continue. This is normal and expected.

If you get "Host key verification failed" later (after server rebuild), remove the old key:

ssh-keygen -R {{tutorial.tsHostname()}}
Troubleshooting Tailscale SSH

"tailnet policy does not permit you to SSH to this node"

Your ACL policy is blocking the connection. Check:

  • Did you save the ACL policy in the admin console? (Step D above)
  • Did you start the VPS with --advertise-tags=tag:web? If not, re-run:
    sudo tailscale up --ssh --advertise-tags=tag:web --hostname {{tutorial.tsHostname()}} --reset
  • Is your sudo user listed in the "users" array in the ssh rule?
  • Wait 30-60 seconds after saving ACLs for propagation

"Connection timed out" via hostname but ping works by IP

MagicDNS may need a moment to propagate. Try connecting by IP first:

# Check if the VPS is reachable at all:
tailscale ping {{tutorial.tsHostname()}}

# If ping works, try SSH by IP:
tailscale ssh {{tutorial.sudouser}}@$(tailscale ip -4 {{tutorial.tsHostname()}})

"No identities found" when running ssh-copy-id

You haven't generated SSH keys on your laptop yet. Go back to step A and run:

ssh-keygen -t ed25519 -C "your-email@example.com"

Plain ssh vs tailscale ssh

ssh user@hostname uses traditional SSH keys and connects on port 22.
tailscale ssh user@hostname uses Tailscale's identity-based SSH — no keys needed, authenticated by your Tailscale login. After locking down port 22 in step F, SSH from the public internet stops working entirely, but both commands keep working over the Tailscale network: plain ssh still reaches port 22 on the tailscale0 interface using your keys, and tailscale ssh uses your Tailscale identity.

F) Lock down SSH + Firewall (close public port 22)

Step 1: Harden SSH server config

sudo nano /etc/ssh/sshd_config.d/99-hardening.conf
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
UsePAM yes
MaxAuthTries 3
LoginGraceTime 30
AllowUsers {{tutorial.sudouser}}
X11Forwarding no
AllowAgentForwarding no
# Reload SSH
sudo systemctl reload ssh

Step 2: Configure firewall — Hetzner Cloud Firewall OR UFW (pick one)

Hetzner Cloud Firewall vs UFW — which to use?

Hetzner Cloud Firewall (recommended for Hetzner users) filters traffic before it reaches your VPS — blocked packets never touch your server. This is more efficient and easier to manage via the Hetzner dashboard.

UFW is a host-level firewall running on your VPS. Use UFW if you're on a non-Hetzner provider that doesn't have a cloud firewall, or if you want defense-in-depth (both layers).

You do NOT need both. If you use Hetzner Cloud Firewall, you can leave UFW inactive. If your provider has no cloud firewall, use UFW.

Option A: Hetzner Cloud Firewall (recommended for Hetzner)

In the Hetzner Cloud Console:

  1. Go to FirewallsCreate Firewall
  2. Name it (e.g., web-server)
  3. Add these Inbound rules:
  Protocol   Port    Source                   Purpose
  TCP        80      Any (0.0.0.0/0, ::/0)    HTTP (website + Let's Encrypt)
  TCP        443     Any (0.0.0.0/0, ::/0)    HTTPS (website)
  UDP        41641   Any (0.0.0.0/0, ::/0)    Tailscale direct connections (optional but recommended)

Do NOT add a rule for port 22. SSH is blocked from the public internet — you access the server via Tailscale SSH instead (which tunnels through Tailscale's encrypted network, not port 22 on the public IP).

  4. Apply to server → select your VPS
  5. Verify from your laptop:
# This should timeout (SSH blocked from public internet) — that's correct!
nc -vz {{tutorial.ipaddress}} 22

# This should still work (Tailscale SSH bypasses the firewall)
tailscale ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}
Option B: UFW (for non-Hetzner providers)
# Check current status (should be inactive on fresh install)
sudo ufw status

# Set default policies (explicit is better than implicit)
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow public web traffic
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Remove any public SSH rules (ignore errors if not present)
sudo ufw delete allow ssh || true
sudo ufw delete allow 22/tcp || true

# Allow SSH ONLY via Tailscale interface (not public internet)
sudo ufw allow in on tailscale0 to any port 22 proto tcp

# Enable firewall
sudo ufw enable

# Verify configuration
sudo ufw status verbose
Useful UFW Commands:
# Check status
sudo ufw status verbose
sudo ufw status numbered    # Shows rule numbers for deletion

# Temporarily disable firewall (for troubleshooting)
sudo ufw disable

# Delete a specific rule by number
sudo ufw delete 3           # Deletes rule #3

# Reset to defaults (removes all rules!)
sudo ufw reset
G) Install Fail2Ban for NGINX Rate Limiting (Optional)
Why Fail2Ban with Tailscale?

With SSH closed to the public internet, Fail2Ban's SSH jail is unnecessary. However, Fail2Ban is still valuable for protecting your web server from:

  • Brute-force login attempts — if your app has a login page
  • Aggressive bots/scrapers — blocking IPs that hammer your site
  • Exploit scanners — blocking IPs probing for vulnerabilities (wp-admin, .env, etc.)
  • DDoS mitigation — rate limiting excessive requests

Install Fail2Ban

# Install Fail2Ban
sudo apt install -y fail2ban

# Enable and start service
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

Configure NGINX Jails

# Create local config (never edit jail.conf directly)
sudo nano /etc/fail2ban/jail.local
[DEFAULT]
# Ban for 1 hour (increase for repeat offenders)
bantime = 1h
# 5 failures within 10 minutes triggers ban
findtime = 10m
maxretry = 5

# Whitelist localhost and Tailscale CGNAT range so you never lock yourself out
ignoreip = 127.0.0.1/8 ::1 100.64.0.0/10

# Use nftables for banning (works without UFW)
banaction = nftables-multiport

# Handle IPv6 addresses automatically
allowipv6 = auto

# Let Fail2Ban pick the best log backend
backend = auto

# Disable SSH jail (we use Tailscale instead)
[sshd]
enabled = false

# NGINX rate limiting - ban IPs with too many requests
[nginx-limit-req]
enabled = true
port = http,https
filter = nginx-limit-req
logpath = /var/log/nginx/error.log
maxretry = 10
findtime = 1m
bantime = 1h

# NGINX bad bots - block scanners looking for exploits
[nginx-badbots]
enabled = true
port = http,https
filter = nginx-badbots
logpath = /var/log/nginx/access.log
maxretry = 2
findtime = 1d
bantime = 1w

# NGINX basic auth brute force (disable if you don't use HTTP basic auth)
# [nginx-http-auth]
# enabled = true
# port = http,https
# filter = nginx-http-auth
# logpath = /var/log/nginx/error.log
# maxretry = 5
# findtime = 5m
# bantime = 1h

# Recidive jail - escalate bans for repeat offenders
# (bans IPs that keep getting banned by other jails)
[recidive]
enabled = true
logpath = /var/log/fail2ban.log
banaction = nftables-multiport
bantime = 1w
findtime = 1d
maxretry = 5
Why nftables-multiport instead of ufw?

If you chose Option A (Hetzner Cloud Firewall) and didn't install UFW, banaction = ufw will silently fail — Fail2Ban will think it banned an IP, but no firewall rule is created. Using nftables-multiport works directly with the kernel's nftables framework, regardless of whether UFW is installed.

Create Bad Bots Filter

# Create custom filter for exploit scanners
sudo nano /etc/fail2ban/filter.d/nginx-badbots.conf
[Definition]
# Block requests for common exploit paths.
# Each line MUST start with ^<HOST> — Fail2Ban crashes on startup with
# "No failure-id group" if <HOST> is missing from any regex line.
#
# Assumes default NGINX log_format:
#   IP - - [date] "GET /path HTTP/1.1" 404 ...
failregex =
  ^ - .*"(GET|POST|HEAD) .*(/wp-admin|/wp-login|/wp-content|/wp-includes|/xmlrpc\.php|/wp-json).*" .*$
  ^ - .*"(GET|POST|HEAD) .*(/phpmyadmin|/phpMyAdmin|/pma|/adminer).*" .*$
  ^ - .*"(GET|POST|HEAD) .*(/\.git/|/\.env|/\.aws|/\.ssh|/\.config|/\.svn|/\.htaccess|/\.htpasswd).*" .*$
  ^ - .*"(GET|POST|HEAD) .*(/cgi-bin/|/server-status|/actuator|/solr|/vendor|/telescope|/debug|/console).*" .*$
  ^ - .*"(GET|POST) .*\.(php|asp|aspx|jsp|cgi)(\?.*)?" (400|403|404) .*$
ignoreregex =
Test your filter before restarting Fail2Ban:
# Dry-run the regex against your actual access log:
sudo fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-badbots.conf

If it shows matches, the regex works. If it shows 0 matches but you know there are bot requests in the log, your log_format may differ — compare a real log line with the regex pattern.
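If the log has no bot traffic yet, you can generate a matching line yourself and re-run the dry run. A quick sketch, assuming NGINX is already serving your domain (Section 6) and logging to the default access log:

# Request a path the filter should catch (your site will return a 404, which is fine)
curl -s -o /dev/null http://{{tutorial.domain}}/wp-login.php

# Re-run the dry run; it should now report at least one match
sudo fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-badbots.conf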

Enable NGINX Rate Limiting (Required for nginx-limit-req jail)

Rate limit zones should already be defined if you followed Section 6A:

# Verify rate limit zones exist (created in Section 6A prerequisites):
cat /etc/nginx/conf.d/00-rate-limit.conf

# Should show:
# limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
# limit_req_zone $binary_remote_addr zone=login:10m   rate=2r/s;

# If the file doesn't exist, create it now:
# sudo nano /etc/nginx/conf.d/00-rate-limit.conf
Don't define zones in multiple places!

If you also added limit_req_zone lines in nginx.conf, remove them. Having the same zone name defined twice causes nginx -t to fail with "duplicate zone". Keep zones in one file only (/etc/nginx/conf.d/00-rate-limit.conf).
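A quick way to confirm each zone is defined exactly once (a small sketch, just grepping the standard NGINX config tree):

# Each zone name should appear on exactly one line across the whole tree
grep -rn "limit_req_zone" /etc/nginx/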

The rate limits are applied in your site config (Section 6D) with limit_req zone=general and limit_req zone=login.

# Verify and reload
sudo nginx -t
sudo systemctl reload nginx

Restart and Verify

# Restart Fail2Ban
sudo systemctl restart fail2ban

# Check status — you should see nginx-limit-req, nginx-badbots, recidive
sudo fail2ban-client status

# Check specific jail status
sudo fail2ban-client status nginx-limit-req
sudo fail2ban-client status nginx-badbots
sudo fail2ban-client status recidive

# Verify nftables rules are created (Fail2Ban creates its own table)
sudo nft list tables
# You should see: table inet f2b-table

# Inspect the Fail2Ban nftables rules
sudo nft list table inet f2b-table

# Manually unban an IP (if needed)
sudo fail2ban-client set nginx-limit-req unbanip 1.2.3.4

# Watch Fail2Ban log
sudo tail -f /var/log/fail2ban.log
Tip: Start with lenient settings (high maxretry, short bantime) and tighten based on your logs. Check /var/log/fail2ban.log to see what's being caught.
Pitfall: If sudo nft list tables does not show table inet f2b-table after a restart, Fail2Ban is not creating firewall rules. Check /var/log/fail2ban.log for errors — the most common cause is banaction = ufw when UFW is not installed.
Troubleshooting Fail2Ban

Fail2Ban won't start / crashes immediately

# Check service status and recent logs:
sudo systemctl status fail2ban --no-pager
sudo journalctl -u fail2ban -n 200 --no-pager -o cat

# Check Fail2Ban's own log:
sudo tail -n 200 /var/log/fail2ban.log

Common causes:

  • "No failure-id group" — a failregex line is missing <HOST>. Every regex line must start with ^<HOST>.
  • "No file(s) found for nginx-badbots" — the logpath doesn't exist yet. Create an empty log: sudo touch /var/log/nginx/access.log
  • "Unable to find a corresponding IP" — the regex matched but couldn't extract an IP. Check your NGINX log_format.

"Socket missing" right after restart

Fail2Ban takes a few seconds to create its socket. Wait 5 seconds and retry:

sudo systemctl restart fail2ban
sleep 5
sudo fail2ban-client status

Test a filter regex against your real logs

# Shows how many lines match (and how many don't):
sudo fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-badbots.conf

# Test a single log line:
echo '1.2.3.4 - - [13/Feb/2026:09:34:42 +0100] "GET /wp-login.php HTTP/1.1" 404 162' | \
  sudo fail2ban-regex - /etc/fail2ban/filter.d/nginx-badbots.conf
H) Harden Network with sysctl (Optional)
Defense in Depth:

These kernel-level network settings add extra protection against spoofing, redirects, and SYN floods. Ubuntu 24.04 already enables some of these by default (marked ✅), but explicitly setting them ensures consistency.

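Before editing, you can snapshot the values Ubuntu currently uses, so you can see exactly what changes (a minimal sketch using the stock sysctl tool):

# Check current defaults (run again after 'sudo sysctl -p' to compare)
sysctl net.ipv4.conf.all.rp_filter net.ipv4.tcp_syncookies \
  net.ipv4.conf.all.accept_redirects net.ipv4.conf.all.log_martians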
# Edit sysctl configuration
sudo nano /etc/sysctl.conf
# Add or uncomment these lines:

# === IP Spoofing protection (✅ Ubuntu default) ===
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# === Ignore ICMP broadcast requests (smurf attack prevention) ===
net.ipv4.icmp_echo_ignore_broadcasts = 1

# === Disable source packet routing ===
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# === Disable send redirects (prevents MITM) ===
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# === SYN flood protection (✅ syncookies is Ubuntu default) ===
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# === Log suspicious packets (martians) ===
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# === Ignore ICMP redirects (prevents routing attacks) ===
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0

# === OPTIONAL: Ignore ALL pings (stealth mode) ===
# ⚠️ This can break monitoring tools and make debugging harder
# Uncomment only if you specifically need to hide from ping scans:
# net.ipv4.icmp_echo_ignore_all = 1
# Apply changes
sudo sysctl -p

# Verify a setting
sysctl net.ipv4.tcp_syncookies
⚠️ Docker users: If using Docker, test thoroughly after applying these settings. Some configurations (especially related to forwarding and redirects) can interfere with container networking. If you experience issues, check docker network inspect bridge and review Docker's network documentation.
I) SSH Config + File Transfers Over Tailscale

Create an SSH config on your laptop (optional but convenient)

This lets you type ssh {{tutorial.tsHostname()}} instead of the full command every time.

Important: The config below goes inside a file (~/.ssh/config), not typed directly in the terminal! If you paste Host {{tutorial.tsHostname()}} into a terminal prompt, you'll get "Command not found" — that's because it's config file syntax, not a shell command.
# On your LAPTOP — create/edit the SSH config file:
nano ~/.ssh/config
# Add this block (adjust values to match your setup):
Host {{tutorial.tsHostname()}}
  HostName {{tutorial.tsHostname()}}
  User {{tutorial.sudouser}}
  IdentityFile ~/.ssh/id_ed25519
  IdentitiesOnly yes
# Set correct permissions (SSH requires this)
chmod 600 ~/.ssh/config
Note: With Tailscale SSH, you often don't need this config at all — just use tailscale ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}} which authenticates via your Tailscale identity without SSH keys. The config is useful if you prefer plain ssh or need it for tools like rsync/scp.

Deploy files over Tailscale (rsync/scp)

Yes — rsync, scp, and git all work over Tailscale. Just use the Tailscale hostname instead of the public IP:

# rsync for deployments (recommended — efficient, resumable)
rsync -avz --progress ./{{tutorial.deploydir}}/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/

# Copy a single file
scp ./myfile.txt {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/home/{{tutorial.sudouser}}/

# Copy a directory recursively
scp -r ./{{tutorial.deploydir}}/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/
Troubleshooting file transfers:
  • "tailnet policy does not permit you to SSH as user X" — your laptop username doesn't match the VPS user in the Tailscale ACL. Make sure you specify the correct user: rsync ... {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:... (not your laptop username)
  • Ensure tailscale status shows both machines online
  • If the destination requires sudo, transfer to ~ first, then sudo mv
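For the sudo case, one common pattern (a sketch; the staging directory name is arbitrary) is to stage the upload in your home directory and then move it into place on the server:

# Stage the upload in your home directory on the VPS
rsync -avz ./{{tutorial.deploydir}}/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:~/deploy-staging/

# Then move it into place with sudo (-t allocates a TTY for the sudo prompt) and clean up
ssh -t {{tutorial.sudouser}}@{{tutorial.tsHostname()}} \
  "sudo rsync -a ~/deploy-staging/ /var/www/{{tutorial.domain}}/app/ && rm -rf ~/deploy-staging"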
J) Common Tailscale Commands (Daily Use)

Commands you'll use regularly when working with your VPS:

# SSH into your server (use this to reconnect anytime)
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

# Or use Tailscale's built-in SSH client
tailscale ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

# Check Tailscale connection status
tailscale status

# Show your devices' Tailscale IPs
tailscale ip -4
tailscale ip -4 {{tutorial.tsHostname()}}

# Ping the VPS over Tailscale
tailscale ping {{tutorial.tsHostname()}}
Server Power Management (run on VPS)
# Restart the server (graceful)
sudo reboot

# Shutdown the server
sudo shutdown now

# Scheduled shutdown in 5 minutes
sudo shutdown +5 "Server going down for maintenance"

# Cancel scheduled shutdown
sudo shutdown -c

# Restart NGINX
sudo systemctl restart nginx

# Restart your Nuxt app (via PM2)
pm2 restart {{tutorial.appname}}

# View server uptime
uptime
Tailscale Troubleshooting
# If Tailscale disconnects, bring it back up
sudo tailscale up --ssh --advertise-tags=tag:web --hostname {{tutorial.tsHostname()}}

# Check Tailscale service status
sudo systemctl status tailscaled

# Restart Tailscale service
sudo systemctl restart tailscaled

# View Tailscale logs
sudo journalctl -u tailscaled -f

# Force re-authentication (if needed)
sudo tailscale logout
sudo tailscale up --ssh --advertise-tags=tag:web --hostname {{tutorial.tsHostname()}}
💡 Tip: Bookmark this page! When returning the next day, you'll typically just need:
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

1) Initial Server Setup

# Update system packages
sudo apt update && sudo apt full-upgrade -y

# Install essential build tools and utilities
sudo apt install -y curl git wget build-essential ca-certificates gnupg lsof unzip \
  htop ncdu lnav

# Set timezone (adjust for your location)
sudo timedatectl set-timezone Europe/Stockholm

# Enable automatic security upgrades (recommended)
sudo apt install -y unattended-upgrades needrestart
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Firewall note:
# - Ports 80/443 should be allowed publicly
# - Port 22 should be allowed ONLY via Tailscale (see Step 0)
# If using UFW: sudo ufw status verbose
# If using Hetzner Cloud Firewall: check the Hetzner console

Security Note: With SSH restricted to Tailscale only (Step 0), you don't need Fail2Ban for SSH or a custom SSH port. However, Fail2Ban is still recommended for HTTP/HTTPS rate limiting and bot protection — most VPS providers (including Hetzner and HostUp) only include L3/L4 DDoS protection, which does not filter application-layer (L7) attacks on HTTPS websites. See the optional Fail2Ban section in Step 0 for setup.

2) Install NGINX

# Option A (recommended): Ubuntu repository (stable, security supported)
sudo apt install -y nginx

# Option B (optional): Official NGINX repository (newer features)
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update
sudo apt install -y nginx

# Start and enable NGINX
sudo systemctl start nginx
sudo systemctl enable nginx

# Verify installation
nginx -v
curl http://localhost  # Should show NGINX welcome page

3) Install Node.js 24 with NVM (Multi-Version Support)

Why NVM? Node Version Manager allows you to install and switch between multiple Node.js versions. Perfect for servers hosting multiple applications with different Node requirements.

Which Node version? Use the same version as your local development environment for consistency. Check with node --version on your laptop. Both Node 22 and 24 are LTS versions that work well with Nuxt 4. This tutorial uses Node 24.

# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

# Load NVM (add to ~/.bashrc for persistence)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"

# Install Node.js 24 LTS
nvm install 24
nvm use 24
nvm alias default 24

# Verify installation
node --version  # Should show v24.x.x
npm --version

# Install multiple versions if needed (optional)
# nvm install 20    # For apps requiring Node 20
# nvm install 18    # For apps requiring Node 18
# nvm list          # Show all installed versions

4) Install pnpm and PM2

# Enable corepack for pnpm
corepack enable
corepack prepare pnpm@latest --activate

# Verify pnpm
pnpm --version

# Install PM2 globally (NO sudo - NVM installs to ~/.nvm which you own)
npm install -g pm2

# Verify PM2
pm2 --version

# Setup PM2 to start on system boot
pm2 startup systemd
# This outputs a sudo command - COPY AND RUN that command exactly as shown!
# Example output:
# sudo env PATH=$PATH:/home/ubuntu/.nvm/versions/node/v24.13.0/bin pm2 startup systemd -u ubuntu --hp /home/ubuntu

# Optional: Install PM2 log rotation
pm2 install pm2-logrotate
⚠️ Common Pitfall: "sudo: npm: command not found"

When using NVM, never use sudo npm for global installs. NVM installs Node to ~/.nvm/ (your home directory), which sudo can't access.

  • sudo npm install -g pm2 — fails with "command not found"
  • npm install -g pm2 — works because you own ~/.nvm

The pm2 startup command will output a specific sudo command that includes the full path to your NVM Node — copy and run that exact command.

Per-App Node Versions: Use nvm use 20 before starting apps that need Node 20. PM2 will remember the Node version used when you save the process list.
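For example (a sketch; legacy-app is a hypothetical second application that needs Node 20, while the Nuxt entry path matches the layout used later in this guide):

# Start an older app under Node 20
nvm use 20
pm2 start ./legacy-app/server.js --name legacy-app

# Start your Nuxt app under Node 24
nvm use 24
pm2 start /var/www/{{tutorial.domain}}/app/.output/server/index.mjs --name {{tutorial.appname}}

# Save the list; PM2 remembers which Node binary each process was started with
pm2 save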

5) Install Docker Engine (Optional)

Docker Benefits: Test production builds locally in WSL2 with identical setup. Not required for basic deployment, but valuable for containerized apps (like Plausible analytics) and testing.

A) Install via APT Repository (Recommended for Production)

Docker's official docs recommend installing via the apt repository for production servers. The convenience script (get.docker.com) is explicitly not recommended for production.

# Remove conflicting distro packages (safe even if none installed)
sudo apt remove -y docker.io docker-compose docker-compose-v2 \
  docker-doc podman-docker containerd runc 2>/dev/null || true

# Prerequisites + repo key
sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

# Install Docker Engine + Compose plugin
sudo apt install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin

B) Verify Installation

# Check Docker service status
sudo systemctl status docker --no-pager

# Verify versions
sudo docker --version
sudo docker compose version

# Test with hello-world container
sudo docker run --rm hello-world

C) Security Notes

⚠️ Docker Group Warning:

Avoid adding your main user to the docker group on hardened servers. The docker group is effectively root-equivalent — any user in that group can mount the host filesystem and escalate to root.

Instead, use sudo docker for all Docker commands.

D) Firewall Notes (Docker + UFW)

⚠️ Docker Can Bypass UFW:

Docker programs iptables/NAT directly, which can bypass UFW rules for published container ports.

Safe pattern:

  • In docker-compose.yml, publish ports to localhost only: "127.0.0.1:8000:8000"
  • Then reverse-proxy via NGINX (public) or Tailscale-only NGINX (private)

For stricter filtering, Docker recommends putting rules in the DOCKER-USER chain.
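As a concrete illustration (a sketch; the image name and port are placeholders), binding a published port to loopback keeps it off the public interface even though Docker bypasses UFW:

# Port 8000 is reachable only from the server itself (and via NGINX / Tailscale proxying)
sudo docker run -d --name demo-app -p 127.0.0.1:8000:8000 your-image:latest

# Verify it listens on 127.0.0.1, not 0.0.0.0
sudo ss -ltnp | grep 8000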

E) About the Convenience Script

The curl -fsSL https://get.docker.com | sh method works but is meant for quick testing only. Docker explicitly warns it's not recommended for production. If you're hardening a prod VPS, use the apt repo method above.

F) Note on Plausible Analytics

Planning to run Plausible?

Self-hosted Plausible CE runs via Docker Compose (Postgres + ClickHouse + app).

Important: Plausible CE is not designed for subfolder installs (like /analytics). Use a subdomain instead:

  • plausible.{{tutorial.domain}} — works perfectly
  • {{tutorial.domain}}/plausible — not supported
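A minimal reverse-proxy sketch for the subdomain (assumes you publish Plausible on 127.0.0.1:8000 in its docker-compose.yml, reuse the proxy snippet created in Section 6A, and obtain a certificate for the subdomain with Certbot as in Section 6C):

sudo nano /etc/nginx/sites-available/plausible.{{tutorial.domain}}
# Paste this content (HTTP-only; let Certbot add SSL, as in Section 6):
server {
  listen 80;
  listen [::]:80;
  server_name plausible.{{tutorial.domain}};

  location / {
    proxy_pass http://127.0.0.1:8000;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }
}

sudo ln -s /etc/nginx/sites-available/plausible.{{tutorial.domain}} /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx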

6) Configure NGINX Reverse Proxy

⚠️ Important: Follow this order!
  1. Create prerequisite files (snippets, conf.d)
  2. Create application directory
  3. Create HTTP-only NGINX config (for Certbot)
  4. Run Certbot to get SSL certificates
  5. Update to full HTTPS config

Skipping steps will cause nginx -t to fail with "file not found" or "no ssl_certificate" errors.

A) Create Prerequisite Files First

These files must exist BEFORE creating the main NGINX config:

# 1. Create WebSocket upgrade map (required for WebSocket support)
sudo nano /etc/nginx/conf.d/connection_upgrade.conf
# Paste this content:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
# 2. Create admin protection snippet (required by main config)
sudo mkdir -p /etc/nginx/snippets
sudo nano /etc/nginx/snippets/admin-tailscale-only.conf
# Paste this content:
allow {{tutorial.tailscaleip}};  # Your deploy laptop's Tailscale IP
deny all;
# 3. Create proxy snippet (DRY — reused in every location block)
sudo nano /etc/nginx/snippets/proxy_nuxt.conf
# Paste this content:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

proxy_redirect off;
proxy_read_timeout 60s;
# 4. Create rate limit zones (one file, one place — don't duplicate in nginx.conf)
sudo nano /etc/nginx/conf.d/00-rate-limit.conf
# Paste this content:
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m   rate=2r/s;
# 5. Create Let's Encrypt webroot directory
sudo mkdir -p /var/www/letsencrypt/.well-known/acme-challenge

# 6. Create application directory
sudo mkdir -p /var/www/{{tutorial.domain}}
sudo chown -R $USER:$USER /var/www/{{tutorial.domain}}

# 7. Create maintenance page directory
sudo mkdir -p /var/www/maintenance
sudo mkdir -p /var/www/maintenance

B) Create HTTP-Only Config (Pre-SSL)

Start with HTTP only — Certbot needs this to validate your domain:

sudo nano /etc/nginx/sites-available/{{tutorial.domain}}
# === STAGE 1: HTTP-ONLY CONFIG (use until Certbot runs) ===

upstream nuxt_app {
  server 127.0.0.1:{{tutorial.appport}};
  keepalive 64;
}

server {
  listen 80;
  listen [::]:80;
  server_name {{tutorial.domain}} {{tutorial.domain2}};

  # Let's Encrypt validation (REQUIRED for Certbot)
  location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
  }

  # Temporary: proxy to app (will become redirect after SSL)
  location / {
    proxy_pass http://nuxt_app;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/{{tutorial.domain}} /etc/nginx/sites-enabled/
# Remove default site (optional)
sudo rm -f /etc/nginx/sites-enabled/default

# Test configuration - should pass now!
sudo nginx -t

# Reload NGINX
sudo systemctl reload nginx
✅ Checkpoint: At this point, sudo nginx -t should succeed. If it fails, check that:
  • /etc/nginx/conf.d/connection_upgrade.conf exists
  • /etc/nginx/conf.d/00-rate-limit.conf exists
  • /etc/nginx/snippets/admin-tailscale-only.conf exists
  • /etc/nginx/snippets/proxy_nuxt.conf exists
  • There are no SSL blocks in your config yet
502 Bad Gateway? That's expected at this stage!

NGINX is proxying to 127.0.0.1:{{tutorial.appport}}, but your Nuxt app isn't running yet. You'll see 502 Bad Gateway until you deploy and start your app (Section 7-9).

To verify NGINX → upstream wiring works before deploying, start a temporary test server:

# On the VPS — start a quick test server:
node -e "require('http').createServer((req,res)=>res.end('OK from {{tutorial.appport}}\\n')).listen({{tutorial.appport}},'127.0.0.1'); console.log('listening')"

# From your laptop — test directly (bypassing any CDN/proxy):
curl -v http://{{tutorial.ipaddress}} -H 'Host: {{tutorial.domain}}'
# Should return "OK from {{tutorial.appport}}"

# Stop the test server with Ctrl+C when done
Using Cloudflare? Point DNS directly to your VPS first.

If your domain's DNS is proxied through Cloudflare (orange cloud icon), curl http://{{tutorial.domain}} will hit Cloudflare's servers, not your VPS — and Let's Encrypt HTTP-01 validation will fail.

Before running Certbot:

  1. In Cloudflare DNS → change your A record to "DNS only" (grey cloud, not orange)
  2. Disable "Always Use HTTPS" and any redirect rules in Cloudflare
  3. Wait a few minutes, then verify:
    dig +short A {{tutorial.domain}}
    # Should show your VPS IP: {{tutorial.ipaddress}}
    # NOT Cloudflare IPs (188.114.x.x, 104.x.x.x, etc.)
  4. Only then run sudo certbot --nginx

You can re-enable Cloudflare proxy after Certbot succeeds if you want Cloudflare's CDN/DDoS protection, but for SSL setup the domain must point directly to your VPS.

Testing Without DNS? Edit Your Hosts File (Optional)
When to use this:

If your domain's DNS isn't configured yet, you can temporarily edit your local hosts file to test that NGINX is working. This maps the domain to your server's IP on your machine only.

⚠️ Limitations: This is local-only testing. Certbot/SSL still requires real DNS because Let's Encrypt needs to verify domain ownership via public DNS.

Linux / macOS

# Edit hosts file
sudo nano /etc/hosts

# Add a line mapping your domain to the server IP:
{{tutorial.ipaddress}}  {{tutorial.domain}}  {{tutorial.domain2}}

# Save and exit (Ctrl+O, Enter, Ctrl+X in nano)

# Test - should now connect to your server
curl http://{{tutorial.domain}}
# Or open http://{{tutorial.domain}} in your browser

Windows 10/11

# 1. Open Notepad as Administrator:
#    - Press Win key, type "Notepad"
#    - Right-click → "Run as administrator"

# 2. In Notepad: File → Open, navigate to:
#    C:\Windows\System32\drivers\etc\hosts
#    (Change file filter from "*.txt" to "All Files")

# 3. Add a line at the bottom:
{{tutorial.ipaddress}}  {{tutorial.domain}}  {{tutorial.domain2}}

# 4. Save the file

# 5. Flush DNS cache (in Command Prompt as Admin):
ipconfig /flushdns

Find Your Server's IP

# On the VPS, if you don't know the public IP:
curl -4 ifconfig.me

# Or check network interface:
ip addr show eth0 | grep "inet " | awk '{print $2}' | cut -d/ -f1
Remember to remove the hosts entry after DNS is configured, otherwise your machine will always use the hardcoded IP even if the server moves.

C) Get SSL Certificate (Run Certbot Now)

With HTTP config working, get your SSL certificate:

# Install Certbot
sudo apt install -y certbot python3-certbot-nginx

# Get certificate (Certbot will modify your NGINX config)
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}}
# Follow prompts:
# - Enter email address
# - Agree to terms
# - Choose redirect option: Yes (redirect HTTP to HTTPS)

# Verify SSL works
sudo nginx -t
sudo systemctl reload nginx
Certbot will mangle your Stage 1 config!

Certbot injects SSL directives (listen 443, ssl_certificate, etc.) directly into your Stage 1 config. The result looks messy — SSL blocks appear in odd places, return 301 gets mixed with listen 443, etc. This is expected. Don't try to fix it — immediately replace the entire file with the clean Stage 2 config below.

ERR_QUIC_PROTOCOL_ERROR in Chrome?

If Chrome shows "ERR_QUIC_PROTOCOL_ERROR" after enabling SSL, this is because Chrome tries QUIC (HTTP/3 over UDP) and caches the attempt. NGINX doesn't support QUIC/HTTP3 by default. Fix: open chrome://flags/#enable-quic and set it to "Disabled", or just wait — Chrome will fall back to HTTP/2 after the QUIC attempt times out. This is a browser issue, not a server issue.

D) Create Maintenance Page

Create a maintenance page before the full config — it's referenced by the error_page directive:

sudo nano /var/www/maintenance/index.html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Maintenance</title>
  <style>
    body { font-family: system-ui, sans-serif; display: flex; justify-content: center;
           align-items: center; min-height: 100vh; margin: 0; background: #f5f5f5; }
    .box { text-align: center; padding: 2rem; }
    h1 { color: #333; }
    p { color: #666; }
  </style>
</head>
<body>
  <div class="box">
    <h1>Under Maintenance</h1>
    <p>We'll be back shortly. Please try again in a few minutes.</p>
  </div>
</body>
</html>
UTF-8 matters!

Always include <meta charset="utf-8"> in your HTML. Without it, non-ASCII characters like the ellipsis (…) and em-dashes render as mojibake (for example, "Deploying…" comes out as "Deployingâ€¦"). Use plain ASCII in maintenance pages to be safe — write "..." instead of the Unicode ellipsis character.

E) Upgrade to Full Production Config (Post-SSL)

After Certbot succeeds, replace the entire file with the clean config including caching, security headers, and maintenance mode:

sudo nano /etc/nginx/sites-available/{{tutorial.domain}}
# === STAGE 2: FULL HTTPS CONFIG (after Certbot) ===
# Replace the ENTIRE file content with this clean config.

upstream nuxt_app {
  server 127.0.0.1:{{tutorial.appport}};
  keepalive 64;
}

# --- Maintenance mode toggle ---
# To enable maintenance: sudo touch /var/www/maintenance/enabled
# To disable:            sudo rm /var/www/maintenance/enabled

# HTTP -> HTTPS redirect (keep ACME for cert renewals)
server {
  listen 80;
  listen [::]:80;
  server_name {{tutorial.domain}} {{tutorial.domain2}};

  location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
  }

  return 301 https://$host$request_uri;
}

# Main HTTPS server
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name {{tutorial.domain}};

  # SSL certs (adjust path if Certbot used a different name)
  ssl_certificate /etc/letsencrypt/live/{{tutorial.domain}}/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/{{tutorial.domain}}/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  # Security headers (keep CSP in-app via Nuxt)
  add_header X-Frame-Options "SAMEORIGIN" always;
  add_header X-Content-Type-Options "nosniff" always;
  add_header Referrer-Policy "strict-origin-when-cross-origin" always;
  add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
  add_header X-XSS-Protection "0" always;

  # HSTS — avoid 'preload' until all subdomains are HTTPS forever
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

  # Root for static files (Nuxt .output/public)
  root /var/www/{{tutorial.domain}}/app/.output/public;

  # --- Maintenance mode: if /var/www/maintenance/enabled exists, show maintenance page ---
  error_page 503 @maintenance;
  location @maintenance {
    root /var/www/maintenance;
    try_files /index.html =503;
    internal;
  }
  set $maintenance 0;
  if (-f /var/www/maintenance/enabled) {
    set $maintenance 1;
  }

  # Protect /admin/analytics (VPN-only)
  location ^~ /admin/analytics {
    include /etc/nginx/snippets/admin-tailscale-only.conf;
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }

  # Nuxt build assets (content-hashed — cache forever)
  location ^~ /_nuxt/ {
    try_files $uri =404;
    expires 1y;
    add_header Cache-Control "public, max-age=31536000, immutable" always;
    access_log off;
  }

  # Static assets (not always fingerprinted)
  location ~* \.(webp|jpg|jpeg|png|gif|ico|svg)$ {
    try_files $uri @app;
    expires 30d;
    add_header Cache-Control "public, max-age=2592000" always;
    access_log off;
  }
  location ~* \.(woff|woff2|ttf|eot|otf)$ {
    try_files $uri @app;
    expires 1y;
    add_header Cache-Control "public, max-age=31536000" always;
    access_log off;
  }
  location ~* \.(css|js)$ {
    try_files $uri @app;
    expires 7d;
    add_header Cache-Control "public, max-age=604800" always;
    access_log off;
  }

  # Fallback: proxy to Nuxt app
  location @app {
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }

  # Login/auth pages (stricter rate limit)
  location ~ ^/(login|auth) {
    if ($maintenance) { return 503; }
    limit_req zone=login burst=5 nodelay;
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }

  # Default: proxy everything else to Nuxt
  location / {
    if ($maintenance) { return 503; }
    limit_req zone=general burst=20 nodelay;
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;

    # If upstream is down and maintenance is NOT enabled, show 502
    error_page 502 504 =502 /50x-fallback.html;
  }

  # Health check (bypasses maintenance mode)
  location = /health {
    access_log off;
    return 200 "healthy\n";
    add_header Content-Type text/plain;
  }

  # Deny access to hidden files (.env, .git, etc.)
  location ~ /\. {
    deny all;
    access_log off;
    log_not_found off;
  }
}

# Redirect www -> non-www (only if using www subdomain)
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name {{tutorial.domain2}};

  ssl_certificate /etc/letsencrypt/live/{{tutorial.domain}}/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/{{tutorial.domain}}/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  return 301 https://{{tutorial.domain}}$request_uri;
}
# Test and reload
sudo nginx -t
sudo systemctl reload nginx

Note: If Certbot created different SSL paths, check with sudo certbot certificates and adjust the paths in your config accordingly.

Maintenance mode usage:
# Enable maintenance mode (shows maintenance page to all visitors):
sudo touch /var/www/maintenance/enabled
sudo systemctl reload nginx

# Disable maintenance mode (back to normal):
sudo rm /var/www/maintenance/enabled
sudo systemctl reload nginx

The /health endpoint always responds (even during maintenance) so monitoring tools keep working. The maintenance page is served directly by NGINX — no Nuxt app needed.
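To confirm the toggle behaves as described, a quick check from your laptop (a sketch using curl):

# With maintenance enabled, the site returns 503 but /health stays 200
curl -s -o /dev/null -w "%{http_code}\n" https://{{tutorial.domain}}/
curl -s https://{{tutorial.domain}}/health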

Why include /etc/nginx/snippets/proxy_nuxt.conf everywhere?

Instead of repeating 8 lines of proxy headers in every location block, we put them in one file. If you need to change a header (e.g., add proxy_read_timeout), you change it once.

Multi-Domain Setup (same Nuxt app, different branding)
When to use this:

If you have multiple domains (e.g., {{tutorial.domain}} + {{tutorial.domain2}}) pointing to the same Nuxt app, and your app switches logos/content based on the Host header. Each domain keeps its own URL — no redirect.

Step 1: Get SSL certs for the extra domain

# Add the extra domain to your existing certificate:
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}}

# Or get a separate certificate:
sudo certbot --nginx -d {{tutorial.domain2}}

Step 2: Create a separate site config

Keep each domain in its own config file — more flexible if you later move a domain to another server.

sudo nano /etc/nginx/sites-available/{{tutorial.domain2}}
# {{tutorial.domain2}} — proxies to the SAME Nuxt app as {{tutorial.domain}}
# The Nuxt app reads the Host header to switch branding.

server {
  listen 80;
  listen [::]:80;
  server_name {{tutorial.domain2}};

  location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
  }

  return 301 https://{{tutorial.domain2}}$request_uri;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name {{tutorial.domain2}};

  ssl_certificate /etc/letsencrypt/live/{{tutorial.domain2}}/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/{{tutorial.domain2}}/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  # Same security headers
  add_header X-Frame-Options "SAMEORIGIN" always;
  add_header X-Content-Type-Options "nosniff" always;
  add_header Referrer-Policy "strict-origin-when-cross-origin" always;
  add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
  add_header X-XSS-Protection "0" always;
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

  # Same root — Nuxt build assets are shared
  root /var/www/{{tutorial.domain}}/app/.output/public;

  # Maintenance mode (same toggle file)
  error_page 503 @maintenance;
  location @maintenance {
    root /var/www/maintenance;
    try_files /index.html =503;
    internal;
  }
  set $maintenance 0;
  if (-f /var/www/maintenance/enabled) {
    set $maintenance 1;
  }

  # Same static asset caching
  location ^~ /_nuxt/ {
    try_files $uri =404;
    expires 1y;
    add_header Cache-Control "public, max-age=31536000, immutable" always;
    access_log off;
  }

  location @app {
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }

  location ~ ^/(login|auth) {
    if ($maintenance) { return 503; }
    limit_req zone=login burst=5 nodelay;
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }

  location / {
    if ($maintenance) { return 503; }
    limit_req zone=general burst=20 nodelay;
    proxy_pass http://nuxt_app;
    include /etc/nginx/snippets/proxy_nuxt.conf;
  }

  location = /health {
    access_log off;
    return 200 "healthy\n";
    add_header Content-Type text/plain;
  }

  location ~ /\. {
    deny all;
    access_log off;
    log_not_found off;
  }
}
# Enable the site and reload
sudo ln -s /etc/nginx/sites-available/{{tutorial.domain2}} /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Key point: Both configs use the same upstream nuxt_app (defined in your primary domain's config). The Host header is passed through via proxy_nuxt.conf, so your Nuxt app receives Host: {{tutorial.domain2}} vs Host: {{tutorial.domain}} and can switch branding accordingly.
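To verify the Host pass-through before adding any branding logic, you can reuse the temporary Node test server idea from Section 6B, here echoing the Host header back (a sketch; stop your real app first if it already occupies the port):

# On the VPS: temporary server that echoes the Host header it receives
node -e "require('http').createServer((req,res)=>res.end('Host: '+req.headers.host+'\\n')).listen({{tutorial.appport}},'127.0.0.1')"

# From your laptop: each domain should echo its own hostname
curl -s https://{{tutorial.domain}}/
curl -s https://{{tutorial.domain2}}/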
Separate configs vs one config?

Keeping each domain in its own file is recommended:

  • Easy to move a domain to another server later (just copy the file)
  • Can disable a domain without affecting others (rm /etc/nginx/sites-enabled/{{tutorial.domain2}})
  • Each domain can have its own SSL cert, rate limits, or special rules
  • Cleaner git history when changes are domain-specific
F) NGINX Performance Tuning (Optional)

These optimizations improve response times and reduce bandwidth. Add them to your main NGINX config.

Enable Gzip Compression

# Edit main NGINX config
sudo nano /etc/nginx/nginx.conf
# Find and uncomment/add in the http { } block:

##
# Gzip Settings
##
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
    application/atom+xml
    application/geo+json
    application/javascript
    application/json
    application/ld+json
    application/manifest+json
    application/rdf+xml
    application/rss+xml
    application/xhtml+xml
    application/xml
    font/eot
    font/otf
    font/ttf
    image/svg+xml
    text/css
    text/javascript
    text/plain
    text/xml;

Enable SSL Stapling

OCSP stapling improves SSL handshake performance. Certbot's options-ssl-nginx.conf may already include this, but you can add it to your server block if not:

# Add inside your server { listen 443 ssl ... } block:

# OCSP Stapling (faster SSL handshakes)
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;

Verify and Reload

# Test configuration
sudo nginx -t

# Reload NGINX
sudo systemctl reload nginx

# Test gzip is working (look for "Content-Encoding: gzip")
curl -H "Accept-Encoding: gzip" -I https://{{tutorial.domain}}
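Similarly, you can confirm whether a stapled OCSP response is being served (the first staple can take a request or two to appear; "no response sent" means stapling is not active yet):

echo | openssl s_client -connect {{tutorial.domain}}:443 -servername {{tutorial.domain}} -status 2>/dev/null | grep -i "OCSP"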
What's already optimized in our config:
  • HTTP/2 — enabled via listen 443 ssl http2
  • Static file caching — configured for images, fonts, CSS/JS
  • Security headers — X-Frame-Options, X-Content-Type-Options, HSTS, Referrer-Policy
  • Connection keepalive — via upstream keepalive 64

Advanced: API Microcaching (Optional)

For high-traffic APIs with responses that don't change every request, microcaching can dramatically reduce load:

# Add to http { } block in nginx.conf:
proxy_cache_path /tmp/cache-{{tutorial.appname}} levels=1:2 keys_zone=microcache:10m max_size=100m inactive=1h use_temp_path=off;

# Add to your site config, in specific location blocks:
location /api/ {
    # Cache for 1 second (microcache)
    proxy_cache microcache;
    proxy_cache_valid 200 1s;
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;
    proxy_cache_lock on;

    # Show cache status in response header (for debugging)
    add_header X-Cache-Status $upstream_cache_status;

    proxy_pass http://nuxt_app;
    # ... other proxy settings
}
⚠️ Microcaching caveats:
  • Only use for endpoints that return the same data for all users
  • Don't use for authenticated/personalized responses
  • Test thoroughly — caching bugs can be hard to debug
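To confirm the microcache is working, hit a cached endpoint twice and watch the X-Cache-Status header added above. /api/some-public-endpoint is a placeholder; use a real, non-personalized endpoint from your app:

curl -s -o /dev/null -D - https://{{tutorial.domain}}/api/some-public-endpoint | grep -i x-cache-status   # first request: MISS
curl -s -o /dev/null -D - https://{{tutorial.domain}}/api/some-public-endpoint | grep -i x-cache-status   # repeated within 1s: HIT (or UPDATING)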

7) Deploy Nuxt Application

A) Create Secure Directory Structure

Create the directory structure that separates code from persistent data:

# Create shared directory for persistent files
# (app/ will be created by git clone or rsync in the next step)
sudo mkdir -p /var/www/{{tutorial.domain}}/shared
sudo mkdir -p /var/www/{{tutorial.domain}}/logs/archives

# Set ownership to your user (so you can deploy without sudo)
sudo chown -R $USER:$USER /var/www/{{tutorial.domain}}

# Lock shared/ directory — prevents other local users from listing its contents
chmod 700 /var/www/{{tutorial.domain}}/shared

# Verify structure and permissions
tree -apug --dirsfirst -L 2 /var/www/{{tutorial.domain}}
Don't have tree? Install it: sudo apt install -y tree. The flags: -a (show hidden), -p (permissions), -u (owner), -g (group), --dirsfirst, -L 2 (depth).

Your server will have this structure after deployment:

/var/www/{{tutorial.domain}}/
├── [drwxr-xr-x] app/                    # Created by git clone or rsync (Step B)
│   ├── .output/            # Nuxt build output
│   ├── prisma/
│   ├── package.json
│   ├── pnpm-lock.yaml
│   ├── ecosystem.config.cjs
│   └── .env -> ../shared/.env   # Symlink (Step E)
├── [drwx------] shared/                 # chmod 700 — only your user can access
│   ├── [-rw-------] .env                # chmod 600 — secrets, NEVER in git
│   └── [-rw-------] db.sqlite           # chmod 600 — created by Prisma migrate
└── [drwxr-xr-x] logs/                   # PM2 logs (optional)
    └── archives/
Why chmod 700 on shared/?

Even if .env is chmod 600, a 755 directory still leaks file names (.env, db.sqlite) to other local users, and if you accidentally create a file with weaker permissions it's immediately exposed. 700 is the clean default when everything runs as a single user.
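You can verify the lockdown by trying to list shared/ as a different local user (the nobody account exists by default on Ubuntu):

sudo -u nobody ls /var/www/{{tutorial.domain}}/shared
# Expected: ls: cannot open directory ... Permission denied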

PM2 runs as {{tutorial.sudouser}} — no dedicated user needed.

NVM, Node, and PM2 are all installed under your sudo user. The app reads .env via symlink and accesses db.sqlite via the absolute DATABASE_URL path. Both live in shared/ and survive redeployments (replacing app/ contents).

Verify everything is accessible (run as {{tutorial.sudouser}}, no sudo):

# Should print "readable" / "writable" — if not, check ownership
test -r /var/www/{{tutorial.domain}}/shared/.env && echo ".env readable"
test -w /var/www/{{tutorial.domain}}/shared/db.sqlite && echo "db writable" || echo "db not yet created (normal before first migration)"

B) Upload or Clone Your Project

Option 1: Clone from Git (git was installed in Section 1)

# On the VPS, clone directly into the app/ directory:
cd /var/www/{{tutorial.domain}}
git clone https://github.com/yourusername/{{tutorial.appname}}.git app

# "app" at the end names the target directory (instead of the repo name)
# No chown needed — you own the parent directory (set in Step A),
# so app/ and all its contents are automatically owned by your user

Option 2: Upload deploy folder (recommended for pre-built apps)

Local folder structure: Before running these commands, make sure your local {{tutorial.deploydir}}/ folder contains your built Nuxt app:
your-project/
└── {{tutorial.deploydir}}/
    ├── .output/          # Nuxt build output
    ├── prisma/
    │   └── schema.prisma # Database schema (NO db.sqlite!)
    ├── package.json
    ├── pnpm-lock.yaml
    └── ecosystem.config.cjs

⚠️ Security: Do NOT include .env or db.sqlite in deploy folder. These are created/managed on the server in the shared/ directory.

# Run these commands FROM YOUR LOCAL MACHINE (laptop/WSL2)
# First, cd to the PARENT folder of your deploy directory:
cd /path/to/your-project

# Using rsync (recommended - creates app/ directory automatically):
rsync -avz --progress {{tutorial.deploydir}}/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/

# Or using scp (need to create app/ directory first on server):
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}} "mkdir -p /var/www/{{tutorial.domain}}/app"
scp -r {{tutorial.deploydir}}/* {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/
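Tip: add -n (--dry-run) to rsync first to preview what would be transferred without copying anything:

rsync -avzn --progress {{tutorial.deploydir}}/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/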

C) Required Files in Deploy Folder

Ensure these files/folders are present in your deploy (same layout as the local structure shown in Step B):

  • .output/ (Nuxt build output: server bundle + public assets)
  • prisma/schema.prisma (plus prisma/migrations/ if you use prisma migrate)
  • package.json and pnpm-lock.yaml (needed for pnpm install --frozen-lockfile)
  • ecosystem.config.cjs (PM2 process definition)

NOT in deploy folder (security):
  • .env — Created on server in shared/ directory
  • db.sqlite — Created on server, persists across deployments
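A quick way to confirm everything required landed on the server (any missing item will show an error):

cd /var/www/{{tutorial.domain}}/app
ls -d .output prisma/schema.prisma package.json pnpm-lock.yaml ecosystem.config.cjs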

D) Configure Environment (in shared/)

# Create .env in shared directory (NOT in app directory!)
nano /var/www/{{tutorial.domain}}/shared/.env

Example production .env:

# Database (points to shared directory)
DATABASE_URL="file:/var/www/{{tutorial.domain}}/shared/db.sqlite"

# Session (generate with: openssl rand -hex 32)
NUXT_SESSION_PASSWORD="your-64-character-hex-password-here"

# Production URL
NUXT_PUBLIC_SITE_URL="https://{{tutorial.domain}}"

# Node environment
NODE_ENV="production"

# Optional: OAuth providers
# NUXT_OAUTH_GITHUB_CLIENT_ID="..."
# NUXT_OAUTH_GITHUB_CLIENT_SECRET="..."

# Lock down permissions (owner-only access)
chmod 700 /var/www/{{tutorial.domain}}/shared
chmod 600 /var/www/{{tutorial.domain}}/shared/.env
# db.sqlite will be locked after first migration (Step 8B)
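Generate the session secret referenced above and double-check the permissions you just set:

# Generate a 64-character hex secret for NUXT_SESSION_PASSWORD
openssl rand -hex 32

# Verify: should show 700 for shared/ and 600 for .env
stat -c '%a %U %n' /var/www/{{tutorial.domain}}/shared /var/www/{{tutorial.domain}}/shared/.env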

E) Create Symlink for .env

Link the shared .env to your app directory so the app can find it:

# Create symlink from app to shared .env
ln -sf /var/www/{{tutorial.domain}}/shared/.env /var/www/{{tutorial.domain}}/app/.env

# Verify symlink
ls -la /var/www/{{tutorial.domain}}/app/.env
# Should show: .env -> /var/www/{{tutorial.domain}}/shared/.env
Why symlinks?
  • Your .env survives when you redeploy (replace app/ contents)
  • Your database persists across deployments
  • Secrets are never in your deploy artifacts or git history
F) Store .env Files Securely (Local Development)
Best Practice: Store your .env files outside your project directory. This prevents accidental commits and keeps secrets separate from code.

Recommended Directory Structure

# Create a keys directory for all your project secrets
# Windows:
C:\keys\
├── {{tutorial.appname}}\
│   └── .env           # Dev/staging secrets for this project
├── another-project\
│   └── .env
└── ssh\               # Optional: backup of SSH keys

# Linux/macOS:
~/keys/
├── {{tutorial.appname}}/
│   └── .env
└── another-project/
    └── .env

Configure Your App to Use External .env

# Option 1: Use --dotenv flag with pnpm/npm
pnpm dev --dotenv C:/keys/{{tutorial.appname}}/.env         # Windows
pnpm dev --dotenv ~/keys/{{tutorial.appname}}/.env          # Linux/Mac

# Option 2: Set in package.json scripts
"scripts": {
  "dev": "nuxt dev --dotenv ../keys/.env",
  "dev:local": "nuxt dev --dotenv C:/keys/{{tutorial.appname}}/.env"
}

# Option 3: Symlink (Linux/Mac only)
ln -s ~/keys/{{tutorial.appname}}/.env .env

Benefits

  • Can't accidentally commit — .env is physically outside the git repo
  • Survives project deletion — secrets persist if you delete/reclone
  • Easy to backup — one keys/ folder for all projects
  • Clear separation — code vs secrets are in different locations
Still add .env to .gitignore: Even with external storage, keep .env in .gitignore as a safety net in case someone creates a local .env file.
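One-liner to keep that safety net in place (run from your project root; creates .gitignore if it doesn't exist yet):

grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore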

8) Install Dependencies and Setup Database

A) Install Production Dependencies

# Navigate to app directory
cd /var/www/{{tutorial.domain}}/app

# Ensure correct Node version is active
nvm use 24

# Install dependencies (frozen lockfile for reproducibility)
pnpm install --frozen-lockfile

# Or for production-only dependencies:
# pnpm install --prod --frozen-lockfile

B) Setup Prisma and Database

# Generate Prisma Client
pnpm dlx prisma generate

# Run database migrations (creates db.sqlite in shared/ via DATABASE_URL)
pnpm dlx prisma migrate deploy

# Optional: Seed database with initial data
pnpm db:seed

# Verify database connectivity
pnpm run preflight:db

# Verify database was created in shared directory
ls -la /var/www/{{tutorial.domain}}/shared/db.sqlite

# Lock down database file (owner read/write only)
chmod 600 /var/www/{{tutorial.domain}}/shared/db.sqlite
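Optional: peek at the tables Prisma just created. This assumes the sqlite3 CLI is installed (sudo apt install -y sqlite3):

sqlite3 /var/www/{{tutorial.domain}}/shared/db.sqlite ".tables"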

C) Build Application (if not pre-built)

# If you cloned from Git and need to build:
pnpm build

# Verify build succeeded
ls -lh .output/server/index.mjs

D) Run Preflight Checks

# Verify runtime dependencies
pnpm run preflight

# Check database connectivity
pnpm run preflight:db

# Both should complete without errors

9) Start Application with PM2

A) Initial Start

# Navigate to app directory
cd /var/www/{{tutorial.domain}}/app

# Ensure correct Node version
nvm use 24

# Delete existing PM2 process if it exists
pm2 delete {{tutorial.appname}} || true

# Start application using ecosystem config
pm2 start ecosystem.config.cjs --env production

# Save PM2 process list (survives reboots)
pm2 save

# View logs
pm2 logs {{tutorial.appname}} --lines 200

B) Verify Application is Running

# Check PM2 status
pm2 list

# Test application locally
curl -i http://127.0.0.1:{{tutorial.appport}}/

# Should return 200 OK with HTML content

# Monitor application
pm2 monit

C) PM2 Management Commands

# Reload app after code changes (zero-downtime)
pm2 reload ecosystem.config.cjs --env production --update-env

# Restart app (brief downtime)
pm2 restart {{tutorial.appname}}

# Stop app
pm2 stop {{tutorial.appname}}

# View real-time logs
pm2 logs {{tutorial.appname}} --lines 100

# Clear logs
pm2 flush

# Application info
pm2 info {{tutorial.appname}}

Multiple Node Versions: If you run multiple apps on different Node versions, start each one with the right version active: nvm use 20 && pm2 start app1.js, then nvm use 24 && pm2 start app2.js. PM2 records the Node binary in use when each app is started; to be explicit (and survive NVM upgrades), set interpreter to the full Node path in each app's ecosystem config.

Advanced: Release-Based Deployment (Zero-Downtime)
When to use this: For production sites with CI/CD pipelines where you need instant rollback, zero-downtime deploys, and audit trails of previous releases.

Quick Setup Script

Download and run the setup script to create the full directory structure with correct permissions in one go:

# Upload the script to your VPS (from your laptop):
scp setup-release-dirs.sh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:~/

# On the VPS — run it:
chmod +x ~/setup-release-dirs.sh
./setup-release-dirs.sh {{tutorial.domain}} {{tutorial.sudouser}}
What the script does: Creates releases/, shared/ (chmod 700), logs/archives/, maintenance page, template .env (chmod 600), sets ownership, and migrates from app/ layout if it exists. Safe to re-run.

Release-Based Directory Structure

Instead of deploying directly to app/, deploy to timestamped release directories:

/var/www/{{tutorial.domain}}/
├── releases/                    # Versioned releases
│   ├── 20260125_143052_abc123/  # Timestamp + commit hash
│   ├── 20260124_092130_def456/
│   └── 20260123_181500_ghi789/
├── current -> releases/20260125.../  # Symlink to active release
├── [drwx------] shared/              # chmod 700 — only deploy user
│   ├── [-rw-------] .env             # chmod 600 — secrets
│   ├── [-rw-------] db.sqlite        # chmod 600 — database
│   └── images/                       # User uploads (if any)
└── logs/                             # Persistent logs
    ├── err.log
    ├── out.log
    └── archives/                     # Per-release log archives

Manual Setup (if not using the script above)

# Create release directories
sudo mkdir -p /var/www/{{tutorial.domain}}/releases
sudo mkdir -p /var/www/{{tutorial.domain}}/shared
sudo mkdir -p /var/www/{{tutorial.domain}}/logs/archives

# Set ownership and lock down shared/
sudo chown -R $USER:$USER /var/www/{{tutorial.domain}}
chmod 700 /var/www/{{tutorial.domain}}/shared

# Migrate from simple app/ structure (if upgrading)
# mv /var/www/{{tutorial.domain}}/app /var/www/{{tutorial.domain}}/releases/initial
# ln -sfn /var/www/{{tutorial.domain}}/releases/initial /var/www/{{tutorial.domain}}/current

Deploy Script Example

#!/bin/bash
# deploy.sh - Run from CI/CD or locally
set -e

DOMAIN="{{tutorial.domain}}"
BASE_DIR="/var/www/$DOMAIN"
RELEASE_ID="$(date +%Y%m%d_%H%M%S)_$(git rev-parse --short HEAD)"
RELEASE_DIR="$BASE_DIR/releases/$RELEASE_ID"

echo "Deploying release: $RELEASE_ID"

# 1. Create release directory
mkdir -p "$RELEASE_DIR"

# 2. Copy/upload application files (adjust for your setup)
rsync -avz --exclude='.env' --exclude='node_modules' ./ "$RELEASE_DIR/"

# 3. Link shared resources
ln -sfn "$BASE_DIR/shared/.env" "$RELEASE_DIR/.env"
ln -sfn "$BASE_DIR/shared/db.sqlite" "$RELEASE_DIR/prisma/db.sqlite"
# ln -sfn "$BASE_DIR/shared/images" "$RELEASE_DIR/.data/images"  # If using uploads

# 4. Install dependencies
cd "$RELEASE_DIR"
pnpm install --frozen-lockfile --prod

# 5. Generate Prisma client
pnpm dlx prisma generate

# 6. Run migrations (if any)
pnpm dlx prisma migrate deploy

# 6b. Lock db.sqlite after first migration creates it
chmod 600 "$BASE_DIR/shared/db.sqlite" 2>/dev/null || true

# 7. Archive current logs (before switching)
if [ -L "$BASE_DIR/current" ]; then
  PREV_RELEASE=$(basename $(readlink "$BASE_DIR/current"))
  mkdir -p "$BASE_DIR/logs/archives/$PREV_RELEASE"
  cp "$BASE_DIR/logs"/*.log "$BASE_DIR/logs/archives/$PREV_RELEASE/" 2>/dev/null || true
fi

# 8. Switch symlink (atomic operation = zero downtime)
ln -sfn "$RELEASE_DIR" "$BASE_DIR/current"

# 9. Reload PM2 (zero-downtime with cluster mode)
pm2 reload ecosystem.config.cjs --update-env

# 10. Health check
sleep 3
curl -sf http://127.0.0.1:{{tutorial.appport}}/health || echo "Warning: Health check failed"

# 11. Cleanup old releases (keep last 5)
cd "$BASE_DIR/releases"
ls -t | tail -n +6 | xargs -r rm -rf

echo "✅ Deployed: $RELEASE_ID"

Instant Rollback

# List available releases
ls -la /var/www/{{tutorial.domain}}/releases/

# Rollback to previous release
PREV_RELEASE=$(ls -t /var/www/{{tutorial.domain}}/releases/ | sed -n '2p')
ln -sfn /var/www/{{tutorial.domain}}/releases/$PREV_RELEASE /var/www/{{tutorial.domain}}/current
pm2 reload ecosystem.config.cjs --update-env

# Verify rollback
readlink /var/www/{{tutorial.domain}}/current
Update NGINX + PM2 paths when switching to release-based layout!

The simple layout uses app/. The release layout uses current/ (a symlink to the active release). After migrating, update these references:

What to update (simple layout app/ → release layout current/):
  • NGINX root: /var/www/{{tutorial.domain}}/app/.output/public → /var/www/{{tutorial.domain}}/current/.output/public
  • PM2 cwd: /var/www/{{tutorial.domain}}/app → /var/www/{{tutorial.domain}}/current
  • .env symlink: app/.env -> ../shared/.env → created per release by the deploy script
# After updating NGINX config:
sudo nginx -t && sudo systemctl reload nginx

# After updating ecosystem.config.cjs:
cd /var/www/{{tutorial.domain}}/current
pm2 delete {{tutorial.appname}} || true
pm2 start ecosystem.config.cjs --env production
pm2 save

Update ecosystem.config.cjs for Release Layout + Cluster Mode

// For zero-downtime reloads, use cluster mode
// cwd points to the current symlink — PM2 follows it on reload
module.exports = {
  apps: [{
    name: "{{tutorial.appname}}",
    cwd: "/var/www/{{tutorial.domain}}/current",
    script: ".output/server/index.mjs",
    instances: 2,                    // Cluster mode (2+ instances)
    exec_mode: "cluster",
    env_production: {
      NODE_ENV: "production",
      PORT: {{tutorial.appport}}
    },
    // Use absolute paths for logs (outside releases)
    error_file: "/var/www/{{tutorial.domain}}/logs/err.log",
    out_file: "/var/www/{{tutorial.domain}}/logs/out.log",
    log_date_format: "YYYY-MM-DD HH:mm:ss Z",
    merge_logs: true
  }]
};
Benefits:
  • Zero-downtime — PM2 cluster reload keeps app running during deploy
  • Instant rollback — Just switch symlink, no rebuild needed
  • Audit trail — Keep last N releases for debugging
  • Atomic deploys — Symlink switch is atomic operation
  • Log preservation — Logs archived per release
10) SSL Verification and Maintenance
Note: If you followed Section 6 correctly, SSL is already configured. This section covers verification and ongoing maintenance.

A) Verify SSL Setup

# Check certificate status
sudo certbot certificates

# Test auto-renewal (dry run)
sudo certbot renew --dry-run

# Test NGINX configuration
sudo nginx -t

B) Auto-Renewal

Certbot automatically sets up a systemd timer for renewal. Verify it's active:

# Check renewal timer status
sudo systemctl status certbot.timer

# List scheduled timers
sudo systemctl list-timers | grep certbot

# Manually renew (if needed)
sudo certbot renew

# Renewal happens automatically before expiration (usually 30 days)
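Optional: when Certbot renews via the --nginx installer it reloads NGINX itself. If you ever switch to certonly/webroot renewal, a deploy hook keeps renewed certificates picked up automatically (this uses Certbot's standard renewal-hooks directory on Ubuntu):

sudo tee /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh >/dev/null <<'EOF'
#!/bin/sh
systemctl reload nginx
EOF
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh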

C) Test SSL Configuration

# Test HTTPS is working
curl -I https://{{tutorial.domain}}

# Check SSL grade (external tool)
# Visit: https://www.ssllabs.com/ssltest/analyze.html?d={{tutorial.domain}}

# Verify redirect from HTTP to HTTPS
curl -I http://{{tutorial.domain}}
# Should show: HTTP/1.1 301 Moved Permanently
# Location: https://{{tutorial.domain}}/

D) If SSL Certificate Expires or Fails

# Force renewal
sudo certbot renew --force-renewal

# If renewal fails, re-run certbot
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}}

# Check NGINX error logs
sudo tail -50 /var/log/nginx/error.log

# Check Certbot logs
sudo tail -50 /var/log/letsencrypt/letsencrypt.log

E) Add Additional Domains Later

# To add a new subdomain to existing certificate
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}} -d newsubdomain.{{tutorial.domain}}

# Or expand the existing certificate (list every name already on the cert plus the new one)
sudo certbot --nginx --expand -d {{tutorial.domain}} -d {{tutorial.domain2}} -d newsubdomain.{{tutorial.domain}}
11) Monitoring and Maintenance

A) Install Monitoring Tools

# System monitoring
sudo apt install -y htop ncdu

# Log viewing
sudo apt install -y lnav

# PM2 log rotation
pm2 install pm2-logrotate

B) Viewing Logs

# NGINX error log (check for config issues, upstream errors)
sudo tail -50 /var/log/nginx/error.log

# NGINX access log (see incoming requests)
sudo tail -50 /var/log/nginx/access.log

# Watch logs in real-time (Ctrl+C to stop)
sudo tail -f /var/log/nginx/error.log

# PM2 application logs
pm2 logs {{tutorial.appname}} --lines 100

# System logs (auth, kernel, etc.)
sudo journalctl -xe --no-pager | tail -50

# Fail2Ban log (if installed)
sudo tail -50 /var/log/fail2ban.log

# Use lnav for interactive log viewing (installed above)
sudo lnav /var/log/nginx/

C) Regular Maintenance Tasks

# Weekly: Update packages
sudo apt update && sudo apt upgrade -y

# Weekly: Check disk space
df -h
ncdu /var/www

# Weekly: Review application logs
pm2 logs --lines 100

# Monthly: Check for security updates
sudo apt list --upgradable

# Monthly: Verify SSL certificate
sudo certbot certificates

# Monthly: Review PM2 status
pm2 status

D) Automated Backups

# Create backup script
sudo nano /usr/local/bin/backup-{{tutorial.appname}}.sh
#!/bin/bash
# Backup script for {{tutorial.appname}} application

BACKUP_DIR="/backup/{{tutorial.appname}}"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Backup application files (code lives in app/)
tar -czf $BACKUP_DIR/app_$DATE.tar.gz /var/www/{{tutorial.domain}}/app/.output /var/www/{{tutorial.domain}}/app/prisma

# Backup database (lives in shared/, not in the app directory)
cp /var/www/{{tutorial.domain}}/shared/db.sqlite $BACKUP_DIR/db_$DATE.sqlite

# Backup environment file (lives in shared/)
cp /var/www/{{tutorial.domain}}/shared/.env $BACKUP_DIR/env_$DATE

# Keep only last 7 days of backups
find $BACKUP_DIR -name "app_*.tar.gz" -mtime +7 -delete
find $BACKUP_DIR -name "db_*.sqlite" -mtime +7 -delete
find $BACKUP_DIR -name "env_*" -mtime +7 -delete

echo "Backup completed: $DATE"
# Make executable
sudo chmod +x /usr/local/bin/backup-{{tutorial.appname}}.sh
# Add to crontab (daily at 2 AM)
sudo crontab -e
# Add line: 0 2 * * * /usr/local/bin/backup-{{tutorial.appname}}.sh >> /var/log/backup-{{tutorial.appname}}.log 2>&1
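Optional hardening: cp of a live SQLite file can capture a mid-write state. If you install the sqlite3 CLI (sudo apt install -y sqlite3), its .backup command takes a consistent snapshot and can replace the cp line in the script above:

sqlite3 /var/www/{{tutorial.domain}}/shared/db.sqlite ".backup '$BACKUP_DIR/db_$DATE.sqlite'"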
12) Upgrade to Ubuntu 26.04 LTS (April 2026)

Timeline: Ubuntu 26.04 LTS releases in April 2026. With the default LTS upgrade policy, do-release-upgrade won't offer the 24.04 → 26.04 upgrade until the first point release (26.04.1, typically a few months later), which also gives early bugs time to get fixed. Plan your upgrade for then.

A) Pre-Upgrade Preparation

# 1. Full backup of everything
sudo rsync -av /var/www/ /backup/www/
cp /var/www/{{tutorial.domain}}/shared/db.sqlite /backup/db-$(date +%F).sqlite
tar -czf /backup/env-$(date +%F).tar.gz /var/www/{{tutorial.domain}}/shared/.env

# 2. Update current system fully
sudo apt update && sudo apt full-upgrade -y

# 3. Remove unnecessary packages
sudo apt autoremove -y
sudo apt autoclean

# 4. Stop application
pm2 stop all

# 5. Test backup restoration (recommended)
# Restore to test server first if possible

B) Perform Upgrade

# Run release upgrade
sudo do-release-upgrade

# Follow prompts (takes 30-60 minutes)
# Say Yes to most questions
# Review config file changes carefully

# System will reboot automatically

C) Post-Upgrade Verification

# After reboot, SSH back in

# Verify Ubuntu version
lsb_release -a  # Should show 26.04

# Check NGINX
sudo systemctl status nginx
nginx -v

# Check Node.js (NVM should preserve versions)
nvm list
node --version

# Check PM2
pm2 list

# Start application
pm2 resurrect  # Restore saved PM2 processes
pm2 start all

# Verify application
curl https://{{tutorial.domain}}
pm2 logs

# Test all features thoroughly

D) If Something Goes Wrong

# If upgrade fails, you can restore from backup
# Boot from rescue mode or previous kernel
# Restore files from /backup/

# If application doesn't start:
# 1. Check PM2 logs: pm2 logs
# 2. Regenerate dependencies: rm -rf node_modules && pnpm install
# 3. Regenerate Prisma: pnpm dlx prisma generate
# 4. Check NGINX config: sudo nginx -t
13) Docker Alternative Deployment (Optional)

Docker Deployment: Instead of PM2, you can run your Nuxt app in Docker containers. Useful for isolation, easier scaling, and consistent environments.

A) Basic Docker Deployment

# Create docker-compose.yml in /var/www/{{tutorial.domain}}
# (the top-level "version:" key is obsolete in Compose v2 and can be omitted)

services:
  nuxt-app:
    image: node:24-alpine    # match the Node 24 used elsewhere in this guide
    container_name: {{tutorial.appname}}-production
    working_dir: /app
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=production
      - DATABASE_URL=file:./prisma/db.sqlite
    # corepack enable makes pnpm available inside the stock Node image
    command: sh -c "corepack enable && pnpm install --frozen-lockfile && pnpm dlx prisma generate && node .output/server/index.mjs"
    ports:
      # Bind to loopback: NGINX proxies public traffic, and ports published on 0.0.0.0 bypass UFW
      - "127.0.0.1:{{tutorial.appport}}:{{tutorial.appport}}"
    restart: unless-stopped

# Start with Docker
docker compose up -d

# View logs
docker compose logs -f

# Stop
docker compose down
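After starting the stack, confirm the container is up and the published port is only reachable on loopback (NGINX still terminates public traffic):

docker compose ps
sudo ss -ltnp | grep {{tutorial.appport}}
# Expect 127.0.0.1:{{tutorial.appport}}, not 0.0.0.0:{{tutorial.appport}}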

B) When to Use Docker vs PM2

  • Single Nuxt app → PM2 (simpler, lighter, faster restarts)
  • Multiple apps, different Node versions → NVM + PM2 (easy version switching)
  • Microservices architecture → Docker (isolation, orchestration)
  • Need PostgreSQL/Redis/etc. → Docker (all services in one compose file)
  • Kubernetes deployment → Docker (container images required)
Troubleshooting

Common Issues and Solutions

NGINX Config Fails: "No such file or directory"

# Error: open() "/etc/nginx/snippets/admin-tailscale-only.conf" failed (2: No such file or directory)

# Solution: Create the missing snippet file BEFORE adding the NGINX config
sudo mkdir -p /etc/nginx/snippets
sudo nano /etc/nginx/snippets/admin-tailscale-only.conf

# Add content:
allow {{tutorial.tailscaleip}};
deny all;

# Similarly for connection_upgrade.conf:
sudo nano /etc/nginx/conf.d/connection_upgrade.conf

# Add content:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
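With both snippet files in place, re-test and reload:

sudo nginx -t && sudo systemctl reload nginx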

NGINX Config Fails: "no ssl_certificate is defined"

# Error: no "ssl_certificate" is defined for the "listen ... ssl" directive

# Cause: You're using the HTTPS config before running Certbot

# Solution 1: Use HTTP-only config first (recommended)
# - See Section 6B for the HTTP-only starter config
# - Run Certbot to get certificates
# - Then upgrade to full HTTPS config

# Solution 2: Comment out the SSL server blocks temporarily
# In your NGINX config, comment out all "server { listen 443 ssl..." blocks
# until after Certbot runs

# After Certbot succeeds, verify certs exist:
sudo ls -la /etc/letsencrypt/live/{{tutorial.domain}}/

# Then update your config with the full HTTPS version

502 Bad Gateway

# Check PM2 logs
pm2 logs {{tutorial.appname}} --lines 100
# Verify app is running
pm2 list

# Test app directly
curl -i http://127.0.0.1:{{tutorial.appport}}/

# Check NGINX configuration
sudo nginx -t

# Restart application
pm2 restart {{tutorial.appname}}

Missing Node Modules

# Ensure lockfile exists
ls -lh pnpm-lock.yaml

# Reinstall dependencies
rm -rf node_modules
nvm use 24
pnpm install --frozen-lockfile

# Restart app
pm2 restart {{tutorial.appname}}

Database Connection Errors

# Verify DATABASE_URL in .env (symlinked from shared/)
grep DATABASE_URL /var/www/{{tutorial.domain}}/app/.env

# Check the database file exists at the path DATABASE_URL points to
ls -lh /var/www/{{tutorial.domain}}/shared/db.sqlite

# Regenerate Prisma client
pnpm dlx prisma generate

# Run preflight check
pnpm run preflight:db

Prisma Client Not Generated

# Generate Prisma client
pnpm dlx prisma generate

# If using migrations
pnpm dlx prisma migrate deploy

# Restart application
pm2 restart {{tutorial.appname}}

Wrong Node Version

# Check current Node version
node --version

# Switch to Node 24
nvm use 24

# Set as default
nvm alias default 24

# Restart PM2 with correct Node
pm2 delete {{tutorial.appname}}
nvm use 24
pm2 start ecosystem.config.cjs --env production
pm2 save

SSL Certificate Issues

# Check certificate status
sudo certbot certificates

# Renew if needed
sudo certbot renew

# Test NGINX config
sudo nginx -t

# Reload NGINX
sudo systemctl reload nginx

High Memory Usage

# Check PM2 memory
pm2 list

# Monitor in real-time
pm2 monit

# Restart app to free memory
pm2 restart {{tutorial.appname}}

# Check system memory
free -h
htop

Port Already in Use

# Find process using port {{tutorial.appport}}
sudo lsof -i :{{tutorial.appport}}

# Kill process (if needed)
sudo kill -9 PID

# Or change port in ecosystem.config.cjs
# Update NGINX proxy_pass accordingly

"sudo: npm: command not found" (NVM users)

# This happens because NVM installs Node to ~/.nvm which sudo can't access
# WRONG:
sudo npm install -g pm2  # ❌ Fails

# CORRECT (no sudo needed - you own ~/.nvm):
npm install -g pm2  # ✅ Works

# For PM2 startup, run pm2 startup and copy the FULL command it outputs:
pm2 startup systemd
# Then run the sudo command it gives you, which includes the full NVM path:
# sudo env PATH=$PATH:/home/youruser/.nvm/versions/node/v24.13.0/bin pm2 startup systemd -u youruser --hp /home/youruser

Mysterious "Permission Denied" Errors (AppArmor)

What is AppArmor?

Ubuntu 24.04 has AppArmor enabled by default — a Mandatory Access Control (MAC) system that restricts what programs can access, even if file permissions allow it. NGINX, Docker, and other services have pre-configured profiles that usually "just work."

# Check if AppArmor is enabled
sudo aa-enabled  # Should return "Yes"

# List all AppArmor profiles and their status
sudo aa-status

# Check if AppArmor is blocking something (look for DENIED entries)
sudo dmesg | grep -i apparmor
# Or check audit log
sudo grep apparmor /var/log/syslog | tail -20

# Common output when AppArmor blocks access:
# apparmor="DENIED" operation="open" profile="nginx" name="/some/path"

Common AppArmor scenarios:

# NGINX can't read files from an unusual location?
# If an NGINX AppArmor profile is loaded, it restricts which directories NGINX can access.
# Check whether a profile exists and inspect it:
sudo aa-status | grep -i nginx
sudo cat /etc/apparmor.d/usr.sbin.nginx 2>/dev/null || echo "No NGINX AppArmor profile installed"

# Quick fix: Put profile in complain mode (logs but doesn't block)
sudo aa-complain /usr/sbin/nginx

# Better fix: Add your path to the profile
sudo nano /etc/apparmor.d/local/usr.sbin.nginx
# Add:  /var/www/your-custom-path/** r,

# Reload AppArmor
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx

# Return to enforce mode after testing
sudo aa-enforce /usr/sbin/nginx
Node.js/PM2 Note: Node.js and PM2 typically run unconfined (no AppArmor profile), so they're rarely affected. If you want to sandbox your Node app, you'd need to create a custom profile — but this is advanced and usually unnecessary for typical deployments.
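If you want to confirm this on your own server, ps can show the AppArmor label for each process ("unconfined" means AppArmor is not restricting it):

ps auxZ | grep -E 'node|PM2' | grep -v grep
# The first column shows the AppArmor label; expect "unconfined" for Node/PM2 processes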

Additional Resources