Ubuntu 24.04 LTS Server with Nuxt 4 + NGINX + PM2
Tutorial for Windows (WSL2) / Mac / Linux

The Perfect Ubuntu 24.04 LTS Server

NGINX reverse proxy + Node 24 LTS (NVM) + PM2 + Docker

This guide shows you how to deploy a production-ready Nuxt 4 application with Prisma + SQLite on Ubuntu 24.04 LTS. Perfect for developers who want a secure, fast, and scalable web server with modern tooling.

💡 Recommended Hosting Providers:

  • 🇪🇺 Hetzner Cloud - Best value in Europe (from €4.15/month, EU-based, GDPR-compliant, basic DDoS protection)
  • 🛡️ HostUp.se - Swedish VPS with advanced L7 DDoS protection, geo-blocking dashboard, and real-time threat controls (75 SEK/month ≈ €6.50)
  • 🌍 DigitalOcean - Global reach with $200 free credit for 60 days (basic DDoS protection)
  • Linode - High-performance VPS with excellent support (basic DDoS protection)

DDoS Protection Note: Hetzner, DigitalOcean, and Linode offer basic network-level (L3/L4) DDoS protection for volumetric attacks. HostUp.se provides advanced application-layer (L7) filtering with a real-time dashboard to block sophisticated attacks, geo-block countries, and adjust thresholds instantly — similar to what you'd get with Cloudflare but built into the VPS.

Recommended VPS Specs for Nuxt 4 Apps: 2 vCPU, 4-8 GB RAM, 50+ GB SSD, 3+ TB bandwidth — handles moderate traffic well. The HostUp.se VPS SM (2 vCPU, 8 GB RAM, 100 GB NVMe, 5 TB bandwidth, 75 SEK/month) handles most sites at a very good price.

⭐ What You'll Learn:

  • Install and configure Ubuntu 24.04 LTS for production
  • Setup NGINX as a reverse proxy with SSL/TLS (Let's Encrypt)
  • Install Node.js 24 with NVM for multi-version support
  • Deploy Nuxt 4 applications with PM2 process manager
  • Optional: Docker setup for containerized deployments
  • Automated backups, monitoring, and maintenance
  • Upgrade path to Ubuntu 26.04 LTS (April 2026)

Quick checklist

  1. Secure access first: create a non-root sudo user, set up SSH keys, install Tailscale VPN, and verify you can SSH over Tailscale before tightening firewalls.
  2. Lock down SSH: disable password auth + root login, and restrict port 22 to tailscale0 only (public SSH closed).
  3. Harden the base OS: update packages, enable unattended security upgrades, set timezone, and install baseline tooling.
  4. Install NGINX (Ubuntu repo is fine).
  5. Install Node 24 with NVM; enable pnpm via corepack; install PM2 (+ log rotation).
  6. NGINX config (in order!): create prerequisite files first (connection_upgrade.conf, admin-tailscale-only.conf), create app directory, add HTTP-only config, run Certbot for SSL, then upgrade to full HTTPS config.
  7. Deploy: upload the deploy folder, install prod deps with a frozen lockfile, generate Prisma client, run migrations/preflight checks.
  8. Run: start via PM2, enable startup, verify health endpoint and logs.
  9. Optional: Docker for Plausible and other services (bind published ports to 127.0.0.1 to avoid Docker+UFW surprises).

🎯 Personalize This Tutorial

Enter your information below to customize all commands throughout this tutorial. Commands will update automatically with your values shown in green.

Enter application name without spaces for PM2 process
Enter your primary domain name for NGINX configuration
Optional secondary domain for www redirect
Enter IPv4 address of your Ubuntu server
Enter username for SSH access
Enter deployment folder name
Enter port number for Node.js application
Your laptop's Tailscale IP for ACL policies (run: tailscale ip -4)

Why Ubuntu 24.04 LTS?

Ubuntu 24.04 LTS is recommended over 25.10:
  • 5 years of support, until 2029 (vs 9 months for 25.10)
  • Stable and battle-tested for production
  • Direct upgrade path to Ubuntu 26.04 LTS (April 2026)
  • Industry standard for production servers

Ubuntu 25.10 ends support in July 2026, forcing an upgrade at an inconvenient time. With 24.04 LTS, you can upgrade to 26.04 LTS at your convenience after April 2026.

Before You Begin: Prepare Your Local Machine (Recommended: Ubuntu Desktop)

Important: Before setting up the VPS, you need to prepare your local deploy machine (laptop/desktop). This section covers Ubuntu Desktop — if you're on macOS or Windows (WSL2), the steps are similar but package managers differ.

A) Install Required Tools on Your Laptop

# Ubuntu Desktop may not have curl pre-installed
sudo apt update
sudo apt install -y curl openssh-client

# Verify SSH is available
ssh -V

B) Generate an SSH Key Pair (Ed25519)

If you don't already have an SSH key, generate one now. Ed25519 is recommended for its security and performance.

# Generate a new SSH key pair
# The -C flag adds a comment (use your email or identifier)
ssh-keygen -t ed25519 -C "{{tutorial.sudouser}}@laptop"

# When prompted:
#   - Press Enter to accept default location (~/.ssh/id_ed25519)
#   - Enter a strong passphrase (recommended) or press Enter for no passphrase

# View your public key (you'll copy this to the VPS later)
cat ~/.ssh/id_ed25519.pub
Why a passphrase? A passphrase encrypts your private key on disk. If your laptop is stolen, attackers can't use your key without the passphrase. You can use ssh-agent to avoid typing it repeatedly.
# Optional: Start ssh-agent and add your key (so you don't type passphrase every time)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Verify the key was added
ssh-add -l

C) Install Tailscale VPN on Your Laptop

Why Tailscale?

Tailscale creates an encrypted mesh VPN between your devices using WireGuard. Once configured, you'll SSH to your server over Tailscale's private network instead of the public internet.

Security benefits:

  • No public SSH port — After setup, we close port 22 to the internet entirely. Attackers can't even attempt to connect.
  • Fail2Ban becomes optional — With no public SSH, there are no brute-force attempts to block. Fail2Ban is still useful for NGINX rate limiting, but SSH protection is unnecessary.
  • Zero-trust access — Only devices authenticated to your Tailnet can reach the server's SSH port.
  • No port forwarding or firewall holes — Tailscale punches through NAT automatically.
  • Easy multi-device access — Add your phone, tablet, or other laptops to your Tailnet for secure access from anywhere.
  • Audit trail — Tailscale dashboard shows which devices connected and when.
# Install Tailscale (Ubuntu Desktop)
curl -fsSL https://tailscale.com/install.sh | sh
sudo systemctl enable --now tailscaled

# Connect to your Tailnet
sudo tailscale up

# This will open a browser for authentication.
# Log in with your Tailscale account (Google, Microsoft, GitHub, etc.)

# Verify connection
tailscale status
tailscale ip -4   # Shows your laptop's Tailscale IP (e.g., {{tutorial.tailscaleip}})
Tip: Copy your Tailscale IP and enter it in the personalization form above — it will be used in NGINX ACL snippets later.
D) Store .env Files Securely (Local Development)
Best Practice: Store your .env files outside your project directory. This prevents accidental commits and keeps secrets separate from code.

Recommended Directory Structure

# Create a keys directory for all your project secrets
# Windows:
C:\keys\
├── {{tutorial.appname}}\
│   └── .env           # Dev/staging secrets for this project
├── another-project\
│   └── .env
└── ssh\               # Optional: backup of SSH keys

# Linux/macOS:
~/keys/
├── {{tutorial.appname}}/
│   └── .env
└── another-project/
    └── .env

Configure Your App to Use External .env

# Option 1: Use --dotenv flag with pnpm/npm
pnpm dev --dotenv C:/keys/{{tutorial.appname}}/.env         # Windows
pnpm dev --dotenv ~/keys/{{tutorial.appname}}/.env          # Linux/Mac

# Option 2: Set in package.json scripts
"scripts": {
  "dev": "nuxt dev --dotenv ../keys/.env",
  "dev:local": "nuxt dev --dotenv C:/keys/{{tutorial.appname}}/.env"
}

# Option 3: Symlink (Linux/Mac only)
ln -s ~/keys/{{tutorial.appname}}/.env .env

Benefits

  • Can't accidentally commit — .env is physically outside the git repo
  • Survives project deletion — secrets persist if you delete/reclone
  • Easy to backup — one keys/ folder for all projects
  • Clear separation — code vs secrets are in different locations
Still add .env to .gitignore: Even with external storage, keep .env in .gitignore as a safety net in case someone creates a local .env file.
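If you want to enforce this from the project root, a small sketch using standard git tooling:

# Append .env to .gitignore only if it's not already listed
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Confirm git now ignores it (prints the matching rule)
git check-ignore -v .env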

0) Secure Access First (SSH keys + Tailscale + lock down SSH)

Recommended order: create your non-root sudo user and SSH keys, install and test Tailscale first, then disable password login and close public SSH. You don't "disable the root user" on Ubuntu — you disable root SSH login. Keep a recovery console available before tightening rules.

A) First Login and Create a Non-Root Sudo User

Initial connection: Most VPS providers give you root access or an initial user. SSH into your server using the credentials from your provider:

# Connect to VPS using public IP (first time only)
# Use root or the initial user your provider created
ssh root@{{tutorial.ipaddress}}

# Or if your provider created an initial user:
# ssh ubuntu@{{tutorial.ipaddress}}

Create your sudo user: We create a dedicated user for daily administration. adduser will prompt for a password — set a strong password even though we disable SSH password login later.

Why set a password if we disable password authentication?
  • Console access: VPS provider consoles (emergency/VNC/KVM) use password login
  • sudo commands: sudo prompts for your user password by default
  • Recovery: If SSH breaks, password is your fallback via console
  • Local login: Physical/console access always uses password auth

We only disable SSH password authentication — the password still works for sudo and console.

# Create the user (you'll be prompted for password and user info)
sudo adduser {{tutorial.sudouser}}
# → Enter a STRONG password (you'll need it for sudo and console access)
# → Press Enter to skip Full Name, Room Number, etc. (or fill in)

# Add user to sudo group
sudo usermod -aG sudo {{tutorial.sudouser}}

# Switch to the new user
su - {{tutorial.sudouser}}
# → Enter the password you just created

# Verify sudo works (enter your password when prompted)
sudo whoami  # Should output: root

B) Add Your SSH Public Key to the VPS

Now copy your public key from your laptop to the VPS. You have two options:

# Option 1: If ssh-copy-id works (need password auth still enabled)
# Run this FROM YOUR LAPTOP (not on the VPS):
ssh-copy-id {{tutorial.sudouser}}@{{tutorial.ipaddress}}

# Option 2: Manually paste the key (on the VPS as your sudo user):
mkdir -p ~/.ssh
chmod 700 ~/.ssh
nano ~/.ssh/authorized_keys
# Paste your public key (from: cat ~/.ssh/id_ed25519.pub on your laptop)
# Save and exit (Ctrl+X, Y, Enter)
chmod 600 ~/.ssh/authorized_keys
Test SSH key login before proceeding! Open a NEW terminal on your laptop and verify:
ssh {{tutorial.sudouser}}@{{tutorial.ipaddress}}

You should connect without being asked for a password (or only your SSH key passphrase if you set one). If it asks for the user's password, your key wasn't added correctly.
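If key auth fails, a verbose client run usually reveals why (generic OpenSSH diagnostics, nothing provider-specific assumed):

# From your laptop: show which keys are offered and which auth methods remain
ssh -v {{tutorial.sudouser}}@{{tutorial.ipaddress}} 2>&1 | grep -Ei 'offering|authentications|denied'

# On the VPS: watch the server side of the attempt
sudo tail -20 /var/log/auth.log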

C) Install and Bring Up Tailscale (on VPS)

# Install Tailscale on the VPS
curl -fsSL https://tailscale.com/install.sh | sh
sudo systemctl enable --now tailscaled

# Bring the VPS into your tailnet (and enable Tailscale SSH)
# IMPORTANT: Hostname must use hyphens, NOT dots!
#   ✅ example-com-vps  (correct)
#   ❌ example.com-vps  (invalid - dots not allowed)
sudo tailscale up --ssh --hostname {{tutorial.tsHostname()}}

# Show your Tailscale IPv4 (100.x by default)
tailscale ip -4
tailscale status
Tailscale hostname rules:
  • Use lowercase letters, numbers, and hyphens only
  • No dots (.) — replace with hyphens (-)
  • Example: example.com → example-com-vps
Changing Tailscale settings later? If you need to change flags (like adding --ssh), use --reset:
# If you forgot --ssh or need to change settings:
sudo tailscale up --ssh --hostname {{tutorial.tsHostname()}} --reset

Without --reset, Tailscale requires you to specify ALL non-default flags you previously used.

D) Set Tailscale ACL Policy for SSH (Admin Console)

Go to Tailscale Admin → Access controls. You need both an acls rule (network access) and an ssh rule (Tailscale SSH).

Important ACL syntax:
  • Use your Tailscale login email (e.g., you@gmail.com)
  • Use the Tailscale hostname (with hyphens, not your domain)
  • The dst in ACLs uses hostname:port format
  • The dst in SSH rules uses just the hostname (no port)

Example minimal policy (replace with your actual values):

{
  "acls": [
    // Allow your user to access the VPS on port 22 (and all other ports)
    {
      "action": "accept",
      "src": ["you@gmail.com"],
      "dst": ["{{tutorial.tsHostname()}}:*"]
    }
  ],
  "ssh": [
    // Allow Tailscale SSH to your VPS as your sudo user
    {
      "action": "accept",
      "src": ["you@gmail.com"],
      "dst": ["{{tutorial.tsHostname()}}"],
      "users": ["{{tutorial.sudouser}}"]
    }
  ]
}
Common ACL errors:
  • ❌ "dst": ["example.com-vps:22"] — dots in hostname (use hyphens)
  • ❌ "dst": ["100.64.0.1:22"] — IP instead of hostname
  • ❌ "dst": ["tag:server:22"] — tag doesn't exist yet
  • ✅ "dst": ["example-com-vps:22"] — correct hostname format

After saving ACLs, wait ~30 seconds for propagation. Check tailscale status on both machines.

E) Test SSH over Tailscale (before locking down public SSH)

From your laptop, test Tailscale SSH connection. This uses Tailscale's built-in SSH (authenticated via your Tailscale identity), not your SSH keys.

# From your deploy laptop (on Tailscale)
# Use the Tailscale hostname (with hyphens):
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

# Or use the Tailscale IP directly (e.g., {{tutorial.tailscaleip}}):
ssh {{tutorial.sudouser}}@$(tailscale ip -4 {{tutorial.tsHostname()}})
First connection - host key verification:

On first SSH via Tailscale, you'll see a host key verification prompt. Type yes to continue. This is normal and expected.

If you get "Host key verification failed" later (after server rebuild), remove the old key:

ssh-keygen -R {{tutorial.tsHostname()}}

F) Lock down SSH + UFW (close public port 22)

# 1) Harden SSH server config (drop-in file)
sudo nano /etc/ssh/sshd_config.d/99-hardening.conf
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
UsePAM yes
MaxAuthTries 3
LoginGraceTime 30
AllowUsers {{tutorial.sudouser}}
X11Forwarding no
AllowAgentForwarding no
# Reload SSH
sudo systemctl reload ssh
# 2) Configure UFW Firewall

# Check current status (should be inactive on fresh install)
sudo ufw status

# Set default policies (explicit is better than implicit)
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow public web traffic
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Alternative: use NGINX app profiles instead of port numbers
# sudo ufw allow 'Nginx Full'    # Both HTTP and HTTPS
# sudo ufw allow 'Nginx HTTP'    # HTTP only (port 80)
# sudo ufw allow 'Nginx HTTPS'   # HTTPS only (port 443)

# Remove any public SSH rules (ignore errors if not present)
sudo ufw delete allow ssh || true
sudo ufw delete allow 22/tcp || true

# Allow SSH ONLY via Tailscale interface (not public internet)
sudo ufw allow in on tailscale0 to any port 22 proto tcp

# Enable firewall
sudo ufw enable

# Verify configuration
sudo ufw status verbose
Useful UFW Commands:
# Check status
sudo ufw status
sudo ufw status verbose
sudo ufw status numbered    # Shows rule numbers for deletion

# Temporarily disable firewall (for troubleshooting)
sudo ufw disable

# Re-enable firewall
sudo ufw enable

# Delete a specific rule by number
sudo ufw status numbered
sudo ufw delete 3           # Deletes rule #3

# Reset to defaults (removes all rules!)
sudo ufw reset
G) Install Fail2Ban for NGINX Rate Limiting (Optional)
Why Fail2Ban with Tailscale?

With SSH closed to the public internet, Fail2Ban's SSH jail is unnecessary. However, Fail2Ban is still valuable for protecting your web server from:

  • Brute-force login attempts — if your app has a login page
  • Aggressive bots/scrapers — blocking IPs that hammer your site
  • Exploit scanners — blocking IPs probing for vulnerabilities (wp-admin, .env, etc.)
  • DDoS mitigation — rate limiting excessive requests

Install Fail2Ban

# Install Fail2Ban
sudo apt install -y fail2ban

# Enable and start service
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

Configure NGINX Jails

# Create local config (never edit jail.conf directly)
sudo nano /etc/fail2ban/jail.local
[DEFAULT]
# Ban for 1 hour (increase for repeat offenders)
bantime = 1h
# 5 failures within 10 minutes triggers ban
findtime = 10m
maxretry = 5
# Use UFW for banning
banaction = ufw

# Disable SSH jail (we use Tailscale instead)
[sshd]
enabled = false

# NGINX rate limiting - ban IPs with too many requests
[nginx-limit-req]
enabled = true
port = http,https
filter = nginx-limit-req
logpath = /var/log/nginx/error.log
maxretry = 10
findtime = 1m
bantime = 1h

# NGINX bad bots - block scanners looking for exploits
[nginx-badbots]
enabled = true
port = http,https
filter = nginx-badbots
logpath = /var/log/nginx/access.log
maxretry = 2
findtime = 1d
bantime = 1w

# NGINX HTTP auth failures - block repeated failed basic-auth attempts
[nginx-http-auth]
enabled = true
port = http,https
filter = nginx-http-auth
logpath = /var/log/nginx/error.log
maxretry = 5
findtime = 5m
bantime = 1h

Create Bad Bots Filter

# Create custom filter for exploit scanners
sudo nano /etc/fail2ban/filter.d/nginx-badbots.conf
[Definition]
# Block requests for common exploit paths
failregex = ^<HOST> -.*"(GET|POST|HEAD).*(wp-admin|wp-login|\.env|\.git|phpMyAdmin|phpmyadmin|admin\.php|shell\.php|\.aws|\.config).*".*$
            ^<HOST> -.*"(GET|POST).*\.(php|asp|aspx|jsp|cgi).*" (404|403)
ignoreregex =

Enable NGINX Rate Limiting (Required for nginx-limit-req jail)

Add rate limiting to your NGINX config if not already present:

# Add to /etc/nginx/nginx.conf in http block:
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

# In your site config, add to location blocks:
location / {
    limit_req zone=general burst=20 nodelay;
    # ... rest of config
}

# For login pages (stricter):
location /login {
    limit_req zone=login burst=5 nodelay;
    # ... rest of config
}

Restart and Verify

# Restart Fail2Ban
sudo systemctl restart fail2ban

# Check status
sudo fail2ban-client status

# Check specific jail status
sudo fail2ban-client status nginx-limit-req

# View banned IPs
sudo fail2ban-client status nginx-badbots

# Manually unban an IP (if needed)
sudo fail2ban-client set nginx-limit-req unbanip 1.2.3.4

# Watch Fail2Ban log
sudo tail -f /var/log/fail2ban.log
Tip: Start with lenient settings (high maxretry, short bantime) and tighten based on your logs. Check /var/log/fail2ban.log to see what's being caught.
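For example, this one-liner summarizes which jails are banning the most (a rough sketch — the log line format can vary slightly between Fail2Ban versions):

# Count bans per jail from the Fail2Ban log
sudo grep ' Ban ' /var/log/fail2ban.log | awk -F'[][]' '{print $4}' | sort | uniq -c | sort -rn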
H) Harden Network with sysctl (Optional)
Defense in Depth:

These kernel-level network settings add extra protection against spoofing, redirects, and SYN floods. Ubuntu 24.04 already enables some of these by default (marked ✅), but explicitly setting them ensures consistency.

# Edit sysctl configuration
sudo nano /etc/sysctl.conf
# Add or uncomment these lines:

# === IP Spoofing protection (✅ Ubuntu default) ===
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# === Ignore ICMP broadcast requests (smurf attack prevention) ===
net.ipv4.icmp_echo_ignore_broadcasts = 1

# === Disable source packet routing ===
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# === Disable send redirects (prevents MITM) ===
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# === SYN flood protection (✅ syncookies is Ubuntu default) ===
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# === Log suspicious packets (martians) ===
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# === Ignore ICMP redirects (prevents routing attacks) ===
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0

# === OPTIONAL: Ignore ALL pings (stealth mode) ===
# ⚠️ This can break monitoring tools and make debugging harder
# Uncomment only if you specifically need to hide from ping scans:
# net.ipv4.icmp_echo_ignore_all = 1
# Apply changes
sudo sysctl -p

# Verify a setting
sysctl net.ipv4.tcp_syncookies
⚠️ Docker users: If using Docker, test thoroughly after applying these settings. Some configurations (especially related to forwarding and redirects) can interfere with container networking. If you experience issues, check docker network inspect bridge and review Docker's network documentation.
I) File Transfers Over Tailscale (scp/rsync)

Once Tailscale SSH is working, you can transfer files using the Tailscale hostname:

# Copy a file to the VPS
scp ./myfile.txt {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/home/{{tutorial.sudouser}}/

# Copy a directory recursively
scp -r ./deploy/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/

# rsync for efficient syncing (recommended for deployments)
rsync -avz --progress ./deploy/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/
If scp/rsync fails with "Permission denied":
  • Ensure you're using the Tailscale hostname (with hyphens), not the public IP
  • Check that your ACL policy allows access (Section D above)
  • Verify tailscale status shows both machines online
  • If the destination directory requires sudo, transfer to ~ first, then move with sudo (see the example below)
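For the sudo-owned destination case, a minimal sketch (the config file name is just an example):

# Upload to your home directory first, then move into place with sudo
scp ./site.conf {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:~
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}} "sudo mv ~/site.conf /etc/nginx/sites-available/site.conf"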
J) Common Tailscale Commands (Daily Use)

Commands you'll use regularly when working with your VPS:

# SSH into your server (use this to reconnect anytime)
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

# Or use Tailscale's built-in SSH client
tailscale ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

# Check Tailscale connection status
tailscale status

# Show your devices' Tailscale IPs
tailscale ip -4
tailscale ip -4 {{tutorial.tsHostname()}}

# Ping the VPS over Tailscale
tailscale ping {{tutorial.tsHostname()}}
Server Power Management (run on VPS)
# Restart the server (graceful)
sudo reboot

# Shutdown the server
sudo shutdown now

# Scheduled shutdown in 5 minutes
sudo shutdown +5 "Server going down for maintenance"

# Cancel scheduled shutdown
sudo shutdown -c

# Restart NGINX
sudo systemctl restart nginx

# Restart your Nuxt app (via PM2)
pm2 restart {{tutorial.appname}}

# View server uptime
uptime
Tailscale Troubleshooting
# If Tailscale disconnects, bring it back up
sudo tailscale up --ssh --hostname {{tutorial.tsHostname()}}

# Check Tailscale service status
sudo systemctl status tailscaled

# Restart Tailscale service
sudo systemctl restart tailscaled

# View Tailscale logs
sudo journalctl -u tailscaled -f

# Force re-authentication (if needed)
sudo tailscale logout
sudo tailscale up --ssh --hostname {{tutorial.tsHostname()}}
💡 Tip: Bookmark this page! When returning the next day, you'll typically just need:
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}}

1) Initial Server Setup

# Update system packages
sudo apt update && sudo apt full-upgrade -y

# Install essential build tools and utilities
sudo apt install -y curl git wget build-essential ca-certificates gnupg lsof unzip \
  htop ncdu lnav

# Set timezone (adjust for your location)
sudo timedatectl set-timezone Europe/Stockholm

# Enable automatic security upgrades (recommended)
sudo apt install -y unattended-upgrades needrestart
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Firewall note:
# - Ports 80/443 should be allowed publicly
# - Port 22 should be allowed ONLY on tailscale0 (see Step 0)
sudo ufw status verbose

Security Note: With SSH restricted to Tailscale only (Step 0), you don't need Fail2Ban for SSH or a custom SSH port. If your VPS provider offers L7 DDoS protection (like HostUp's advanced filtering), Fail2Ban for HTTP is also unnecessary — brute-force protection happens at the network edge.

2) Install NGINX

# Option A (recommended): Ubuntu repository (stable, security supported)
sudo apt install -y nginx

# Option B (optional): Official NGINX repository (newer features)
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
    | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list
sudo apt update
sudo apt install -y nginx

# Start and enable NGINX
sudo systemctl start nginx
sudo systemctl enable nginx

# Verify installation
nginx -v
curl http://localhost  # Should show NGINX welcome page

3) Install Node.js 24 with NVM (Multi-Version Support)

Why NVM? Node Version Manager allows you to install and switch between multiple Node.js versions. Perfect for servers hosting multiple applications with different Node requirements.

Which Node version? Use the same version as your local development environment for consistency. Check with node --version on your laptop. Both Node 22 and 24 are LTS versions that work well with Nuxt 4. This tutorial uses Node 24.

# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

# Load NVM (add to ~/.bashrc for persistence)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"

# Install Node.js 24 LTS
nvm install 24
nvm use 24
nvm alias default 24

# Verify installation
node --version  # Should show v24.x.x
npm --version

# Install multiple versions if needed (optional)
# nvm install 20    # For apps requiring Node 20
# nvm install 18    # For apps requiring Node 18
# nvm list          # Show all installed versions

4) Install pnpm and PM2

# Enable corepack for pnpm
corepack enable
corepack prepare pnpm@latest --activate

# Verify pnpm
pnpm --version

# Install PM2 globally (NO sudo - NVM installs to ~/.nvm which you own)
npm install -g pm2

# Verify PM2
pm2 --version

# Setup PM2 to start on system boot
pm2 startup systemd
# This outputs a sudo command - COPY AND RUN that command exactly as shown!
# Example output:
# sudo env PATH=$PATH:/home/ubuntu/.nvm/versions/node/v24.13.0/bin pm2 startup systemd -u ubuntu --hp /home/ubuntu

# Optional: Install PM2 log rotation
pm2 install pm2-logrotate
⚠️ Common Pitfall: "sudo: npm: command not found"

When using NVM, never use sudo npm for global installs. NVM installs Node to ~/.nvm/ (your home directory), which sudo can't access.

  • ❌ sudo npm install -g pm2 — fails with "command not found"
  • ✅ npm install -g pm2 — works because you own ~/.nvm

The pm2 startup command will output a specific sudo command that includes the full path to your NVM Node — copy and run that exact command.

Per-App Node Versions: Use nvm use 20 before starting apps that need Node 20. PM2 will remember the Node version used when you save the process list.
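As a sketch (the second app and its paths are hypothetical):

# Start one app under Node 20, another under Node 24
nvm use 20
pm2 start /var/www/legacy-app/ecosystem.config.cjs

nvm use 24
pm2 start /var/www/{{tutorial.domain}}/app/ecosystem.config.cjs

# Save the process list — PM2 records which Node binary each app started with
pm2 save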

5) Install Docker Engine (Optional)

Docker Benefits: Test production builds locally in WSL2 with identical setup. Not required for basic deployment, but valuable for containerized apps (like Plausible analytics) and testing.

A) Install via APT Repository (Recommended for Production)

Docker's official docs recommend installing via the apt repository for production servers. The convenience script (get.docker.com) is explicitly not recommended for production.

# Remove conflicting distro packages (safe even if none installed)
sudo apt remove -y docker.io docker-compose docker-compose-v2 \
  docker-doc podman-docker containerd runc 2>/dev/null || true

# Prerequisites + repo key
sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

# Install Docker Engine + Compose plugin
sudo apt install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin

B) Verify Installation

# Check Docker service status
sudo systemctl status docker --no-pager

# Verify versions
sudo docker --version
sudo docker compose version

# Test with hello-world container
sudo docker run --rm hello-world

C) Security Notes

⚠️ Docker Group Warning:

Avoid adding your main user to the docker group on hardened servers. The docker group is effectively root-equivalent — any user in that group can mount the host filesystem and escalate to root.

Instead, use sudo docker for all Docker commands.

D) Firewall Notes (Docker + UFW)

⚠️ Docker Can Bypass UFW:

Docker programs iptables/NAT directly, which can bypass UFW rules for published container ports.

Safe pattern:

  • In docker-compose.yml , publish ports to localhost only : "127.0.0.1:8000:8000"
  • Then reverse-proxy via NGINX (public) or Tailscale-only NGINX (private)

For stricter filtering, Docker recommends putting rules in the DOCKER-USER chain.
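After starting a container, you can verify from the host which interface a published port actually bound to (generic check; port 8000 is just an example):

# Loopback-only is safe; 0.0.0.0 or * means publicly reachable (bypassing UFW)
sudo ss -tlnp | grep ':8000'
# Good: 127.0.0.1:8000    Bad: 0.0.0.0:8000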

E) About the Convenience Script

The curl -fsSL https://get.docker.com | sh method works but is meant for quick testing only. Docker explicitly warns it's not recommended for production. If you're hardening a prod VPS, use the apt repo method above.

F) Note on Plausible Analytics

Planning to run Plausible?

Self-hosted Plausible CE runs via Docker Compose (Postgres + ClickHouse + app).

Important: Plausible CE is not designed for subfolder installs (like /analytics). Use a subdomain instead:

  • ✅ plausible.{{tutorial.domain}} — works perfectly
  • ❌ {{tutorial.domain}}/plausible — not supported
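If you later run Plausible on a subdomain, the NGINX + Certbot flow is the same as in Section 6 — for example (assumes a DNS A record for the subdomain already points at this server):

# Issue a certificate for the analytics subdomain
sudo certbot --nginx -d plausible.{{tutorial.domain}}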

6) Configure NGINX Reverse Proxy

⚠️ Important: Follow this order!
  1. Create prerequisite files (snippets, conf.d)
  2. Create application directory
  3. Create HTTP-only NGINX config (for Certbot)
  4. Run Certbot to get SSL certificates
  5. Update to full HTTPS config

Skipping steps will cause nginx -t to fail with "file not found" or "no ssl_certificate" errors.

A) Create Prerequisite Files First

These files must exist BEFORE creating the main NGINX config:

# 1. Create WebSocket upgrade map (required for WebSocket support)
sudo nano /etc/nginx/conf.d/connection_upgrade.conf
# Paste this content:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
# 2. Create admin protection snippet (required by main config)
sudo mkdir -p /etc/nginx/snippets
sudo nano /etc/nginx/snippets/admin-tailscale-only.conf
# Paste this content:
allow {{tutorial.tailscaleip}};  # Your deploy laptop's Tailscale IP
deny all;
# 3. Create Let's Encrypt webroot directory
sudo mkdir -p /var/www/letsencrypt/.well-known/acme-challenge

# 4. Create application directory
sudo mkdir -p /var/www/{{tutorial.domain}}
sudo chown -R $USER:$USER /var/www/{{tutorial.domain}}

B) Create HTTP-Only Config (Pre-SSL)

Start with HTTP only — Certbot needs this to validate your domain:

sudo nano /etc/nginx/sites-available/{{tutorial.domain}}
# === STAGE 1: HTTP-ONLY CONFIG (use until Certbot runs) ===

upstream nuxt_app {
  server 127.0.0.1:{{tutorial.appport}};
  keepalive 64;
}

server {
  listen 80;
  listen [::]:80;
  server_name {{tutorial.domain}} {{tutorial.domain2}};

  # Let's Encrypt validation (REQUIRED for Certbot)
  location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
  }

  # Temporary: proxy to app (will become redirect after SSL)
  location / {
    proxy_pass http://nuxt_app;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/{{tutorial.domain}} /etc/nginx/sites-enabled/
# Remove default site (optional)
sudo rm -f /etc/nginx/sites-enabled/default

# Test configuration - should pass now!
sudo nginx -t

# Reload NGINX
sudo systemctl reload nginx
✅ Checkpoint: At this point, sudo nginx -t should succeed. If it fails, check that:
  • /etc/nginx/conf.d/connection_upgrade.conf exists
  • /etc/nginx/snippets/admin-tailscale-only.conf exists
  • There are no SSL blocks in your config yet
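A quick way to verify all three conditions at once:

# Both prerequisite files must exist
ls -l /etc/nginx/conf.d/connection_upgrade.conf \
      /etc/nginx/snippets/admin-tailscale-only.conf

# The site config should not reference certificates yet
grep -n 'ssl_certificate' /etc/nginx/sites-available/{{tutorial.domain}} || echo "No SSL directives — good at this stage"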
Testing Without DNS? Edit Your Hosts File (Optional)
When to use this:

If your domain's DNS isn't configured yet, you can temporarily edit your local hosts file to test that NGINX is working. This maps the domain to your server's IP on your machine only.

⚠️ Limitations: This is local-only testing. Certbot/SSL still requires real DNS because Let's Encrypt needs to verify domain ownership via public DNS.

Linux / macOS

# Edit hosts file
sudo nano /etc/hosts

# Add a line mapping your domain to the server IP:
{{tutorial.ipaddress}}  {{tutorial.domain}}  {{tutorial.domain2}}

# Save and exit (Ctrl+O, Enter, Ctrl+X in nano)

# Test - should now connect to your server
curl http://{{tutorial.domain}}
# Or open http://{{tutorial.domain}} in your browser

Windows 10/11

# 1. Open Notepad as Administrator:
#    - Press Win key, type "Notepad"
#    - Right-click → "Run as administrator"

# 2. In Notepad: File → Open, navigate to:
#    C:\Windows\System32\drivers\etc\hosts
#    (Change file filter from "*.txt" to "All Files")

# 3. Add a line at the bottom:
{{tutorial.ipaddress}}  {{tutorial.domain}}  {{tutorial.domain2}}

# 4. Save the file

# 5. Flush DNS cache (in Command Prompt as Admin):
ipconfig /flushdns

Find Your Server's IP

# On the VPS, if you don't know the public IP:
curl -4 ifconfig.me

# Or check network interface:
ip addr show eth0 | grep "inet " | awk '{print $2}' | cut -d/ -f1
Remember to remove the hosts entry after DNS is configured, otherwise your machine will always use the hardcoded IP even if the server moves.
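When you're done testing, a one-liner removes the entry again (Linux shown; on macOS use sudo sed -i '' instead):

# Delete the temporary hosts entry for your domain
sudo sed -i '/{{tutorial.domain}}/d' /etc/hosts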

C) Get SSL Certificate (Run Certbot Now)

With HTTP config working, get your SSL certificate:

# Install Certbot
sudo apt install -y certbot python3-certbot-nginx

# Get certificate (Certbot will modify your NGINX config)
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}}
# Follow prompts:
# - Enter email address
# - Agree to terms
# - Choose redirect option: Yes (redirect HTTP to HTTPS)

# Verify SSL works
sudo nginx -t
sudo systemctl reload nginx

D) Upgrade to Full Production Config (Post-SSL)

After Certbot succeeds, replace with the full config including caching and security headers:

sudo nano /etc/nginx/sites-available/{{tutorial.domain}}
# === STAGE 2: FULL HTTPS CONFIG (after Certbot) ===

upstream nuxt_app {
  server 127.0.0.1:{{tutorial.appport}};
  keepalive 64;
}

server {
  listen 80;
  listen [::]:80;
  server_name {{tutorial.domain}} {{tutorial.domain2}};

  # Let Certbot validate (keep for renewals)
  location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
  }

  # Redirect to HTTPS
  return 301 https://{{tutorial.domain}}$request_uri;
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name {{tutorial.domain}};

  # SSL certs (added by Certbot - adjust path if different)
  ssl_certificate /etc/letsencrypt/live/{{tutorial.domain}}/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/{{tutorial.domain}}/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  # Security headers (keep CSP in-app)
  add_header X-Frame-Options "SAMEORIGIN" always;
  add_header X-Content-Type-Options "nosniff" always;
  add_header Referrer-Policy "strict-origin-when-cross-origin" always;
  add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
  add_header X-XSS-Protection "0" always;

  # HSTS: avoid 'preload' until all subdomains are HTTPS forever
  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

  # Root directory for static files (app/.output/public from Section 7's structure)
  root /var/www/{{tutorial.domain}}/app/.output/public;

  # Protect /admin/analytics (VPN-only)
  location ^~ /admin/analytics {
    include /etc/nginx/snippets/admin-tailscale-only.conf;

    proxy_pass http://nuxt_app;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }

  # Nuxt build assets (hashed)
  location ^~ /_nuxt/ {
    try_files $uri =404;
    expires 1y;
    add_header Cache-Control "public, max-age=31536000, immutable" always;
    access_log off;
  }

  # Other static assets (not always fingerprinted)
  location ~* \.(webp|jpg|jpeg|png|gif|ico|svg)$ {
    try_files $uri @app;
    expires 30d;
    add_header Cache-Control "public, max-age=2592000" always;
    access_log off;
  }
  location ~* \.(woff|woff2|ttf|eot|otf)$ {
    try_files $uri @app;
    expires 1y;
    add_header Cache-Control "public, max-age=31536000" always;
    access_log off;
  }
  location ~* \.(css|js)$ {
    try_files $uri @app;
    expires 7d;
    add_header Cache-Control "public, max-age=604800" always;
    access_log off;
  }

  location @app {
    proxy_pass http://nuxt_app;
    proxy_http_version 1.1;

    # WebSocket support
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_redirect off;
    proxy_read_timeout 86400;
  }

  # Default: proxy everything else to Nuxt
  location / {
    proxy_pass http://nuxt_app;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }

  # Health check endpoint
  location = /health {
    access_log off;
    default_type text/plain;
    return 200 "healthy\n";
  }

  # Deny access to hidden files
  location ~ /\. {
    deny all;
    access_log off;
    log_not_found off;
  }
}

# Redirect HTTPS www -> non-www (only if using www subdomain)
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name {{tutorial.domain2}};

  ssl_certificate /etc/letsencrypt/live/{{tutorial.domain}}/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/{{tutorial.domain}}/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  return 301 https://{{tutorial.domain}}$request_uri;
}
# Test and reload
sudo nginx -t
sudo systemctl reload nginx

Note: If Certbot created different SSL paths, check with sudo certbot certificates and adjust the paths in your config accordingly.

E) NGINX Performance Tuning (Optional)

These optimizations improve response times and reduce bandwidth. Add them to your main NGINX config.

Enable Gzip Compression

# Edit main NGINX config
sudo nano /etc/nginx/nginx.conf
# Find and uncomment/add in the http { } block:

##
# Gzip Settings
##
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
    application/atom+xml
    application/geo+json
    application/javascript
    application/json
    application/ld+json
    application/manifest+json
    application/rdf+xml
    application/rss+xml
    application/xhtml+xml
    application/xml
    font/eot
    font/otf
    font/ttf
    image/svg+xml
    text/css
    text/javascript
    text/plain
    text/xml;

Enable SSL Stapling

OCSP stapling improves SSL handshake performance. Certbot's options-ssl-nginx.conf may already include this, but you can add it to your server block if not:

# Add inside your server { listen 443 ssl ... } block:

# OCSP Stapling (faster SSL handshakes)
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;

Verify and Reload

# Test configuration
sudo nginx -t

# Reload NGINX
sudo systemctl reload nginx

# Test gzip is working (look for "Content-Encoding: gzip")
curl -H "Accept-Encoding: gzip" -I https://{{tutorial.domain}}
What's already optimized in our config:
  • HTTP/2 — enabled via listen 443 ssl http2
  • Static file caching — configured for images, fonts, CSS/JS
  • Security headers — X-Frame-Options, X-Content-Type-Options, HSTS, Referrer-Policy
  • Connection keepalive — via upstream keepalive 64

Advanced: API Microcaching (Optional)

For high-traffic APIs with responses that don't change every request, microcaching can dramatically reduce load:

# Add to http { } block in nginx.conf:
proxy_cache_path /tmp/cache-{{tutorial.appname}} levels=1:2 keys_zone=microcache:10m max_size=100m inactive=1h use_temp_path=off;

# Add to your site config, in specific location blocks:
location /api/ {
    # Cache for 1 second (microcache)
    proxy_cache microcache;
    proxy_cache_valid 200 1s;
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;
    proxy_cache_lock on;

    # Show cache status in response header (for debugging)
    add_header X-Cache-Status $upstream_cache_status;

    proxy_pass http://nuxt_app;
    # ... other proxy settings
}
⚠️ Microcaching caveats:
  • Only use for endpoints that return the same data for all users
  • Don't use for authenticated/personalized responses
  • Test thoroughly — caching bugs can be hard to debug (a quick check follows)
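To sanity-check the cache, hit an endpoint twice and watch the debug header added above (the path /api/example is hypothetical — use one of your real endpoints):

# First request should show MISS; a second within 1s should show HIT or UPDATING
curl -sI https://{{tutorial.domain}}/api/example | grep -i x-cache-status
curl -sI https://{{tutorial.domain}}/api/example | grep -i x-cache-status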

7) Deploy Nuxt Application

A) Create Secure Directory Structure

Create the directory structure that separates code from persistent data:

# Create shared directory for persistent files
# (app/ will be created by git clone or rsync in the next step)
sudo mkdir -p /var/www/{{tutorial.domain}}/shared

# Set ownership to your user
sudo chown -R $USER:$USER /var/www/{{tutorial.domain}}

# Verify structure
ls -la /var/www/{{tutorial.domain}}

Your server will have this structure after deployment:

/var/www/{{tutorial.domain}}/
├── app/                    # Created by git clone or rsync (Step B)
└── shared/                 # Persistent data (survives deployments)
    ├── .env                # Secrets - NEVER in git or deploy
    └── db.sqlite           # Database - persists across deploys

B) Upload or Clone Your Project

Option 1: Clone from Git (git was installed in Section 1)

# On the VPS, clone directly into the app/ directory:
cd /var/www/{{tutorial.domain}}
git clone https://github.com/yourusername/{{tutorial.appname}}.git app

# "app" at the end names the target directory (instead of the repo name)
# No chown needed — you own the parent directory (set in Step A),
# so app/ and all its contents are automatically owned by your user

Option 2: Upload deploy folder (recommended for pre-built apps)

Local folder structure: Before running these commands, make sure your local {{tutorial.deploydir}}/ folder contains your built Nuxt app:
your-project/
└── {{tutorial.deploydir}}/
    ├── .output/          # Nuxt build output
    ├── prisma/
    │   └── schema.prisma # Database schema (NO db.sqlite!)
    ├── package.json
    ├── pnpm-lock.yaml
    └── ecosystem.config.cjs

⚠️ Security: Do NOT include .env or db.sqlite in deploy folder. These are created/managed on the server in the shared/ directory.

# Run these commands FROM YOUR LOCAL MACHINE (laptop/WSL2)
# First, cd to the PARENT folder of your deploy directory:
cd /path/to/your-project

# Using rsync (recommended - creates app/ directory automatically):
rsync -avz --progress {{tutorial.deploydir}}/ {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/

# Or using scp (need to create app/ directory first on server):
ssh {{tutorial.sudouser}}@{{tutorial.tsHostname()}} "mkdir -p /var/www/{{tutorial.domain}}/app"
scp -r {{tutorial.deploydir}}/* {{tutorial.sudouser}}@{{tutorial.tsHostname()}}:/var/www/{{tutorial.domain}}/app/

C) Required Files in Deploy Folder

Ensure these files/folders are present in your deploy (matching the structure shown in Step B):

  • .output/ — the Nuxt build output (server bundle + public assets)
  • prisma/schema.prisma — database schema (plus prisma/migrations/ if you use migrations)
  • package.json and pnpm-lock.yaml — needed for a frozen-lockfile install
  • ecosystem.config.cjs — PM2 process configuration

NOT in deploy folder (security):
  • .env — Created on server in shared/ directory
  • db.sqlite — Created on server, persists across deployments

D) Configure Environment (in shared/)

# Create .env in shared directory (NOT in app directory!)
nano /var/www/{{tutorial.domain}}/shared/.env

Example production .env:

# Database (points to shared directory)
DATABASE_URL="file:/var/www/{{tutorial.domain}}/shared/db.sqlite"

# Session (generate with: openssl rand -hex 32)
NUXT_SESSION_PASSWORD="your-64-character-hex-password-here"

# Production URL
NUXT_PUBLIC_SITE_URL="https://{{tutorial.domain}}"

# Node environment
NODE_ENV="production"

# Optional: OAuth providers
# NUXT_OAUTH_GITHUB_CLIENT_ID="..."
# NUXT_OAUTH_GITHUB_CLIENT_SECRET="..."
# Lock down permissions (owner read/write only)
chmod 600 /var/www/{{tutorial.domain}}/shared/.env

E) Create Symlink for .env

Link the shared .env to your app directory so the app can find it:

# Create symlink from app to shared .env
ln -sf /var/www/{{tutorial.domain}}/shared/.env /var/www/{{tutorial.domain}}/app/.env

# Verify symlink
ls -la /var/www/{{tutorial.domain}}/app/.env
# Should show: .env -> /var/www/{{tutorial.domain}}/shared/.env
Why symlinks?
  • Your .env survives when you redeploy (replace app/ contents)
  • Your database persists across deployments
  • Secrets are never in your deploy artifacts or git history
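You can confirm the app will read the shared file by resolving the link:

# Resolve the symlink to its real target
readlink -f /var/www/{{tutorial.domain}}/app/.env
# Expected: /var/www/{{tutorial.domain}}/shared/.env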

8) Install Dependencies and Setup Database

A) Install Production Dependencies

# Navigate to app directory
cd /var/www/{{tutorial.domain}}/app

# Ensure correct Node version is active
nvm use 24

# Install dependencies (frozen lockfile for reproducibility)
pnpm install --frozen-lockfile

# Or for production-only dependencies:
# pnpm install --prod --frozen-lockfile

B) Setup Prisma and Database

# Generate Prisma Client
pnpm dlx prisma generate

# Run database migrations (creates db.sqlite in shared/ via DATABASE_URL)
pnpm dlx prisma migrate deploy

# Optional: Seed database with initial data
pnpm db:seed

# Verify database connectivity
pnpm run preflight:db

# Verify database was created in shared directory
ls -la /var/www/{{tutorial.domain}}/shared/db.sqlite

C) Build Application (if not pre-built)

# If you cloned from Git and need to build:
pnpm build

# Verify build succeeded
ls -lh .output/server/index.mjs

D) Run Preflight Checks

# Verify runtime dependencies
pnpm run preflight

# Check database connectivity
pnpm run preflight:db

# Both should complete without errors

9) Start Application with PM2

A) Initial Start

# Navigate to app directory
cd /var/www/{{tutorial.domain}}/app

# Ensure correct Node version
nvm use 24

# Delete existing PM2 process if it exists
pm2 delete {{tutorial.appname}} || true

# Start application using ecosystem config
pm2 start ecosystem.config.cjs --env production

# Save PM2 process list (survives reboots)
pm2 save

# View logs
pm2 logs {{tutorial.appname}} --lines 200

B) Verify Application is Running

# Check PM2 status
pm2 list

# Test application locally
curl -i http://127.0.0.1:{{tutorial.appport}}/

# Should return 200 OK with HTML content

# Monitor application
pm2 monit

C) PM2 Management Commands

# Reload app after code changes (zero-downtime)
pm2 reload ecosystem.config.cjs --env production --update-env

# Restart app (brief downtime)
pm2 restart {{tutorial.appname}}

# Stop app
pm2 stop {{tutorial.appname}}

# View real-time logs
pm2 logs {{tutorial.appname}} --lines 100

# Clear logs
pm2 flush

# Application info
pm2 info {{tutorial.appname}}

Multiple Node Versions: If running multiple apps with different Node versions: nvm use 20 && pm2 start app1.js, then nvm use 24 && pm2 start app2.js. PM2 remembers which Node binary to use for each app.

Advanced: Release-Based Deployment (Zero-Downtime)
When to use this: For production sites with CI/CD pipelines where you need instant rollback, zero-downtime deploys, and audit trails of previous releases.

Release-Based Directory Structure

Instead of deploying directly to app/, deploy to timestamped release directories:

/var/www/{{tutorial.domain}}/
├── releases/                    # Versioned releases
│   ├── 20260125_143052_abc123/  # Timestamp + commit hash
│   ├── 20260124_092130_def456/
│   └── 20260123_181500_ghi789/
├── current → releases/20260125_143052_abc123/  # Symlink to active
├── shared/                      # Persistent data (unchanged)
│   ├── .env
│   ├── db.sqlite
│   └── images/                  # User uploads
└── logs/                        # Persistent logs
    ├── err.log
    ├── out.log
    └── archives/                # Per-release log archives

Setup Release Structure

# Create release directories
sudo mkdir -p /var/www/{{tutorial.domain}}/releases
sudo mkdir -p /var/www/{{tutorial.domain}}/logs/archives

# Migrate from simple structure (if upgrading)
mv /var/www/{{tutorial.domain}}/app /var/www/{{tutorial.domain}}/releases/initial
ln -sfn /var/www/{{tutorial.domain}}/releases/initial /var/www/{{tutorial.domain}}/current

Deploy Script Example

#!/bin/bash
# deploy.sh - Run from CI/CD or locally
set -e

DOMAIN="{{tutorial.domain}}"
BASE_DIR="/var/www/$DOMAIN"
RELEASE_ID="$(date +%Y%m%d_%H%M%S)_$(git rev-parse --short HEAD)"
RELEASE_DIR="$BASE_DIR/releases/$RELEASE_ID"

echo "Deploying release: $RELEASE_ID"

# 1. Create release directory
mkdir -p "$RELEASE_DIR"

# 2. Copy/upload application files (adjust for your setup)
rsync -avz --exclude='.env' --exclude='node_modules' ./ "$RELEASE_DIR/"

# 3. Link shared resources
ln -sfn "$BASE_DIR/shared/.env" "$RELEASE_DIR/.env"
ln -sfn "$BASE_DIR/shared/db.sqlite" "$RELEASE_DIR/prisma/db.sqlite"
# ln -sfn "$BASE_DIR/shared/images" "$RELEASE_DIR/.data/images"  # If using uploads

# 4. Install dependencies
cd "$RELEASE_DIR"
pnpm install --frozen-lockfile --prod

# 5. Generate Prisma client
pnpm dlx prisma generate

# 6. Run migrations (if any)
pnpm dlx prisma migrate deploy

# 7. Archive current logs (before switching)
if [ -L "$BASE_DIR/current" ]; then
  PREV_RELEASE=$(basename $(readlink "$BASE_DIR/current"))
  mkdir -p "$BASE_DIR/logs/archives/$PREV_RELEASE"
  cp "$BASE_DIR/logs"/*.log "$BASE_DIR/logs/archives/$PREV_RELEASE/" 2>/dev/null || true
fi

# 8. Switch symlink (atomic operation = zero downtime)
ln -sfn "$RELEASE_DIR" "$BASE_DIR/current"

# 9. Reload PM2 (zero-downtime with cluster mode)
pm2 reload ecosystem.config.cjs --update-env

# 10. Health check
sleep 3
curl -sf http://127.0.0.1:{{tutorial.appport}}/health || echo "Warning: Health check failed"

# 11. Cleanup old releases (keep last 5)
cd "$BASE_DIR/releases"
ls -t | tail -n +6 | xargs -r rm -rf

echo "✅ Deployed: $RELEASE_ID"

Instant Rollback

# List available releases
ls -la /var/www/{{tutorial.domain}}/releases/

# Rollback to previous release
PREV_RELEASE=$(ls -t /var/www/{{tutorial.domain}}/releases/ | sed -n '2p')
ln -sfn /var/www/{{tutorial.domain}}/releases/$PREV_RELEASE /var/www/{{tutorial.domain}}/current
pm2 reload ecosystem.config.cjs --update-env

# Verify rollback
readlink /var/www/{{tutorial.domain}}/current

Update ecosystem.config.cjs for Cluster Mode

// For zero-downtime reloads, use cluster mode
module.exports = {
  apps: [{
    name: "{{tutorial.appname}}",
    script: ".output/server/index.mjs",
    instances: 2,                    // Cluster mode (2+ instances)
    exec_mode: "cluster",
    env_production: {
      NODE_ENV: "production",
      PORT: {{tutorial.appport}}
    },
    // Use absolute paths for logs
    error_file: "/var/www/{{tutorial.domain}}/logs/err.log",
    out_file: "/var/www/{{tutorial.domain}}/logs/out.log",
    log_date_format: "YYYY-MM-DD HH:mm:ss Z",
    merge_logs: true
  }]
};
Benefits:
  • Zero-downtime — PM2 cluster reload keeps app running during deploy
  • Instant rollback — Just switch symlink, no rebuild needed
  • Audit trail — Keep last N releases for debugging
  • Atomic deploys — Symlink switch is atomic operation
  • Log preservation — Logs archived per release
10) SSL Verification and Maintenance
Note: If you followed Section 6 correctly, SSL is already configured. This section covers verification and ongoing maintenance.

A) Verify SSL Setup

# Check certificate status
sudo certbot certificates

# Test auto-renewal (dry run)
sudo certbot renew --dry-run

# Test NGINX configuration
sudo nginx -t

B) Auto-Renewal

Certbot automatically sets up a systemd timer for renewal. Verify it's active:

# Check renewal timer status
sudo systemctl status certbot.timer

# List scheduled timers
sudo systemctl list-timers | grep certbot

# Manually renew (if needed)
sudo certbot renew

# Renewal happens automatically before expiration (usually 30 days)

C) Test SSL Configuration

# Test HTTPS is working
curl -I https://{{tutorial.domain}}

# Check SSL grade (external tool)
# Visit: https://www.ssllabs.com/ssltest/analyze.html?d={{tutorial.domain}}

# Verify redirect from HTTP to HTTPS
curl -I http://{{tutorial.domain}}
# Should show: HTTP/1.1 301 Moved Permanently
# Location: https://{{tutorial.domain}}/

D) If SSL Certificate Expires or Fails

# Force renewal
sudo certbot renew --force-renewal

# If renewal fails, re-run certbot
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}}

# Check NGINX error logs
sudo tail -50 /var/log/nginx/error.log

# Check Certbot logs
sudo tail -50 /var/log/letsencrypt/letsencrypt.log

E) Add Additional Domains Later

# To add a new subdomain to existing certificate
sudo certbot --nginx -d {{tutorial.domain}} -d {{tutorial.domain2}} -d newsubdomain.{{tutorial.domain}}

# Or expand existing certificate
sudo certbot --expand -d {{tutorial.domain}} -d newsubdomain.{{tutorial.domain}}
11) Monitoring and Maintenance

A) Install Monitoring Tools

# System monitoring
sudo apt install -y htop ncdu

# Log viewing
sudo apt install -y lnav

# PM2 log rotation
pm2 install pm2-logrotate

B) Viewing Logs

# NGINX error log (check for config issues, upstream errors)
sudo tail -50 /var/log/nginx/error.log

# NGINX access log (see incoming requests)
sudo tail -50 /var/log/nginx/access.log

# Watch logs in real-time (Ctrl+C to stop)
sudo tail -f /var/log/nginx/error.log

# PM2 application logs
pm2 logs {{tutorial.appname}} --lines 100

# System logs (auth, kernel, etc.)
sudo journalctl -xe --no-pager | tail -50

# Fail2Ban log (if installed)
sudo tail -50 /var/log/fail2ban.log

# Use lnav for interactive log viewing (installed above)
sudo lnav /var/log/nginx/

C) Regular Maintenance Tasks

# Weekly: Update packages
sudo apt update && sudo apt upgrade -y

# Weekly: Check disk space
df -h
ncdu /var/www

# Weekly: Review application logs
pm2 logs --lines 100

# Monthly: Check for security updates
sudo apt list --upgradable

# Monthly: Verify SSL certificate
sudo certbot certificates

# Monthly: Review PM2 status
pm2 status

D) Automated Backups

# Create backup script
sudo nano /usr/local/bin/backup-{{tutorial.appname}}.sh
#!/bin/bash
# Backup script for {{tutorial.appname}} application

BACKUP_DIR="/backup/{{tutorial.appname}}"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Backup application files (deployed code lives in app/)
tar -czf $BACKUP_DIR/app_$DATE.tar.gz /var/www/{{tutorial.domain}}/app/.output /var/www/{{tutorial.domain}}/app/prisma

# Backup database (lives in shared/, persists across deploys)
cp /var/www/{{tutorial.domain}}/shared/db.sqlite $BACKUP_DIR/db_$DATE.sqlite

# Backup environment file (lives in shared/)
cp /var/www/{{tutorial.domain}}/shared/.env $BACKUP_DIR/env_$DATE

# Keep only last 7 days of backups
find $BACKUP_DIR -name "app_*.tar.gz" -mtime +7 -delete
find $BACKUP_DIR -name "db_*.sqlite" -mtime +7 -delete
find $BACKUP_DIR -name "env_*" -mtime +7 -delete

echo "Backup completed: $DATE"
# Make executable
sudo chmod +x /usr/local/bin/backup-{{tutorial.appname}}.sh
# Add to crontab (daily at 2 AM)
sudo crontab -e
# Add line: 0 2 * * * /usr/local/bin/backup-{{tutorial.appname}}.sh >> /var/log/backup-{{tutorial.appname}}.log 2>&1
12) Upgrade to Ubuntu 26.04 LTS (April 2026)

Timeline: Ubuntu 26.04 LTS releases in April 2026. Wait 1 month for initial bugs to be fixed, then plan your upgrade.

A) Pre-Upgrade Preparation

# 1. Full backup of everything
sudo rsync -av /var/www/ /backup/www/
cp /var/www/{{tutorial.domain}}/shared/db.sqlite /backup/db-$(date +%F).sqlite
tar -czf /backup/env-$(date +%F).tar.gz /var/www/{{tutorial.domain}}/shared/.env

# 2. Update current system fully
sudo apt update && sudo apt full-upgrade -y

# 3. Remove unnecessary packages
sudo apt autoremove -y
sudo apt autoclean

# 4. Stop application
pm2 stop all

# 5. Test backup restoration (recommended)
# Restore to test server first if possible
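
One more pre-flight check worth doing: make sure the release upgrader is set to track LTS releases, otherwise do-release-upgrade may offer nothing (or a non-LTS series):

# Should print Prompt=lts on an LTS server
grep '^Prompt' /etc/update-manager/release-upgrades

# Fix it if needed
sudo sed -i 's/^Prompt=.*/Prompt=lts/' /etc/update-manager/release-upgrades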

B) Perform Upgrade

# Run release upgrade
sudo do-release-upgrade

# Follow the prompts (the upgrade takes 30-60 minutes)
# When asked about modified config files (sshd_config, NGINX sites, etc.),
# keep your currently-installed version unless you know you want the new default
# Review the list of packages to be removed before confirming

# The upgrader prompts you to reboot when it finishes

C) Post-Upgrade Verification

# After reboot, SSH back in

# Verify Ubuntu version
lsb_release -a  # Should show 26.04

# Check NGINX
sudo systemctl status nginx
nginx -v

# Check Node.js (NVM should preserve versions)
nvm list
node --version

# Check PM2
pm2 list

# Start application
pm2 resurrect  # Restore the process list saved earlier with 'pm2 save'
pm2 start all  # Start any processes that were stopped before the upgrade

# Verify application
curl https://{{tutorial.domain}}
pm2 logs

# Test all features thoroughly

D) If Something Goes Wrong

# If upgrade fails, you can restore from backup
# Boot from rescue mode or previous kernel
# Restore files from /backup/

# If application doesn't start:
# 1. Check PM2 logs: pm2 logs
# 2. Reinstall dependencies: rm -rf node_modules && pnpm install --frozen-lockfile
# 3. Regenerate Prisma: pnpm dlx prisma generate
# 4. Check NGINX config: sudo nginx -t
13) Docker Alternative Deployment (Optional)

Docker Deployment: Instead of PM2, you can run your Nuxt app in Docker containers. Useful for isolation, easier scaling, and consistent environments.

A) Basic Docker Deployment

# Create docker-compose.yml in /var/www/{{tutorial.domain}}
# Note: the top-level "version:" key is obsolete in Compose v2 and can be omitted

services:
  nuxt-app:
    image: node:24-alpine
    container_name: {{tutorial.appname}}-production
    working_dir: /app
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=production
      - DATABASE_URL=file:./prisma/db.sqlite
    # corepack ships with Node 24 and provides pnpm inside the container
    command: sh -c "corepack enable && pnpm install --frozen-lockfile && pnpm dlx prisma generate && node .output/server/index.mjs"
    ports:
      # Bind to 127.0.0.1 so Docker doesn't punch through UFW; NGINX proxies to it
      - "127.0.0.1:{{tutorial.appport}}:{{tutorial.appport}}"
    restart: unless-stopped

# Start with Docker
docker compose up -d

# View logs
docker compose logs -f

# Stop
docker compose down
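
With the port published on 127.0.0.1 (as in the compose file above), the container is reachable from the host, and therefore from NGINX, but not directly from the internet. A quick verification:

# Container state
docker compose ps

# App responds locally; this is the address NGINX proxies to
curl -i http://127.0.0.1:{{tutorial.appport}}/

# Tail recent application output
docker compose logs --tail 50 nuxt-app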

B) When to Use Docker vs PM2

| Scenario | Recommended | Reason |
| --- | --- | --- |
| Single Nuxt app | PM2 | Simpler, lighter, faster restarts |
| Multiple apps, different Node versions | NVM + PM2 | Easy version switching |
| Microservices architecture | Docker | Isolation, orchestration |
| Need PostgreSQL/Redis/etc | Docker | All services in one compose file |
| Kubernetes deployment | Docker | Container images required |

Troubleshooting

Common Issues and Solutions

NGINX Config Fails: "No such file or directory"

# Error: open() "/etc/nginx/snippets/admin-tailscale-only.conf" failed (2: No such file or directory)

# Solution: Create the missing snippet file BEFORE adding the NGINX config
sudo mkdir -p /etc/nginx/snippets
sudo nano /etc/nginx/snippets/admin-tailscale-only.conf

# Add content:
allow {{tutorial.tailscaleip}};
deny all;

# Similarly for connection_upgrade.conf:
sudo nano /etc/nginx/conf.d/connection_upgrade.conf

# Add content:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
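
After creating both files, validate the configuration and reload NGINX before retrying whatever failed:

sudo nginx -t && sudo systemctl reload nginx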

NGINX Config Fails: "no ssl_certificate is defined"

# Error: no "ssl_certificate" is defined for the "listen ... ssl" directive

# Cause: You're using the HTTPS config before running Certbot

# Solution 1: Use HTTP-only config first (recommended)
# - See Section 6B for the HTTP-only starter config
# - Run Certbot to get certificates
# - Then upgrade to full HTTPS config

# Solution 2: Comment out the SSL server blocks temporarily
# In your NGINX config, comment out all "server { listen 443 ssl..." blocks
# until after Certbot runs

# After Certbot succeeds, verify certs exist:
sudo ls -la /etc/letsencrypt/live/{{tutorial.domain}}/

# Then update your config with the full HTTPS version

502 Bad Gateway

# Check PM2 logs
pm2 logs {{tutorial.appname}} --lines 100
# Verify app is running
pm2 list

# Test app directly
curl -i http://127.0.0.1:{{tutorial.appport}}/

# Check NGINX configuration
sudo nginx -t

# Restart application
pm2 restart {{tutorial.appname}}
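
If the app shows as running but NGINX still returns 502, confirm something is actually listening on the port NGINX proxies to:

# Show what is bound to the app port (should be the node process)
sudo ss -ltnp | grep :{{tutorial.appport}}

# If nothing is listening, the app likely crashed at startup; check stderr only
pm2 logs {{tutorial.appname}} --err --lines 50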

Missing Node Modules

# Ensure lockfile exists
ls -lh pnpm-lock.yaml

# Reinstall dependencies
rm -rf node_modules
nvm use 24
pnpm install --frozen-lockfile

# Restart app
pm2 restart {{tutorial.appname}}

Database Connection Errors

# Verify DATABASE_URL in .env
grep DATABASE_URL .env

# Check database file exists
ls -lh prisma/db.sqlite

# Regenerate Prisma client
pnpm dlx prisma generate

# Run preflight check
pnpm run preflight:db
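
If the file exists but queries still fail, two common culprits are a corrupted database file or one the app's user can't write to. A quick check, assuming the sqlite3 CLI is installed (sudo apt install -y sqlite3):

# Database structurally sound? Expected output: ok
sqlite3 prisma/db.sqlite "PRAGMA integrity_check;"

# SQLite needs write access to the file AND its directory (journal/WAL files)
ls -ld prisma prisma/db.sqlite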

Prisma Client Not Generated

# Generate Prisma client
pnpm dlx prisma generate

# If using migrations
pnpm dlx prisma migrate deploy

# Restart application
pm2 restart {{tutorial.appname}}

Wrong Node Version

# Check current Node version
node --version

# Switch to Node 24
nvm use 24

# Set as default
nvm alias default 24

# Restart PM2 with correct Node
pm2 delete {{tutorial.appname}}
nvm use 24
pm2 start ecosystem.config.cjs --env production
pm2 save

SSL Certificate Issues

# Check certificate status
sudo certbot certificates

# Renew if needed
sudo certbot renew

# Test NGINX config
sudo nginx -t

# Reload NGINX
sudo systemctl reload nginx

High Memory Usage

# Check PM2 memory
pm2 list

# Monitor in real-time
pm2 monit

# Restart app to free memory
pm2 restart {{tutorial.appname}}

# Check system memory
free -h
htop

Port Already in Use

# Find process using port {{tutorial.appport}}
sudo lsof -i :{{tutorial.appport}}

# Kill process (replace PID with the number from lsof; try a graceful kill first)
sudo kill PID
# Only if it refuses to exit:
sudo kill -9 PID

# Or change port in ecosystem.config.cjs
# Update NGINX proxy_pass accordingly

"sudo: npm: command not found" (NVM users)

# This happens because NVM installs Node to ~/.nvm which sudo can't access
# WRONG:
sudo npm install -g pm2  # ❌ Fails

# CORRECT (no sudo needed - you own ~/.nvm):
npm install -g pm2  # ✅ Works

# For PM2 startup, run pm2 startup and copy the FULL command it outputs:
pm2 startup systemd
# Then run the sudo command it gives you, which includes the full NVM path:
# sudo env PATH=$PATH:/home/youruser/.nvm/versions/node/v24.13.0/bin pm2 startup systemd -u youruser --hp /home/youruser

Mysterious "Permission Denied" Errors (AppArmor)

What is AppArmor?

Ubuntu 24.04 ships with AppArmor enabled by default: a Mandatory Access Control (MAC) system that restricts what programs can access, even when classic file permissions would allow it. Docker and several other services come with pre-configured profiles that usually "just work"; NGINX from the Ubuntu repo normally runs unconfined unless you add a profile yourself.

# Check if AppArmor is enabled
sudo aa-enabled  # Should return "Yes"

# List all AppArmor profiles and their status
sudo aa-status

# Check if AppArmor is blocking something (look for DENIED entries)
sudo dmesg | grep -i apparmor
# Or check audit log
sudo grep apparmor /var/log/syslog | tail -20

# Common output when AppArmor blocks access:
# apparmor="DENIED" operation="open" profile="nginx" name="/some/path"

Common AppArmor scenarios:

# NGINX can't read files from an unusual location?
# If an AppArmor profile is active for NGINX, it restricts which directories
# NGINX may access. Check whether a profile exists (absent on a stock install):
sudo cat /etc/apparmor.d/usr.sbin.nginx

# Quick fix: Put profile in complain mode (logs but doesn't block)
sudo aa-complain /usr/sbin/nginx

# Better fix: Add your path to the profile
sudo nano /etc/apparmor.d/local/usr.sbin.nginx
# Add:  /var/www/your-custom-path/** r,

# Reload AppArmor
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.nginx

# Return to enforce mode after testing
sudo aa-enforce /usr/sbin/nginx

Node.js/PM2 Note: Node.js and PM2 typically run unconfined (no AppArmor profile), so they're rarely affected. If you want to sandbox your Node app, you'd need to create a custom profile, but that's advanced and usually unnecessary for typical deployments.
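
To check whether a given process is confined at all, read its AppArmor label from /proc; unconfined processes report "unconfined". A small sketch (the pgrep pattern assumes the Nuxt server was started as node .output/server/index.mjs):

# Show the AppArmor label of the running Nuxt server process
cat /proc/$(pgrep -f '.output/server/index.mjs' | head -1)/attr/current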

Additional Resources