FAQ
Frequently asked questions grouped by topic. Each entry describes the symptom, explains why it happens, and provides step-by-step resolution.
Table of Contents
Setup
- Port 3100 Already Allocated
- Restic Init Fails with SSH_FX_FAILURE
- Restic Repository Does Not Exist
- Restic Fails with "Configuration key HETZNER_SSH_KEY_PATH does not exist"
- Restic Hangs or Fails on Non-Standard SSH Port
- GPG Key Not Found During Dry Run
- .env Changes Not Taking Effect
- Dev and Prod Containers Conflict
- Dry Run Shows "No database dumper registered"
- Dry Run Shows "No notifier registered"
- SSH Warning About Post-Quantum Key Exchange
- First Backup Checklist
- pg_dump Version Mismatch
- GPG "Unusable public key" During Encrypt Stage
- Docker Cannot Reach Hetzner Storage Box (Port 23 Blocked)
- What is docker_network in projects.yml?
- How to Connect backupctl to Another Docker Compose Project's Database
- Docker Image Fails on ARM64 / Apple Silicon
- Notifications Disabled — "No notifier registered"
- How to Verify Backup Integrity
- When Do Config Changes Require a Restart?
- GPG Encryption — Which Key Goes Where?
Setup
Port 3100 Already Allocated
Symptom:
Error response from daemon: Bind for 0.0.0.0:3100 failed: port is already allocated

Why: Another container or process is already using port 3100. This commonly happens when:
- The dev environment (docker-compose.dev.yml) is running and you try to start production (docker-compose.yml)
- A previous container didn't shut down cleanly
Fix:
- Check what's using the port:
docker ps --format '{{.Names}}\t{{.Ports}}' | grep 3100
- If it's a backupctl container, stop it first:
# Stop dev environment
scripts/dev.sh down
# Stop production
docker compose down
- If it's another process entirely:
lsof -i :3100
- Alternatively, change the port in .env:
APP_PORT=3200

Rule of thumb: Never run docker-compose.yml (prod) and docker-compose.dev.yml (dev) simultaneously. They share the same port, database container name (backupctl-audit-db), and volume (backupctl-audit-data).
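The port check above can also be scripted. The sketch below defines a hypothetical `container_on_port` helper (not part of backupctl) that parses `docker ps --format '{{.Names}}\t{{.Ports}}'` output and prints the name of the container publishing a given host port:

```shell
# Print the container name bound to a given host port, reading
# `docker ps --format '{{.Names}}\t{{.Ports}}'` output on stdin.
# Hypothetical helper for illustration only.
container_on_port() {
  awk -F'\t' -v "p=:${1}->" 'index($2, p) { print $1 }'
}
```

Usage: `docker ps --format '{{.Names}}\t{{.Ports}}' | container_on_port 3100` — an empty result means the port conflict comes from a non-Docker process, so fall back to `lsof -i :3100`.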
Restic Init Fails with SSH_FX_FAILURE
Symptom:
Fatal: create repository at sftp:u547206@host:/backups/myproject failed:
sftp: "Failure" (SSH_FX_FAILURE)

Why: The remote directory does not exist on the Hetzner Storage Box, and restic cannot create parent directories over SFTP.
Fix:
Create the directories manually via SFTP before running restic init:
# Connect to the storage box and create directories
docker exec -i backupctl-dev sftp -i /home/node/.ssh/id_ed25519 \
-P 23 -o StrictHostKeyChecking=accept-new \
u547206@u547206.your-storagebox.de <<'EOF'
mkdir backups
mkdir backups/myproject
bye
EOF

Then initialize the restic repository:
scripts/dev.sh cli restic myproject init

Expected output:
created restic repository 51daba18a8 at sftp:u547206@host:backups/myproject
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Tip: For each new project, repeat the mkdir + restic init steps. You only need to do this once per project.
Restic Repository Does Not Exist
Symptom:
Fatal: repository does not exist: unable to open config file: Lstat: file does not exist
Is there a repository at the following location?
sftp:u547206@host:backups/myproject

Why: The restic repository has not been initialized yet. Every new project needs a one-time restic init before backups can run.
Fix:
# Dev environment
scripts/dev.sh cli restic myproject init
# Production
backupctl restic myproject init

Also check the repository path format. Hetzner Storage Boxes use relative paths from the user's home directory:
# Relative to the storage box user's home (avoid a leading slash, e.g. not `/backups/...`)
restic:
repository_path: backups/myproject

The resulting SFTP URI should look like sftp:user@host:backups/myproject (no leading / after the colon).
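The leading-slash rule can be made mechanical. This sketch (a hypothetical helper, not backupctl code) builds the SFTP URI from the SSH user, host, and repository path, stripping an accidental leading slash so the path stays relative to the storage box user's home:

```shell
# Build the restic SFTP repository URI; a leading "/" in the path is
# dropped so it remains relative to the storage box home directory.
# Hypothetical helper for illustration only.
build_restic_repo() {
  user="$1"; host="$2"; path="$3"
  path="${path#/}"    # "/backups/x" -> "backups/x"
  printf 'sftp:%s@%s:%s\n' "$user" "$host" "$path"
}

build_restic_repo u547206 u547206.your-storagebox.de /backups/myproject
# -> sftp:u547206@u547206.your-storagebox.de:backups/myproject
```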
Restic Fails with "Configuration key HETZNER_SSH_KEY_PATH does not exist"
Symptom:
TypeError: Configuration key "HETZNER_SSH_KEY_PATH" does not exist

This appears both during startup recovery and when running commands.
Why: The HETZNER_SSH_KEY_PATH environment variable is missing from .env. This variable tells restic which SSH private key to use for SFTP connections.
Fix:
Add the variable to .env. The path must be the key's location inside the container, not on the host:
HETZNER_SSH_KEY_PATH=/home/node/.ssh/id_ed25519

This works because docker-compose.dev.yml mounts ./ssh-keys:/home/node/.ssh:ro, so your local ssh-keys/id_ed25519 becomes /home/node/.ssh/id_ed25519 inside the container.
After adding the variable, restart the container to pick it up:
scripts/dev.sh restart # dev
# or
docker compose restart # prod

Verify the mapping:
Host (your machine) Container
───────────────────────── ──────────────────────
./ssh-keys/id_ed25519 → /home/node/.ssh/id_ed25519
./ssh-keys/id_ed25519.pub → /home/node/.ssh/id_ed25519.pub
./ssh-keys/config → /home/node/.ssh/config
./ssh-keys/known_hosts → /home/node/.ssh/known_hosts

Restic Hangs or Fails on Non-Standard SSH Port
Symptom:
Restic commands hang for 30+ seconds then fail, or SSH shows "Connection refused" despite the storage box being reachable from the host.
Why: Hetzner Storage Boxes use SSH port 23 (not the standard 22). If the SSH port is not passed to restic's SSH subprocess, it defaults to port 22 and times out.
Fix:
Ensure HETZNER_SSH_PORT is set in .env:
HETZNER_SSH_PORT=23

backupctl passes this to restic via the RESTIC_SSH_COMMAND environment variable, which constructs the full SSH command:
ssh -i /home/node/.ssh/id_ed25519 -p 23 -o StrictHostKeyChecking=accept-new

Verify SSH connectivity from inside the container:
docker exec backupctl-dev ssh -i /home/node/.ssh/id_ed25519 \
-p 23 -o StrictHostKeyChecking=accept-new \
u547206@u547206.your-storagebox.de ls

If this works, restic will too.
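To see how the key path and port combine into the SSH command restic runs, here is a minimal sketch. `build_ssh_command` is a hypothetical helper (not part of backupctl) that assembles the command from the same two values the .env file provides:

```shell
# Assemble the ssh invocation restic uses for SFTP from the key path
# and port (mirrors HETZNER_SSH_KEY_PATH / HETZNER_SSH_PORT).
# Hypothetical helper for illustration only.
build_ssh_command() {
  printf 'ssh -i %s -p %s -o StrictHostKeyChecking=accept-new\n' "$1" "$2"
}

build_ssh_command /home/node/.ssh/id_ed25519 23
# -> ssh -i /home/node/.ssh/id_ed25519 -p 23 -o StrictHostKeyChecking=accept-new
```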
Alternatively, use an SSH config file (ssh-keys/config):
Host u547206.your-storagebox.de
User u547206
Port 23
IdentityFile /home/node/.ssh/id_ed25519
StrictHostKeyChecking accept-new

GPG Key Not Found During Dry Run
Symptom:
GPG key not found: Command "gpg --list-keys user@example.com" failed:
gpg: error reading key: No public key

Why: The GPG public key for the configured recipient is not in the container's GPG keyring. This happens when:
- The key file is missing from gpg-keys/
- The key file has the wrong extension (must be .pub or .gpg)
- The container started before the key was placed in the directory
Fix:
Step 1 — Place the GPG public key in the gpg-keys/ directory:
# Export from your local keyring
gpg --export --armor backup@company.com > ./gpg-keys/backup.pub
# Or copy an existing key file
cp /path/to/backup-key.pub.gpg ./gpg-keys/

Step 2 — Verify the file is there with the correct extension:
ls -la gpg-keys/
# Should show: backupctl-backup.pub (or .gpg)

Step 3 — Restart the container. GpgKeyManager auto-imports all .pub and .gpg files from gpg-keys/ on startup:
scripts/dev.sh restart

Look for the import log line:
[GpgKeyManager] Auto-imported 1 GPG key(s) from ./gpg-keys

Step 4 — Verify the key is in the keyring:
docker exec backupctl-dev gpg --list-keys

Expected:
/root/.gnupg/pubring.kbx
-------------------------
pub ed25519 2026-03-15 [SC]
AB12CD34EF56...
uid [unknown] Backup Key <backup@company.com>

Step 5 — Confirm the recipient in projects.yml matches the key's UID or email:
encryption:
enabled: true
type: gpg
recipient: backup@company.com # Must match the GPG key

.env Changes Not Taking Effect
Symptom:
You add or change a variable in .env, but the application still uses the old value. For example, adding HETZNER_SSH_KEY_PATH but still getting "Configuration key does not exist."
Why: Docker Compose reads env_file at container start time, not continuously. Changes to .env require a container restart.
Fix:
# Dev environment
scripts/dev.sh restart
# Production
docker compose restartImportant:
docker compose restartis enough — you don't need to rebuild. The.envfile is read by Docker Compose (viaenv_file: .env) and injected as OS environment variables. NestJSConfigModulepicks them up fromprocess.env.
Verify the variable is set inside the container:
docker exec backupctl-dev printenv | grep HETZNER
# HETZNER_SSH_HOST=u547206.your-storagebox.de
# HETZNER_SSH_USER=u547206
# HETZNER_SSH_PORT=23
# HETZNER_SSH_KEY_PATH=/home/node/.ssh/id_ed25519

Dev and Prod Containers Conflict
Symptom:
Starting production containers while dev is running causes errors:
- Port 3100 already allocated
- backupctl-audit-db container gets recreated
- Data in the audit database is lost
- Orphan container warnings
Why: Both docker-compose.yml and docker-compose.dev.yml share:
- Port 3100 (configurable via APP_PORT)
- Container name backupctl-audit-db
- Volume name backupctl-audit-data
- Network name backupctl-network
This is by design — dev and prod are mutually exclusive environments.
Fix:
Always stop one before starting the other:
# Switch from dev to prod
scripts/dev.sh down
scripts/backupctl-manage.sh deploy
# Switch from prod to dev
docker compose down
scripts/dev.sh up

Quick check — which environment is running?
docker ps --format '{{.Names}}' | grep backupctl
- backupctl-dev — dev environment is running
- backupctl — prod environment is running
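That quick check can be wrapped into a small helper. The sketch below (hypothetical, not part of the repo's scripts) reads container names on stdin and reports which environment they imply:

```shell
# Decide which backupctl environment is running from a list of
# container names (one per line on stdin): prints dev, prod, or none.
# Hypothetical helper for illustration only.
which_env() {
  names="$(cat)"
  if printf '%s\n' "$names" | grep -qx 'backupctl-dev'; then
    echo dev
  elif printf '%s\n' "$names" | grep -qx 'backupctl'; then
    echo prod
  else
    echo none
  fi
}
```

Usage: `docker ps --format '{{.Names}}' | which_env`. The exact-line match (`grep -x`) avoids counting the shared backupctl-audit-db container as either environment.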
Dry Run Shows "No database dumper registered"
Symptom:
No database dumper registered for type: postgres

Why: The DumperBootstrapService didn't run. This service registers database adapter factories (postgres, mysql, mongo) into the DumperRegistry on startup.
Fix:
- Verify the container started cleanly — check for startup errors:
docker logs backupctl-dev 2>&1 | head -30
- If using the dev environment, ensure the source code is mounted correctly:
docker exec backupctl-dev ls /app/src/domain/backup/infrastructure/adapters/dumpers/
# Should list: dumper-bootstrap.service.ts, postgres-dump.adapter.ts, etc.
- Restart the container:
scripts/dev.sh restart
- Run the dry-run again:
scripts/dev.sh cli run myproject --dry-run

Dry Run Shows "No notifier registered"
Symptom:
No notifier registered for type: slack

Why: The NotifierBootstrapService registers notifier adapters from .env config on startup. If the required env var is missing, the adapter is skipped.
Fix:
For Slack, ensure SLACK_WEBHOOK_URL is set in .env:
NOTIFICATION_TYPE=slack
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T.../B.../xxx

For Webhook, ensure WEBHOOK_URL is set:
NOTIFICATION_TYPE=webhook
WEBHOOK_URL=https://your-server.com/backup-webhook

For Email, ensure SMTP is configured:
NOTIFICATION_TYPE=email
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=backupctl@example.com
SMTP_PASSWORD=secret
SMTP_TO=admin@example.com
SMTP_FROM=backupctl@example.com

After adding the variables, restart:
scripts/dev.sh restart

SSH Warning About Post-Quantum Key Exchange
Symptom:
Every SSH/restic command shows:
** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html

Why: This is an informational warning from OpenSSH 9.x+. The SSH server (Hetzner Storage Box) doesn't support post-quantum key exchange yet. This does not affect functionality — connections still work and are encrypted with classical algorithms.
What to do: Nothing. This is safe to ignore. The warning will disappear once Hetzner upgrades their SSH servers to support post-quantum algorithms.
If you want to suppress the warning, add to ssh-keys/config:
Host *.your-storagebox.de
PQCWarning no

Note: PQCWarning requires OpenSSH 9.9+. Older versions will ignore this option silently.
First Backup Checklist
Before running your first real backup, walk through this checklist. Each item maps to a dry-run check.
Step 1: Configuration
scripts/dev.sh cli config validate

Verify config/projects.yml loads without errors and all ${} variables resolve.
Step 2: Dry Run
scripts/dev.sh cli run myproject --dry-run

All 6 checks should pass:
=== Dry Run: myproject ===
Config loaded — project config is valid
Database dumper — adapter found for type: postgres
Notifier — adapter found for type: slack
Restic repo — repository accessible
Disk space — XX GB free (minimum: 5 GB)
GPG key — key found for recipient (if encryption enabled)
All checks passed — myproject is ready for backup.

Step 3: Verify Database Connectivity
The dry-run checks that a dumper is registered, but doesn't test the actual database connection. Verify manually:
# PostgreSQL
docker exec backupctl-dev pg_isready -h <db-host> -p <db-port> -U <db-user>
# MySQL
docker exec backupctl-dev mysqladmin ping -h <db-host> -P <db-port> -u <db-user> -p
# MongoDB
docker exec backupctl-dev mongosh --host <db-host> --port <db-port> --eval "db.runCommand({ping:1})"

Important: The database host must be reachable from inside the Docker network. If your database runs on the host machine, use host.docker.internal (macOS/Windows) or the host's Docker bridge IP (Linux).
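The three connectivity checks above differ only in the client tool and flag spelling. As a sketch, a hypothetical `db_ping_command` helper (not part of backupctl) maps a database type to the check command, which you can then run via `docker exec`:

```shell
# Print the connectivity-check command for a database type, host,
# and port. Hypothetical helper for illustration only.
db_ping_command() {
  case "$1" in
    postgres) printf 'pg_isready -h %s -p %s\n' "$2" "$3" ;;
    mysql)    printf 'mysqladmin ping -h %s -P %s\n' "$2" "$3" ;;
    mongo)    printf 'mongosh --host %s --port %s --eval "db.runCommand({ping:1})"\n' "$2" "$3" ;;
    *)        echo "unknown database type: $1" >&2; return 1 ;;
  esac
}
```

Usage (hypothetical host name): `docker exec backupctl-dev $(db_ping_command postgres host.docker.internal 5432)`.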
Step 4: Run the Backup
scripts/dev.sh cli run myproject

Step 5: Verify
# Check the audit log
scripts/dev.sh cli status myproject --last 1
# List remote snapshots
scripts/dev.sh cli snapshots myproject --last 1

Quick Reference: Required .env Variables

| Variable | Example | Purpose |
|---|---|---|
| AUDIT_DB_PASSWORD | eR199naK... | Audit database password |
| HETZNER_SSH_HOST | u547206.your-storagebox.de | Storage box hostname |
| HETZNER_SSH_USER | u547206 | Storage box SSH user |
| HETZNER_SSH_PORT | 23 | Storage box SSH port |
| HETZNER_SSH_KEY_PATH | /home/node/.ssh/id_ed25519 | SSH key path inside container |
| RESTIC_PASSWORD | pNJ7bFj0... | Restic repository encryption password |
| SLACK_WEBHOOK_URL | https://hooks.slack.com/... | Slack notification webhook (if using slack) |
| Per-project DB password | MYPROJECT_DB_PASSWORD=... | Referenced via ${...} in projects.yml |
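The table above lends itself to a preflight check. This sketch defines a hypothetical `check_required_vars` helper (not part of backupctl) that reports any variables from the list that are unset or empty in the current environment:

```shell
# Report required environment variables that are unset or empty.
# Returns non-zero if anything is missing. POSIX-sh compatible.
# Hypothetical helper for illustration only.
check_required_vars() {
  missing=0
  for v in "$@"; do
    eval "val=\${$v:-}"   # indirect lookup without bash-only ${!v}
    if [ -z "$val" ]; then
      echo "MISSING: $v"
      missing=1
    fi
  done
  return "$missing"
}
```

Usage: `check_required_vars AUDIT_DB_PASSWORD HETZNER_SSH_HOST HETZNER_SSH_USER HETZNER_SSH_PORT HETZNER_SSH_KEY_PATH RESTIC_PASSWORD` run inside the container (e.g. via `docker exec`) catches a forgotten .env entry before the first backup.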
Quick Reference: Required Files
backupctl/
├── .env # All secrets and configuration
├── config/
│ └── projects.yml # Project backup definitions
├── ssh-keys/
│ ├── id_ed25519 # SSH private key (chmod 600)
│ ├── id_ed25519.pub # SSH public key
│ ├── config # SSH client config (host, port, key)
│ └── known_hosts # Storage box host key
└── gpg-keys/
    └── backup.pub # GPG public key (if encryption enabled)

pg_dump Version Mismatch ("server version: 17.x; pg_dump version: 9.x")
Symptom:
pg_dump: server version: 17.9; pg_dump version: 9.4.14
pg_dump: aborting because of server version mismatch

Why: pg_dump requires the client version to be >= the server version. Alpine Linux's default postgresql-client package ships an ancient version (9.4). If your target database runs PostgreSQL 14+, the dump will fail.
Fix: backupctl's Dockerfiles already install postgresql17-client from Alpine edge, which includes pg_dump 17. If you see this error, your container is using a stale image.
Rebuild the container:
# Dev
scripts/dev.sh restart
# Production
docker compose up -d --build

Verify inside the container:
docker exec backupctl-dev pg_dump --version
# pg_dump (PostgreSQL) 17.9

Note: If you're running backupctl without Docker (local development), ensure your system pg_dump matches or exceeds the target database version. On macOS: brew install postgresql@17.
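The version rule itself is a simple major-version comparison. A minimal sketch (hypothetical helper, not backupctl code) that takes the two version strings and succeeds only when the client is new enough:

```shell
# Succeed when the pg_dump major version is >= the server major
# version, given strings like "17.9" or "9.4.14".
# Hypothetical helper for illustration only.
dump_version_ok() {
  [ "${1%%.*}" -ge "${2%%.*}" ]
}
```

Usage: `dump_version_ok "$(pg_dump --version | awk '{print $NF}')" 17.9 || echo "upgrade pg_dump"`.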
GPG "Unusable public key" During Encrypt Stage
Symptom:
gpg: CF7D15E776A1FD1E: There is no assurance this key belongs to the named user
gpg: encryption failed: Unusable public key

Why: GPG requires trust to be set on imported public keys before using them for encryption. In a non-interactive container environment, keys imported from files have "unknown" trust level by default.
Fix: backupctl already passes --trust-model always to GPG, so this should not occur in normal operation. If you see this error, your container image is outdated.
Rebuild:
scripts/dev.sh restart # dev
docker compose up -d --build # production

If you're running GPG commands manually inside the container and hit this, add --trust-model always:
gpg --batch --yes --trust-model always --encrypt \
--recipient backup@company.com \
--output file.dump.gpg file.dump

Docker Cannot Reach Hetzner Storage Box (Port 23 Blocked)
Symptom:
From the Mac terminal, SSH to Hetzner works. From inside Docker, all TCP connections to the storage box are refused:
# Works from Mac
nc -z u547206.your-storagebox.de 23 # succeeds
# Fails from Docker
docker exec backupctl-dev nc -z u547206.your-storagebox.de 23 # fails

Why: This is an ISP/router issue, not a Hetzner or Docker issue. Many ISPs block outbound TCP port 23 (telnet) on IPv4. Your Mac connects to Hetzner over IPv6 (bypassing the block), but the Docker VM (Colima or Docker Desktop) only supports IPv4 outbound.
You can verify this:
# Host IPv4 — blocked
nc -4 -z -w 3 u547206.your-storagebox.de 23 # fails
# Host IPv6 — works
nc -6 -z -w 3 u547206.your-storagebox.de 23 # succeeds
# Docker always uses IPv4
docker exec backupctl-dev curl -s ifconfig.me # shows IPv4 address
curl -s ifconfig.me # shows IPv6 address (different!)

Fix: Use a socat relay on the Mac host to bridge IPv4 traffic to Hetzner over IPv6:
# Install socat (one-time)
brew install socat
# Get the storage box IPv6 address
dig AAAA u547206.your-storagebox.de +short
# e.g., 2a01:4f8:2b01:ac::2
# Start the relay
socat "TCP4-LISTEN:2323,fork,reuseaddr" "TCP6:[2a01:4f8:2b01:ac::2]:23" &

Then configure docker-compose.dev.yml to route through the relay:
environment:
HETZNER_SSH_HOST: host.docker.internal
HETZNER_SSH_PORT: "2323"

See the Development Guide for full setup instructions.
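To avoid hand-assembling the relay invocation each time, the command can be generated from the listen port and the resolved IPv6 address. `relay_command` below is a hypothetical formatter (not part of the repo); the address shown is the document's example:

```shell
# Format the socat relay command for a local IPv4 listen port and the
# storage box IPv6 address. Hypothetical helper for illustration only.
relay_command() {
  printf 'socat "TCP4-LISTEN:%s,fork,reuseaddr" "TCP6:[%s]:23"\n' "$1" "$2"
}

# Example: resolve the address, then run the printed command with "&"
#   relay_command 2323 "$(dig AAAA u547206.your-storagebox.de +short)"
```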
Note: This is a macOS development-only workaround. On Linux production servers, Docker shares the host's network and IPv6 works natively.
What is docker_network in projects.yml?
Question: What is the docker_network field in projects.yml and when do I need it?
Answer: docker_network is an optional field that tells backupctl which Docker network to join in order to reach a project's database.
projects:
- name: my-app
docker_network: myapp_default # optional
database:
host: postgres # hostname on that network

When you need it:
- Your database runs in a separate Docker Compose stack (e.g., your application's own docker-compose.yml)
- The database container is on a different Docker network than backupctl
When you don't need it:
- The database is on the host machine (use host.docker.internal as the host)
- The database is already on the same Docker network as backupctl
On scripts/dev.sh up and restart, the startup script automatically runs docker network connect for each project's declared network. The same logic runs in production via scripts/backupctl-manage.sh deploy.
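The connect step the startup script performs can be sketched as a loop. The helper below is hypothetical (the real logic lives in scripts/dev.sh and scripts/backupctl-manage.sh); it prints the commands rather than executing them, so it doubles as a dry run:

```shell
# Emit one `docker network connect` command per declared project
# network for a given container. Prints instead of executing.
# Hypothetical sketch of what the startup script does.
emit_network_connects() {
  container="$1"; shift
  for net in "$@"; do
    printf 'docker network connect %s %s\n' "$net" "$container"
  done
}

emit_network_connects backupctl-dev myapp_default
# -> docker network connect myapp_default backupctl-dev
```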
To see available networks:
docker network ls

How to Connect backupctl to Another Docker Compose Project's Database
Symptom: scripts/dev.sh cli run myproject --dry-run fails because the database host is unreachable. The database runs in another Docker Compose project.
Why: Each Docker Compose project creates its own isolated network. Containers on different networks cannot reach each other by default.
Fix:
Step 1 — Find the target network name:
docker network ls | grep myapp
# myapp_default

Step 2 — Add docker_network to the project in config/projects.yml:
projects:
- name: my-app
docker_network: myapp_default
database:
host: postgres # the service name in the other docker-compose.yml
port: 5432
# ...

Step 3 — Restart the dev environment:
scripts/dev.sh restart

The script will automatically connect to the network:
✔ Connected to network: myapp_default

Step 4 — Verify connectivity:
docker exec backupctl-dev pg_isready -h postgres -p 5432
# postgres:5432 - accepting connections

Docker Image Fails on ARM64 / Apple Silicon
Symptom:
exec /usr/local/bin/restic: exec format error

Or docker pull fails with:
no matching manifest for linux/arm64/v8

Why: The Docker image was built only for AMD64 (x86_64). ARM64 servers (AWS Graviton, Apple Silicon Macs, Oracle Ampere) need a native ARM64 image.
Fix:
Starting from v0.1.3, backupctl publishes multi-architecture images (AMD64 + ARM64). Pull the latest:
docker compose pull
docker compose up -d

If you're running from source, Docker Buildx handles multi-arch automatically:
docker compose up -d --build

Notifications Disabled — "No notifier registered for type: slack"
Symptom:
No notifier registered for type: slack

Backup fails during dry-run or actual run.
Why: The notification type is set to slack (either explicitly or via the default), but SLACK_WEBHOOK_URL is missing from .env. The NotifierBootstrapService only registers a notifier adapter when its required env var is present.
Fix:
Option A — Configure the notifier:
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T.../B.../xxx

Option B — Disable notifications (if you don't need them):
Remove the notification block from projects.yml and don't set NOTIFICATION_TYPE in .env. The system will log "Notifications disabled" and skip notification steps.
Then restart:
docker compose restart backupctl

How to Verify Backup Integrity (Full Round-Trip Test)
Question: How do I verify that my backup is not corrupted and can actually be restored?
Answer: Run a full round-trip test: backup → restic check → restore → decrypt → compare checksums.
Step 1 — Restic repository integrity:
backupctl restic myproject check

Expected: no errors were found
Step 2 — Restore the latest snapshot:
docker exec backupctl sh -c 'mkdir -p /tmp/verify && \
RESTIC_PASSWORD=your-restic-password restic \
-r sftp:user@host:backups/myproject \
-o sftp.command="ssh -p 23 -i /home/node/.ssh/id_ed25519 -F /home/node/.ssh/config user@host -s sftp" \
restore latest --target /tmp/verify'

Step 3 — If encrypted, copy to your local machine and decrypt:
docker cp backupctl:/tmp/verify/data/backups/myproject/myproject_backup.dump.gpg /tmp/
gpg --decrypt /tmp/myproject_backup.dump.gpg > /tmp/myproject_backup.dump

Step 4 — Compare checksums:
# On the server (original dump)
docker exec backupctl sha256sum /data/backups/myproject/myproject_backup.dump
# On your machine (decrypted from restore)
shasum -a 256 /tmp/myproject_backup.dump

If both checksums match, the full chain (dump → encrypt → restic → restore → decrypt) is verified.
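Eyeballing two 64-character hashes is error-prone, so the comparison can be scripted. `checksums_match` below is a hypothetical helper (not part of backupctl) that compares only the hash field of the two tool outputs, since sha256sum and shasum print different file paths after the digest:

```shell
# Compare the leading hash fields of two checksum lines, e.g. the
# outputs of `sha256sum f` and `shasum -a 256 f`.
# Hypothetical helper for illustration only.
checksums_match() {
  h1="${1%% *}"; h2="${2%% *}"    # keep text before the first space
  [ -n "$h1" ] && [ "$h1" = "$h2" ]
}
```

Usage: `checksums_match "$server_line" "$local_line" && echo "round trip verified"`.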
Step 5 — Verify the dump is readable:
pg_restore --list /tmp/myproject_backup.dump | head -20

Step 6 — Clean up:
docker exec backupctl rm -rf /tmp/verify
rm /tmp/myproject_backup.dump /tmp/myproject_backup.dump.gpg

When Do Config Changes Require a Restart?
Question: When I change configuration, what needs a restart vs reload vs nothing?
| Change | Action Required |
|---|---|
| projects.yml fields (database, retention, encryption, hooks, etc.) | Nothing — re-read on next backup run |
| projects.yml cron schedule | backupctl config reload |
| .env values (secrets, ports, hosts) | docker compose up -d --force-recreate backupctl |
| projects.yml new project added | backupctl config reload |
GPG Encryption — Which Key Goes Where?
Question: How does GPG encryption work and where do I put the keys?
| Key | Location | Purpose |
|---|---|---|
| Public key | gpg-keys/ directory (mounted into container) | Encrypts the dump during backup. Safe to store on the backup server. |
| Private key | Your local machine / secure workstation | Decrypts the dump during restore. Never put this on the backup server. |
Setup:
# Export your public key
gpg --export --armor your@email.com > gpg-keys/backup.pub
# Configure in .env
ENCRYPTION_ENABLED=true
GPG_RECIPIENT=your@email.com
GPG_KEYS_DIR=/app/gpg-keys
# Restart to auto-import
docker compose restart backupctl
# Verify it was imported
docker exec backupctl gpg --list-keys

The public key is auto-imported on every container startup from the gpg-keys/ directory.
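To confirm the configured recipient actually matches an imported key, the keyring listing can be checked mechanically. This sketch (hypothetical helper, not backupctl code) greps the `uid` lines of `gpg --list-keys` output for the recipient email in angle brackets:

```shell
# Succeed when a recipient email appears as a <...> uid in a keyring
# listing passed on stdin. Hypothetical helper for illustration only.
recipient_in_keyring() {
  grep -q "<$1>"
}
```

Usage: `docker exec backupctl gpg --list-keys | recipient_in_keyring your@email.com || echo "recipient key not imported"`.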
Getting Help
If you've checked this FAQ and the Troubleshooting guide and are still stuck:
- Report an issue on GitHub — Bug reports, feature requests, or documentation improvements
- View existing issues — Check if someone else has reported the same problem
What's Next
- Runtime troubleshooting — Troubleshooting covers issues after initial setup.
- Configuration reference — Configuration for all .env and projects.yml options.
- CLI commands — CLI Reference for all 14 commands.
- Daily operations — Cheatsheet for copy-paste commands.