
Autopirate Application Stack

Deployment Date: October 6-9, 2025; January 1-3, 2026 (Radarr, Tautulli, Overseerr)
Current Status: ✅ Operational
Namespace: autopirate

Overview

The autopirate stack is a collection of applications for automated media management and downloading. All applications are deployed using the bjw-s app-template Helm chart (v3.7.x) with LinuxServer.io container images.
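
All six applications follow the same manifest shape. Below is a minimal, hypothetical sketch of that pattern for Sonarr; the HelmRepository name, controller keys, and interval are assumptions, and the real manifests live in flux-repo:

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: sonarr
  namespace: autopirate
spec:
  interval: 30m
  chart:
    spec:
      chart: app-template
      version: 3.7.x              # semver range matching the chart version above
      sourceRef:
        kind: HelmRepository
        name: bjw-s               # assumed repository name
        namespace: flux-system
  values:
    controllers:
      main:
        containers:
          main:
            image:
              repository: lscr.io/linuxserver/sonarr
              tag: 4.0.15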

Architecture

Applications

| Application | Version | Purpose | Database |
| --- | --- | --- | --- |
| Sonarr | 4.0.15 | TV show management and automation | PostgreSQL (sonarr-main, sonarr-log) |
| Radarr | Latest | Movie management and automation | PostgreSQL (radarr-main, radarr-log) |
| Prowlarr | 1.36.3 | Indexer manager | PostgreSQL (prowlarr-main, prowlarr-log) |
| SABnzbd | 4.5.3 | Usenet downloader | SQLite |
| Tautulli | 2.16.0 | Plex Media Server monitoring and statistics | SQLite |
| Overseerr | 1.34.0 | Media request and discovery management | SQLite |

Infrastructure Dependencies

  • PostgreSQL: CloudNativePG cluster for Sonarr, Radarr, and Prowlarr databases
  • Storage:
      • Config: Ceph RBD (ceph-block storage class, 5-200Gi per app)
      • Media: Shared NFS from apollo (pvc-apollo-media PVC)
  • Ingress: Traefik with Authentik authentication
  • Authentication: Authentik SSO for all services

Resource Allocation

Updated: October 9, 2025

Resource requests and limits are configured using centralized resource tier definitions to handle intensive operations like media imports, downloading, and unpacking.

Centralized Resource Tiers

The cluster uses standardized resource tier ConfigMaps defined in flux-repo/infrastructure/policies/ for consistent resource allocation:

| Tier | CPU Request | CPU Limit | Memory Request | Memory Limit | Use Case |
| --- | --- | --- | --- | --- | --- |
| Small | 100m | 1000m | 256Mi | 1Gi | Lightweight services |
| Medium | 500m | 2000m | 1Gi | 4Gi | Standard applications |
| Large | 1000m | 4000m | 2Gi | 8Gi | Heavy processing |
| XLarge | 2000m | 6000m | 4Gi | 12Gi | Intensive workloads |
| Database | 500m | 4000m | 2Gi | 8Gi | Database optimized |

See Infrastructure Policies for complete tier documentation.

Application Tier Assignments

| Application | Tier | CPU (Request-Limit) | Memory (Request-Limit) | Rationale |
| --- | --- | --- | --- | --- |
| Sonarr | Large | 500m-4000m | 1Gi-4Gi | Media imports, renaming, and moving files are CPU-intensive |
| Radarr | Large | 500m-4000m | 1Gi-4Gi | Same workload profile as Sonarr for movie management |
| SABnzbd | XLarge | 1000m-6000m | 2Gi-12Gi | Download, PAR2 repair, and decompression are very intensive |
| Prowlarr | Medium | 200m-2000m | 512Mi-2Gi | Indexer searches and syncing, moderate load |
| Tautulli | Small | 100m-500m | 256Mi-512Mi | Lightweight monitoring, occasional database queries |
| Overseerr | Small | 100m-500m | 256Mi-512Mi | Web UI with occasional API calls to media managers |

Note: Current configurations use direct resource values in HelmRelease manifests, which is why some values above differ from the tier definitions. Future deployments can leverage Kustomize patches to apply tier-based resources per cluster, as sketched below.
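
A sketch of what such a patch could look like, assuming the HelmRelease names used on this page and app-template's controllers/containers value layout:

# Excerpt from a cluster-level kustomization.yaml
patches:
  - target:
      kind: HelmRelease
      name: sonarr
      namespace: autopirate
    patch: |
      - op: replace
        path: /spec/values/controllers/main/containers/main/resources
        value:
          requests:
            cpu: 1000m        # Large tier values from the table above
            memory: 2Gi
          limits:
            cpu: 4000m
            memory: 8Gi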

Cluster Capacity: Worker nodes have 12 cores and 64GB RAM each, so these allocations are well within available capacity.

Database Configuration

PostgreSQL (Sonarr, Radarr, Prowlarr)

Sonarr, Radarr, and Prowlarr use a shared CloudNativePG-managed PostgreSQL cluster with two databases each:

Sonarr Databases:

  • sonarr-main: Application data (series, episodes, settings)
  • sonarr-log: Application logs

Radarr Databases:

  • radarr-main: Application data (movies, settings)
  • radarr-log: Application logs

Prowlarr Databases:

  • prowlarr-main: Application data (indexers, settings)
  • prowlarr-log: Application logs

Connection Configuration (Sonarr example):

env:
  SONARR__POSTGRES__HOST: postgresql-rw
  SONARR__POSTGRES__PORT: "5432"
  SONARR__POSTGRES__USER: postgres
  SONARR__POSTGRES__MAINDB: sonarr-main
  SONARR__POSTGRES__LOGDB: sonarr-log
  SONARR__POSTGRES__PASSWORD:
    valueFrom:
      secretKeyRef:
        name: postgresql-superuser
        key: password

Radarr/Prowlarr Configuration: Both use the identical pattern with their respective database names.

Authentication: All use the postgresql-superuser SealedSecret for credentials.
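
If the application databases are missing on first startup (see the January 1, 2026 note in Deployment History), they can be created by hand. A sketch using the postgresql-1 pod name from the Common Operations section; the hyphenated database names must be double-quoted in SQL:

# Create the Radarr databases manually (adjust names per application)
kubectl exec -it postgresql-1 -n autopirate -- \
  psql -U postgres \
    -c 'CREATE DATABASE "radarr-main";' \
    -c 'CREATE DATABASE "radarr-log";'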

SQLite (SABnzbd, Tautulli, Overseerr)

SABnzbd, Tautulli, and Overseerr each use a SQLite database stored on the application's persistent config volume.

Networking

Services

All applications expose cluster-internal HTTP services:

  • Sonarr: Port 80
  • Radarr: Port 7878
  • SABnzbd: Port 8080
  • Prowlarr: Port 9696
  • Tautulli: Port 8181
  • Overseerr: Port 5055

Ingress Routes

Traefik IngressRoutes with Authentik authentication:

| Application | URL | Middleware |
| --- | --- | --- |
| Sonarr | https://sonarr.skaggsfamily.us | authentik-forwardauth |
| Radarr | https://radarr.skaggsfamily.us | authentik-forwardauth |
| SABnzbd | https://sabnzbd.skaggsfamily.us | authentik-forwardauth |
| Prowlarr | https://prowlarr.skaggsfamily.us | authentik-forwardauth |
| Tautulli | https://tautulli.skaggsfamily.us | authentik-forwardauth |
| Overseerr | https://overseerr.skaggsfamily.us | authentik-forwardauth |

Authentication: SSO via Authentik with domain-level session management
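
A minimal sketch of the IngressRoute pattern behind this table, using Sonarr as the example; the websecure entry point name and TLS handling are assumptions:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: sonarr
  namespace: autopirate
spec:
  entryPoints:
    - websecure              # assumed HTTPS entry point
  routes:
    - match: Host(`sonarr.skaggsfamily.us`)
      kind: Rule
      middlewares:
        - name: authentik-forwardauth
          namespace: authentik
      services:
        - name: sonarr
          port: 80           # service port from the Services list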

Storage

Config Volumes

Each application has a dedicated PVC for configuration:

# Example: Sonarr
persistence:
  config:
    enabled: true
    existingClaim: sonarr
    globalMounts:
      - path: /config

Storage Class: ceph-block
Sizes: 5-200Gi per application (SABnzbd's 200Gi volume is the largest; see the backup table below)
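
The existingClaim above refers to an ordinary PVC. A hypothetical sketch of that claim for Sonarr, sized to match the backup table below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarr
  namespace: autopirate
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-block
  resources:
    requests:
      storage: 10Gi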

Shared Media Volume

All applications share access to the media library:

persistence:
  media:
    existingClaim: pvc-apollo-media
    globalMounts:
      - path: /media

Backend: apollo:/mnt/user/media (NFS)
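
A hypothetical sketch of the statically provisioned PV/PVC pair behind pvc-apollo-media; the PV name and nominal capacity are assumptions, and the server address is apollo's (from the Backup section below):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-apollo-media         # assumed name
spec:
  capacity:
    storage: 1Ti                # nominal; actual capacity lives on apollo
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.104.30       # apollo
    path: /mnt/user/media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-apollo-media
  namespace: autopirate
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""          # bind to the static PV above, not a provisioner
  volumeName: pv-apollo-media
  resources:
    requests:
      storage: 1Ti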

Security

Pod Security

defaultPodOptions:
  securityContext:
    runAsNonRoot: false
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 0
    fsGroupChangePolicy: OnRootMismatch
    seccompProfile:
      type: RuntimeDefault

Note: The containers run as root to handle file permissions on the NFS media share. This is a known limitation of the LinuxServer.io images.

Container Security

securityContext:
  allowPrivilegeEscalation: true
  readOnlyRootFilesystem: false

Note: LinuxServer.io containers require write access to the root filesystem for configuration management.

Secrets Management

  • PostgreSQL credentials: Managed via SealedSecrets (postgresql-superuser)
  • Application secrets: Stored in application configuration (future: migrate to Kubernetes secrets)

Backup and Recovery

Backup Configuration

The autopirate stack uses a comprehensive backup strategy with all data backed up to Garage S3 on Apollo (172.16.104.30) over 10GbE.

PostgreSQL Database (CloudNativePG):

  • Method: Barman to Garage S3
  • Schedule: Daily at 3:45 AM
  • Retention: 30 days
  • Destination: s3://postgres-backups/autopirate
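
A sketch of the corresponding CloudNativePG configuration; the credentials secret name and the Garage endpoint port (3900 is Garage's default S3 port) are assumptions:

# Excerpt from the Cluster resource
spec:
  backup:
    retentionPolicy: 30d
    barmanObjectStore:
      destinationPath: s3://postgres-backups/autopirate
      endpointURL: http://172.16.104.30:3900   # Garage on apollo; port assumed
      s3Credentials:
        accessKeyId:
          name: garage-s3-credentials          # assumed secret name
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: garage-s3-credentials
          key: SECRET_ACCESS_KEY
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: postgresql-daily
  namespace: autopirate
spec:
  schedule: "0 45 3 * * *"   # six-field cron: 3:45 AM daily
  cluster:
    name: postgresql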

Config Volumes (VolSync):

| Application | Schedule | Size |
| --- | --- | --- |
| Prowlarr | 2:00 AM | 5Gi |
| Radarr | 2:05 AM | 10Gi |
| Sonarr | 2:10 AM | 10Gi |
| SABnzbd | 2:15 AM | 200Gi |
| Tautulli | 2:20 AM | 5Gi |
| Overseerr | 2:25 AM | 5Gi |

  • Method: Restic to Garage S3
  • Retention: 7 daily, 4 weekly, 6 monthly
  • Destination: s3://volsync-backups/<app-name>

Media Volume: Not backed up via VolSync - covered by Apollo's unRAID backup system.
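
For reference, a sketch of one VolSync ReplicationSource; the restic repository secret name and copyMethod are assumptions, while the schedule and retention mirror the values above:

apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: sonarr
  namespace: autopirate
spec:
  sourcePVC: sonarr
  trigger:
    schedule: "10 2 * * *"              # 2:10 AM daily, per the table above
  restic:
    repository: sonarr-volsync-secret   # assumed secret with restic credentials
    copyMethod: Snapshot
    retain:
      daily: 7
      weekly: 4
      monthly: 6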

Backup Verification

# Check PostgreSQL backup status
kubectl get backup,scheduledbackup -n autopirate

# Check VolSync backup status
kubectl get replicationsource -n autopirate

# Verify backup data in Garage
ssh root@apollo "docker exec Garage /garage bucket info postgres-backups"
ssh root@apollo "docker exec Garage /garage bucket info volsync-backups"

Common Operations

Accessing Applications

Applications are accessible via their ingress URLs after authenticating with Authentik:

  • https://sonarr.skaggsfamily.us
  • https://radarr.skaggsfamily.us
  • https://sabnzbd.skaggsfamily.us
  • https://prowlarr.skaggsfamily.us
  • https://tautulli.skaggsfamily.us
  • https://overseerr.skaggsfamily.us

Checking Application Status

# Get all autopirate pods
kubectl get pods -n autopirate

# Check specific application
kubectl get pods -n autopirate -l app.kubernetes.io/name=sonarr

# View logs
kubectl logs -n autopirate -l app.kubernetes.io/name=sonarr --tail=50

Restarting Applications

# Restart Sonarr
kubectl delete pod -n autopirate -l app.kubernetes.io/name=sonarr

# Restart SABnzbd
kubectl delete pod -n autopirate -l app.kubernetes.io/name=sabnzbd

# Restart Prowlarr
kubectl delete pod -n autopirate -l app.kubernetes.io/name=prowlarr

Note: Pods take 8-10 minutes to start due to NFS volume mounting delays.
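
Deleting the pod works because the Deployment recreates it; a rollout restart does the same thing without deleting pods by hand (the Deployment name here assumes the chart names workloads after the release):

kubectl rollout restart deployment/sonarr -n autopirate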

Accessing Databases

Sonarr PostgreSQL:

# Get password (only needed for network connections; the exec sessions below
# authenticate over the pod's local socket)
PGPASS=$(kubectl get secret postgresql-superuser -n autopirate -o jsonpath='{.data.password}' | base64 -d)

# Connect to main database
kubectl exec -it postgresql-1 -n autopirate -- \
  psql -U postgres -d sonarr-main

# Connect to log database
kubectl exec -it postgresql-1 -n autopirate -- \
  psql -U postgres -d sonarr-log

Troubleshooting

Pod Stuck in ContainerCreating

Symptom: Pod remains in ContainerCreating state for an extended period.

Cause: NFS volume mounting can take 8-10 minutes, especially on first mount.

Solution: Wait for the mount to complete. Monitor with:

kubectl describe pod <pod-name> -n autopirate

Sonarr CrashLoopBackOff with PostgreSQL Error

Symptom: Sonarr crashes with "password authentication failed for user 'postgres'" errors.

Cause: PostgreSQL password mismatch between the postgresql-superuser secret and the actual database password.

Solution: See PostgreSQL troubleshooting documentation.

SABnzbd Hostname Verification Failed

Symptom: SABnzbd returns "Access denied - Hostname verification failed" when accessing via ingress.

Cause: SABnzbd's hostname whitelist doesn't include the ingress hostname.

Solution: Ensure SABNZBD__HOST_WHITELIST_ENTRIES includes all hostnames:

env:
  SABNZBD__HOST_WHITELIST_ENTRIES: "sabnzbd,sabnzbd.autopirate,sabnzbd.autopirate.svc,sabnzbd.autopirate.svc.cluster,sabnzbd.autopirate.svc.cluster.local,sabnzbd.skaggsfamily.us"

403 Forbidden from Traefik

Symptom: Accessing application URL returns 403 Forbidden error.

Cause: Application not configured in Authentik or missing forwardauth middleware.

Solution: Ensure the IngressRoute includes the Authentik middleware:

middlewares:
  - name: authentik-forwardauth
    namespace: authentik

See Authentik documentation for provider configuration.
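
For reference, a sketch of the Middleware itself; the Authentik service name and port are assumptions based on chart defaults, while /outpost.goauthentik.io/auth/traefik is Authentik's standard embedded-outpost endpoint:

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: authentik-forwardauth
  namespace: authentik
spec:
  forwardAuth:
    address: http://authentik-server.authentik.svc.cluster.local/outpost.goauthentik.io/auth/traefik
    trustForwardHeader: true
    authResponseHeaders:
      - X-authentik-username
      - X-authentik-groups
      - X-authentik-email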

Deployment History

January 3, 2026: Tautulli and Overseerr Deployment

  • Deployed Tautulli 2.16.0 for Plex Media Server monitoring and statistics
  • Deployed Overseerr 1.34.0 for media request and discovery management
  • Both applications use SQLite databases (lightweight, no PostgreSQL needed)
  • Configured Authentik SSO authentication for both services
  • Added Homepage dashboard widgets with API key integration
  • Storage: 5Gi Ceph RBD volumes for each application's configuration

January 1, 2026: Radarr Deployment

  • Deployed Radarr for movie management and automation
  • Configured PostgreSQL databases (radarr-main, radarr-log)
  • Fixed database initialization issue (databases were missing on first startup)
  • Configured Authentik authentication at radarr.skaggsfamily.us
  • Same resource tier as Sonarr (Large - 500m-4000m CPU, 1Gi-4Gi memory)

October 9, 2025: Resource Allocation Improvements

  • Increased CPU and memory allocations for all applications
  • Added proper resource requests to guarantee baseline performance
  • Configured CPU limits to allow bursting during intensive operations

October 8, 2025: PostgreSQL Authentication Fix

  • Configured CloudNativePG superuserSecret for Sonarr database
  • Fixed password authentication issues for network connections
  • Updated SealedSecret to include both username and password

October 7, 2025: Prowlarr Deployment

  • Deployed Prowlarr 1.36.3 for indexer management
  • Configured Authelia authentication (later migrated to Authentik)
  • Configured PostgreSQL databases (prowlarr-main, prowlarr-log)

October 6, 2025: SABnzbd Deployment

  • Deployed SABnzbd 4.5.3 for Usenet downloading
  • Fixed hostname whitelist configuration
  • Configured shared media volume access

October 6, 2025: Sonarr Deployment

  • Deployed Sonarr 4.0.15 as first autopirate application
  • Configured PostgreSQL database backend
  • Established patterns for other autopirate apps

Future Enhancements

Planned Improvements

  • [x] Radarr: ✅ Deployed January 1, 2026
  • [x] Tautulli: ✅ Deployed January 3, 2026 - Plex monitoring and statistics
  • [x] Overseerr: ✅ Deployed January 3, 2026 - Media request management
  • [ ] Bazarr: Add subtitle management
  • [ ] Lidarr: Add music management
  • [ ] Readarr: Add book/audiobook management
  • [ ] VPN Integration: Route download traffic through VPN
  • [ ] Metrics: Add Prometheus exporters for monitoring
  • [ ] Automated Testing: Health checks and integration tests
  • [ ] Resource Quotas: Namespace-level resource limits
  • [ ] Network Policies: Restrict inter-pod communication

Security Improvements

  • [ ] Migrate to non-root containers (custom images or alternative charts)
  • [ ] Implement read-only root filesystem where possible
  • [ ] Add network policies to restrict database access
  • [ ] Migrate application secrets to Kubernetes Secrets
  • [x] Implement backup automation for databases and configuration (completed January 2026)

References

  • bjw-s app-template Chart: https://bjw-s.github.io/helm-charts/
  • LinuxServer.io Images: https://fleet.linuxserver.io/
  • Sonarr Documentation: https://wiki.servarr.com/sonarr
  • Radarr Documentation: https://wiki.servarr.com/radarr
  • Prowlarr Documentation: https://wiki.servarr.com/prowlarr
  • SABnzbd Documentation: https://sabnzbd.org/wiki/
  • Tautulli Documentation: https://github.com/Tautulli/Tautulli/wiki
  • Overseerr Documentation: https://docs.overseerr.dev/
  • Flux Configuration: flux-repo/apps/_bases/autopirate/
  • Cluster Deployment: flux-repo/clusters/prod/apps/

Last Updated: January 3, 2026
Next Review: February 1, 2026