# PostgreSQL with CloudNativePG

Deployment Date: October 8, 2025 · Current Status: ✅ Operational · Operator Version: 0.26.0
## Overview

The homelab uses CloudNativePG as the PostgreSQL operator for Kubernetes. CloudNativePG is an open-source operator that manages the full lifecycle of PostgreSQL clusters on Kubernetes.
## Why CloudNativePG?

Migration from Bitnami: In 2025, Broadcom deprecated the free Bitnami container images (effective August 28, 2025), moving them to a legacy repository and gating continued updates behind a "Bitnami Secure" subscription reportedly priced at $50,000-$72,000/year.

Advantages over Bitnami Helm charts:

- Operator-based management: automated day-2 operations (backups, recovery, failover)
- Built-in HA: automatic primary election and replica management
- Advanced backup/recovery: point-in-time recovery (PITR) with WAL archiving
- Automated updates: rolling updates with zero downtime
- Better monitoring: native Prometheus metrics and PostgreSQL exporter integration
- Active development: community-driven project designed for production Kubernetes
## Architecture

### Operator Deployment

The CloudNativePG operator is deployed cluster-wide in the `cnpg-system` namespace:

- Namespace: `cnpg-system`
- Deployment: `cloudnative-pg`
- Scope: cluster-wide (manages PostgreSQL clusters in all namespaces)
### PostgreSQL Cluster Configuration

PostgreSQL clusters are defined using the Cluster CRD (Custom Resource Definition):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql
  namespace: autopirate
spec:
  instances: 1
  # Superuser secret configuration
  superuserSecret:
    name: postgresql-superuser
  storage:
    storageClass: nfs-kubernetes-pv
    size: 50Gi
  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: 256MB
  bootstrap:
    initdb:
      database: autopirate
      owner: autopirate
```
## Deployed Clusters

### Autopirate PostgreSQL

- Namespace: `autopirate`
- Cluster Name: `postgresql`
- Purpose: database backend for the autopirate application stack

Configuration:

- Instances: 1 (single instance, non-HA)
- Storage: 50Gi NFS persistent volume
- Storage Class: `nfs-kubernetes-pv` (apollo:/mnt/user/k8s-nfs)
- Database: `autopirate`
- Owner: `autopirate` user

Resource Naming:

- Pod: `postgresql-1`
- PVC: `postgresql-1`
- Services: `postgresql-rw` (read-write), `postgresql-ro` (read-only), `postgresql-r` (read)
- Secrets: `postgresql-app` (application credentials), `postgresql-superuser` (admin credentials)
## Common Operations

### Accessing PostgreSQL

Using the application user:

```bash
# Get credentials from the app secret
PGUSER=$(kubectl get secret postgresql-app -n autopirate -o jsonpath='{.data.username}' | base64 -d)
PGPASS=$(kubectl get secret postgresql-app -n autopirate -o jsonpath='{.data.password}' | base64 -d)

# Connect via kubectl exec
kubectl exec -it postgresql-1 -n autopirate -- psql -U "$PGUSER" autopirate
```

Using the superuser:

```bash
# Get superuser credentials
PGPASS=$(kubectl get secret postgresql-superuser -n autopirate -o jsonpath='{.data.password}' | base64 -d)

# Connect as the postgres superuser (peer auth, so no password prompt)
kubectl exec -it postgresql-1 -n autopirate -- psql -U postgres
```
From within the cluster:

```text
# Read-write service (primary)
postgresql-rw.autopirate.svc.cluster.local:5432

# Read-only service (replicas, if HA is enabled)
postgresql-ro.autopirate.svc.cluster.local:5432
```
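For applications that take a single connection URI, these service names can be assembled into a libpq DSN. A minimal sketch with placeholder credentials (the user, password, and database here are illustrative; real values come from the `postgresql-app` secret):

```bash
#!/usr/bin/env sh
# Build a libpq connection URI for the in-cluster read-write service.
# Credentials below are placeholders for illustration only.
DB_USER="autopirate"
DB_PASS="example-password"   # in practice: read from the postgresql-app secret
DB_HOST="postgresql-rw.autopirate.svc.cluster.local"
DB_NAME="autopirate"

DSN="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DSN"
```

The resulting URI can be injected into an application pod as, say, a `DATABASE_URL` environment variable sourced from a secret.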
### Checking Cluster Status

```bash
# Get cluster status
kubectl get cluster -n autopirate

# Get detailed cluster information
kubectl describe cluster postgresql -n autopirate

# Check pod status
kubectl get pods -n autopirate

# View cluster logs
kubectl logs postgresql-1 -n autopirate
```
### Backup and Recovery

Planned configuration (not yet implemented).

CloudNativePG supports automated backups using:

- Physical backups: WAL archiving with base backups
- Backup destinations: S3-compatible storage (MinIO, Backblaze B2)
- Point-in-time recovery (PITR): restore to any point in time within the WAL retention window
Example backup configuration:

```yaml
spec:
  backup:
    barmanObjectStore:
      destinationPath: s3://bucket-name/path
      s3Credentials:
        accessKeyId:
          name: backup-credentials
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-credentials
          key: ACCESS_SECRET_KEY
    retentionPolicy: "30d"
```
### Manual Backup

```bash
# Trigger an on-demand backup (requires the cnpg kubectl plugin)
kubectl cnpg backup postgresql -n autopirate

# List backups
kubectl get backups -n autopirate
```
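Once the object store above is configured, backups can also run on a timer via CNPG's ScheduledBackup resource. A sketch (the resource name and schedule are assumptions; note CNPG uses a six-field cron expression with a leading seconds field):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: postgresql-daily   # assumed name
  namespace: autopirate
spec:
  # Six-field cron (seconds first): daily at 02:00
  schedule: "0 0 2 * * *"
  backupOwnerReference: self
  cluster:
    name: postgresql
```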
### Scaling to High Availability

To enable HA with replicas:

```yaml
spec:
  instances: 3  # Change from 1 to 3
  # Optional: specify replica sync mode
  postgresql:
    syncReplicaElectionConstraint:
      enabled: true
```

This creates:

- 1 primary instance (read-write)
- 2 replica instances (read-only)
- Automatic failover if the primary fails
## Monitoring

### Prometheus Metrics

CloudNativePG exposes Prometheus metrics for each PostgreSQL cluster.

Available metrics:

- PostgreSQL statistics (connections, transactions, queries)
- Replication lag (in HA mode)
- WAL generation and archiving
- Backup status
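Scraping these metrics with the Prometheus Operator can be switched on directly in the Cluster spec, which makes CNPG create a PodMonitor for the cluster. A sketch (assumes the Prometheus Operator CRDs are installed):

```yaml
spec:
  monitoring:
    # Have the operator create a PodMonitor for this cluster's pods
    enablePodMonitor: true
```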
### Health Checks

The operator performs continuous health checks:

- Liveness probe: ensures PostgreSQL is responsive
- Readiness probe: checks whether the instance is ready for connections
- Startup probe: allows PostgreSQL time to initialize
## Security

### Authentication

- Superuser: configured via the `superuserSecret` field in the Cluster spec and stored in the `postgresql-superuser` SealedSecret
    - Secret type: `kubernetes.io/basic-auth`
    - Required keys: `username` and `password`
    - Used for administrative operations and network authentication
- Application user: automatically generated and stored in the `postgresql-app` secret
- Connection encryption: TLS can be enabled via the cluster spec

Important: CloudNativePG uses different authentication methods depending on the connection path:

- Local connections: peer authentication (no password required when connecting from localhost)
- Network connections: scram-sha-256 authentication (password required when connecting via the `postgresql-rw` service)

When configuring the `superuserSecret`, ensure the password in the secret matches the actual PostgreSQL user password, especially for existing clusters where the superuser password was auto-generated during bootstrap.
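If tighter host-based rules are wanted on top of these defaults, CNPG accepts custom `pg_hba.conf` entries through the Cluster spec and inserts them ahead of its generated rules. A sketch (the CIDR is an assumption; substitute the cluster's actual pod network):

```yaml
spec:
  postgresql:
    pg_hba:
      # Assumed pod network CIDR; require TLS + scram for all network clients
      - hostssl all all 10.244.0.0/16 scram-sha-256
```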
### Network Policies

Recommended (not yet implemented):

```yaml
# Allow only application pods to connect to PostgreSQL
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgresql-access
  namespace: autopirate
spec:
  podSelector:
    matchLabels:
      cnpg.io/cluster: postgresql
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: autopirate-app
```
## Upgrade Strategy

### Operator Upgrades

The CloudNativePG operator is managed via Helm and Flux.

Upgrade process:

1. Flux detects a new chart version
2. Helm performs a rolling upgrade of the operator
3. Existing clusters continue running unaffected
4. New features become available for cluster specs
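The Flux objects behind this flow look roughly like the following sketch (names, namespaces, and intervals are assumptions; the actual manifests live under flux-repo/infrastructure/controllers/cloudnative-pg/):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: cloudnative-pg
  namespace: cnpg-system
spec:
  interval: 1h
  chart:
    spec:
      chart: cloudnative-pg
      version: "0.26.x"   # semver range: Flux upgrades when a new matching version appears
      sourceRef:
        kind: HelmRepository
        name: cloudnative-pg
        namespace: flux-system   # assumed location of the HelmRepository
```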
### PostgreSQL Version Upgrades

CloudNativePG rolls out new PostgreSQL versions via the `imageName` field (minor releases upgrade in place; major version upgrades need additional planning).

Upgrade procedure:

1. Update `imageName` to the new PostgreSQL version
2. The operator performs a rolling update
3. The primary is updated last to maintain availability
4. Application connections may experience a brief interruption
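The change itself is a one-line edit to the Cluster spec. A sketch (the tag is an example; CNPG publishes operand images under ghcr.io/cloudnative-pg/postgresql):

```yaml
spec:
  # Bump the operand image to roll the cluster onto a new PostgreSQL release
  imageName: ghcr.io/cloudnative-pg/postgresql:16
```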
## Troubleshooting

### Cluster Not Starting

```bash
# Check operator logs
kubectl logs -n cnpg-system -l app.kubernetes.io/name=cloudnative-pg

# Check cluster events
kubectl describe cluster postgresql -n autopirate

# Check pod events
kubectl describe pod postgresql-1 -n autopirate
```
### Connection Issues

```bash
# Verify service endpoints
kubectl get endpoints -n autopirate

# Test connectivity from another pod
kubectl run -it --rm debug --image=postgres:16 --restart=Never -- \
  psql -h postgresql-rw.autopirate.svc -U autopirate -d autopirate
```
### Password Authentication Failures

If applications fail to connect with "password authentication failed" errors:

Symptom: applications using the `postgresql-superuser` secret fail with password authentication errors when connecting via the `postgresql-rw` service.

Root cause: CloudNativePG auto-generates a superuser password during the initial bootstrap. If the cluster was created without a `superuserSecret` configured, adding one later does not automatically update the password inside PostgreSQL.

Solution:

1. Ensure the Cluster spec includes a `superuserSecret` reference
2. Verify the secret has both `username` and `password` keys and is of type `kubernetes.io/basic-auth`
3. Update the PostgreSQL password to match the secret:

```bash
# Get the password from the secret
PGPASS=$(kubectl get secret postgresql-superuser -n autopirate -o jsonpath='{.data.password}' | base64 -d)

# Update the PostgreSQL password (peer auth allows this without a password)
kubectl exec -n autopirate postgresql-1 -- \
  psql -U postgres -c "ALTER USER postgres WITH PASSWORD '$PGPASS';"
```
### Storage Issues

```bash
# Check PVC status
kubectl get pvc -n autopirate

# Check PV and storage class
kubectl get pv
kubectl get storageclass nfs-kubernetes-pv
```
## Migration from Bitnami

Completed: October 8, 2025

The migration from the Bitnami PostgreSQL Helm chart to CloudNativePG involved:

1. Operator installation:
    - Added the CloudNativePG HelmRepository to Flux sources
    - Deployed the operator in the `cnpg-system` namespace
    - Configured cluster-wide operator scope
2. Cluster migration:
    - Removed the Bitnami HelmRelease and PVC
    - Created a CloudNativePG `Cluster` resource
    - Configured the same storage class (`nfs-kubernetes-pv`)
    - Created a new 50Gi PVC for a fresh deployment
3. Data restoration:
    - Pending: restore from backup files

Benefits realized:

- Eliminated the dependency on deprecated Bitnami images
- Gained operator-based lifecycle management
- Prepared for future HA and automated backup capabilities
- Simplified cluster configuration (the operator handles the complexity)
## Future Enhancements

### Planned Features
- [ ] Automated Backups: Configure WAL archiving to Backblaze B2
- [ ] Point-in-time Recovery: Enable PITR for disaster recovery
- [ ] High Availability: Scale to 3 instances with automatic failover
- [ ] Monitoring Integration: Add Prometheus ServiceMonitor for metrics
- [ ] Connection Pooling: Deploy PgBouncer for connection management
- [ ] TLS Encryption: Enable encrypted client connections
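For the connection-pooling item above, CNPG ships a `Pooler` CRD that fronts a cluster with PgBouncer. A sketch of what that could look like here (the pooler name, instance count, and parameters are assumptions):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: postgresql-pooler-rw   # assumed name
  namespace: autopirate
spec:
  cluster:
    name: postgresql
  instances: 1
  type: rw                     # pool connections to the primary
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "100"
```

Applications would then connect to the pooler's service instead of `postgresql-rw` directly.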
### Additional Clusters

As more applications require PostgreSQL:
- Create separate Cluster resources per application/namespace
- Leverage operator for consistent configuration
- Share operator infrastructure across all clusters
## References

- CloudNativePG Documentation: https://cloudnative-pg.io/documentation/
- CloudNativePG GitHub: https://github.com/cloudnative-pg/cloudnative-pg
- Flux Configuration: flux-repo/infrastructure/controllers/cloudnative-pg/
- Autopirate Cluster Config: flux-repo/apps/_bases/autopirate/postgresql/
Last Updated: October 9, 2025 · Next Review: November 9, 2025