# Storage Configuration Reference

## Storage Tiers

### Tier 1: VM-BOOT (SSD RAID-1)
- Device: 2x 500GB SSDs in RAID-1
- Performance: Fast boot and OS I/O, with mirrored redundancy for reliability
- Use Cases:
    - Proxmox OS and system files
    - VM templates and ISOs
    - Control plane VM boot drives

### Tier 2: VM-PERF (SAS RAID-10)

- Configuration: RAID-10 across SAS drives
- Performance: Balanced performance and redundancy
- Use Cases:
    - General VM storage
    - Application data requiring redundancy
    - Kubernetes persistent volumes (standard)

### Tier 3: VM-BULK (SAS RAID-6)

- Configuration: RAID-6 across remaining SAS drives
- Performance: High capacity, lower performance
- Use Cases:
    - Archive storage
    - Backup destinations
    - Large file storage
    - Log retention

## External: Apollo Storage Pools

### Apollo Main Array (NFS)

- Capacity: 72TB usable (unRAID parity-protected array)
- Network: 1GbE connection
- Use Cases:
    - Media storage (Plex library; see the mount example below)
    - Backup target for VolSync
    - Large file sharing
    - Long-term archive
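
Because the media library already lives on the Apollo array, it is exposed to the cluster as a statically defined NFS volume rather than being dynamically provisioned. The following is a minimal sketch of that pattern; the share path `/mnt/user/media`, the claim name `apollo-media`, and the `media` namespace are illustrative assumptions, not values taken from the actual configuration.

```yaml
# Hypothetical static PV/PVC for the existing Apollo media share
apiVersion: v1
kind: PersistentVolume
metadata:
  name: apollo-media
spec:
  capacity:
    storage: 50Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: apollo.lab.local
    path: /mnt/user/media          # assumed share path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: apollo-media
  namespace: media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""             # empty class: bind to the static PV, no dynamic provisioning
  volumeName: apollo-media
  resources:
    requests:
      storage: 50Ti
```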

### Apollo Freezer Pool (RAID-10)

- Capacity: 4.8TB usable (16x 600GB drives in RAID-10)
- Hardware: Dell MD1220 disk shelf
- Network: 1GbE connection
- Fault Tolerance: Can survive one drive failure per mirror pair
- Performance: Optimized for write-heavy workloads
- Planned Use Cases:
    - MinIO object storage VM
    - Database backups (CloudNativePG; see the sketch after this list)
    - Application backups requiring an S3 API
    - High-performance backup targets
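
Once the MinIO VM is in place, CloudNativePG clusters could point WAL archiving and base backups at it through the standard `barmanObjectStore` backup configuration. The following is only a sketch of what that might look like; the endpoint URL, bucket path, and credentials Secret are assumptions for illustration.

```yaml
# Hypothetical CloudNativePG backup configuration targeting MinIO on the Freezer pool
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
  namespace: databases
spec:
  instances: 3
  storage:
    size: 20Gi
    storageClass: balanced
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      destinationPath: s3://cnpg-backups/app-db   # assumed bucket
      endpointURL: http://minio.lab.local:9000    # assumed MinIO endpoint
      s3Credentials:
        accessKeyId:
          name: minio-credentials                 # assumed Secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-credentials
          key: SECRET_ACCESS_KEY
```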

## Kubernetes Storage Classes

### Current Storage Classes

```yaml
# SSD boot storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-boot
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: ssd
  tier: boot
```

```yaml
# Balanced performance storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: raid10
  tier: performance
```

```yaml
# High capacity storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bulk
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: raid6
  tier: capacity
```

```yaml
# External NFS storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-apollo
provisioner: nfs.csi.k8s.io
parameters:
  server: apollo.lab.local
  share: /mnt/user/k8s-volumes
```
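
A workload selects one of these tiers simply by naming the class in its claim. A minimal example (the claim name and size are placeholders):

```yaml
# Example claim against the balanced (RAID-10) tier
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: balanced
  resources:
    requests:
      storage: 20Gi
```

Because the local classes use `WaitForFirstConsumer`, binding is deferred until a pod that uses the claim is scheduled. Note also that `kubernetes.io/no-provisioner` classes do no dynamic provisioning, so a matching PersistentVolume still has to be created by hand.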

### Planned Storage Classes (Phase 3 - Ceph)

```yaml
# Ceph block storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: k8s-pool
  replicaSize: "3"
```

```yaml
# Ceph filesystem
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-fs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: k8s-filesystem
```
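
The `ceph-rbd` class above presumes that Rook manages a replicated pool named `k8s-pool`. A minimal pool definition along these lines would accompany it; the namespace and failure domain are assumptions pending the Phase 3 design:

```yaml
# Hypothetical Rook pool backing the planned ceph-rbd class
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: k8s-pool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
```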

## Application Storage Mapping

### Critical Applications (ssd-boot)
- VaultWarden: Database requires fast I/O
- Prometheus: Metrics database
- Grafana: Dashboard database
- cert-manager: Certificate storage

### Standard Applications (balanced)
- WordPress: Website data and uploads
- GitLab: Repository and metadata
- Registry: Container image storage
- Monitoring configs: Configuration data

### Bulk Applications (bulk or nfs-apollo)
- Media applications: Temporary download processing
- Backup destinations: VolSync targets
- Log storage: Long-term log retention
- Archive data: Infrequently accessed data

### Media Storage (nfs-apollo)

- Plex: Media library (consumed as in the sketch after this list)
- Radarr/Sonarr: Media management
- Download clients: Completed downloads
- Photo storage: Family photo archives
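
As a rough illustration of the media mapping, a Plex-style deployment mounts the Apollo media claim from the earlier static PV sketch. The image, namespace, and claim name below are assumptions, not the actual manifests.

```yaml
# Hypothetical media deployment mounting the Apollo NFS claim
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plex
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: plex
  template:
    metadata:
      labels:
        app: plex
    spec:
      containers:
        - name: plex
          image: plexinc/pms-docker:latest   # assumed image
          volumeMounts:
            - name: media
              mountPath: /data
      volumes:
        - name: media
          persistentVolumeClaim:
            claimName: apollo-media          # claim from the static PV sketch above
```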

## Backup Configuration

### VolSync Backup Targets

```yaml
# Apollo NFS backup destination
apiVersion: v1
kind: PersistentVolume
metadata:
  name: backup-apollo-nfs
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: apollo.lab.local
    path: /mnt/user/backups/volsync
```

### Backup Retention Policies

| Backup Type | Frequency | Retention |
|---|---|---|
| Critical PVs | Hourly | 24 hours, 7 days, 4 weeks |
| Standard PVs | Daily | 7 days, 4 weeks, 6 months |
| Bulk data | Weekly | 4 weeks, 6 months, 1 year |
| Config backups | Daily | 30 days, 12 months |
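
In VolSync terms, the "Critical PVs" row corresponds roughly to an hourly ReplicationSource whose restic retention keeps 24 hourly, 7 daily, and 4 weekly snapshots. A minimal sketch follows; the source PVC and the repository Secret (which must hold the restic repository URL and password) are assumed names.

```yaml
# Hypothetical hourly backup of a critical PVC to the Apollo repository
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: vaultwarden-backup
  namespace: vaultwarden
spec:
  sourcePVC: vaultwarden-data          # assumed claim name
  trigger:
    schedule: "0 * * * *"              # hourly
  restic:
    repository: vaultwarden-restic     # assumed Secret with repository URL and password
    copyMethod: Direct
    pruneIntervalDays: 7
    retain:
      hourly: 24
      daily: 7
      weekly: 4
```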

## Performance Characteristics

### IOPS Expectations
- SSD (ssd-boot): 50,000+ IOPS
- RAID-10 (balanced): 5,000-10,000 IOPS
- RAID-6 (bulk): 1,000-3,000 IOPS
- NFS (apollo): 500-1,000 IOPS

### Throughput Expectations
- SSD: 500-600 MB/s
- RAID-10: 500-1,000 MB/s
- RAID-6: 200-500 MB/s
- NFS: 100-125 MB/s (limited by 1GbE)

## Monitoring and Alerts

### Storage Metrics to Monitor
- Disk usage: Per-volume utilization
- IOPS: Read/write operations per second
- Latency: Response times for storage operations
- Throughput: Data transfer rates
- Health: RAID status, disk errors

### Alert Thresholds
- Disk usage: >80% warning, >90% critical
- Latency: >10ms warning, >50ms critical
- RAID degraded: Immediate critical alert
- Backup failures: Critical alert for missed backups
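
For the disk-usage thresholds, a PrometheusRule along the following lines could be used, assuming the Prometheus Operator CRDs are available in the cluster; the rule names and namespace are placeholders, and the expressions rely on the standard kubelet volume-stats metrics.

```yaml
# Hypothetical alert rules for the PVC usage thresholds above
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: storage-usage
  namespace: monitoring
spec:
  groups:
    - name: storage.rules
      rules:
        - alert: PVCUsageWarning
          expr: kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.80
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "PVC {{ $labels.persistentvolumeclaim }} is over 80% full"
        - alert: PVCUsageCritical
          expr: kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.90
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "PVC {{ $labels.persistentvolumeclaim }} is over 90% full"
```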

## Future Storage Planning

### Phase 2 (Multi-Node)
- Shared storage: Consider distributed storage needs
- Replication: Plan for cross-node data replication
- Network storage: Evaluate 10GbE for storage traffic

### Phase 3 (Ceph Integration)
- Dedicated storage nodes: R630 servers for Ceph OSDs
- Storage network: 10GbE for Ceph replication
- Pool configuration: Separate pools for different workloads

### Phase 4 (Hybrid Cloud)
- Cloud storage: Object storage integration
- Cross-region backup: Geographic data distribution
- Cost optimization: Tiered storage strategies