# Storage Integration Strategy

## Infrastructure Overview

### Current Storage Assets
- emerald: R720XD with 3-tier storage (SSD + RAID-10 + RAID-6)
- fuji: R720XD (planned, identical to emerald)
- apollo: R720XD + MD1220 running unRAID
    - Main array: 72TB (media and general storage)
    - Freezer pool: ~9.6TB (16x 600GB SAS, dedicated to backups)
## Apollo Integration with Kubernetes

### NFS Storage Benefits
- Massive capacity: 72TB for bulk storage needs
- Multi-access: ReadWriteMany volumes for shared data
- Cost effective: Existing infrastructure, no additional investment
- Proven reliability: unRAID provides data protection
### Kubernetes Storage Classes

#### Tier 4: NFS Bulk Storage (Apollo)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-bulk
provisioner: nfs.csi.k8s.io
parameters:
  server: apollo.local
  share: /mnt/user/k8s-storage
  mountPermissions: "0755"
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
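A claim against this class might look like the following sketch; the claim name and requested size are illustrative, not from this document:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-share          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany          # NFS lets many pods mount the same volume
  storageClassName: nfs-bulk
  resources:
    requests:
      storage: 500Gi         # nominal; csi-driver-nfs does not enforce capacity
```

Because the access mode is ReadWriteMany, pods on different nodes can mount the claim simultaneously, which is what enables the shared-data use cases below.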
## Use Cases for Apollo NFS Storage

### Primary Applications
- Media storage: Videos, images, large file repositories
- Backup destinations: Kubernetes persistent volume backups
- Shared application data: Content management systems
- Log aggregation: Long-term log storage and analysis
- Development assets: Build artifacts, container registry overflow

### Kubernetes Workload Examples
- GitLab/Gitea: Repository storage
- NextCloud/OwnCloud: File sharing platforms
- Prometheus: Long-term metrics storage (via remote storage; upstream advises against running the local TSDB on NFS)
- Grafana: Dashboard exports and backups
- CI/CD pipelines: Artifact storage
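As one concrete pattern from the list above, a Gitea-style StatefulSet could claim its repository storage from the nfs-bulk class. This is a minimal sketch; the name, image tag, mount path, and size are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitea
spec:
  serviceName: gitea          # assumes a matching headless Service exists
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.21   # example tag; pin to whatever is deployed
          volumeMounts:
            - name: repositories
              mountPath: /data      # Gitea's default data directory
  volumeClaimTemplates:
    - metadata:
        name: repositories
      spec:
        accessModes: ["ReadWriteMany"]   # RWO would also work at one replica
        storageClassName: nfs-bulk
        resources:
          requests:
            storage: 200Gi
```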
## 4-Tier Storage Architecture

### Complete Storage Hierarchy
```
Tier 1 (SSD Boot): Reliable boot and system storage
├── Proxmox OS and system files
├── VM templates and ISOs
└── Control plane VM boot drives

Tier 2 (SAS RAID-10): High-performance application storage
├── Application workloads
├── Container registries
└── Persistent volumes

Tier 3 (SAS RAID-6): Balanced performance and capacity
├── Development environments
├── Testing workloads
└── Medium-priority storage

Tier 4 (NFS/unRAID): High-capacity bulk storage
├── Bulk data storage
├── Media and archives
├── Backup destinations
└── Shared file systems
```
## Implementation Strategy

### Phase 1: Basic NFS Integration
- Configure NFS exports on apollo for Kubernetes
- Install NFS CSI driver in cluster
- Create storage classes for different NFS shares
- Test basic workloads using NFS storage
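For the "different NFS shares" step, each export gets its own class. A second class aimed at a backup share might look like this sketch; the class name and share path are assumptions to be matched against the actual unRAID exports:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-backups            # hypothetical class for the freezer pool share
provisioner: nfs.csi.k8s.io
parameters:
  server: apollo.local
  share: /mnt/user/backups     # assumed export path
  mountPermissions: "0755"
reclaimPolicy: Retain          # keep backup data even if a claim is deleted
volumeBindingMode: Immediate
allowVolumeExpansion: true
```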
### Phase 2: Advanced Integration
- Backup automation: Regular snapshots to apollo
- Tiered data movement: Automated archival policies
- Monitoring integration: Include apollo in cluster monitoring
- Disaster recovery: Cross-system backup strategies
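A minimal sketch of the backup-automation idea: a nightly CronJob that rsyncs an application volume to a claim provisioned from a backup-oriented NFS class. All names and the image are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pv-backup
spec:
  schedule: "0 3 * * *"            # 03:00 daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: rsync
              image: instrumentisto/rsync-ssh   # any image that ships rsync
              command: ["rsync", "-a", "--delete", "/source/", "/backup/"]
              volumeMounts:
                - name: source
                  mountPath: /source
                  readOnly: true
                - name: backup
                  mountPath: /backup
          volumes:
            - name: source
              persistentVolumeClaim:
                claimName: app-data          # hypothetical application PVC
            - name: backup
              persistentVolumeClaim:
                claimName: app-data-backup   # PVC on the backup NFS class
```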
### Phase 3: Optimization
- Performance tuning: NFS mount options optimization
- Network optimization: Dedicated storage VLANs
- Load balancing: Multiple NFS export points
- Caching layers: Local SSD cache for NFS data
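csi-driver-nfs passes StorageClass mountOptions straight through to the NFS mount, which is where the tuning happens. The values below are common starting points, not measured recommendations for this hardware:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-bulk-tuned         # hypothetical tuned variant of nfs-bulk
provisioner: nfs.csi.k8s.io
parameters:
  server: apollo.local
  share: /mnt/user/k8s-storage
mountOptions:
  - nfsvers=4.1
  - hard                       # retry indefinitely instead of erroring on blips
  - noatime                    # skip access-time updates
  - rsize=1048576              # 1 MiB reads/writes favor sequential throughput
  - wsize=1048576
  - nconnect=4                 # multiple TCP connections (Linux kernel >= 5.3)
```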
## Network Considerations

### Storage Network Design
- Dedicated VLAN: Isolate storage traffic
- 10GbE preferred: High-bandwidth for large file transfers
- Bonded interfaces: Redundancy and increased throughput
- Quality of Service: Prioritize storage traffic
### Security Considerations
- NFS security: Proper user/group mapping
- Network isolation: Storage VLAN access controls
- Encryption: In-transit encryption via Kerberos (krb5p) or RPC-over-TLS where the NFS stack supports it
- Access controls: Kubernetes RBAC for storage classes
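RBAC controls who may create PersistentVolumeClaims, but capping how much of apollo a namespace can consume is a ResourceQuota job; Kubernetes supports per-StorageClass quota keys, as in this sketch (namespace and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: bulk-storage-quota
  namespace: dev               # hypothetical namespace
spec:
  hard:
    nfs-bulk.storageclass.storage.k8s.io/requests.storage: 2Ti
    nfs-bulk.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
```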
## Backup and Disaster Recovery

### Multi-Tier Backup Strategy
```
Critical Data (Tier 1/2):
├── Real-time replication to fuji
├── Daily snapshots to apollo
└── Weekly cloud backups

Bulk Data (Tier 4):
├── unRAID parity protection
├── Periodic snapshots
└── Selective cloud archival
```
### Recovery Scenarios
- Single node failure: Use apollo as temporary storage
- Cluster rebuild: Restore from apollo snapshots
- Complete site failure: Cloud restore + apollo replication
## Performance Expectations

### NFS Performance (Apollo)
- Sequential throughput: ~100-500 MB/s (a single 1GbE link tops out near 110 MB/s; the upper end assumes 10GbE)
- Concurrent access: Excellent for shared ReadWriteMany workloads
- Latency: Higher than local storage and network dependent; keep latency-sensitive databases on Tiers 1-2
- Capacity: 72TB shared across all consumers, far beyond typical cluster needs
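To replace the estimate with a measured number, a throwaway Job can write a large file to an NFS-backed claim; dd reports throughput when it finishes. The PVC name is an assumption (a claim on the nfs-bulk class):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nfs-seq-bench
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: dd
          image: ubuntu:22.04
          # conv=fsync flushes to the server, so the result reflects NFS
          # throughput rather than the client page cache.
          command: ["sh", "-c",
            "dd if=/dev/zero of=/mnt/test.bin bs=1M count=4096 conv=fsync"]
          volumeMounts:
            - name: bench
              mountPath: /mnt
      volumes:
        - name: bench
          persistentVolumeClaim:
            claimName: nfs-bench     # hypothetical PVC on the nfs-bulk class
```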
### Optimal Workload Placement
- Boot and system storage: Tier 1 (SSD)
- General applications: Tier 2 (RAID-10)
- Development/testing: Tier 3 (RAID-6)
- Bulk/shared data: Tier 4 (NFS/Apollo)
## Cost Benefits

### Infrastructure Efficiency
- Maximize existing investment: Use apollo's 72TB capacity
- Reduce cluster storage needs: Offload bulk data
- Centralized management: Single backup/archive system
- Power efficiency: Bulk data lives on one storage-optimized box rather than consuming drive bays and power across compute nodes
## Future Considerations

### Apollo Upgrade Path
- 10GbE networking: Improve NFS performance
- SSD caching: Add cache drives to unRAID
- Replication: Mirror critical NFS shares
- Monitoring: Integrate with cluster observability
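For the monitoring item: assuming apollo runs node_exporter (available for unRAID via community plugins; an assumption, not something this document states), a minimal Prometheus scrape job would be:

```yaml
# prometheus.yml (sketch)
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: apollo-node
    static_configs:
      - targets: ["apollo.local:9100"]   # node_exporter's default port
```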
### Hybrid Cloud Integration
- Cloud tiering: Archive apollo data to cloud storage
- Disaster recovery: Replicate critical apollo data
- Burst capacity: Cloud storage for overflow scenarios
This strategy leverages apollo as a critical component of the overall storage architecture, providing massive capacity and shared storage capabilities to complement the high-performance local storage tiers.