# Hardware Inventory

## Current Infrastructure

### Primary Compute Servers
#### emerald (Dell PowerEdge R720XD)

Status: ✅ Operational - Kubernetes Primary Node

Specifications:

- Model: Dell PowerEdge R720XD
- CPU: 2x Intel Xeon E5-2660 v2 @ 2.20GHz (10 cores each, 40 threads total)
- Memory: 384GB DDR3 ECC (24x 16GB modules)
- Storage:
    - 2x 500GB 2.5" SSDs in rear bays (RAID-1 for Proxmox OS + boot storage)
    - 16x 600GB 10K RPM SAS drives (configured in RAID arrays)
- Network:
    - 2x 1GbE (integrated Broadcom)
    - 1x iDRAC7 Enterprise
- Power: Dual redundant PSUs

Current Configuration:

- Hypervisor: Proxmox VE 8
- VMs: Kubernetes cluster (1 control plane + 2 workers)
- Storage Tiers: SSD RAID-1 (boot), SAS RAID-10 (performance), SAS RAID-6 (capacity)
#### fuji (Dell PowerEdge R720XD)

Status: ✅ Installed - Proxmox Configured

Specifications:

- Model: Dell PowerEdge R720XD
- CPU: 2x Intel Xeon E5-2660 v2 @ 2.20GHz (10 cores each, 40 threads total)
- Memory: 384GB DDR3 ECC (24x 16GB modules)
- Storage:
    - 2x 500GB 2.5" SSDs in rear bays (RAID-1 for Proxmox OS + boot storage)
    - 16x 600GB 10K RPM SAS drives (configured in RAID arrays)
- Network:
    - 2x 1GbE (integrated Broadcom)
    - 1x iDRAC7 Enterprise
- Power: Dual redundant PSUs
- Physical Status: Racked and wired

Current Configuration:

- Hypervisor: Proxmox VE 8
- Storage Tiers: SSD RAID-1 (boot), available SAS drives for VM storage
- Planned Role: Second cluster node in Phase 2 expansion
### Compute Expansion Servers
#### bishop (Dell PowerEdge R630)

Status: ✅ Installed - Awaiting Configuration

Specifications:

- Model: Dell PowerEdge R630 (1U form factor)
- CPU: 2x Intel Xeon E5-2680 v3 @ 2.50GHz (12 cores each, 48 threads total)
- Memory: 32GB DDR4 ECC (upgradable; additional RAM being sourced)
- Storage:
    - 1x 80GB SATA SSD (boot drive)
    - 8x 600GB 10K RPM SAS drives
    - RAID Controller: Dell PERC H330 Mini
- Network:
    - 4x 1GbE (integrated)
    - 1x iDRAC8 Enterprise
- Power: Dual redundant PSUs
- Physical Status: Racked and wired

Planned Use Cases:

- Kubernetes worker node
- Ceph OSD node (distributed storage)
- High-density compute workloads
#### castle (Dell PowerEdge R630)
Status: ✅ Installed - Awaiting Configuration
Specifications: Identical to bishop
#### domino (Dell PowerEdge R630)
Status: ✅ Installed - Awaiting Configuration
Specifications: Identical to bishop
### Storage Server
#### apollo (Dell PowerEdge R720XD + MD1220)

Status: ✅ Operational - Network Storage

Specifications:

- Primary: Dell PowerEdge R720XD
- CPU: 2x Intel Xeon E5-2660 v2 @ 2.20GHz (10 cores each, 40 threads total)
- Memory: 384GB DDR3 ECC (24x 16GB modules)
- Expansion: Dell PowerVault MD1220 (24x 2.5" bay expansion shelf)
- Storage:
    - Main Array: 72TB usable (unRAID configuration)
    - Freezer Pool: 4.8TB usable (16x 600GB drives in RAID-10 on the MD1220)
- Network: 1GbE connection to lab network
- Operating System: unRAID 6.x
- Services: NFS exports for Kubernetes persistent volumes

Current Integration:

- NFS storage classes for Kubernetes
- Media storage for Plex and download automation
- Backup target for VolSync

Future Planning:

- MinIO VM deployment on the Freezer pool for S3-compatible object storage
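The NFS integration above can be sketched as a static Kubernetes PersistentVolume. This is a minimal example, not apollo's actual configuration: the server address and export path below are placeholders, not values from this inventory.

```yaml
# Hedged sketch: a static NFS PersistentVolume backed by an unRAID export on apollo.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: apollo-media-nfs
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany                # NFS lets many pods mount the same export
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.1
  nfs:
    server: 172.16.0.20            # hypothetical lab address for apollo
    path: /mnt/user/k8s            # hypothetical unRAID user-share export
```

A PersistentVolumeClaim with matching access modes and size then binds to this volume; for dynamic provisioning, a helper such as nfs-subdir-external-provisioner would supply the storage class instead.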
## Network Infrastructure

### Core Networking

#### Internet Connection

- Provider: Fiber ISP
- Speed: 1Gb/s symmetric
- IPv4: Static IP assignment
- IPv6: Available (not currently utilized)

#### Primary Router

- Model: Ubiquiti EdgeRouter-X
- Role: Core routing, DHCP, firewall
- Management: Web interface + CLI
### Switching Infrastructure

#### Primary Aggregation Switch (SW-AGGR-01)

- Model: Cisco Catalyst WS-C3850-12X48U
- Ports:
    - 12x 10GBASE-T copper RJ45 (TenGigabitEthernet 1/1/1-12)
    - 48x 1GbE RJ45 (GigabitEthernet 1/0/1-48)
- Network Module: C3850-NM-2-40G (2x 40Gbps QSFP+ ports)
- Features: Layer 3, StackWise-480, VLAN support, jumbo frames (MTU 9000)
- Location: Main network closet
- IP Address: 172.16.0.10
- Use Cases:
    - Core aggregation for house network
    - 40Gbps uplink to server rack switch
    - Management traffic aggregation
#### Server Rack Switch (SW-RACK-01)

Status: ✅ Installed - Configuration In Progress

- Model: Cisco Catalyst WS-C3850-12X48U
- Ports:
    - 12x 10GBASE-T copper RJ45 (TenGigabitEthernet 1/1/1-12)
    - 48x 1GbE RJ45 (GigabitEthernet 1/0/1-48)
- Network Module: C3850-NM-2-40G (2x 40Gbps QSFP+ ports)
- Features: Layer 3, StackWise-480, VLAN support, jumbo frames (MTU 9000)
- Location: Server rack
- IP Address: 172.16.0.11
- Use Cases:
    - Dedicated 10GbE storage network for Ceph (VLAN 104)
    - Server management network (VLAN 103)
    - 40Gbps uplink to main aggregation switch

Pending Configuration:

- 40GbE uplink to SW-AGGR-01 (hardware installation pending)
- Data VLAN configuration and connections

Inter-Switch Uplink: 40Gbps QSFP+ connection between SW-AGGR-01 and SW-RACK-01 (planned)
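Once the QSFP+ hardware is installed, the pending uplink and VLAN work could look roughly like this IOS-XE fragment. It is an untested sketch: the interface name and VLAN names are assumptions, and only the VLAN IDs (103, 104) and MTU come from this inventory.

```
! Hedged sketch for SW-RACK-01 - not a verified configuration
vlan 103
 name SERVER-MGMT
vlan 104
 name CEPH-STORAGE
!
system mtu 9000
!
interface FortyGigabitEthernet1/1/1
 description 40G uplink to SW-AGGR-01
 switchport mode trunk
 switchport trunk allowed vlan 103,104
```

A matching trunk on the SW-AGGR-01 side would carry the same VLANs, and the MTU must agree on both switches for jumbo frames to pass end to end.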
#### Legacy Server Access Switch (Retired)
- Model: Cisco SG300-28
- Ports: 28x 1GbE (24 ports + 4 combo mini-GBIC)
- Features: Layer 2/3 Lite, VLAN routing, managed
- Firmware: v1.4.11.5
- Former Location: Server rack
- Former IP Address: 172.16.0.11
- Status: ⛔ Retired - Replaced by SW-RACK-01 (Cisco Catalyst 3850) in November 2025
#### House Access Switches
- Model: Multiple Cisco SG-300 series
- Distribution: Throughout house for endpoint connectivity
### Wireless Infrastructure

#### Access Points

Status: ✅ Operational - Upgraded to Catalyst 9120

- Model: 3x Cisco Catalyst 9120AXI
- Standard: Wi-Fi 6 (802.11ax)
- Management: Embedded Wireless Controller (controller-less) or Cisco DNA Center capable
- Coverage: Whole-house wireless coverage
- Features:
    - Wi-Fi 6 (802.11ax) support
    - 4x4:4 MU-MIMO
    - Internal antennas
    - PoE+ powered
- Replaced: Cisco Aironet 3802i (802.11ac Wave 2) in November 2025
## Future Hardware Planning

### Phase 2 Requirements (Q1 2025)

- Network: Consider 10GbE NICs for emerald and fuji
- Storage: Plan distributed storage network
- Redundancy: Evaluate UPS requirements

### Phase 3 Requirements (Q3 2025)

- Integration: Bring the R630 servers into the cluster
- Network: 10GbE switching for storage network
- Storage: Ceph-dedicated storage nodes

### Phase 4 Requirements (Q1 2026)

- Cloud: Oracle Cloud Free Tier integration
- Connectivity: VPN hardware/software
- Monitoring: External monitoring infrastructure
## Maintenance Schedule

### Regular Maintenance
- Monthly: Firmware updates, health checks
- Quarterly: Deep cleaning, thermal monitoring
- Annually: Warranty review, lifecycle planning
### Upgrade Paths
- Memory: All servers support additional RAM
- Storage: SSD and SAS expansion available
- Network: 10GbE upgrade paths identified
## Power and Cooling

### Power Requirements

- Current Infrastructure (All Racked):
    - emerald (R720XD): ~300W
    - fuji (R720XD): ~300W
    - apollo (R720XD): ~300W
    - bishop (R630): ~200W
    - castle (R630): ~200W
    - domino (R630): ~200W
    - Networking: ~100W
    - Total: ~1600W (all 6 servers + networking)
- UPS: Recommended 2400VA/1920W minimum for full infrastructure
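The power-budget arithmetic above can be checked with a short script. The per-server wattages come from this inventory; the 20% headroom and 0.8 UPS power factor are assumptions used to show how the 2400VA/1920W recommendation falls out.

```python
# Power-budget check for the racked infrastructure (wattages are the estimates above).
loads_w = {
    "emerald (R720XD)": 300,
    "fuji (R720XD)": 300,
    "apollo (R720XD)": 300,
    "bishop (R630)": 200,
    "castle (R630)": 200,
    "domino (R630)": 200,
    "networking": 100,
}

total_w = sum(loads_w.values())        # ~1600 W steady-state draw

HEADROOM = 1.2                         # assumed 20% margin over steady-state
POWER_FACTOR = 0.8                     # assumed line-interactive UPS rating

min_ups_w = total_w * HEADROOM         # 1920 W
min_ups_va = min_ups_w / POWER_FACTOR  # 2400 VA

print(f"total load: {total_w} W")
print(f"minimum UPS: {min_ups_va:.0f} VA / {min_ups_w:.0f} W")
```

Sizing against total steady-state draw plus headroom covers power-on spikes; the VA figure matters because UPS units are rated for apparent power, not just watts.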
### Cooling Considerations

- Current Load: 6 servers racked (emerald, fuji, apollo, bishop, castle, domino)
- Rack Cooling: Adequate with existing ventilation
- Fan Control: IPMI-based fan control implemented on the R720XD servers (emerald, fuji, apollo)
- Monitoring: Environmental temperature sensors planned
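The IPMI-based fan control noted above is commonly done on 12th-generation Dell servers with `ipmitool` raw commands. This is a hedged sketch rather than the lab's actual tooling; the iDRAC address and credentials are placeholders.

```python
# Hedged sketch: set a manual fan duty cycle on a Dell R720XD via ipmitool raw commands.
import subprocess

def fan_raw_args(percent):
    """Raw IPMI payload that sets a manual fan duty cycle (0-100%)."""
    if not 0 <= percent <= 100:
        raise ValueError("duty cycle must be 0-100")
    return ["raw", "0x30", "0x30", "0x02", "0xff", f"0x{percent:02x}"]

def set_fans(host, user, password, percent, dry_run=True):
    """Build (and optionally run) the ipmitool commands for a manual fan speed."""
    base = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password]
    commands = [
        base + ["raw", "0x30", "0x30", "0x01", "0x00"],  # disable automatic fan control
        base + fan_raw_args(percent),                    # apply the manual duty cycle
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands

# Dry run against a hypothetical iDRAC address and the Dell default credentials:
for cmd in set_fans("192.168.1.120", "root", "calvin", 25):
    print(" ".join(cmd))
```

Sending `raw 0x30 0x30 0x01 0x01` hands control back to the automatic fan curve, which is the safe fallback if temperatures climb.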
## Specifications Summary

| Server | CPU Threads | Memory | Storage | Role | Status |
|---|---|---|---|---|---|
| emerald | 40 | 384GB | 2x 500GB SSD + 16x 600GB SAS | Kubernetes Primary | ✅ Operational |
| fuji | 40 | 384GB | 2x 500GB SSD + 16x 600GB SAS | Kubernetes Secondary | ✅ Installed |
| apollo | 40 | 384GB | 72TB (Main) + 4.8TB (Freezer RAID-10) | Network Storage | ✅ Operational |
| bishop | 48 | 32GB | 80GB SSD + 8x 600GB SAS | Worker/Storage | ✅ Installed |
| castle | 48 | 32GB | 80GB SSD + 8x 600GB SAS | Worker/Storage | ✅ Installed |
| domino | 48 | 32GB | 80GB SSD + 8x 600GB SAS | Worker/Storage | ✅ Installed |