Overview
Storage and networking define what the MOOD MNKY cluster can safely and efficiently run. This page documents:
- ZFS pools on each node and their roles.
- The shared NFS export used by LXCs and VMs.
- Network topology across nodes and to external services like TrueNAS and Cloudflare.

Sources:
- `proxmox-terraform/CLUSTER-NODES-HARDWARE.md`
- Proxmox API outputs (`pvesh get /nodes/*/storage`, `/cluster/resources`)
- Hardware snapshots in `/root/hardware-snapshots/<node>/<timestamp>/` (currently detailed for CODE-MNKY)
ZFS pools per node
CODE-MNKY
From `zpool list -v` and `zpool status -v`:
- `CODE-MAIN-zfs`:
  - ~3.62 TiB total, NVMe-backed.
  - Primary pool for high-performance LXCs and VMs.
- `CODE-BKP-zfs`:
  - ~464 GiB total, HDD-backed.
  - Backup/secondary pool for snapshots and archives.
- `rpool`:
  - ~472 GiB total, SSD-backed.
  - Contains the Proxmox root filesystem (`rpool/ROOT/pve-1`).
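As a quick health check against these figures, capacity can be summarized from `zpool list` output. The sketch below runs on illustrative sample lines (the percentages are made up, not measured from CODE-MNKY); on the node itself, pipe real `zpool list -H -o name,size,cap` output through the same awk filter:

```shell
#!/bin/sh
# Summarize pool capacity and flag anything at or above 80% full.
# Sample lines are illustrative; on a node, replace them with:
#   zpool list -H -o name,size,cap
report=$(printf '%s\n' \
  'CODE-MAIN-zfs 3.62T 41%' \
  'CODE-BKP-zfs 464G 87%' \
  'rpool 472G 23%' |
  awk '{ cap = $3; sub(/%/, "", cap)
         status = (cap + 0 >= 80) ? "WARN" : "ok"
         printf "%-14s %-6s %3s%%  %s\n", $1, $2, cap, status }')
echo "$report"
```

The same filter works unchanged on any node, since `zpool list -H` emits one tab-separated line per pool.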
CASA-MNKY
From `CLUSTER-NODES-HARDWARE.md` and the Proxmox storage config:
- `local-zfs` and `local`:
  - ~1.11 TiB root filesystem and ZFS pool.
  - Used for VMs/LXCs and system data.
- Access to the `hyper-mnky-shared` NFS export.
DATA-MNKY
- Root filesystem ~1.72 TiB.
- `local-zfs` and `local` pools for compute workloads.
- Access to the `hyper-mnky-shared` NFS export.
STUD-MNKY
- `STUD-zfs`:
  - Dedicated ZFS pool specific to STUD-MNKY.
  - Used for node-local workloads, experiments, or replicas.
- `local-zfs` and `local`.
- Access to the `hyper-mnky-shared` NFS export.
PRO-MNKY
- ZFS pool configuration to be documented once the node is online and a hardware snapshot has been collected.
Shared storage: hyper-mnky-shared
All four online nodes mount a shared NFS export:
- Name: `hyper-mnky-shared`
- Role:
  - Centralized storage for shared datasets.
  - Source/destination for backups, media, and cross-node artifacts.
- Consumers:
  - LXCs and VMs across all nodes.
- Backing storage: provided by the TrueNAS integration.

Verified via:
- Proxmox storage configuration (`pvesh get /nodes/<node>/storage`).
- `df -hT` and `mount` outputs on each node.
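In Proxmox, an NFS storage of this kind is defined in `/etc/pve/storage.cfg`. The entry below is a sketch of what the `hyper-mnky-shared` definition plausibly looks like; the TrueNAS server address, export path, and content types are assumptions, not values read from the cluster:

```
nfs: hyper-mnky-shared
        export /mnt/tank/hyper-mnky-shared
        path /mnt/pve/hyper-mnky-shared
        server 192.0.2.10
        content images,rootdir,backup
```

The actual values can be read back with `pvesh get /storage/hyper-mnky-shared`.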
Storage usage patterns
Recommended guidelines:
- High-IOPS / latency-sensitive:
  - Use `CODE-MAIN-zfs` on CODE-MNKY for AI stacks, databases, and time-sensitive automation workloads.
- Cold data / backups:
  - Use `CODE-BKP-zfs` on CODE-MNKY or the equivalent on other nodes.
- Experimentation:
  - Use `STUD-zfs` for isolated experiments, test stacks, or data that can be safely lost.
- Shared datasets:
  - Use `hyper-mnky-shared` for data that must be visible to multiple nodes.
When changing capacity:
- Update or add disks on the appropriate node.
- Run the hardware snapshot collector.
- Adjust ZFS vdevs and datasets as needed.
- Update this page and `CLUSTER-NODES-HARDWARE.md` to reflect new capacity.
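The disk and vdev steps can be sketched as a dry run. The pool name is from this page, but the disk path and the `archives` dataset name are placeholders; commands are echoed rather than executed so the plan is safe to review first:

```shell
#!/bin/sh
# Dry-run sketch of adding capacity to a pool and carving out a dataset.
# DISK and the `archives` dataset name are placeholders, not real cluster values.
POOL=CODE-BKP-zfs
DISK=/dev/disk/by-id/ata-EXAMPLE-SERIAL
run() { echo "would run: $*"; }   # drop the echo once the plan is reviewed
run zpool add "$POOL" "$DISK"
run zfs create -o compression=lz4 "$POOL/archives"
run zpool status -v "$POOL"
```

Removing the `echo` from `run` turns the dry run into the real change; keep `zpool status -v` at the end to confirm the new vdev resilvers cleanly.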
Network topology
Node-level networking
Each node has:
- One or more physical NICs connected to the cluster LAN.
- Linux bridges (e.g. `vmbr0`) that connect LXCs and VMs to the LAN.
- Routes inspectable via `ip route show`.

Snapshot data includes:
- `ip -d link show`: bridge and NIC hierarchy with flags and offload settings.
- `ip addr show`: IPs attached to physical interfaces and bridges.
- `ip route show`: default route and any dedicated routes to storage or management networks.
- `ethtool` output: link speeds, duplex modes, and offload features per interface.
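For example, the default route recorded in a snapshot can be parsed out mechanically. The route lines below are illustrative (RFC 5737 example addresses), not the cluster's real topology:

```shell
#!/bin/sh
# Pull the default gateway and egress bridge from `ip route show` output.
# Sample routes are illustrative; on a node, use: routes=$(ip route show)
routes='default via 192.0.2.1 dev vmbr0 proto static
192.0.2.0/24 dev vmbr0 proto kernel scope link src 192.0.2.21'
gw=$(printf '%s\n' "$routes" | awk '/^default/ { print $3 }')
dev=$(printf '%s\n' "$routes" | awk '/^default/ { print $5 }')
echo "default gateway: $gw via $dev"
```

The same pattern applies to the saved `ip-route` snapshot files, which makes it easy to diff routing changes between snapshots.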
External integrations
The cluster connects to several external services:
- TrueNAS:
  - Provides NFS exports (including `hyper-mnky-shared` and per-VM data shares).
  - Accessed via a dedicated storage network or the main LAN, depending on configuration.
- Cloudflare:
  - Cloudflare tunnels configured via Proxmox Ansible roles.
  - Used for secure external access to services without direct inbound port exposure.
See also:
- `proxmox-ansible/docs/TRUENAS-INTEGRATION.md`
- `proxmox-ansible/docs/CLOUDFLARE-TUNNEL-AND-NOTION.md`
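As an illustration of the tunnel side, a minimal `cloudflared` configuration has the following shape. Everything here is a placeholder sketch (tunnel ID, hostname, and port are assumptions); the actual tunnels are managed by the Ansible roles referenced in this section:

```yaml
# Placeholder sketch: tunnel ID, hostname, and service port are assumptions.
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: service.example.com      # placeholder public hostname
    service: http://localhost:8080     # placeholder internal service
  - service: http_status:404           # catch-all for unmatched hostnames
```

The final catch-all rule is required by `cloudflared`; without it the ingress list is rejected at startup.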
Troubleshooting and verification
When diagnosing storage or network issues:
- Compare snapshots vs. live state:
  - Re-run `collect-node-hardware.sh` on the affected node.
  - Diff `zpool-*`, `ip-*`, and `ethtool-summary` against previous snapshots.
- Validate the Proxmox view:
  - Use `pvesh get /nodes/<node>/storage` and `/cluster/resources` to confirm Proxmox's understanding of storage.
- Check shared storage:
  - Validate NFS mounts and permissions for `hyper-mnky-shared`.
- Update this page:
  - Reflect any structural changes in pools, mounts, or bridges so future incidents start from correct assumptions.
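The snapshot-diffing step can be sketched as below. It demonstrates the mechanics on throwaway sample data in a temp directory; in practice, point `base` at `/root/hardware-snapshots/<node>` after re-running the collector:

```shell
#!/bin/sh
# Diff the two most recent hardware snapshots for a node.
# Sample data stands in for real snapshots; in practice set
# base=/root/hardware-snapshots/<node> after running the collector.
base=$(mktemp -d)
mkdir -p "$base/2025-01-01T00-00" "$base/2025-01-02T00-00"
echo 'state: ONLINE'   > "$base/2025-01-01T00-00/zpool-status"
echo 'state: DEGRADED' > "$base/2025-01-02T00-00/zpool-status"
prev=$(ls "$base" | sort | tail -n 2 | head -n 1)
latest=$(ls "$base" | sort | tail -n 1)
changes=$(diff -r "$base/$prev" "$base/$latest" || true)
echo "$changes"
```

Sorting directory names works here because the snapshot timestamps sort lexicographically; a pool that went from ONLINE to DEGRADED shows up immediately in the diff.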