Overview
Storage and networking define what the MOOD MNKY cluster can safely and efficiently run. This page documents:
- ZFS pools on each node and their roles.
- The shared NFS export used by LXCs and VMs.
- Network topology across nodes and to external services like TrueNAS and Cloudflare.
Source material for this page:
- proxmox-terraform/CLUSTER-NODES-HARDWARE.md
- Proxmox API outputs (pvesh get /nodes/*/storage, /cluster/resources)
- Hardware snapshots in /root/hardware-snapshots/<node>/<timestamp>/ (currently detailed for CODE-MNKY)
Layer-3 segments
Physical and logical storage (ZFS, NFS) still attaches to hosts that live on specific RFC1918 site VLANs (10.0.0.0/24 through 10.4.0.0/24). For why each /24 exists (DATA core vs MOOD public plane vs CODE automation, typical anchors, and NetBird context), see the canonical page VLAN subnets and identity. This page stays focused on disks and paths.
ZFS pools per node
CODE-MNKY
From zpool list -v and zpool status -v:
- CODE-MAIN-zfs:
  - ~3.62 TiB total, NVMe-backed.
  - Primary pool for high-performance LXCs and VMs.
- CODE-BKP-zfs:
  - ~464 GiB total, HDD-backed.
  - Backup/secondary pool for snapshots and archives.
- rpool:
  - ~472 GiB total, SSD-backed.
  - Contains the Proxmox root filesystem (rpool/ROOT/pve-1).
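To re-verify these figures on the node itself, a minimal check (pool names as listed above):

```bash
# On CODE-MNKY: capacity and health of the documented pools
zpool list -v                                   # sizes for CODE-MAIN-zfs, CODE-BKP-zfs, rpool
zpool status -x                                 # "all pools are healthy" unless something is degraded
zfs list -r -o name,used,avail,mountpoint CODE-MAIN-zfs
```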
CASA-MNKY
From CLUSTER-NODES-HARDWARE.md and the Proxmox storage config:
- local-zfs and local:
  - ~1.11 TiB root filesystem and ZFS pool.
  - Used for VMs/LXCs and system data.
- Access to hyper-mnky-shared NFS.
DATA-MNKY
- Root filesystem ~1.72 TiB.
- local-zfs and local pools for compute workloads.
- Access to hyper-mnky-shared NFS.
SAGE-MNKY
- STUD-zfs:
  - Dedicated ZFS pool specific to SAGE-MNKY.
  - Used for node-local workloads, experiments, or replicas.
- local-zfs and local.
- Access to hyper-mnky-shared NFS.
MOOD-MNKY
- NVMe + SATA SSDs present on-node (see MOOD-MNKY node); exact ZFS pool names and topology should be confirmed with zpool list / Proxmox Datacenter → Storage on MOOD-MNKY.
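One way to confirm those details without a console session is the Proxmox API; a sketch, assuming the node name in Proxmox matches MOOD-MNKY (adjust case to the actual node name):

```bash
# Storage as Proxmox sees it on that node
pvesh get /nodes/MOOD-MNKY/storage --output-format json-pretty

# Or cluster-wide, filtered to storage entries
pvesh get /cluster/resources --type storage
```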
Edge client: NVIDIA Shield TV (TrueNAS SMB + Termux)
A SHIELD Android TV on the DATA LAN can reach TrueNAS SMB shares for media and Steam library staging. SSH for automation uses Termux on port 8022 (not 22). Wireless ADB is commonly 5555 when enabled in developer options.
- SMB shares (TrueNAS): Media → dataset .../PRO-MNKY/Media (Jellyfin library tree); Steam-Library → .../PRO-MNKY/Steam-Library.
- Automation: rclone in Termux with dedicated remotes (e.g. truenas_media, truenas_steam); use rclone copy/sync/lsd. FUSE rclone mount is generally not available on stock Termux/Android without a working fusermount stack.
- Credentials: a dedicated TrueNAS SMB user for the Shield. Store TRUENAS_SHIELD_SMB_* and SHIELD_* keys only in datacenter.env / Infisical; never in Mintlify.
With the NetBird 10.0.0.0/24 route active, the same 10.0.0.5 SMB targets work off-LAN; see NetBird.
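A sketch of the rclone-over-SMB flow (the remote name truenas_media follows the convention above; the SMB username, environment variable, and destination path are illustrative, with real credentials kept in datacenter.env / Infisical):

```bash
# In Termux on the SHIELD
pkg install rclone

# Create an SMB remote pointing at TrueNAS; --obscure stores the password obscured in rclone.conf
rclone config create truenas_media smb \
  host 10.0.0.5 user shield-media pass "$TRUENAS_SHIELD_SMB_PASS" --obscure

# Copy instead of mount, since FUSE mounts are unreliable on stock Android
rclone lsd truenas_media:                          # list available shares
rclone copy truenas_media:Media/Music /sdcard/Music --progress
```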
Shared storage: hyper-mnky-shared
All four online nodes mount a shared NFS export:
- Name: hyper-mnky-shared
- Role:
  - Centralized storage for shared datasets.
  - Source/destination for backups, media, and cross-node artifacts.
- Consumers: LXCs and VMs across all nodes.
- Backing: TrueNAS integration providing the backing storage.
- Verified via: the Proxmox storage configuration (pvesh get /nodes/<node>/storage) plus df -hT and mount outputs on each node.
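A quick per-node check that the export is mounted and registered (storage ID hyper-mnky-shared as named above):

```bash
# Is the NFS export actually mounted on this node?
mount | grep hyper-mnky-shared
df -hT | grep hyper-mnky-shared

# Storage definition as Proxmox sees it
pvesh get /storage/hyper-mnky-shared
```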
ISO library and network boot (iVentoy)
The same hyper-mnky-shared export includes the Proxmox ISO tree (under template/iso on the mount). MNKY-HQ can mount this export and run iVentoy so PXE clients get an install menu sourced directly from those ISOs: upload ISOs in the Proxmox GUI to hyper-mnky-shared and they appear in iVentoy without a second copy step. See iVentoy PXE (network install).
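For example, the ISO tree can be listed on any node that mounts the export (the /mnt/pve/hyper-mnky-shared path assumes Proxmox's default mount point for an NFS storage with that ID):

```bash
# ISOs uploaded via the Proxmox GUI land here and are what iVentoy serves
ls -lh /mnt/pve/hyper-mnky-shared/template/iso/

# Same view through the Proxmox storage tooling
pvesm list hyper-mnky-shared --content iso
```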
Media stack mounts
The MOOD MNKY media stack mounts the shared TrueNAS media dataset at /mnt/media inside the LXC.
Expected mapping:
- TrueNAS export: 10.0.0.5:/mnt/HYPER-MNKY/PRO-MNKY/Media
- LXC mount point: /mnt/media
- Jellyfin libraries and *arr roots:
  - /mnt/media/Movies -> Jellyfin /data/movies and Radarr /movies
  - /mnt/media/Shows -> Jellyfin /data/tvshows and Sonarr /tv
  - /mnt/media/Music -> Jellyfin /data/music and Lidarr /music
  - /mnt/media/Downloads -> qBittorrent /downloads and *arr download ingestion
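A minimal sketch of how that mapping is typically wired up (the host-side staging path and the container ID placeholder are illustrative; the export and the /mnt/media target are as documented above):

```bash
# On the Proxmox host: mount the TrueNAS export once
mount -t nfs 10.0.0.5:/mnt/HYPER-MNKY/PRO-MNKY/Media /mnt/pve/truenas-media

# Bind it into the media LXC at /mnt/media (replace <vmid> with the media stack container ID)
pct set <vmid> -mp0 /mnt/pve/truenas-media,mp=/mnt/media
```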
Storage usage patterns
Recommended guidelines:
- High-IOPS / latency-sensitive: use CODE-MAIN-zfs on CODE-MNKY for AI stacks, databases, and time-sensitive automation workloads.
- Cold data / backups: use CODE-BKP-zfs on CODE-MNKY or the equivalent on other nodes.
- Experimentation: use STUD-zfs for isolated experiments, test stacks, or data that can be safely lost.
- Shared datasets: use hyper-mnky-shared for data that must be visible to multiple nodes.
When storage on a node changes:
- Update or add disks on the appropriate node.
- Run the hardware snapshot collector (sketch below).
- Adjust ZFS vdevs and datasets as needed.
- Update this page and CLUSTER-NODES-HARDWARE.md to reflect new capacity.
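The snapshot step can be run directly on the affected node; a sketch, assuming collect-node-hardware.sh (named in the troubleshooting section below) lives in /root:

```bash
# After swapping or adding disks on a node
/root/collect-node-hardware.sh

# Confirm a fresh timestamped snapshot directory was written
ls -1d /root/hardware-snapshots/$(hostname)/*/ | tail -n 1
```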
Network topology
Node-level networking
Each node has:
- One or more physical NICs connected to the cluster LAN.
- Linux bridges (e.g. vmbr0) that connect LXCs and VMs to the LAN.
- Routes (visible via ip route show).
Key command outputs captured in the hardware snapshots:
- ip -d link show: bridge and NIC hierarchy with flags and offload settings.
- ip addr show: IPs attached to physical interfaces and bridges.
- ip route show: default route and any dedicated routes to storage or management networks.
- ethtool output: link speeds, duplex modes, and offload features per interface.
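The same outputs can be spot-checked live; a short example (the NIC name eno1 is illustrative and varies per node):

```bash
# Bridge membership and link details
ip -d link show vmbr0

# Addresses and routes on the cluster LAN
ip addr show vmbr0
ip route show

# Physical link speed/duplex for the uplink NIC
ethtool eno1 | grep -E 'Speed|Duplex'
```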
Edge access (pfSense + NetBird)
Site RFC1918 networks are reachable from the internet through NetBird (overlay 100.64.0.0/10), with pfSense as the authoritative subnet-routing peer. The self-hosted NetBird control plane runs on LXC 102 (10.0.0.20); WAN UDP 3478/51820 are forwarded from pfSense to that host. For architecture, route inventory, and validation steps, see the Edge network overview and related pages under Development & DevOps → Edge network (pfSense + NetBird).
External integrations
The cluster connects to several external services:
- TrueNAS:
  - Provides NFS exports (including hyper-mnky-shared and per-VM datashare).
  - Accessed via a dedicated storage network or the main LAN, depending on configuration.
- Cloudflare:
  - Cloudflare tunnels configured via Proxmox Ansible roles.
  - Used for secure external access to services without direct inbound port exposure.
References:
- proxmox-ansible/docs/TRUENAS-INTEGRATION.md
- proxmox-ansible/docs/CLOUDFLARE-TUNNEL-AND-NOTION.md
Troubleshooting and verification
When diagnosing storage or network issues:
- Compare snapshots vs. live state:
  - Re-run collect-node-hardware.sh on the affected node.
  - Diff zpool-*, ip-*, and ethtool-summary against previous snapshots (see the example below).
- Validate the Proxmox view:
  - Use pvesh get /nodes/<node>/storage and /cluster/resources to confirm Proxmox's understanding of storage.
- Check shared storage:
  - Validate NFS mounts and permissions for hyper-mnky-shared.
- Update this page:
  - Reflect any structural changes in pools, mounts, or bridges so future incidents start from correct assumptions.
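For the snapshot-vs-live comparison, a minimal diff of the two most recent snapshots (paths as documented earlier; assumes at least two timestamped snapshots exist for the node):

```bash
# Compare the two most recent hardware snapshots for this node
cd /root/hardware-snapshots/$(hostname)
prev=$(ls -1d */ | tail -n 2 | head -n 1)
curr=$(ls -1d */ | tail -n 1)
diff -ru "$prev" "$curr"          # watch for changes in zpool-*, ip-*, and ethtool-summary
```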