Overview

Storage and networking define what the MOOD MNKY cluster can safely and efficiently run. This page documents:
  • ZFS pools on each node and their roles.
  • The shared NFS export used by LXCs and VMs.
  • Network topology across nodes and to external services like TrueNAS and Cloudflare.
Underlying data comes from:
  • proxmox-terraform/CLUSTER-NODES-HARDWARE.md
  • Proxmox API outputs (pvesh get /nodes/*/storage, /cluster/resources)
  • Hardware snapshots in /root/hardware-snapshots/<node>/<timestamp>/ (currently detailed for CODE-MNKY)
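To pull the same information live from a node shell, a quick sketch (assuming the Proxmox node name matches hostname, and using the snapshot path convention above):

```bash
# Proxmox's view of storage configured on this node
pvesh get /nodes/$(hostname)/storage --output-format json-pretty

# Cluster-wide storage resources (capacity, status, which nodes see them)
pvesh get /cluster/resources --type storage --output-format json-pretty

# Most recent hardware snapshot directory for this node (detailed today for CODE-MNKY)
ls -1t /root/hardware-snapshots/$(hostname)/ | head -n 1
```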

Layer-3 segments

Physical and logical storage (ZFS, NFS) still attaches to hosts that live on specific RFC1918 site VLANs (10.0.0.0/24 through 10.4.0.0/24). For why each /24 exists (DATA core vs MOOD public plane vs CODE automation, typical anchors, and NetBird context), see the canonical VLAN subnets and identity page. This page stays focused on disks and paths.

ZFS pools per node

CODE-MNKY

From zpool list -v and zpool status -v:
  • CODE-MAIN-zfs:
    • ~3.62 TiB total, NVMe-backed.
    • Primary pool for high-performance LXCs and VMs.
  • CODE-BKP-zfs:
    • ~464 GiB total, HDD-backed.
    • Backup/secondary pool for snapshots and archives.
  • rpool:
    • ~472 GiB total, SSD-backed.
    • Contains the Proxmox root filesystem (rpool/ROOT/pve-1).
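These figures can be re-verified on CODE-MNKY with the commands already cited above; the pool names are the ones listed in this section:

```bash
# Capacity and per-vdev layout for all pools on the node
zpool list -v

# Health, scrub status, and any errors for the three pools
zpool status -v CODE-MAIN-zfs CODE-BKP-zfs rpool

# Dataset usage under the Proxmox root filesystem
zfs list -o name,used,avail,mountpoint -r rpool/ROOT
```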

CASA-MNKY

From CLUSTER-NODES-HARDWARE.md and Proxmox storage config:
  • local-zfs and local:
    • ~1.11 TiB root filesystem and ZFS pool.
    • Used for VMs/LXCs and system data.
  • Access to hyper-mnky-shared NFS.

DATA-MNKY

  • Root filesystem ~1.72 TiB.
  • local-zfs and local pools for compute workloads.
  • Access to hyper-mnky-shared NFS.

SAGE-MNKY

  • STUD-zfs:
    • Dedicated ZFS pool specific to SAGE-MNKY.
    • Used for node-local workloads, experiments, or replicas.
  • local-zfs and local.
  • Access to hyper-mnky-shared NFS.

MOOD-MNKY

  • NVMe + SATA SSDs present on-node (see MOOD-MNKY node); exact ZFS pool names and topology should be confirmed with zpool list / Proxmox Datacenter → Storage on MOOD-MNKY.
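A minimal confirmation pass to run on MOOD-MNKY once it is reachable; nothing below assumes specific pool names:

```bash
# Enumerate pools and their device topology on MOOD-MNKY
zpool list -v
zpool status

# Cross-check what Proxmox has configured for this node
pvesh get /nodes/$(hostname)/storage --output-format json-pretty
```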

Edge client: NVIDIA Shield TV (TrueNAS SMB + Termux)

A SHIELD Android TV on the DATA LAN can reach TrueNAS SMB shares for media and Steam library staging. SSH for automation uses Termux on port 8022 (not 22). Wireless ADB is commonly 5555 when enabled in developer options.
  • SMB shares (TrueNAS): Media → dataset .../PRO-MNKY/Media (Jellyfin library tree); Steam-Library → dataset .../PRO-MNKY/Steam-Library.
  • Automation: rclone in Termux with dedicated remotes (e.g. truenas_media, truenas_steam); use rclone copy / sync / lsd. FUSE-based rclone mount is generally not available on stock Termux/Android without a working fusermount stack.
  • Credentials: dedicated TrueNAS SMB user for the Shield—store TRUENAS_SHIELD_SMB_* and SHIELD_* keys only in datacenter.env / Infisical; never in Mintlify.
  • Remote access: With NetBird connected and the 10.0.0.0/24 route active, the same 10.0.0.5 SMB targets work off-LAN; see NetBird.
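A sketch of the Termux automation flow above. The remote name truenas_media and the Media share match the examples given; the Termux user, the exact SMB credential variable names, and the local destination path are assumptions to adapt:

```bash
# Reach the Shield's Termux sshd from another host (port 8022, not 22)
ssh -p 8022 <termux-user>@<shield-ip>

# In Termux: define an SMB remote for the TrueNAS media share
# (credentials come from the dedicated Shield SMB user; variable names are illustrative)
rclone config create truenas_media smb \
    host 10.0.0.5 user "$TRUENAS_SHIELD_SMB_USER" pass "$TRUENAS_SHIELD_SMB_PASS" --obscure

# Confirm connectivity, then copy instead of mounting (no FUSE on stock Termux)
rclone lsd truenas_media:Media
rclone copy truenas_media:Media/Movies /sdcard/Movies --progress
```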

Shared storage: hyper-mnky-shared

All four online nodes mount a shared NFS export:
  • Name: hyper-mnky-shared
  • Role:
    • Centralized storage for shared datasets.
    • Source/destination for backups, media, and cross-node artifacts.
  • Consumers:
    • LXCs and VMs across all nodes.
  • Backing storage:
    • Provided by the TrueNAS integration.
The exact mount details (server address, path, and mount options) are visible in:
  • Proxmox storage configuration (pvesh get /nodes/<node>/storage).
  • df -hT and mount outputs on each node.
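For example, on any node (the storage ID hyper-mnky-shared is the one named above):

```bash
# Cluster-wide definition of the storage: NFS server, export path, content types
pvesh get /storage/hyper-mnky-shared --output-format json-pretty

# Live view on the node: filesystem type, size, and usage
df -hT | grep hyper-mnky-shared

# Exact NFS mount options currently in effect
mount | grep hyper-mnky-shared
```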

ISO library and network boot (iVentoy)

The same hyper-mnky-shared export includes the Proxmox ISO tree (under template/iso on the mount). MNKY-HQ can mount this export and run iVentoy so PXE clients get an install menu sourced directly from those ISOs. ISOs uploaded through the Proxmox GUI to hyper-mnky-shared appear in iVentoy without a second copy step. See iVentoy PXE (network install).
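To confirm the ISO tree is visible where iVentoy expects it (Proxmox mounts NFS storages under /mnt/pve/<storage-id> by default; the MNKY-HQ mount path is illustrative):

```bash
# On a Proxmox node: ISOs uploaded to hyper-mnky-shared live under template/iso
ls /mnt/pve/hyper-mnky-shared/template/iso/

# On MNKY-HQ (illustrative path): the same files should appear after mounting
# the export, with no second copy step for iVentoy to serve them over PXE
ls /mnt/hyper-mnky-shared/template/iso/
```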

Media stack mounts

The MOOD MNKY media stack mounts the shared TrueNAS media dataset at /mnt/media inside the LXC. Expected mapping:
  • TrueNAS export: 10.0.0.5:/mnt/HYPER-MNKY/PRO-MNKY/Media
  • LXC mount point: /mnt/media
  • Jellyfin libraries and *arr roots:
    • /mnt/media/Movies -> Jellyfin /data/movies and Radarr /movies
    • /mnt/media/Shows -> Jellyfin /data/tvshows and Sonarr /tv
    • /mnt/media/Music -> Jellyfin /data/music and Lidarr /music
    • /mnt/media/Downloads -> qBittorrent /downloads and *arr download ingestion
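A minimal sketch of how that mapping can be wired, assuming the Proxmox host mounts the TrueNAS export and bind-mounts it into the media LXC; the container ID (200) and the host-side mount path are placeholders:

```bash
# On the Proxmox host: mount the TrueNAS media export (path from the mapping above)
mkdir -p /mnt/truenas-media
mount -t nfs 10.0.0.5:/mnt/HYPER-MNKY/PRO-MNKY/Media /mnt/truenas-media

# Bind-mount it into the media LXC at /mnt/media (CT 200 is a placeholder ID)
pct set 200 -mp0 /mnt/truenas-media,mp=/mnt/media

# Inside the container, Jellyfin and the *arr apps then see the roots listed above
pct exec 200 -- ls /mnt/media
```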

Storage usage patterns

Recommended guidelines:
  • High-IOPS / latency-sensitive:
    • Use CODE-MAIN-zfs on CODE-MNKY for AI stacks, databases, and time-sensitive automation workloads.
  • Cold data / backups:
    • Use CODE-BKP-zfs on CODE-MNKY or the equivalent on other nodes.
  • Experimentation:
    • Use STUD-zfs for isolated experiments, test stacks, or data that can be safely lost.
  • Shared datasets:
    • Use hyper-mnky-shared for data that must be visible to multiple nodes.
When expanding storage:
  1. Update or add disks on the appropriate node.
  2. Run the hardware snapshot collector.
  3. Adjust ZFS vdevs and datasets as needed (see the sketch after this list).
  4. Update this page and CLUSTER-NODES-HARDWARE.md to reflect new capacity.
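For step 3, a hedged sketch of the ZFS side; the device paths and dataset name are placeholders, not real cluster disks:

```bash
# Add a mirrored vdev to an existing pool (device paths are placeholders)
zpool add CODE-MAIN-zfs mirror /dev/disk/by-id/nvme-NEW-A /dev/disk/by-id/nvme-NEW-B

# Create a dataset for the new workload with a quota
zfs create -o quota=500G CODE-MAIN-zfs/new-workload

# Confirm the new capacity before updating this page and CLUSTER-NODES-HARDWARE.md
zpool list -v CODE-MAIN-zfs
```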

Network topology

Node-level networking

Each node has:
  • One or more physical NICs connected to the cluster LAN.
  • Linux bridges (e.g. vmbr0) that connect LXCs and VMs to the LAN.
  • Routes, which can be inspected with ip route show.
CODE-MNKY snapshot highlights:
  • ip -d link show: bridge and NIC hierarchy with flags and offload settings.
  • ip addr show: IPs attached to physical interfaces and bridges.
  • ip route show: default route and any dedicated routes to storage or management networks.
  • ethtool output: link speeds, duplex modes, and offload features per interface.
Other nodes follow a similar pattern, with differences in IP addressing and VLAN/tagging as configured.
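The same data can be regenerated ad hoc on any node with the underlying commands (the interface name eno1 is an example; substitute the node's actual NIC):

```bash
# Bridge and NIC hierarchy with detailed flags and offload settings
ip -d link show

# Addresses on physical interfaces and bridges (vmbr0 etc.)
ip addr show

# Default route plus any dedicated storage or management routes
ip route show

# Link speed, duplex, and offload features for one NIC
ethtool eno1 && ethtool -k eno1
```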

Edge access (pfSense + NetBird)

Site RFC1918 networks are reachable from the internet through NetBird (overlay 100.64.0.0/10) with pfSense as the authoritative subnet-routing peer. The self-hosted NetBird control plane runs on LXC 102 (10.0.0.20); WAN UDP 3478/51820 are forwarded from pfSense to that host. For architecture, route inventory, and validation steps, see the Edge network overview and related pages under Development & DevOps → Edge network (pfSense + NetBird).

External integrations

The cluster connects to several external services:
  • TrueNAS:
    • Provides NFS exports (including hyper-mnky-shared and per-VM datashare).
    • Accessed via a dedicated storage network or the main LAN, depending on configuration.
  • Cloudflare:
    • Cloudflare tunnels configured via Proxmox Ansible roles.
    • Used for secure external access to services without direct inbound port exposure.
Detailed tunnel and TrueNAS integration docs live in:
  • proxmox-ansible/docs/TRUENAS-INTEGRATION.md
  • proxmox-ansible/docs/CLOUDFLARE-TUNNEL-AND-NOTION.md

Troubleshooting and verification

When diagnosing storage or network issues:
  1. Compare snapshots vs. live state:
    • Re-run collect-node-hardware.sh on the affected node.
    • Diff zpool-*, ip-*, and ethtool-summary against previous snapshots.
  2. Validate Proxmox view:
    • Use pvesh get /nodes/<node>/storage and /cluster/resources to confirm Proxmox’s understanding of storage.
  3. Check shared storage:
    • Validate NFS mounts and permissions for hyper-mnky-shared.
  4. Update this page:
    • Reflect any structural changes in pools, mounts, or bridges so future incidents start from correct assumptions.
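A compact verification pass covering steps 1–3; the snapshot timestamps are placeholders, and the collector script is run from wherever it lives on the node:

```bash
# Step 1: re-collect, then diff against the previous snapshot (timestamps are placeholders)
./collect-node-hardware.sh
diff -ru /root/hardware-snapshots/$(hostname)/<previous>/ \
         /root/hardware-snapshots/$(hostname)/<latest>/

# Step 2: Proxmox's view of storage on this node and across the cluster
pvesh get /nodes/$(hostname)/storage --output-format json-pretty
pvesh get /cluster/resources --type storage --output-format json-pretty

# Step 3: confirm the shared NFS export is mounted and writable
mount | grep hyper-mnky-shared
touch /mnt/pve/hyper-mnky-shared/.rw-test && rm /mnt/pve/hyper-mnky-shared/.rw-test
```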