

The Data Center Map is the comprehensive, first-principles reference for the MOOD MNKY Proxmox cluster. It documents every node (hardware, capabilities, storage), every running VM and LXC, and the shared storage and network topology, and it includes a topology diagram. Use it as the single source of truth for “what runs where” and for capacity planning. Keep it updated when hardware or workloads change; the Data Center Upgrade Plan is the living roadmap for improvements.

Cluster summary

Name: MOOD MNKY Proxmox cluster
Hypervisor: Proxmox VE 8.4.17
Kernel: 6.8.12-19-pve (x86_64)
Boot: EFI; Secure Boot off
Quorum: 5 nodes (typically all online; verify with pvecm status, as sketched below)
Node count: 5 (CODE-MNKY, CASA-MNKY, DATA-MNKY, SAGE-MNKY, MOOD-MNKY)
Segment reference: see VLAN subnets and identity for the workload and trust semantics of each site /24 (DATA, MOOD, SAGE, CODE, CASA).
High-level roles: CODE-MNKY is the primary GPU and workload host (LXCs + QEMU VMs). CASA-MNKY, DATA-MNKY, and SAGE-MNKY provide additional compute and storage capacity and mount the shared NFS store. MOOD-MNKY hosts the Intel iGPU media stack (LXC 120). For a live CODE-MNKY LXC breakdown, see CODE-MNKY LXC inventory.
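
To confirm the summary above against the live cluster, a quick check from any node (a sketch; output formatting varies between PVE versions):

    pvecm status    # quorum state and vote counts
    pvecm nodes     # membership; expect all five nodes listed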

Per-node detail

CODE-MNKY

Status: Online
Site CIDR: 10.3.0.0/24 (CODE VLAN)
Proxmox mgmt IP: 10.3.0.10
Role: GPU host; LXCs 300, 301, 3001 + VMs 3055–3056 (live); Terraform target_node may lag
CPU: AMD Ryzen 5 4600G with Radeon Graphics
Cores / threads: 6 / 12 (1 socket, ~3566 MHz)
Memory: 125 GiB (134,322,823,168 bytes)
Root storage: ~457 GiB (ZFS/local)
PVE: pve-manager/8.4.17
Storage (active): CODE-MAIN-zfs (zfspool), CODE-BKP-zfs (zfspool), local (dir), local-zfs (zfspool), hyper-mnky-shared (nfs)
Capabilities: NVIDIA Tesla P40 (Pascal), driver 560.35.03, CUDA 12.6
Primary LXC rootfs: CODE-MAIN-zfs (~3.62 TiB total; ~3.03 TiB free after current allocations)
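
To spot-check the GPU and primary pool figures above (a sketch; assumes the NVIDIA driver is installed on the CODE-MNKY host):

    nvidia-smi               # should report the Tesla P40, driver 560.35.03, CUDA 12.6
    zfs list CODE-MAIN-zfs   # compare USED/AVAIL against the ~3.62 TiB / ~3.03 TiB figures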

VMs and LXCs (CODE-MNKY) — live inventory (2026-03)

Older documentation used VMIDs 3099–3104; the entries below reflect what is currently live on CODE-MNKY. Note that Supabase and n8n run as QEMU VMs, not inside LXC 301.

QEMU VMs
VMID | Hostname | Purpose | Cores | RAM | Boot disk | Notes
3055 | mnky-supabase-prod | Self-hosted Supabase (Docker on VM) | 8 | 24 GiB | 512 GiB | Canonical Supabase for production URLs
3056 | mnky-n8n-prod | n8n workflow automation | 6 | 16 GiB | 64 GiB | Canonical n8n; uses Redis sidecar on VM
LXCs
VMID | Hostname | Purpose | Cores | RAM | Rootfs | Notes
300 | mnky-automation-stack | AWX + Semaphore (intended) | 4 | 8 GiB | 128 GiB | No Docker stack running — empty shell until playbooks re-applied
301 | mnky-ai-stack | Ollama, Flowise, MinIO, etc. (intended) | 8 | 32 GiB | 2 TiB | No Docker stack running — services moved to VMs / elsewhere; huge disk mostly unused
3001 | pegaprox | PegaProx (Proxmox cluster management UI) | 4 | 4 GiB | 16 GiB | Python services on ports 5000–5002
Total allocated rootfs (LXCs on CODE-MNKY): 128 + 2000 + 16 ≈ 2,144 GiB plus VM disks above. See CODE-MNKY LXC inventory for redundancy analysis and redesign options.
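
To reconcile the tables above with the live node (a sketch; run on CODE-MNKY):

    qm list     # QEMU VMs; expect 3055 and 3056
    pct list    # LXCs; expect 300, 301, and 3001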

CASA-MNKY

Status: Online
Site CIDR: 10.4.0.0/24 (CASA VLAN)
Proxmox mgmt IP: 10.4.0.10
Role: General-purpose compute; migration target
CPU: AMD Ryzen 5 4600G with Radeon Graphics
Cores / threads: 6 / 12 (1 socket, ~3667 MHz)
Memory: 62 GiB
Root storage: ~1.11 TiB
PVE: 8.4.17
Storage (active): local-zfs (zfspool), local (dir), hyper-mnky-shared (nfs)
VMs and LXCs: none defined in Terraform or current docs; available for workload migration, PBS, or new services (e.g. GitLab, registry)
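
Since CASA-MNKY is flagged as a migration target, a minimal migration sketch (assuming the cluster node name is CASA-MNKY; VMs with local disks typically also need --with-local-disks):

    qm migrate 3055 CASA-MNKY --online --with-local-disks    # example only; substitute the real VMID and options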

DATA-MNKY

Status: Online
Site CIDR: 10.0.0.0/24 (DATA LAN)
Proxmox mgmt IP: 10.0.0.10
Role: High-core, large disk; TrueNAS/PBS host candidate
CPU: AMD Ryzen 7 5700X 8-Core Processor
Cores / threads: 8 / 16 (1 socket, ~3713 MHz)
Memory: ~62.7 GiB
Root storage: ~1.72 TiB
PVE: 8.4.17
Storage (active): local-zfs (zfspool), local (dir), hyper-mnky-shared (nfs)
VMs and LXCs: none defined in Terraform or current docs; suited for a PBS VM, GitLab, or other data-heavy workloads

SAGE-MNKY

Status: Online
Site CIDR: 10.2.0.0/24 (SAGE VLAN)
Proxmox mgmt IP: 10.2.0.10
Role: Additional compute; dedicated STUD-zfs pool
CPU: AMD Ryzen 7 5700G with Radeon Graphics
Cores / threads: 8 / 16 (1 socket)
Memory: 62 GiB
Root storage: ~445 GiB
PVE: 8.4.17
Storage (active): STUD-zfs (zfspool), local-zfs (zfspool), local (dir), hyper-mnky-shared (nfs)
VMs and LXCs: none defined in Terraform or current docs; STUD-zfs (~3.62 TiB) available for experiments or replicas
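
One way STUD-zfs could hold replicas is a plain ZFS send/receive from CODE-MNKY (a sketch; the dataset name below is hypothetical, following the default Proxmox subvol naming; substitute the real one from zfs list):

    zfs snapshot CODE-MAIN-zfs/subvol-301-disk-0@repl1
    zfs send CODE-MAIN-zfs/subvol-301-disk-0@repl1 | ssh 10.2.0.10 zfs recv STUD-zfs/subvol-301-disk-0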

MOOD-MNKY

Status: Online (media-stack target)
Site CIDR: 10.1.0.0/24 (MOOD VLAN)
Proxmox mgmt IP: 10.1.0.10
Role: Additional node; Intel iGPU media workloads for VA-API transcoding
CPU: Intel Core i9-13900KS (Intel iGPU / VA-API capable)
Memory: ~125 GiB (host)
Root storage: rpool (~2.7 TiB)
VMs and LXCs: LXC 120 (mnky-media-stack) runs the target ARR + Jellyfin + qBittorrent media stack with TrueNAS NFS mounts and NetBird access.
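
A quick way to confirm the iGPU render node is exposed to LXC 120 (a sketch; assumes device passthrough is configured and vainfo is installed in the container; adjust if the setup differs):

    ls -l /dev/dri                    # render node(s) on the MOOD-MNKY host
    pct exec 120 -- ls -l /dev/dri    # the same node(s) as seen inside the media-stack LXC
    pct exec 120 -- vainfo            # VA-API profiles/entrypoints, if vainfo is installed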

Storage and network

ZFS pools per node

  • CODE-MNKY: CODE-MAIN-zfs (~3.62 TiB, NVMe) — primary LXC/VM disk; CODE-BKP-zfs (~464 GiB, HDD) — backup/secondary; rpool — root.
  • CASA-MNKY: local-zfs, local; rpool — root (~1.11 TiB).
  • DATA-MNKY: local-zfs, local; rpool — root (~1.72 TiB).
  • SAGE-MNKY: STUD-zfs (~3.62 TiB), local-zfs, local; rpool — root (~445 GiB).
  • MOOD-MNKY: rpool (root, ~2.7 TiB); additional pools to be documented.
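
The figures above can be refreshed on each host with the standard ZFS tooling (a sketch):

    zpool list                          # pool sizes and health per node
    zfs list -o name,used,avail -d 0    # usage of the top-level datasets (pools)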

Shared storage: hyper-mnky-shared

Type: NFS
Backing: TrueNAS Scale at 10.0.0.5
Path (TrueNAS): /mnt/HYPER-MNKY/proxmox/shared
Content: iso, scripts, snippets, template, templates
Mounted on: all four online nodes (e.g. /mnt/pve/hyper-mnky-shared)
Size (approx): ~3.9 TiB
Used for ISOs, LXC templates, snippets, and scripts across the cluster. Network installs: ISOs in this store are also the source for iVentoy PXE on MNKY-HQ (iVentoy PXE).
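
To confirm the share is active and browse its contents from a node (a sketch):

    pvesm status --storage hyper-mnky-shared    # the storage should report as active
    pvesm list hyper-mnky-shared                # volumes (ISOs, templates, snippets) on the share
    ls /mnt/pve/hyper-mnky-shared               # the underlying NFS mount point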

Per-LXC datashare (TrueNAS)

Each LXC can have a private NFS path on TrueNAS: lxc-private/<vmid> under the same export, mounted in the container as ~/truenas-data (e.g. /home/moodmnky/truenas-data). Create lxc-private/<vmid> for each active VMID (e.g. 300, 301, MOOD LXC 120, etc.); adding a new LXC requires creating the corresponding path on TrueNAS and running the TrueNAS mounts playbook.
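
To verify a container's private share after the playbook has run (a sketch; the VMID and mount point are examples taken from this page and may differ per container):

    pct exec 301 -- findmnt /home/moodmnky/truenas-data    # SOURCE should point at the container's lxc-private path on TrueNAS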

Network

  • Linux bridges (e.g. vmbr0) connect LXCs and VMs to the cluster LAN.
  • TrueNAS is reached at 10.0.0.5; cluster nodes use the same LAN for NFS and management.
  • Cloudflare tunnels can be used for external access (see proxmox-ansible docs).
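
A quick sanity check of the bridges and the TrueNAS NFS path from any node (a sketch; showmount is provided by nfs-common):

    ip -br link show type bridge    # expect vmbr0 plus any additional bridges
    showmount -e 10.0.0.5           # exported paths on TrueNAS, including the shared store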

Topology diagram


Data source note

This map is derived from Terraform state and definitions when present in-repo, CLUSTER-NODES-HARDWARE.md, STORAGE-ASSESSMENT.md, VMID-AND-NAMING.md, mnky-docs data-center pages, and live Proxmox checks. VMIDs and placement can drift from Terraform if resources were created manually or imported late — use CODE-MNKY LXC inventory for a recent CODE-MNKY-specific snapshot. Hardware snapshots under /root/hardware-snapshots/<node>/<timestamp>/ on each host provide detailed hardware introspection for CODE-MNKY, CASA-MNKY, DATA-MNKY, and SAGE-MNKY.
For the phased upgrade plan, persona perspectives, and implementation roadmap, see the Data Center Upgrade Plan. For per-node deep dives and runbooks, see Cluster Nodes, Storage and network, and Runbooks.