The Data Center Map is the comprehensive, first-principles reference for the MOOD MNKY Proxmox cluster. It documents every node (hardware, capabilities, storage), every running VM and LXC, and the shared storage and network topology, and includes a topology diagram. Use it as the single source of truth for “what runs where” and for capacity planning. Keep it updated when hardware or workloads change; the Data Center Upgrade Plan is the living roadmap for improvements.
## Cluster summary

| Attribute | Value |
|---|---|
| Name | MOOD MNKY Proxmox cluster |
| Hypervisor | Proxmox VE 8.4.17 |
| Kernel | 6.8.12-19-pve (x86_64) |
| Boot | EFI; Secure Boot off |
| Quorum | 5 nodes (typically all online; verify with `pvecm status`) |
| Node count | 5 (CODE-MNKY, CASA-MNKY, DATA-MNKY, SAGE-MNKY, MOOD-MNKY) |
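Cluster health and quorum can be confirmed from any node with the standard Proxmox VE CLI; a minimal sketch (run as root on a PVE host):

```shell
# Summarize corosync/cluster state; expect "Quorate: Yes" and 5 expected votes
pvecm status

# List the member nodes with their node IDs and membership state
pvecm nodes
```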
Segment reference: see VLAN subnets and identity for the workload and trust semantics of each site /24 (DATA, MOOD, SAGE, CODE, CASA).
High-level roles: CODE-MNKY is the primary GPU and workload host (LXCs + QEMU VMs). CASA-MNKY, DATA-MNKY, and SAGE-MNKY provide additional compute and storage capacity and share NFS. MOOD-MNKY hosts the Intel iGPU media stack (LXC 120). For a live CODE-MNKY LXC breakdown, see CODE-MNKY LXC inventory.
## Per-node detail

### CODE-MNKY

| Attribute | Value |
|---|---|
| Status | Online |
| Site CIDR | 10.3.0.0/24 (CODE VLAN) |
| Proxmox mgmt IP | 10.3.0.10 |
| Role | GPU host; LXCs 300, 301, 3001 + VMs 3055–3056 (live); Terraform target_node may lag |
| CPU | AMD Ryzen 5 4600G with Radeon Graphics |
| Cores / threads | 6 / 12 (1 socket, ~3566 MHz) |
| Memory | 125 GiB (134,322,823,168 bytes) |
| Root storage | ~457 GiB (ZFS/local) |
| PVE | pve-manager/8.4.17 |
Storage (active): CODE-MAIN-zfs (zfspool), CODE-BKP-zfs (zfspool), local (dir), local-zfs (zfspool), hyper-mnky-shared (nfs).
Capabilities: NVIDIA Tesla P40 (Pascal), driver 560.35.03, CUDA 12.6. Primary LXC rootfs on CODE-MAIN-zfs (~3.62 TiB total; ~3.03 TiB free after current allocations).
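The GPU and pool-headroom figures above can be re-checked on the host; a minimal sketch using standard NVIDIA and ZFS tooling:

```shell
# Confirm the Tesla P40 is visible with the documented driver (560.35.03)
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv

# Check size, allocation, and free space on the primary LXC/VM pool
zpool list CODE-MAIN-zfs
```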
### VMs and LXCs (CODE-MNKY) — live inventory (2026-03)

Older documentation used VMIDs 3099–3104; the entries below are what is currently live on CODE-MNKY. Supabase and n8n run as QEMU VMs, not inside LXC 301.

#### QEMU VMs

| VMID | Hostname | Purpose | Cores | RAM | Boot disk | Notes |
|---|---|---|---|---|---|---|
| 3055 | mnky-supabase-prod | Self-hosted Supabase (Docker on VM) | 8 | 24 GiB | 512 GiB | Canonical Supabase for production URLs |
| 3056 | mnky-n8n-prod | n8n workflow automation | 6 | 16 GiB | 64 GiB | Canonical n8n; uses Redis sidecar on VM |

#### LXCs

| VMID | Hostname | Purpose | Cores | RAM | Rootfs | Notes |
|---|---|---|---|---|---|---|
| 300 | mnky-automation-stack | AWX + Semaphore (intended) | 4 | 8 GiB | 128 GiB | No Docker stack running — empty shell until playbooks re-applied |
| 301 | mnky-ai-stack | Ollama, Flowise, MinIO, etc. (intended) | 8 | 32 GiB | 2 TiB | No Docker stack running — services moved to VMs / elsewhere; huge disk mostly unused |
| 3001 | pegaprox | PegaProx (Proxmox cluster management UI) | 4 | 4 GiB | 16 GiB | Python services on ports 5000–5002 |
Total allocated rootfs (LXCs on CODE-MNKY): 128 + 2048 + 16 = 2,192 GiB (counting the 2 TiB rootfs as 2,048 GiB), plus the VM disks above.
See CODE-MNKY LXC inventory for redundancy analysis and redesign options.
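The live inventory can be re-verified at any time on CODE-MNKY with the standard PVE CLI; a minimal sketch:

```shell
# QEMU VMs (expect 3055, 3056) and containers (expect 300, 301, 3001)
qm list
pct list

# Allocation details for a given guest, e.g. LXC 301
pct config 301 | grep -E '^(cores|memory|rootfs):'
```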
### CASA-MNKY

| Attribute | Value |
|---|---|
| Status | Online |
| Site CIDR | 10.4.0.0/24 (CASA VLAN) |
| Proxmox mgmt IP | 10.4.0.10 |
| Role | General-purpose compute; migration target |
| CPU | AMD Ryzen 5 4600G with Radeon Graphics |
| Cores / threads | 6 / 12 (1 socket, ~3667 MHz) |
| Memory | 62 GiB |
| Root storage | ~1.11 TiB |
| PVE | 8.4.17 |
Storage (active): local-zfs (zfspool), local (dir), hyper-mnky-shared (nfs).
VMs and LXCs: None defined in Terraform or current docs. Available for workload migration, PBS, or new services (e.g. GitLab, registry).
### DATA-MNKY

| Attribute | Value |
|---|---|
| Status | Online |
| Site CIDR | 10.0.0.0/24 (DATA LAN) |
| Proxmox mgmt IP | 10.0.0.10 |
| Role | High-core, large disk; TrueNAS/PBS host candidate |
| CPU | AMD Ryzen 7 5700X 8-Core Processor |
| Cores / threads | 8 / 16 (1 socket, ~3713 MHz) |
| Memory | ~62.7 GiB |
| Root storage | ~1.72 TiB |
| PVE | 8.4.17 |
Storage (active): local-zfs (zfspool), local (dir), hyper-mnky-shared (nfs).
VMs and LXCs: None defined in Terraform or current docs. Suited for PBS VM, GitLab, or other data-heavy workloads.
### SAGE-MNKY

| Attribute | Value |
|---|---|
| Status | Online |
| Site CIDR | 10.2.0.0/24 (SAGE VLAN) |
| Proxmox mgmt IP | 10.2.0.10 |
| Role | Additional compute; dedicated STUD-zfs pool |
| CPU | AMD Ryzen 7 5700G with Radeon Graphics |
| Cores / threads | 8 / 16 (1 socket) |
| Memory | 62 GiB |
| Root storage | ~445 GiB |
| PVE | 8.4.17 |
Storage (active): STUD-zfs (zfspool), local-zfs (zfspool), local (dir), hyper-mnky-shared (nfs).
VMs and LXCs: None defined in Terraform or current docs. STUD-zfs (~3.62 TiB) available for experiments or replicas.
### MOOD-MNKY

| Attribute | Value |
|---|---|
| Status | Online (media-stack target) |
| Site CIDR | 10.1.0.0/24 (MOOD VLAN) |
| Proxmox mgmt IP | 10.1.0.10 |
| Role | Additional node; Intel iGPU media workloads for VA-API transcoding |
| CPU | Intel Core i9-13900KS (Intel iGPU / VA-API capable) |
| Memory | ~125 GiB (host) |
| Root storage | rpool (~2.7 TiB) |
VMs and LXCs: LXC 120 (mnky-media-stack) runs the target ARR + Jellyfin + qBittorrent media stack with TrueNAS NFS mounts and NetBird access.
## Storage and network

### ZFS pools per node

- CODE-MNKY: CODE-MAIN-zfs (~3.62 TiB, NVMe) — primary LXC/VM disk; CODE-BKP-zfs (~464 GiB, HDD) — backup/secondary; rpool — root.
- CASA-MNKY: local-zfs, local; rpool — root (~1.11 TiB).
- DATA-MNKY: local-zfs, local; rpool — root (~1.72 TiB).
- SAGE-MNKY: STUD-zfs (~3.62 TiB), local-zfs, local; rpool — root (~445 GiB).
- MOOD-MNKY: rpool — root (~2.7 TiB); remaining pools to be documented.
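Pool names and free space can be confirmed directly on each node; a minimal sketch:

```shell
# ZFS pools with capacity, allocation, and health on this host
zpool list -o name,size,alloc,free,health

# Proxmox's storage view, including dir storages and the shared NFS store
pvesm status
```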
### Shared storage: hyper-mnky-shared

| Attribute | Value |
|---|---|
| Type | NFS |
| Backing | TrueNAS Scale at 10.0.0.5 |
| Path (TrueNAS) | /mnt/HYPER-MNKY/proxmox/shared |
| Content | iso, scripts, snippets, template, templates |
| Mounted on | All online nodes (e.g. /mnt/pve/hyper-mnky-shared) |
| Size (approx) | ~3.9 TiB |
Used for ISOs, LXC templates, snippets, and scripts across the cluster. Network installs: ISOs in this store are also the source for iVentoy PXE on MNKY-HQ (iVentoy PXE).
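On each node, this store corresponds to an `nfs:` entry in /etc/pve/storage.cfg roughly like the following sketch (the content types are an assumption mapped from the table above, where `template` likely corresponds to Proxmox's `vztmpl` type):

```
nfs: hyper-mnky-shared
        export /mnt/HYPER-MNKY/proxmox/shared
        path /mnt/pve/hyper-mnky-shared
        server 10.0.0.5
        content iso,vztmpl,snippets
```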
### Per-LXC datashare (TrueNAS)

Each LXC can have a private NFS path on TrueNAS: lxc-private/<vmid> under the same export, mounted in the container as ~/truenas-data (e.g. /home/moodmnky/truenas-data). Create lxc-private/<vmid> for each active VMID (e.g. 300, 301, MOOD LXC 120). Adding a new LXC requires creating the corresponding path on TrueNAS and running the TrueNAS mounts playbook.
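Wiring such a datashare into a container typically means mounting the NFS path on the host and bind-mounting it into the guest; a hedged sketch for LXC 301 (the host-side path /mnt/lxc-private/301 is illustrative and not taken from the playbook):

```shell
# Mount the per-LXC dataset on the Proxmox host (path under the export assumed)
mkdir -p /mnt/lxc-private/301
mount -t nfs 10.0.0.5:/mnt/HYPER-MNKY/proxmox/shared/lxc-private/301 /mnt/lxc-private/301

# Bind it into the container at the documented location
pct set 301 -mp0 /mnt/lxc-private/301,mp=/home/moodmnky/truenas-data
```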
### Network

- Linux bridges (e.g. vmbr0) connect LXCs and VMs to the cluster LAN.
- TrueNAS is reached at 10.0.0.5; cluster nodes use the same LAN for NFS and management.
- Cloudflare tunnels can be used for external access (see the proxmox-ansible docs).
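A typical vmbr0 definition on a node looks like the following /etc/network/interfaces fragment (the physical NIC name enp1s0 and the gateway are illustrative assumptions; the address matches CODE-MNKY's management IP above):

```
auto vmbr0
iface vmbr0 inet static
    address 10.3.0.10/24
    gateway 10.3.0.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
```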
## Topology diagram

## Data source note
This map is derived from Terraform state and definitions when present in-repo, CLUSTER-NODES-HARDWARE.md, STORAGE-ASSESSMENT.md, VMID-AND-NAMING.md, mnky-docs data-center pages, and live Proxmox checks. VMIDs and placement can drift from Terraform if resources were created manually or imported late — use CODE-MNKY LXC inventory for a recent CODE-MNKY-specific snapshot. Hardware snapshots under /root/hardware-snapshots/<node>/<timestamp>/ on each host provide detailed hardware introspection for CODE-MNKY, CASA-MNKY, DATA-MNKY, and SAGE-MNKY.
For the phased upgrade plan, persona perspectives, and implementation roadmap, see the Data Center Upgrade Plan. For per-node deep dives and runbooks, see Cluster Nodes, Storage and network, and Runbooks.