Overview
CODE-MNKY is the primary GPU and stack host in the MOOD MNKY Proxmox cluster. It concentrates the AI, automation, gaming, and media stacks, and is the main target for Terraform and Ansible provisioning.
This page consolidates low-level hardware and Proxmox details for CODE-MNKY, based on the hardware snapshots located at:
System identity
Key system metadata (from `uname`, `pveversion`, and `dmidecode -t system`):
- Manufacturer: ASUS
- Product: System Product Name
- SMBIOS: 3.3.0
- UUID: `bbc84485-2759-3bda-0936-107c611e6e44`
- OS: Proxmox VE 8.4.17
- Kernel: `6.8.12-19-pve` (x86_64)
- Boot: EFI, Secure Boot disabled
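The identity fields above can be re-extracted mechanically from a saved `dmidecode -t system` dump when refreshing this page. A minimal sketch (the sample text is illustrative, not a verbatim capture):

```python
# Parse indented "Key: Value" lines from a saved `dmidecode -t system` snapshot.
SAMPLE = """\
System Information
\tManufacturer: ASUS
\tProduct Name: System Product Name
\tUUID: bbc84485-2759-3bda-0936-107c611e6e44
"""

def parse_dmidecode(text: str) -> dict:
    """Return the tab-indented key/value pairs as a dict."""
    info = {}
    for line in text.splitlines():
        if line.startswith("\t") and ": " in line:
            key, _, value = line.strip().partition(": ")
            info[key] = value
    return info

print(parse_dmidecode(SAMPLE)["UUID"])  # → bbc84485-2759-3bda-0936-107c611e6e44
```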
CPU topology
From the `lscpu` snapshot:
- Single-socket, 6-core / 12-thread CPU with SMT enabled.
- Full support for AVX2, AES-NI, AMD-V, and related instruction sets.
- Single NUMA node simplifies LXC and VM placement decisions.
Flags list confirms extensive mitigation coverage for CPU vulnerabilities, which is relevant when considering kernel and microcode updates.
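One way to keep that flags check repeatable is to diff a fresh `lscpu` snapshot against the instruction sets the stacks depend on. A sketch, with an abbreviated sample `Flags:` line:

```python
# Report any required CPU flags missing from an `lscpu` snapshot.
SAMPLE_LSCPU = "Flags: fpu vme aes avx avx2 svm"  # abbreviated sample

REQUIRED = {"avx2", "aes", "svm"}  # AVX2, AES-NI, AMD-V

def missing_flags(lscpu_text: str, required: set) -> set:
    for line in lscpu_text.splitlines():
        if line.startswith("Flags:"):
            return required - set(line.split()[1:])
    return set(required)  # no Flags line found: treat everything as missing

print(missing_flags(SAMPLE_LSCPU, REQUIRED))  # → set()
```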
Memory configuration
From `dmidecode -t memory`, `/proc/meminfo`, and `free -h`:
- Installed memory: ~125 GiB total.
- Layout: Multiple DIMMs; exact slot count, manufacturer, and speed are encoded in `dmidecode-memory.txt`.
- NUMA: Single node; all memory is attached to NUMA node 0.
- OS view: `MemTotal` ≈ 125 GiB, with swap configured.
- Supports multiple large LXC workloads (AI models, database caches, media indexing) concurrently.
- When planning expansions, check `dmidecode-memory.txt` for which slots are populated and the maximum supported capacity.
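The ~125 GiB figure can be confirmed directly from the `/proc/meminfo` snapshot; a minimal sketch (the sample value is illustrative):

```python
# Convert the MemTotal line of a /proc/meminfo snapshot from kB to GiB.
SAMPLE_MEMINFO = "MemTotal:       131072000 kB\nSwapTotal:       8388604 kB"

def mem_total_gib(meminfo: str) -> float:
    for line in meminfo.splitlines():
        if line.startswith("MemTotal:"):
            kib = int(line.split()[1])  # /proc/meminfo reports KiB
            return kib / (1024 ** 2)
    raise ValueError("MemTotal not found")

print(round(mem_total_gib(SAMPLE_MEMINFO), 1))  # → 125.0
```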
Storage layout
Storage is a combination of three ZFS pools plus the Proxmox root filesystem. From `zpool list -v` and `zpool status -v`:
- `CODE-MAIN-zfs`: ~3.62 TiB total, NVMe-backed. Primary pool for high-performance LXCs and VMs.
- `CODE-BKP-zfs`: ~464 GiB total, HDD-backed. Backup/secondary pool for snapshots and archives.
- `rpool`: ~472 GiB total, SSD-backed. Contains the Proxmox root filesystem (`rpool/ROOT/pve-1`).
From `lsblk` and `df -hT`:
- All three pools map to underlying block devices and expose standard ZFS datasets and zvols.
- ZFS datasets for individual LXCs and VMs are enumerated in `zfs-list.txt` and can be cross-referenced with LXC/VM IDs.
- Use `CODE-MAIN-zfs` for performance-sensitive stacks (AI, databases, media transcodes).
- Use `CODE-BKP-zfs` for backups, snapshots, or less-critical data.
- Keep `rpool` usage conservative to avoid impacting Proxmox itself.
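The placement guidance above can be encoded so provisioning scripts pick a target pool consistently. A sketch; the tier names are assumptions for illustration:

```python
# Map workload tiers to the pools recommended above.
POOL_FOR_TIER = {
    "performance": "CODE-MAIN-zfs",  # AI, databases, media transcodes
    "backup": "CODE-BKP-zfs",        # backups, snapshots, less-critical data
}

def target_pool(tier: str) -> str:
    # rpool is intentionally absent: it stays reserved for the Proxmox root.
    if tier not in POOL_FOR_TIER:
        raise ValueError(f"no guest pool for tier {tier!r}")
    return POOL_FOR_TIER[tier]

print(target_pool("performance"))  # → CODE-MAIN-zfs
```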
GPU profile
From `nvidia-smi -q`:
- GPU: NVIDIA Tesla P40 (Pascal)
- Driver version: 560.35.03
- CUDA version: 12.6
- PCIe:
  - Bus: `00000000:01:00.0`
  - Link width: x16
  - Maximum PCIe generation: 3

Primary GPU consumers:
- Ollama and related model hosts.
- Any container using `nvidia-container-toolkit` integration.
- GPU-accelerated transcoding for media (if configured).
When changing the GPU or driver:
- Snapshot the current GPU state with `nvidia-smi -q` and confirm health.
- Update this page with new device and driver information after the change.
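When diffing this page against a fresh `nvidia-smi -q` dump, the version fields can be pulled out mechanically. A sketch using illustrative sample lines:

```python
# Extract a named field from `nvidia-smi -q` style "Key : Value" output.
SAMPLE_SMI = """\
Driver Version                            : 560.35.03
CUDA Version                              : 12.6
"""

def smi_field(text: str, name: str) -> str:
    for line in text.splitlines():
        key, _, value = line.partition(" : ")
        if key.strip() == name:
            return value.strip()
    raise KeyError(name)

print(smi_field(SAMPLE_SMI, "Driver Version"))  # → 560.35.03
```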
Network configuration
From `ip -d link show`, `ip addr show`, `ip route show`, and `ethtool-summary`:
- Interfaces:
- One or more physical NICs connected to the cluster LAN.
- Linux bridges (e.g. `vmbr0`) that connect LXCs and VMs to the LAN.
- Routing:
- Default route to the cluster gateway.
- Routes to storage or management networks as configured.
- Offload and MTU:
- For each interface, `ethtool` captures offload settings and link parameters used when diagnosing throughput and latency.
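One common diagnostic on this data is checking that the NIC and its bridge agree on MTU. A sketch that parses an `ip link show` snapshot (sample lines are illustrative):

```python
import re

# Extract per-interface MTUs from an `ip link show` snapshot.
SAMPLE_IP_LINK = """\
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
"""

def mtus(ip_link_text: str) -> dict:
    pattern = re.compile(r"^\d+: ([\w.@-]+): <[^>]*> mtu (\d+)", re.M)
    return {name: int(mtu) for name, mtu in pattern.findall(ip_link_text)}

print(mtus(SAMPLE_IP_LINK))  # → {'enp4s0': 1500, 'vmbr0': 1500}
```

A mismatch between a physical NIC and its bridge here is a quick explanation for fragmentation or throughput anomalies.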
PCI and USB topology
From `lspci -nnvv`, `lspci -tv`, `lsusb`, and `lsusb -t`:
- PCI:
- Root ports and chipset devices.
- GPU (`01:00.0`) with vendor and device IDs `10de:1b38` (Tesla P40).
- NVMe and SATA controllers backing the ZFS pools.
- USB:
- Host controllers and any connected peripherals (HIDs, storage, dongles).
These snapshots are useful when:
- Configuring PCI passthrough to VMs.
- Assessing IOMMU groups for security or isolation.
- Verifying that hardware is still enumerated as expected after changes.
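That enumeration check can be scripted by pulling the `[vendor:device]` IDs out of an `lspci -nn` snapshot and comparing them against an expected set. A sketch with an illustrative sample line:

```python
import re

# Map PCI bus addresses to [vendor:device] IDs from an `lspci -nn` snapshot.
SAMPLE_LSPCI = (
    "01:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1b38]"
)

def pci_ids(lspci_text: str) -> dict:
    # The last [xxxx:xxxx] bracket on each line is the vendor:device pair.
    pattern = re.compile(r"^(\S+) .*\[([0-9a-f]{4}:[0-9a-f]{4})\]", re.M)
    return dict(pattern.findall(lspci_text))

print(pci_ids(SAMPLE_LSPCI))  # → {'01:00.0': '10de:1b38'}
```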
Proxmox and cluster view
From `pvesh get /nodes/CODE-MNKY/status` and `pvesh get /cluster/resources`:
- Node-level:
- CPU utilization, memory usage, uptime, and load for CODE-MNKY.
- Active storage backends (`CODE-MAIN-zfs`, `CODE-BKP-zfs`, `local`, `local-zfs`, `hyper-mnky-shared`).
- Cluster-level:
- All five nodes (CODE-MNKY, CASA-MNKY, DATA-MNKY, STUD-MNKY, PRO-MNKY) and their online/offline state.
- Resource distribution across nodes.
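The online/offline state of the five nodes can be summarized from the cluster resources snapshot, assuming it was captured as JSON (e.g. with `--output-format json`). A sketch with an abbreviated, illustrative sample document:

```python
import json

# Summarize node state from a `pvesh get /cluster/resources` JSON snapshot.
SAMPLE_RESOURCES = json.loads("""
[
  {"type": "node", "node": "CODE-MNKY", "status": "online"},
  {"type": "node", "node": "CASA-MNKY", "status": "online"},
  {"type": "storage", "storage": "CODE-MAIN-zfs", "status": "available"}
]
""")

def online_nodes(resources: list) -> list:
    return sorted(r["node"] for r in resources
                  if r["type"] == "node" and r["status"] == "online")

print(online_nodes(SAMPLE_RESOURCES))  # → ['CASA-MNKY', 'CODE-MNKY']
```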
Refresh procedure for this page
Whenever CODE-MNKY’s hardware or Proxmox configuration changes:
- SSH into CODE-MNKY (or open a root console).
- Run:
- Note the new timestamped directory under:
- Update each section in this page based on the new snapshot:
  - CPU and memory (from `lscpu` and `dmidecode-*`).
  - Storage (from `zpool-*`, `zfs-list`, `lsblk`, and `df -hT`).
  - GPU (from `nvidia-smi-q`).
  - Network (from `ip-*` and `ethtool-summary`).
  - Proxmox/cluster status (from `pvesh-*`).
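For locating the newest snapshot, a timestamped directory name is easy to derive programmatically. A sketch; the base path and the `YYYY-MM-DD_HHMMSS` name format are assumptions for illustration, not the actual layout used here:

```python
from datetime import datetime

# Build a timestamped snapshot directory path (hypothetical naming scheme).
def snapshot_dir(base: str, when: datetime) -> str:
    return f"{base}/{when:%Y-%m-%d_%H%M%S}"

print(snapshot_dir("/root/hw-snapshots", datetime(2025, 1, 15, 9, 30, 0)))
# → /root/hw-snapshots/2025-01-15_093000
```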