Documentation Index

Fetch the complete documentation index at: https://docs.moodmnky.com/llms.txt

Use this file to discover all available pages before exploring further.

Overview

This page documents the target media-stack architecture for the MOOD MNKY Proxmox data center: a Docker-based Jellyfin + qBittorrent + Prowlarr + Sonarr/Radarr/Lidarr + Jellyseerr stack running inside an LXC on MOOD-MNKY. The stack is wired to:
  • TrueNAS NFS for libraries and downloads (the shared media dataset).
  • NetBird so “Datacenter” peers can reach the workload over the overlay network without depending on WAN exposure.

Current vs planned deployment

Per the cluster documentation, CODE-MNKY currently hosts the legacy media stack (VMID 3103 per data-center-map.mdx). The planned new home is MOOD-MNKY, using LXC 120 (mnky-media-stack) with an Intel iGPU device pass-through for Jellyfin hardware acceleration. Cutover is your operational decision; this document describes how MOOD-MNKY should be provisioned and validated when it becomes the primary media host.

Target deployment details

  • Proxmox node: MOOD-MNKY
  • LXC: mnky-media-stack (VMID 120)
  • Host IP (LAN): 10.1.0.x (DHCP on VLAN 10.1.0.0/24)
  • iGPU device for Jellyfin: /dev/dri/renderD128 (passed through to the LXC)
  • Docker stack: qBittorrent (via Gluetun), Jellyfin, Jackett, Jellyseerr, Prowlarr, Sonarr, Radarr, Lidarr

Storage layout (TrueNAS NFS)

The LXC mounts the TrueNAS media dataset at:
  • Mount point in LXC: /mnt/media
  • TrueNAS export: 10.0.0.5:/mnt/HYPER-MNKY/PRO-MNKY/Media
The docker-compose binds those subpaths into each service. The intended logical mapping is:
  • /mnt/media/Movies -> Jellyfin /data/movies and Radarr /movies
  • /mnt/media/Shows -> Jellyfin /data/tvshows and Sonarr /tv
  • /mnt/media/Music -> Jellyfin /data/music and Lidarr /music
  • /mnt/media/Downloads -> qBittorrent /downloads and *arr download ingestion
See also: Storage and Network Topology for the shared NFS model.
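The mapping above could be expressed in docker-compose roughly as follows. This is a sketch of the volume bindings implied by the table, not the repo's actual file (which lives at /opt/mnky-media-stack/docker-compose.yml); service names and container paths follow the mapping above.

```yaml
# Sketch of the bind mounts implied by the mapping above (not the repo file).
services:
  jellyfin:
    volumes:
      - /mnt/media/Movies:/data/movies
      - /mnt/media/Shows:/data/tvshows
      - /mnt/media/Music:/data/music
  radarr:
    volumes:
      - /mnt/media/Movies:/movies
      - /mnt/media/Downloads:/downloads
  sonarr:
    volumes:
      - /mnt/media/Shows:/tv
      - /mnt/media/Downloads:/downloads
  lidarr:
    volumes:
      - /mnt/media/Music:/music
      - /mnt/media/Downloads:/downloads
  qbittorrent:
    volumes:
      - /mnt/media/Downloads:/downloads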

Transcoding (Intel iGPU via VA-API/QSV)

Jellyfin hardware acceleration is validated by confirming that FFmpeg can access VA-API and that an encode can be performed via h264_vaapi.

Validation done (evidence)

  • Jellyfin container sees /dev/dri/renderD128.
  • A short VAAPI encode test using the Jellyfin ffmpeg binary at /usr/lib/jellyfin-ffmpeg/ffmpeg completed successfully and produced an H.264 MP4 output using h264_vaapi.
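A minimal version of that encode test might look like the following. This is a sketch: the ffmpeg path and render node come from this page, the flags follow standard jellyfin-ffmpeg/VA-API usage, and the script skips gracefully when the device is absent.

```shell
#!/bin/sh
# VA-API smoke-test sketch for Jellyfin's bundled ffmpeg.
FFMPEG=/usr/lib/jellyfin-ffmpeg/ffmpeg   # path from this page
RENDER=/dev/dri/renderD128               # iGPU render node passed into the LXC
OUT=/tmp/vaapi-test.mp4

if [ -e "$RENDER" ] && [ -x "$FFMPEG" ]; then
  # Generate a short synthetic clip and encode it with h264_vaapi.
  "$FFMPEG" -y -init_hw_device vaapi=va:"$RENDER" -filter_hw_device va \
    -f lavfi -i testsrc=duration=5:size=1280x720:rate=30 \
    -vf 'format=nv12,hwupload' -c:v h264_vaapi "$OUT" \
    && echo "vaapi encode OK: $OUT"
  RESULT=ran
else
  echo "render node or jellyfin-ffmpeg missing; skipping"
  RESULT=skipped
fi
```

If the encode succeeds, the output MP4 confirms the full VA-API path (device init, hwupload, h264_vaapi) works from inside the container.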

What to configure in Jellyfin

In the Jellyfin UI, enable playback/transcoding acceleration using either:
  • VA-API, or
  • Intel QSV
Then select the corresponding render node (commonly exposed as renderD128) and verify:
  • direct play works as expected
  • transcodes move to the hardware encoder path (check Jellyfin logs during playback/transcode)

qBittorrent VPN (Proton + Gluetun)

The stack routes qBittorrent through Gluetun (ProtonVPN WireGuard) with VPN port forwarding enabled.
  • Web UI: published on the LXC on port 8081 (mapped via Gluetun).
  • Listen port: driven by Proton’s forwarded port, synced into qBittorrent via Gluetun’s VPN_PORT_FORWARDING_UP_COMMAND.
  • Required: in qBittorrent’s Web UI, enable “Bypass authentication for clients on localhost” so Gluetun can call the local API to set listen_port.
  • Firewall: set GLUETUN_FIREWALL_OUTBOUND_SUBNETS to specific LAN/NFS CIDRs only. Broad ranges that overlap Proton’s WireGuard tunnel (often 10.2.0.0/…) can break NAT-PMP port forwarding.
  • *arr hostname: the gluetun service advertises the hostname qbittorrent on the Docker network, so download clients can keep using qbittorrent:8080.
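The Gluetun wiring described above could look roughly like this in compose. This is an illustrative sketch, not the repo file; the environment variable names are Gluetun's upstream names (the doc's GLUETUN_FIREWALL_OUTBOUND_SUBNETS is assumed to map onto Gluetun's FIREWALL_OUTBOUND_SUBNETS), and the CIDRs shown are examples.

```yaml
# Illustrative Gluetun + qBittorrent wiring (not the repo file).
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    environment:
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_TYPE: wireguard
      VPN_PORT_FORWARDING: "on"
      # Sync Proton's forwarded port into qBittorrent's listen_port.
      # Requires "Bypass authentication for clients on localhost" in qBittorrent.
      VPN_PORT_FORWARDING_UP_COMMAND: >-
        /bin/sh -c 'wget -qO- --post-data
        "json={\"listen_port\":{{PORTS}}}"
        http://127.0.0.1:8080/api/v2/app/setPreferences'
      # Keep this narrow (example CIDRs): LAN/NFS only, so it cannot
      # overlap Proton's WireGuard tunnel and break NAT-PMP.
      FIREWALL_OUTBOUND_SUBNETS: 10.1.0.0/24,10.0.0.0/24
    ports:
      - "8081:8080"   # qBittorrent Web UI published on the LXC
  qbittorrent:
    network_mode: "service:gluetun"   # all torrent traffic exits via the VPN
```

Running qBittorrent with network_mode: "service:gluetun" is what makes the Gluetun container the only network path for torrent traffic, and why Gluetun can advertise the qbittorrent hostname on the Docker network.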

NetBird access (Datacenter routing)

NetBird is installed inside the LXC so the media workload is a first-class peer.

Join (what to use, not the secrets)

  • NETBIRD_MANAGEMENT_URL (from your private env / secrets store)
  • a NetBird setup token created for the peer (stored in your secrets store; do not commit it)

Routing behavior

Assign this peer to the Datacenter group, so the classic RFC1918 routes (including 10.1.0.0/24) are installed for peers in that group.

Install / join (official flow)

Aligned with NetBird’s Linux install docs: install the agent (curl -fsSL https://pkgs.netbird.io/install.sh | sh), then register with netbird up --setup-key <SETUP_KEY>. For self-hosted management, pass --management-url <your URL> (your deployment uses NETBIRD_MANAGEMENT_URL / the dashboard URL from secrets). Verify with netbird status and ip addr show wt0. See NetBird — Install on Linux and Install (setup key + management URL).
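The install/join flow above, as a single script. This is a sketch: the setup key and management URL come from your secrets store (the doc's NETBIRD_SETUP_TOKEN / NETBIRD_MANAGEMENT_URL env names are used as placeholders), and the script skips gracefully when netbird is not installed or the key is unset.

```shell
#!/bin/sh
# NetBird join sketch; values come from your private env / secrets store.
NB_MGMT_URL="${NETBIRD_MANAGEMENT_URL:-}"
NB_SETUP_KEY="${NETBIRD_SETUP_TOKEN:-}"

if ! command -v netbird >/dev/null 2>&1; then
  # Install per NetBird's Linux docs first:
  #   curl -fsSL https://pkgs.netbird.io/install.sh | sh
  echo "netbird not installed; skipping"
  STATUS=missing
elif [ -z "$NB_SETUP_KEY" ]; then
  echo "NETBIRD_SETUP_TOKEN not set; skipping join"
  STATUS=no-key
else
  if [ -n "$NB_MGMT_URL" ]; then
    netbird up --setup-key "$NB_SETUP_KEY" --management-url "$NB_MGMT_URL"
  else
    netbird up --setup-key "$NB_SETUP_KEY"
  fi
  netbird status        # confirm the peer registered
  ip addr show wt0      # confirm the overlay interface is up
  STATUS=joined
fi
```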

Access hostnames (next setup steps)

Use these from any NetBird peer in a group that receives routes (e.g. Datacenter) once Magic DNS / your resolver can reach the peer’s FQDN. You can always use the NetBird IP or MOOD LAN IP as a fallback.
  • NetBird FQDN (from netbird status): Jellyfin http://mnky-media-stack-185-25.netbird.moodmnky.com:8096, qBittorrent Web UI http://mnky-media-stack-185-25.netbird.moodmnky.com:8081
  • NetBird IP (wt0): Jellyfin http://100.117.185.25:8096, qBittorrent Web UI http://100.117.185.25:8081
  • MOOD LAN (VLAN 10.1.0.0/24): Jellyfin http://10.1.0.120:8096, qBittorrent Web UI http://10.1.0.120:8081
Re-check the FQDN and NetBird IP with netbird status on the LXC if the peer is reinstalled — they can change.

Public HTTPS (Traefik: media.moodmnky.com + media-request.moodmnky.com)

  • https://media.moodmnky.com → Traefik (10.0.0.25) → Jellyfin on the media LXC (10.1.0.120:8096 HTTP).
  • https://media-request.moodmnky.com → Traefik → Jellyseerr (10.1.0.120:5055 HTTP).
  • Repo template: infra/mnky-media-stack/traefik-dynamic/moodmnky-media.example.yml.
  • A path like /request on the media hostname is not supported on the stock Jellyseerr image (Next.js serves /_next assets at the root); use media-request.moodmnky.com (or another subdomain) unless you maintain a custom build with basePath.
  • pfSense: add Unbound host overrides mapping media.moodmnky.com and media-request.moodmnky.com → 10.0.0.25 (split DNS; LAN clients avoid hairpinning to WAN). After editing config.xml, run services_unbound_configure (or Services → DNS Resolver → Save/Apply) and restart Unbound so /var/unbound/host_entries.conf picks up the local-data lines.
  • Public DNS (Cloudflare): A records media / media-request on moodmnky.com should point at the site WAN IPv4 (same as other Traefik frontends). If they already exist and match the WAN IP, no change is required.
  • LXC: run configure-jellyfin-reverse-proxy.py --restart; set JELLYFIN_URL and JELLYSEERR_PUBLIC_URL in .env.secrets (the bootstrap default is https://media-request.moodmnky.com); then run bootstrap-media-integrations.py.
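The Traefik side of this could look like the following dynamic-config sketch. The repo template at infra/mnky-media-stack/traefik-dynamic/moodmnky-media.example.yml is authoritative; the router/service names and the letsencrypt certificate resolver name here are illustrative assumptions.

```yaml
# Illustrative Traefik dynamic config for the two hostnames above
# (see the repo template for the real file; names here are assumptions).
http:
  routers:
    jellyfin:
      rule: "Host(`media.moodmnky.com`)"
      entryPoints: [websecure]
      tls: { certResolver: letsencrypt }   # resolver name is an assumption
      service: jellyfin
    jellyseerr:
      rule: "Host(`media-request.moodmnky.com`)"
      entryPoints: [websecure]
      tls: { certResolver: letsencrypt }
      service: jellyseerr
  services:
    jellyfin:
      loadBalancer:
        servers:
          - url: "http://10.1.0.120:8096"   # Jellyfin on the media LXC (HTTP)
    jellyseerr:
      loadBalancer:
        servers:
          - url: "http://10.1.0.120:5055"   # Jellyseerr on the media LXC (HTTP)
```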

pfSense cleanup (legacy torrent WAN rule)

When TrueNAS qBittorrent is turned off, delete or disable the old WAN → TrueNAS BitTorrent port forward. It does not help the Gluetun + Proton path (peers use the VPN exit IP and Proton’s forwarded port). See infra/mnky-media-stack/README.md (UPnP / WAN).

Production readiness (where you stand)

In good shape for “friends & family” production: stack on a dedicated LXC, NFS libraries, Gluetun + Proton port forwarding, Prowlarr ↔ *arr, qBittorrent download clients, Jellyseerr wired to Jellyfin, NetBird for remote admin, smoke tests via verify-media-connectivity.py, and a standardized Jellyseerr login. Typical next steps before calling it “hardened production”:
  • scheduled config + DB backups (Jellyfin config, each *arr’s config directory, the Jellyseerr db and settings.json)
  • a DHCP reservation for the media LXC (Traefik and firewall rules point at a stable IP)
  • monitoring/alerting (container health, NFS mount, Gluetun port forward)
  • retiring the legacy TrueNAS qBittorrent and its pfSense NAT rule
  • an optional access policy in front of public Jellyfin (Authelia, VPN-only, or GeoIP/rate limits on Traefik)
  • transcode limits and a library CDN/off-peak policy if CPU is tight
  • an image update cadence (docker compose pull)

Reachability validation (evidence)

From the MOOD-MNKY host, Jellyfin’s web root was reachable over NetBird to the workload’s overlay IP/hostname (returns a redirect to the Jellyfin web UI).

Services and ports (LXC host ports)

These ports are published on the LXC network interface:
  • Jellyfin: 8096
  • qBittorrent WebUI: 8081
  • Prowlarr: 9696
  • Sonarr: 8989
  • Radarr: 7878
  • Lidarr: 8686
  • Jellyseerr: 5055
Note: service UIs may return 401 until you complete each application’s initial setup workflow.
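A quick reachability sweep over those ports could look like the following sketch, in the spirit of verify-media-connectivity.py (which remains the real smoke test). The host defaults to the LAN IP from this page; an HTTP code of 000 means curl could not connect, and 401 is expected before an app's initial setup is done.

```shell
#!/bin/sh
# Port-sweep sketch; MEDIA_HOST defaults to the LAN IP from this page.
HOST="${MEDIA_HOST:-10.1.0.120}"
SERVICES="jellyfin:8096 qbittorrent:8081 prowlarr:9696 sonarr:8989 radarr:7878 lidarr:8686 jellyseerr:5055"
HAVE_CURL=$(command -v curl || echo "")

for svc in $SERVICES; do
  name="${svc%%:*}"
  port="${svc##*:}"
  if [ -n "$HAVE_CURL" ]; then
    # curl prints 000 for the http_code when the connection fails.
    code=$(curl -s -o /dev/null --max-time 2 -w '%{http_code}' "http://$HOST:$port/")
  else
    code=000
  fi
  echo "$name ($port): HTTP ${code:-000}"
done
```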

Operational notes

Docker compose location

On the LXC, configs live under the local disk path:
  • /opt/mnky-media-stack/config
The compose file is:
  • /opt/mnky-media-stack/docker-compose.yml

Automation scripts (on the LXC)

From /opt/mnky-media-stack (see repo infra/mnky-media-stack/README.md):
  • python3 scripts/verify-media-connectivity.py — smoke-test Jellyfin, *arr, Prowlarr, Jellyseerr, qBittorrent, and qBittorrent download clients in Sonarr/Radarr/Lidarr.
  • python3 scripts/reset-jellyseerr-admin-password.py — reset the local Jellyseerr admin password from .env.secrets when login drifts from Jellyfin.
  • ./scripts/ensure-netbird-peer.sh — idempotent netbird up using NETBIRD_SETUP_TOKEN / NETBIRD_MANAGEMENT_URL from env files.
  • python3 scripts/configure-jellyfin-reverse-proxy.py — Jellyfin network.xml Known proxies + published URI for Traefik.

Start/stop and updates

When updating images:
  1. docker compose pull
  2. docker compose up -d
For troubleshooting, use:
  1. docker compose logs -f <service>
  2. inspect the corresponding /opt/mnky-media-stack/config/<service> directory
If you are upgrading the whole stack, keep stateful services aligned by updating them in this order:
  • Prowlarr (indexer manager)
  • Sonarr/Radarr/Lidarr (app download management)
  • qBittorrent (download client)
  • Jellyfin (media playback + transcoding)
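As a single guarded script, the update flow above might look like this (a sketch; it no-ops when docker or the compose file is absent):

```shell
#!/bin/sh
# Update-flow sketch for the stack at /opt/mnky-media-stack (path from this page).
STACK_DIR="${STACK_DIR:-/opt/mnky-media-stack}"

if command -v docker >/dev/null 2>&1 && [ -f "$STACK_DIR/docker-compose.yml" ]; then
  cd "$STACK_DIR" || exit 1
  docker compose pull     # fetch updated images
  docker compose up -d    # recreate only containers whose images changed
  docker compose ps       # confirm everything came back healthy
  RESULT=updated
else
  echo "docker or $STACK_DIR/docker-compose.yml not found; skipping"
  RESULT=skipped
fi
```

Because `docker compose up -d` only recreates containers whose images (or config) changed, it is safe to run for the whole stack; follow up with `docker compose logs -f <service>` on anything that looks unhealthy.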

Architecture diagram