The MOOD MNKY deployment architecture is designed for reliability, scalability, and developer productivity. Our infrastructure follows cloud-native principles and leverages modern deployment techniques to ensure consistent and efficient delivery of our applications.
Our deployment architecture rests on three pillars: containerization, infrastructure as code, and automated CI/CD pipelines.
We maintain separate environments to support our development process:
Development
Staging
Production
```
# Development Environment
Domain: dev.moodmnky.co
Purpose: Active development and feature testing
Infrastructure: Containerized applications with shared services
Data: Anonymized production data
Access: Internal team only
Deployment: Automatic from feature branches
```
```
# Staging Environment
Domain: staging.moodmnky.co
Purpose: Pre-production verification and integration testing
Infrastructure: Mirrors production setup
Data: Full copy of production data (anonymized where necessary)
Access: Internal team and selected testers
Deployment: Automatic from main branch
```
```
# Production Environment
Domain: moodmnky.co
Purpose: Live customer-facing application
Infrastructure: Fully scaled, high-availability configuration
Data: Live customer data with backup and disaster recovery
Access: Public (with authentication)
Deployment: Controlled releases from release branches
```
We use a set of standardized base images to ensure consistency across services:
```dockerfile
# Example base image for Node.js applications
FROM node:20-alpine AS base

# Set working directory
WORKDIR /app

# Install dependencies
RUN apk add --no-cache libc6-compat
RUN npm install -g pnpm turbo

# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000

# Set health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:$PORT/api/health || exit 1

# Set user to non-root
USER node

# Expose port
EXPOSE $PORT

# Command is set in the app-specific Dockerfile
```
Application Images
Each application has its own optimized Dockerfile tailored to its specific needs:
```dockerfile
# Example Dockerfile for a Next.js application
FROM moodmnky/base-node:20 AS builder

# Copy package files
COPY --chown=node:node package.json pnpm-lock.yaml ./
COPY --chown=node:node apps/web/package.json ./apps/web/
COPY --chown=node:node packages/ui/package.json ./packages/ui/
COPY --chown=node:node packages/config/package.json ./packages/config/

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source code
COPY --chown=node:node . .

# Build application
RUN pnpm turbo run build --filter=web...

# Production image
FROM moodmnky/base-node:20 AS runner

# Copy built application
COPY --from=builder --chown=node:node /app/apps/web/.next/standalone ./
COPY --from=builder --chown=node:node /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=node:node /app/apps/web/public ./apps/web/public

# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000

# Start the application
CMD ["node", "apps/web/server.js"]
```
Container Optimization
We implement several optimizations, visible in the Dockerfiles above, to minimize container size and improve security:
Multi-stage builds, so build tooling and source code never reach the production image
Minimal Alpine base images
Running as the non-root node user
Dependency installs pinned with a frozen lockfile
Built-in health checks for orchestrator probes
Our database migrations are managed using Supabase's migration tooling.
Migration Process
Database changes follow a structured process:
Development: Migrations are created locally and tested in development
Version Control: Migration files are committed to the repository
CI Verification: Migrations are tested against a test database
Staging Apply: Migrations are applied to the staging environment
Production Apply: After verification, migrations are applied to production
```bash
# Create a new migration
pnpm supabase:migration create add_user_preferences

# Apply migrations to local development
pnpm supabase:migration up

# Generate migration from current database state
pnpm supabase:migration dump
```
Migration Safety
We follow several practices to ensure safe database migrations:
Backwards-compatible schema changes when possible
Multi-stage migrations for complex changes
Pre-migration backups for critical environments
Read-only transition periods during major changes
Rollback plans for each migration
Migration Strategies
Different types of schema changes require different approaches:
Adding Tables/Columns: Safe operation that can be performed without downtime
```sql
-- Adding a new table
create table if not exists public.user_preferences (
  id uuid primary key default uuid_generate_v4(),
  user_id uuid references public.users not null,
  preference_key text not null,
  preference_value jsonb not null,
  created_at timestamp with time zone default now(),
  updated_at timestamp with time zone default now(),
  unique(user_id, preference_key)
);

-- Add RLS policies
alter table public.user_preferences enable row level security;

create policy "Users can view their own preferences"
  on public.user_preferences
  for select using (auth.uid() = user_id);

create policy "Users can insert their own preferences"
  on public.user_preferences
  for insert with check (auth.uid() = user_id);

create policy "Users can update their own preferences"
  on public.user_preferences
  for update using (auth.uid() = user_id);
```
Modifying Columns: Requires careful planning for existing data
```sql
-- Step 1: Add new column without constraints
alter table public.users add column display_name text;

-- Step 2: Populate the new column
update public.users set display_name = username where display_name is null;

-- Step 3: Add constraints after data is populated
alter table public.users alter column display_name set not null;
```
Removing Tables/Columns: Multi-stage process with code changes first
```sql
-- Step 1: Deprecate in code (no longer write to column)
-- Step 2: After release, drop column
alter table public.users drop column if exists legacy_field;
```
We employ different deployment strategies depending on the nature of the change:
Blue-Green Deployments
For significant updates, we use blue-green deployments to minimize risk:
Deploy the new version (green) alongside the current version (blue)
Gradually redirect traffic to the green environment
Monitor for issues and quickly roll back if needed by routing traffic back to blue
Once stable, decommission the blue environment
```yaml
# Example blue-green service configuration
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: mood-mnky-production
spec:
  selector:
    app: web-app
    version: green # Switch between blue and green
  ports:
    - port: 80
      targetPort: 3000
```
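The cutover decision in the steps above can be sketched as a small function. This is an illustrative sketch only, not our deployment tooling: `cutover` and its `healthy` callback are hypothetical stand-ins for the orchestrator's readiness probes and the service selector switch.

```typescript
type Version = "blue" | "green";

// Sketch of the blue-green switch: promote the idle color only while
// its health probe passes; otherwise keep (or fall back to) the
// current color. `healthy` stands in for real cluster readiness checks.
export function cutover(
  current: Version,
  healthy: (v: Version) => boolean,
): { active: Version; rolledBack: boolean } {
  const target: Version = current === "blue" ? "green" : "blue";
  return healthy(target)
    ? { active: target, rolledBack: false }
    : { active: current, rolledBack: true };
}
```

In practice this decision is made by updating the Service selector shown above, which is why rollback is as fast as flipping the label back.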
Canary Releases
For testing new features with a subset of users:
Deploy the new version to a small subset of servers
Route a percentage of traffic to the new version
Gradually increase the percentage as confidence grows
Complete the rollout when the feature is proven stable
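The percentage-based split in the steps above is typically done with a stable hash of a user identifier, so each user consistently lands in the same bucket as the rollout percentage grows. The sketch below is a hedged illustration under that assumption; `routeToCanary` is a hypothetical helper, and in our setup the split happens at the routing layer rather than in application code.

```typescript
import { createHash } from "node:crypto";

// Deterministically decide whether a request is served by the canary.
// Hashing the user ID keeps a given user in the same bucket across
// requests, so their experience stays consistent during the ramp-up.
export function routeToCanary(userId: string, canaryPercent: number): boolean {
  const digest = createHash("sha256").update(userId).digest();
  // Map the first 4 bytes of the hash onto a bucket in 0–99.
  const bucket = digest.readUInt32BE(0) % 100;
  return bucket < canaryPercent;
}
```

Raising `canaryPercent` from 5 to 25 to 100 then moves users into the new version monotonically: anyone already on the canary stays there.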