Production Deployment Guide
This guide covers deploying vuer-rtc applications to production environments, including client deployment, server deployment, database setup, and scaling considerations.
Overview
A production vuer-rtc deployment consists of:
- Client Application: Static site (React/Next.js) deployed to CDN/edge network
- RTC Server: Node.js server with WebSocket support
- MongoDB: Replica set, required for Change Streams (a single-node replica set suffices)
- Redis: For pub/sub and session management
- SSL/TLS: Required for secure WebSocket connections
┌──────────────────────────────────────────────────────────────────┐
│                     Production Architecture                      │
│                                                                  │
│  ┌─────────────┐        ┌─────────────┐        ┌─────────────┐   │
│  │   Netlify   │        │  Railway/   │        │   MongoDB   │   │
│  │  (Client)   │◀──────▶│   Render/   │◀──────▶│    Atlas    │   │
│  │     CDN     │  WSS   │   Fly.io    │        │   Replica   │   │
│  └─────────────┘        └──────┬──────┘        └─────────────┘   │
│                                │                                 │
│                                ▼                                 │
│                         ┌─────────────┐                          │
│                         │ Redis Cloud │                          │
│                         │   Pub/Sub   │                          │
│                         └─────────────┘                          │
└──────────────────────────────────────────────────────────────────┘

1. Client Deployment
The client is a static site that can be deployed to any CDN or edge network.
Netlify Deployment
Setup:

1. Install the Netlify CLI:

   ```bash
   npm i -g netlify-cli
   ```

2. Configure `netlify.toml`:

   ```toml
   [build]
   command = "pnpm build"
   publish = "dist"

   [build.environment]
   NODE_VERSION = "20"

   [[redirects]]
   from = "/*"
   to = "/index.html"
   status = 200
   ```

3. Set environment variables in the Netlify dashboard or via the CLI:

   ```bash
   netlify env:set VITE_RTC_SERVER_URL "https://api.your-domain.com"
   netlify env:set VITE_RTC_SERVER_WS "wss://api.your-domain.com"
   ```

4. Deploy:

   ```bash
   netlify deploy --prod
   ```
2. Server Deployment
The RTC server requires WebSocket support and persistent connections.
Railway Deployment
Railway offers excellent WebSocket support and automatic scaling.
Setup:

1. Install the Railway CLI:

   ```bash
   npm i -g @railway/cli
   ```

2. Initialize the project:

   ```bash
   railway init
   ```

3. Create `railway.json`:

   ```json
   {
     "build": {
       "builder": "NIXPACKS",
       "buildCommand": "cd packages/vuer-rtc-server && pnpm install && pnpm build"
     },
     "deploy": {
       "startCommand": "cd packages/vuer-rtc-server && pnpm start",
       "healthcheckPath": "/health",
       "healthcheckTimeout": 300
     }
   }
   ```

4. Set environment variables:

   ```bash
   railway variables set MONGODB_URI="mongodb+srv://..."
   railway variables set REDIS_URL="redis://..."
   railway variables set NODE_ENV="production"
   railway variables set CORS_ORIGIN="https://your-app.netlify.app"
   ```

5. Deploy:

   ```bash
   railway up
   ```
Environment Variables:
| Variable | Example | Description |
|---|---|---|
| `MONGODB_URI` | `mongodb+srv://user:pass@cluster.mongodb.net/vuer-rtc?replicaSet=atlas-...` | MongoDB connection string (must be a replica set) |
| `REDIS_URL` | `redis://:password@redis.example.com:6379` | Redis connection string |
| `NODE_ENV` | `production` | Node environment |
| `CORS_ORIGIN` | `https://app.example.com` | Allowed CORS origins (comma-separated) |
| `PORT` | `3000` | Server port (Railway auto-assigns) |
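Because `CORS_ORIGIN` is consumed as a comma-separated list, it pays to parse and validate environment variables once at startup rather than at each call site. A minimal sketch, assuming a fail-fast policy on missing variables (the helper names are illustrative, not part of vuer-rtc-server):

```typescript
// Parse a comma-separated CORS_ORIGIN value into a clean origin list.
export function parseCorsOrigins(raw: string | undefined): string[] {
  return (raw ?? '')
    .split(',')
    .map((origin) => origin.trim())
    .filter((origin) => origin.length > 0);
}

// Fail fast at boot if a required variable is missing.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// parseCorsOrigins('https://app.example.com, https://www.app.example.com')
//   → ['https://app.example.com', 'https://www.app.example.com']
```

Calling `requireEnv('MONGODB_URI')` during server startup turns a misconfigured deploy into an immediate crash loop with a clear message, rather than a connection error minutes later.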
Render Deployment
Setup:

1. Create `render.yaml`:

   ```yaml
   services:
     - type: web
       name: vuer-rtc-server
       env: node
       buildCommand: cd packages/vuer-rtc-server && pnpm install && pnpm build
       startCommand: cd packages/vuer-rtc-server && pnpm start
       healthCheckPath: /health
       envVars:
         - key: MONGODB_URI
           sync: false
         - key: REDIS_URL
           sync: false
         - key: NODE_ENV
           value: production
         - key: CORS_ORIGIN
           sync: false
   ```

2. Connect the repository to GitHub and deploy via the Render dashboard.

3. Add environment variables in the Render dashboard.
AWS Deployment (ECS Fargate)
Dockerfile:

```dockerfile
FROM node:20-alpine AS base
RUN corepack enable && corepack prepare pnpm@latest --activate

FROM base AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY packages/vuer-rtc/package.json ./packages/vuer-rtc/
COPY packages/vuer-rtc-server/package.json ./packages/vuer-rtc-server/
RUN pnpm install --frozen-lockfile

FROM base AS builder
WORKDIR /app
# Copy installed dependencies from the deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY --from=deps /app/packages/vuer-rtc/node_modules ./packages/vuer-rtc/node_modules
COPY --from=deps /app/packages/vuer-rtc-server/node_modules ./packages/vuer-rtc-server/node_modules
COPY . .
RUN cd packages/vuer-rtc && pnpm build
RUN cd packages/vuer-rtc-server && pnpm build

FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
# Copy build output and runtime dependencies from the builder stage
COPY --from=builder /app/packages/vuer-rtc-server/dist ./dist
COPY --from=builder /app/packages/vuer-rtc-server/package.json ./
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]
```

ECS Task Definition:
```json
{
  "family": "vuer-rtc-server",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "vuer-rtc-server",
      "image": "your-registry/vuer-rtc-server:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {"name": "NODE_ENV", "value": "production"}
      ],
      "secrets": [
        {"name": "MONGODB_URI", "valueFrom": "arn:aws:secretsmanager:..."},
        {"name": "REDIS_URL", "valueFrom": "arn:aws:secretsmanager:..."}
      ],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3
      }
    }
  ]
}
```

Fly.io Deployment
Setup:

1. Install the Fly CLI:

   ```bash
   curl -L https://fly.io/install.sh | sh
   ```

2. Create `fly.toml`:

   ```toml
   app = "vuer-rtc-server"
   primary_region = "sjc"

   [build]
   dockerfile = "Dockerfile"

   [env]
   NODE_ENV = "production"
   PORT = "3000"

   [[services]]
   internal_port = 3000
   protocol = "tcp"

   [[services.ports]]
   port = 80
   handlers = ["http"]
   force_https = true

   [[services.ports]]
   port = 443
   handlers = ["tls", "http"]

   [[services.http_checks]]
   interval = "30s"
   timeout = "5s"
   method = "GET"
   path = "/health"
   ```

3. Set secrets:

   ```bash
   fly secrets set MONGODB_URI="mongodb+srv://..."
   fly secrets set REDIS_URL="redis://..."
   fly secrets set CORS_ORIGIN="https://your-app.com"
   ```

4. Deploy:

   ```bash
   fly deploy
   ```
3. MongoDB Atlas Setup
MongoDB Atlas provides managed MongoDB with built-in replica sets.
Setup Steps:

1. Create Cluster:
   - Go to cloud.mongodb.com
   - Create a new cluster (the M0 free tier works for development)
   - Choose the region closest to your server

2. Configure Network Access:

   Security → Network Access → Add IP Address

   - Add your server's IP, or 0.0.0.0/0 (for Railway/Render, whose egress IPs vary)

3. Create Database User:

   Security → Database Access → Add New Database User

   - Username: vuer-rtc
   - Password: (auto-generate)
   - Database User Privileges: Read and write to any database

4. Get Connection String:

   Deployment → Database → Connect → Connect your application

   Example connection string:

   ```
   mongodb+srv://vuer-rtc:PASSWORD@cluster0.xxxxx.mongodb.net/vuer-rtc?retryWrites=true&w=majority&replicaSet=atlas-xxxxx-shard-0
   ```

   Important: Atlas clusters are automatically configured as replica sets!

5. Create Indexes (optional but recommended):

   ```javascript
   // In mongosh or Compass
   use vuer-rtc

   // Document indexes
   db.Document.createIndex({ "sceneId": 1 })
   db.Document.createIndex({ "updatedAt": -1 })

   // Operation indexes
   db.Operation.createIndex({ "documentId": 1, "lamportTime": 1 })
   db.Operation.createIndex({ "sessionId": 1 })

   // Session indexes
   db.Session.createIndex({ "sceneId": 1, "connected": 1 })
   db.Session.createIndex({ "expiresAt": 1 }, { expireAfterSeconds: 0 })
   ```
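The `expiresAt` TTL index (with `expireAfterSeconds: 0`) makes MongoDB delete each session document once its `expiresAt` timestamp passes, so the server must stamp that field when a session is created. A sketch of the computation, assuming the `SESSION_TTL` convention used elsewhere in this guide (the function name is illustrative):

```typescript
// Default mirrors SESSION_TTL=86400 (24 hours) from the env var checklist.
const DEFAULT_SESSION_TTL_SECONDS = 86_400;

// Compute the expiresAt value for a session created at `now`.
export function sessionExpiry(
  now: Date,
  ttlSeconds: number = DEFAULT_SESSION_TTL_SECONDS,
): Date {
  return new Date(now.getTime() + ttlSeconds * 1000);
}
```

Note that the TTL monitor runs roughly once a minute, so deletion is eventual, not instantaneous; treat `expiresAt` as authoritative in queries rather than assuming expired documents are already gone.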
Environment Variable:

```bash
MONGODB_URI="mongodb+srv://vuer-rtc:PASSWORD@cluster0.xxxxx.mongodb.net/vuer-rtc?retryWrites=true&w=majority&replicaSet=atlas-xxxxx-shard-0"
```

4. Redis Cloud Setup
Redis Cloud provides managed Redis with persistence and high availability.
Setup Steps:

1. Create Database:
   - Go to redis.com/try-free
   - Create a free 30MB database
   - Choose the region closest to your server

2. Get Connection String:

   Databases → Your Database → Configuration

   Example:

   ```
   redis://default:PASSWORD@redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345
   ```

3. Configure Persistence (optional):

   Databases → Configuration → Data Persistence

   - Enable AOF (Append Only File) for durability

Environment Variable:

```bash
REDIS_URL="redis://default:PASSWORD@redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345"
```

Alternative: Upstash (Serverless Redis)

```bash
# Free tier: 10,000 commands/day
REDIS_URL="redis://default:PASSWORD@us1-example-12345.upstash.io:6379"
```

5. Environment Variables Checklist
Client (.env.production)

```bash
# RTC Server URLs
VITE_RTC_SERVER_URL=https://api.your-domain.com
VITE_RTC_SERVER_WS=wss://api.your-domain.com

# Optional: Analytics, monitoring
VITE_SENTRY_DSN=https://...
```

Server (.env.production)

```bash
# Database
MONGODB_URI=mongodb+srv://user:pass@cluster.mongodb.net/vuer-rtc?replicaSet=...
REDIS_URL=redis://default:pass@redis.example.com:6379

# Server
NODE_ENV=production
PORT=3000

# CORS (comma-separated)
CORS_ORIGIN=https://app.example.com,https://www.app.example.com

# Optional: Session management
SESSION_TTL=86400        # 24 hours in seconds
MAX_CONNECTIONS=1000     # Max concurrent WebSocket connections

# Optional: Monitoring
SENTRY_DSN=https://...
LOG_LEVEL=info

# Optional: Security
JWT_SECRET=your-secret-key
RATE_LIMIT_WINDOW=60000  # 1 minute
RATE_LIMIT_MAX=100       # 100 requests per window
```

6. CORS Configuration for WebSockets
WebSocket connections require proper CORS configuration.
Server Configuration (Fastify):

```typescript
import Fastify from 'fastify';
import cors from '@fastify/cors';
import websocket from '@fastify/websocket';

const app = Fastify({ logger: true });

// CORS for HTTP endpoints
await app.register(cors, {
  origin: process.env.CORS_ORIGIN?.split(',') || ['http://localhost:5173'],
  credentials: true,
  methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
});

// WebSocket with origin checking
await app.register(websocket, {
  options: {
    verifyClient: (info, callback) => {
      const origin = info.origin || info.req.headers.origin;
      const allowedOrigins = process.env.CORS_ORIGIN?.split(',') || [];
      if (allowedOrigins.includes(origin)) {
        callback(true);
      } else {
        callback(false, 403, 'Origin not allowed');
      }
    },
  },
});
```

Client Configuration:

```typescript
import { RTCClient } from '@vuer-ai/vuer-rtc';

const client = new RTCClient({
  serverUrl: import.meta.env.VITE_RTC_SERVER_URL,
  wsUrl: import.meta.env.VITE_RTC_SERVER_WS,
  // Credentials are sent automatically by the browser
});
```

7. SSL/TLS for WebSocket Connections
Production WebSocket connections must use WSS (WebSocket Secure).
Automatic SSL (Recommended)
Most platforms handle SSL automatically:
| Platform | SSL Setup |
|---|---|
| Railway | Automatic SSL via Railway domains or custom domains |
| Render | Automatic SSL via Let's Encrypt |
| Fly.io | Automatic SSL via Let's Encrypt |
| Netlify | Automatic SSL for all deployments |
Manual SSL (Nginx + Let's Encrypt)
If self-hosting, use Nginx as a reverse proxy:
Install Certbot:

```bash
sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
```

Nginx Configuration (/etc/nginx/sites-available/vuer-rtc):
```nginx
upstream vuer_rtc_server {
    server localhost:3000;
}

server {
    listen 80;
    server_name api.your-domain.com;

    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.your-domain.com;

    # SSL certificates (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/api.your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.your-domain.com/privkey.pem;

    # SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # WebSocket upgrade
    location / {
        proxy_pass http://vuer_rtc_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket timeouts
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
```

Enable the site and get a certificate:

```bash
sudo ln -s /etc/nginx/sites-available/vuer-rtc /etc/nginx/sites-enabled/
sudo certbot --nginx -d api.your-domain.com
sudo systemctl restart nginx
```

Auto-renewal:

```bash
sudo certbot renew --dry-run
```

8. Scaling Considerations
Horizontal Scaling
Use Redis Pub/Sub to synchronize state across multiple server instances.
Architecture:
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  Server 1   │      │  Server 2   │      │  Server 3   │
│  (Node.js)  │      │  (Node.js)  │      │  (Node.js)  │
└──────┬──────┘      └──────┬──────┘      └──────┬──────┘
       │                    │                    │
       └────────────────────┼────────────────────┘
                            │
                     ┌──────▼──────┐
                     │    Redis    │
                     │  (Pub/Sub)  │
                     └─────────────┘

Implementation:
```typescript
import Redis from 'ioredis';
import { randomUUID } from 'node:crypto';
// CRDTMessage, processMessage, and getLocalClients are defined elsewhere in the server.

class ScalableRTCServer {
  private redis: Redis;
  private subscriber: Redis;
  // Identify this instance so we can ignore our own published messages
  private readonly serverId = randomUUID();

  constructor() {
    // Separate connections: a subscribed Redis connection cannot issue other commands
    this.redis = new Redis(process.env.REDIS_URL!);
    this.subscriber = new Redis(process.env.REDIS_URL!);

    // Subscribe to the broadcast channel
    this.subscriber.subscribe('rtc:broadcast');
    this.subscriber.on('message', (channel, message) => {
      const { serverId, sceneId, msg, senderId } = JSON.parse(message);
      if (serverId === this.serverId) return; // already delivered locally
      this.broadcastToLocal(sceneId, msg, senderId);
    });
  }

  handleMessage(sceneId: string, msg: CRDTMessage, senderId: string) {
    // Process locally
    this.processMessage(sceneId, msg);
    this.broadcastToLocal(sceneId, msg, senderId);

    // Broadcast to other server instances via Redis
    this.redis.publish('rtc:broadcast', JSON.stringify({
      serverId: this.serverId,
      sceneId,
      msg,
      senderId,
    }));
  }

  private broadcastToLocal(sceneId: string, msg: CRDTMessage, senderId: string) {
    // Send to local clients only (excluding the sender)
    const clients = this.getLocalClients(sceneId);
    for (const client of clients) {
      if (client.id !== senderId) {
        client.send(JSON.stringify({ mtype: 'message', msg }));
      }
    }
  }
}
```

Load Balancing
Railway/Render: Automatic load balancing with multiple instances
AWS ALB Configuration:
```json
{
  "Type": "application",
  "Scheme": "internet-facing",
  "TargetType": "ip",
  "HealthCheck": {
    "Path": "/health",
    "Protocol": "HTTP",
    "Interval": 30,
    "Timeout": 5,
    "HealthyThreshold": 2,
    "UnhealthyThreshold": 3
  },
  "Stickiness": {
    "Enabled": true,
    "Type": "lb_cookie",
    "DurationSeconds": 86400
  }
}
```

Important: Enable sticky sessions (session affinity) for WebSocket connections!
Database Scaling
MongoDB Atlas:

| Tier | Specs | Use case |
|---|---|---|
| M0 (Free) | 512MB, shared CPU | Development |
| M10 ($57/mo) | 2GB, dedicated | Small production |
| M30 ($237/mo) | 8GB, dedicated | Medium production |
| M50+ (Custom) | 16GB+, auto-scaling | Large production |

Sharding (Advanced):

```javascript
// Enable sharding for large-scale deployments
sh.enableSharding("vuer-rtc")
sh.shardCollection("vuer-rtc.operations", { "documentId": 1, "lamportTime": 1 })
```

9. Health Checks and Monitoring
Health Check Endpoint
```typescript
// packages/vuer-rtc-server/src/routes/health.ts
import { FastifyInstance } from 'fastify';
import { prisma } from '../db';   // your Prisma client instance
import { redis } from '../redis'; // your ioredis instance

export async function healthRoutes(app: FastifyInstance) {
  app.get('/health', async (request, reply) => {
    const mongoOk = await checkMongo();
    const redisOk = await checkRedis();
    const status = mongoOk && redisOk ? 200 : 503;
    return reply.status(status).send({
      status: status === 200 ? 'healthy' : 'unhealthy',
      timestamp: new Date().toISOString(),
      services: {
        mongo: mongoOk ? 'up' : 'down',
        redis: redisOk ? 'up' : 'down',
      },
    });
  });

  app.get('/metrics', async (request, reply) => {
    return {
      connections: app.websocketServer.clients.size,
      uptime: process.uptime(),
      memory: process.memoryUsage(),
    };
  });
}

async function checkMongo(): Promise<boolean> {
  try {
    // Prisma's MongoDB connector has no $queryRaw; ping via a raw command
    await prisma.$runCommandRaw({ ping: 1 });
    return true;
  } catch {
    return false;
  }
}

async function checkRedis(): Promise<boolean> {
  try {
    await redis.ping();
    return true;
  } catch {
    return false;
  }
}
```

Monitoring with Sentry
Install:

```bash
pnpm add @sentry/node @sentry/profiling-node
```

Setup:

```typescript
import * as Sentry from '@sentry/node';
import { ProfilingIntegration } from '@sentry/profiling-node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  integrations: [
    new ProfilingIntegration(),
  ],
  tracesSampleRate: 0.1,
  profilesSampleRate: 0.1,
});

// Error handling
app.setErrorHandler((error, request, reply) => {
  Sentry.captureException(error);
  reply.status(500).send({ error: 'Internal Server Error' });
});
```

Logging
```typescript
import pino from 'pino';

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  transport: process.env.NODE_ENV === 'development' ? {
    target: 'pino-pretty',
    options: { colorize: true }
  } : undefined,
});

// Log WebSocket events
ws.on('message', (data) => {
  logger.info({ type: 'ws:message', size: data.length }, 'Received message');
});

ws.on('error', (error) => {
  logger.error({ err: error }, 'WebSocket error');
});
```

10. Backup and Disaster Recovery
MongoDB Backup
Automated Backups (Atlas):

Atlas provides automatic daily backups with point-in-time recovery:

Deployment → Backup → Configure

- Cloud Provider: AWS/GCP/Azure
- Backup Policy: Daily snapshots + continuous backups
- Retention: 1-7 days (free tier) or custom

Manual Backup:

```bash
# Export database
mongodump --uri="mongodb+srv://..." --out=/backup/$(date +%Y%m%d)

# Restore
mongorestore --uri="mongodb+srv://..." /backup/20260227
```

Backup Strategy:

```bash
#!/bin/bash
# backup.sh - Run daily via cron
BACKUP_DIR="/backups/vuer-rtc"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

# Create backup
mongodump --uri="${MONGODB_URI}" --out="${BACKUP_DIR}/${DATE}"

# Compress
tar -czf "${BACKUP_DIR}/${DATE}.tar.gz" -C "${BACKUP_DIR}" "${DATE}"
rm -rf "${BACKUP_DIR}/${DATE}"

# Delete old backups
find "${BACKUP_DIR}" -name "*.tar.gz" -mtime +${RETENTION_DAYS} -delete

# Upload to S3 (optional)
aws s3 cp "${BACKUP_DIR}/${DATE}.tar.gz" "s3://your-bucket/backups/"
```

Cron job:

```
0 2 * * * /usr/local/bin/backup.sh >> /var/log/vuer-rtc-backup.log 2>&1
```

Redis Backup
Redis Cloud: Automatic daily backups
Manual Backup:

```bash
# Snapshot
redis-cli --rdb /backup/redis/dump-$(date +%Y%m%d).rdb

# Restore
cp /backup/redis/dump-20260227.rdb /var/lib/redis/dump.rdb
systemctl restart redis
```

Disaster Recovery Plan
1. Regular Backups:
   - MongoDB: Daily automated backups via Atlas
   - Redis: Daily snapshots
   - Code: GitHub repository

2. Recovery Procedures:

   ```bash
   # 1. Restore MongoDB
   mongorestore --uri="${MONGODB_URI}" /backup/latest

   # 2. Restore Redis (copy the snapshot into place and restart)
   cp /backup/redis/dump.rdb /var/lib/redis/dump.rdb
   systemctl restart redis

   # 3. Redeploy server
   railway up  # or your deployment command

   # 4. Verify health
   curl https://api.your-domain.com/health
   ```

3. Failover Strategy:
   - Use MongoDB Atlas automatic failover
   - Deploy to multiple regions (Railway/Render supports this)
   - Implement circuit breakers in client code

4. Testing:

   ```bash
   # Test restore quarterly
   mongorestore --uri="${TEST_MONGODB_URI}" /backup/latest
   # Verify data integrity
   ```
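The client-side circuit breaker mentioned in the failover strategy can be as small as a three-state counter: stop dialing after repeated failures, then allow a single probe once a cooldown elapses. A hedged sketch (the class name and thresholds are assumptions; vuer-rtc does not ship this):

```typescript
type BreakerState = 'closed' | 'open' | 'half-open';

export class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly failureThreshold = 5,
    private readonly resetTimeoutMs = 30_000,
  ) {}

  /** May we attempt a connection right now? */
  canAttempt(now: number = Date.now()): boolean {
    if (this.state === 'open' && now - this.openedAt >= this.resetTimeoutMs) {
      this.state = 'half-open'; // allow a single probe attempt
    }
    return this.state !== 'open';
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = 'closed';
  }

  recordFailure(now: number = Date.now()): void {
    this.failures += 1;
    // A failed probe re-opens immediately; otherwise open at the threshold
    if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
      this.state = 'open';
      this.openedAt = now;
    }
  }
}
```

Wrap the WebSocket connect call: check `canAttempt()` before dialing, call `recordSuccess()` on open and `recordFailure()` on error or close. This keeps a regional outage from turning every client into a reconnect storm against the surviving region.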
Monitoring Alerts
Setup alerts for:
- Server downtime (health check failures)
- High memory/CPU usage (>80%)
- Database connection failures
- WebSocket connection spikes
- High error rates (>1% of requests)
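These conditions can be derived from the `/health` and `/metrics` endpoints described earlier; a sketch of the evaluation logic (the field and alert names are illustrative, not a fixed schema):

```typescript
export interface AlertMetrics {
  healthCheckFailed: boolean;
  memoryUsedRatio: number; // 0..1
  cpuUsedRatio: number;    // 0..1
  dbConnected: boolean;
  totalRequests: number;
  errorCount: number;
}

// Return the list of alert conditions currently firing,
// mirroring the thresholds in the bullet list above.
export function activeAlerts(m: AlertMetrics): string[] {
  const alerts: string[] = [];
  if (m.healthCheckFailed) alerts.push('server-down');
  if (m.memoryUsedRatio > 0.8 || m.cpuUsedRatio > 0.8) {
    alerts.push('high-resource-usage');
  }
  if (!m.dbConnected) alerts.push('db-connection-failure');
  if (m.totalRequests > 0 && m.errorCount / m.totalRequests > 0.01) {
    alerts.push('high-error-rate');
  }
  return alerts;
}
```

In practice you would feed this from a periodic scrape of `/metrics` and route non-empty results to Slack, PagerDuty, or email.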
Example (Railway):
# Railway automatically monitors uptime
# Configure alerts in Settings → NotificationsExample (Sentry):
Sentry.init({
beforeSend(event, hint) {
// Alert on critical errors
if (event.level === 'fatal') {
// Trigger PagerDuty/Slack/email alert
}
return event;
},
});Quick Start Checklist
- [ ] Client deployed (Netlify)
  - [ ] Environment variables set (`VITE_RTC_SERVER_URL`, `VITE_RTC_SERVER_WS`)
  - [ ] SSL/HTTPS enabled
- [ ] Server deployed (Railway/Render/AWS/Fly.io)
  - [ ] WebSocket support enabled
  - [ ] Environment variables set
  - [ ] Health check endpoint working (`/health`)
- [ ] MongoDB Atlas configured
  - [ ] Cluster created (replica set enabled)
  - [ ] Database user created
  - [ ] Network access configured
  - [ ] Connection string added to server
- [ ] Redis Cloud configured
  - [ ] Database created
  - [ ] Connection string added to server
- [ ] CORS configured
  - [ ] Client origin whitelisted
  - [ ] WebSocket origin verification enabled
- [ ] SSL/TLS enabled
  - [ ] Client uses HTTPS
  - [ ] Server uses WSS for WebSockets
- [ ] Monitoring setup
  - [ ] Health checks enabled
  - [ ] Logging configured
  - [ ] Error tracking (Sentry) enabled
- [ ] Backups configured
  - [ ] MongoDB daily backups enabled
  - [ ] Redis snapshots enabled
  - [ ] Disaster recovery plan documented
Troubleshooting
WebSocket Connection Fails
Symptoms: Client cannot connect, "WebSocket connection failed" error
Solutions:

1. Check CORS:

   ```javascript
   // Test from the browser console
   new WebSocket('wss://api.your-domain.com')
   ```

2. Verify SSL:

   ```bash
   openssl s_client -connect api.your-domain.com:443
   ```

3. Check server logs:

   ```bash
   railway logs  # or your platform's log command
   ```
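Once the underlying cause is fixed, clients that retry immediately can stampede a recovering server. A reconnect schedule with exponential backoff and full jitter avoids this; a sketch (the base and cap values are assumptions to tune per deployment):

```typescript
// Delay before reconnect attempt `attempt` (0-based), in milliseconds.
// Full jitter: pick uniformly in [0, min(cap, base * 2^attempt)).
export function backoffDelayMs(
  attempt: number,
  baseMs = 500,
  capMs = 30_000,
  random: () => number = Math.random,
): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(random() * ceiling);
}
```

The injectable `random` parameter exists so the schedule is testable; production callers just pass the attempt number and sleep for the returned delay before dialing again.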
MongoDB Replica Set Error
Symptoms: `MongoServerError` indicating the command requires a replica set

Solution: Ensure the connection string includes the `replicaSet` parameter:

```
mongodb+srv://...?replicaSet=atlas-xxxxx-shard-0
```

High Latency
Symptoms: Slow updates, delayed synchronization
Solutions:

1. Check region proximity:
   - Deploy the server close to the MongoDB cluster
   - Use the same cloud provider for server and database

2. Tune connection pooling:

   With the Prisma MongoDB connector, pool size is configured through the connection string rather than a client option:

   ```typescript
   import { PrismaClient } from '@prisma/client';

   const prisma = new PrismaClient({
     datasources: {
       db: {
         // e.g. append ?maxPoolSize=20&minPoolSize=5 to MONGODB_URI
         url: process.env.MONGODB_URI,
       },
     },
   });
   ```

3. Optimize queries:

   ```javascript
   // Add indexes
   db.Operation.createIndex({ documentId: 1, lamportTime: 1 })
   ```
Memory Leaks
Symptoms: Increasing memory usage over time
Solutions:

1. Monitor with metrics:

   ```bash
   curl https://api.your-domain.com/metrics
   ```

2. Clean up stale connections:

   ```typescript
   setInterval(() => {
     for (const [id, ws] of connections) {
       if (ws.readyState === WebSocket.CLOSED) {
         connections.delete(id);
       }
     }
   }, 60000); // Every minute
   ```

3. Limit journal size:

   ```typescript
   if (journal.length > 10000) {
     journal = journal.slice(-5000); // Keep the last 5000 entries
   }
   ```