
🚀 Building a Scalable Monorepo: A Senior Engineer’s Tour of `fullstack-lab`
A practical walkthrough of how one monorepo powers API, web, and admin apps while keeping type safety and tooling in sync. Learn the structure, patterns, and practices that enable consistency, rapid iteration, and automated deployment for a maintainable, scalable setup.
Modern full-stack development thrives on consistency, ⚡ rapid iteration, and 🤖 automated deployment.
This repository demonstrates how a single monorepo can power multiple applications — API, Web, and Admin — while keeping type safety and tooling in sync.
Below is an architectural walkthrough that explains the project’s structure, highlights scalability & maintainability practices, and discusses how each piece adds value.
1) 🧱 Monorepo Layout & Dependency Management
Using pnpm workspaces keeps apps and packages under a single roof, enabling a shared dependency graph, atomic installs, and one-command builds/deploys across the stack. Your root workspace acts as the command center for the entire system. ⚙️
💡 Why it matters: Faster installs, fewer version drifts, and consistent tooling across API, Web, and Admin apps.
🗂️ `pnpm-workspace.yaml`

```yaml
packages:
  - "apps/*"
  - "packages/*"
```
📦 Root `package.json` scripts

```json
{
  "scripts": {
    "build:types": "pnpm --filter @fullstack-lab/types... build",
    "build:api": "pnpm --filter node-backend... build",
    "build:web": "pnpm --filter web... build",
    "build:admin": "pnpm --filter admin... build",
    "build": "pnpm build:types && pnpm build:api && (pnpm run build:web & pnpm run build:admin & wait)",
    "deploy:stage": "fly deploy --config ./apps/api/fly.stage.toml --dockerfile ./apps/api/Dockerfile --remote-only .",
    "deploy:prod": "fly deploy --config ./apps/api/fly.toml --dockerfile ./apps/api/Dockerfile --remote-only .",
    "web:build": "cd apps/web && npx @cloudflare/next-on-pages@latest",
    "web:deploy": "cd apps/web && npx wrangler pages deploy .vercel/output/static --project-name=rohan-fullstack-lab",
    "web:preview": "cd apps/web && npx wrangler pages deploy .vercel/output/static --project-name=rohan-fullstack-lab --branch=develop",
    "web:dev": "cd apps/web && npx wrangler pages dev .vercel/output/static --config wrangler.toml"
  }
}
```

Note: the parallel half of `build` is grouped in parentheses so that web and admin only start once shared types and the API have finished building.
🚀 Tip:
- `pnpm --filter` lets you target a single app or an entire scope.
- The root `build` runs web & admin in parallel (`& … & wait`) after building shared types and the API.
🧩 Shared Types Package
A dedicated @fullstack-lab/types package centralizes domain models (e.g., Blog, Project).
This keeps API responses, frontend components, and admin forms perfectly in sync.
Built with TypeScript, it exports compiled typings for consumption across workspaces. 🔄
📦 `packages/types/package.json`

```json
{
  "name": "@fullstack-lab/types",
  "version": "1.0.0",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "dev": "pnpm -r dev",
    "build": "tsc -p tsconfig.json"
  }
}
```
📝 `packages/types/src/blog.ts`

```typescript
export type BlogStatus = 'draft' | 'published' | 'archived';

export interface BlogLink {
  url: string;
  label?: string;
  kind?: 'repo' | 'ref' | 'demo' | 'other';
}

export interface IBlogBase {
  title: string;
  slug: string;
  content: string;
  summary?: string;
  author: string;
  tags: string[];
  links: BlogLink[];
  coverImageUrl?: string;
  readingTime?: number;
  isFeatured?: boolean;
  status: BlogStatus;
  publishedAt?: Date;
}

export interface IBlogDb extends IBlogBase {
  createdAt: Date;
  updatedAt: Date;
}

export interface IBlogDto extends IBlogBase {
  id: string;
  createdAt: Date;
  updatedAt: Date;
}
```
🧠 **Usage:** import once, rely everywhere —

```typescript
import type { IBlogDto } from '@fullstack-lab/types';
```

🛡️ **Benefit:** a single source of truth → fewer runtime mismatches and safer refactors.
2) 🛠️ API Service (apps/api)
The Node/Express backend emphasizes robustness and observability.
🔐 Security & Middleware
- CORS configuration with allowlist
- Helmet for secure HTTP headers
- Express Rate Limit to mitigate abuse
- Swagger UI for interactive API docs at `/api-docs`
```typescript
// Express app setup (excerpt)
app.use(
  cors({
    origin: (origin, callback) => {
      // Allow requests with no origin (like mobile apps or curl)
      if (!origin) return callback(null, true);
      logger.info(`🌍 allowedOrigins: ${allowedOrigins}`);
      if (allowedOrigins.includes(origin)) return callback(null, true);
      return callback(new Error('Not allowed by CORS'), false);
    },
    methods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
    allowedHeaders: ['Content-Type', 'Authorization', 'Range'],
    exposedHeaders: ['X-Total-Count', 'Content-Range'],
  })
);

app.use(express.json());
app.use(jsonErrorHandler);

app.get('/', (_, res) => {
  res.send('Server is working');
});

app.get('/health', (req, res) => {
  res.status(200).send('OK');
});

app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerSpec, { explorer: true }));

app.use('/api/todos', todoRoutes);
app.use('/api/blogs', blogRoutes);
app.use('/api/profiles', profileRoutes);
app.use('/api/contact', contactRoutes);
app.use('/api/projects', projectRoutes);
app.use('/api/slots', slotRoutes);
app.use('/api/uploads', uploadRoutes);
```
🧪 Environment Validation
- Joi validates runtime configuration before boot, preventing deploys with missing credentials or misconfigured environments.
```typescript
// src/config/env.ts
import dotenv from 'dotenv';
import path from 'path';
import Joi from 'joi';

const NODE_ENV = process.env.NODE_ENV || 'development';
const envFile = NODE_ENV === 'development' ? '.env' : `.env.${NODE_ENV}`;
dotenv.config({ path: path.resolve(process.cwd(), envFile) });

const schema = Joi.object({
  NODE_ENV: Joi.string().default('development'),
  PORT: Joi.string().optional(),
  CLOUDFLARE_R2_ACCOUNT_ID: Joi.string().required(),
  CLOUDFLARE_R2_ACCESS_KEY_ID: Joi.string().required(),
  CLOUDFLARE_R2_SECRET_ACCESS_KEY: Joi.string().required(),
  CLOUDFLARE_R2_BUCKET_NAME: Joi.string().required(),
  R2_CDN_HOST: Joi.string().optional(),
}).unknown(true);

const { value, error } = schema.validate(process.env, { abortEarly: false });
if (error) {
  throw new Error(`Env validation failed: ${error.message}`);
}

export const env = {
  nodeEnv: value.NODE_ENV as string,
  port: Number(value.PORT ?? 5050),
  r2: {
    accountId: value.CLOUDFLARE_R2_ACCOUNT_ID as string,
    accessKeyId: value.CLOUDFLARE_R2_ACCESS_KEY_ID as string,
    secretAccessKey: value.CLOUDFLARE_R2_SECRET_ACCESS_KEY as string,
    bucket: value.CLOUDFLARE_R2_BUCKET_NAME as string,
    cdnHost: value.R2_CDN_HOST as string | undefined,
  },
};
```
♻️ Generic Controller Pattern
- Reusable CRUD handlers combine Mongoose queries, pagination, and caching, dramatically reducing boilerplate as new resources are added.
```typescript
import type { Request, Response } from 'express';
import mongoose from 'mongoose';
import type { Model } from 'mongoose';
import logger from '../utils/logger';
import { cache } from './cache';
import { ListOptions, ByIdOptions, WriteOptions } from '../types/controller';
import { parseListParams, setListHeaders } from './http';

export function makeListHandler<T>(opts: ListOptions<T>) {
  const {
    ns, model, buildQuery, transform = defaultTransform,
    allowedSort = ['createdAt', 'updatedAt', 'year', 'title'],
    ttlSeconds = cache.DEFAULT_TTL,
  } = opts;
  return async function list(req: Request, res: Response) {
    try {
      const { filter, range, sort, limit, skip } = parseListParams(req.query as any);
      const sortField = allowedSort.includes(sort[0]) ? sort[0] : 'createdAt';
      const sortOrder = sort[1] === 'DESC' ? -1 : 1;
      const mongoFilter = buildQuery(filter);

      // cache key
      const version = await cache.getVersionNS(ns);
      const key = cache.keyForListNS(ns, version, { filter, range, sort });

      // try cache
      const cached = await cache.get<{ data: any[]; total: number }>(key);
      if (cached) {
        setListHeaders(res, skip, cached.data.length, cached.total);
        return res.status(200).json(cached.data);
      }

      // DB
      const total = await model.countDocuments(mongoFilter);
      const docs = await model.find(mongoFilter)
        .sort({ [sortField]: sortOrder })
        .skip(skip).limit(limit)
        .lean({ virtuals: true });
      const data = docs.map(transform);

      await cache.set(key, { data, total }, ttlSeconds);
      setListHeaders(res, skip, data.length, total);
      res.status(200).json(data);
    } catch (err) {
      logger.error({ err }, '❌ list handler failed');
      res.status(500).json({ error: 'FETCH_FAILED' });
    }
  };
}

export function makeGetByIdHandler<T>(opts: ByIdOptions<T>) {
  const { ns, model, transform = defaultTransform, ttlSeconds = cache.DEFAULT_TTL } = opts;
  return async function getById(req: Request, res: Response) {
    const { id } = req.params;
    if (!mongoose.Types.ObjectId.isValid(id)) {
      return res.status(404).json({ error: 'INVALID_ID' });
    }
    try {
      const version = await cache.getVersionNS(ns);
      const key = cache.keyForIdNS(ns, version, id);
      const cached = await cache.get<any>(key);
      if (cached) return res.status(200).json(cached);

      const found = await model.findById(id);
      if (!found) return res.status(404).json({ error: 'NOT_FOUND' });

      const obj = transform(found.toJSON());
      await cache.set(key, obj, ttlSeconds);
      res.status(200).json(obj);
    } catch (err) {
      logger.error({ err }, '❌ getById handler failed');
      res.status(500).json({ error: 'FETCH_FAILED' });
    }
  };
}

export function makeCreateHandler<T>(opts: WriteOptions<T>) {
  const { ns, model, normalize, afterCreate } = opts;
  return async function create(req: Request, res: Response) {
    try {
      const normalized = normalize(req.validatedBody);
      const doc = new model(normalized);
      const result = await doc.save();
      if (typeof afterCreate === 'function') {
        try { await afterCreate(result as any); } catch (e) { /* log but don’t crash */ }
      }
      await cache.bumpVersionNS(ns);
      res.status(201).json(result.toJSON());
    } catch (err) {
      logger.error({ err }, '❌ create handler failed');
      res.status(500).json({ error: 'CREATE_FAILED' });
    }
  };
}

export function makeUpdateHandler<T>(opts: WriteOptions<T>) {
  const { ns, model, allowedFields = [], normalize, afterUpdate } = opts;
  return async function update(req: Request, res: Response) {
    const { id } = req.params;
    if (!mongoose.Types.ObjectId.isValid(id)) {
      return res.status(404).json({ error: 'INVALID_ID' });
    }
    try {
      for (const k of ['createdAt', 'updatedAt', '__v', 'id', '_id']) delete (req.body ?? {})[k];
      req.validatedBody = req.validatedBody ?? req.body;
      const normalized = normalize(req.validatedBody);

      const update: Record<string, any> = {};
      for (const k of allowedFields) if (normalized[k] !== undefined) update[k] = normalized[k];

      const updated = await model.findByIdAndUpdate(
        id,
        { $set: update },
        { new: true, runValidators: true, strict: 'throw' }
      );
      if (!updated) return res.status(404).json({ error: 'NOT_FOUND' });

      if (typeof afterUpdate === 'function') {
        try { await afterUpdate(updated); } catch (e) { /* log but don’t crash */ }
      }
      await cache.bumpVersionNS(ns);
      res.status(200).json(updated.toJSON());
    } catch (err) {
      logger.error({ err }, '❌ update handler failed');
      res.status(500).json({ error: 'UPDATE_FAILED' });
    }
  };
}

export function makeDeleteHandler<T>(opts: { ns: string; model: Model<T>; afterDelete?: (doc: T) => Promise<void> }) {
  const { ns, model, afterDelete } = opts;
  return async function remove(req: Request, res: Response) {
    const { id } = req.params;
    if (!mongoose.Types.ObjectId.isValid(id)) {
      return res.status(404).json({ error: 'INVALID_ID' });
    }
    try {
      const deleted = await model.findByIdAndDelete(id);
      if (!deleted) return res.status(404).json({ error: 'NOT_FOUND' });
      if (typeof afterDelete === 'function') {
        try { await afterDelete(deleted); } catch (e) { /* log but don’t crash */ }
      }
      await cache.bumpVersionNS(ns);
      res.status(204).send();
    } catch (err) {
      logger.error({ err }, '❌ delete handler failed');
      res.status(500).json({ error: 'DELETE_FAILED' });
    }
  };
}

function defaultTransform(doc: any) {
  const id = String(doc.id ?? doc._id);
  const { _id, __v, ...rest } = doc;
  return { id, ...rest };
}
```
⚡ Cache Layer
- Upstash Redis via a lightweight helper:
  - Versioned cache keys
  - Easy invalidation
  - Graceful fallback when unavailable
```typescript
// lib/cache.ts
import crypto from 'crypto';
import logger from '../utils/logger';
import { redisRest } from './redis-rest';

const DEFAULT_TTL = 60 * 5; // 5 minutes

const hash = (obj: unknown) =>
  crypto.createHash('sha1').update(JSON.stringify(obj)).digest('hex').slice(0, 16);

const nsKey = (ns: string) => `${ns}:version`;

async function getVersionNS(ns: string) {
  if (!redisRest) return 1;
  try {
    const v = await redisRest.get<number>(nsKey(ns));
    return v ? Number(v) : 1;
  } catch (e) {
    logger.warn({ e, ns }, 'Redis GET version failed (continuing without cache)');
    return 1;
  }
}

async function bumpVersionNS(ns: string) {
  if (!redisRest) return;
  try {
    await redisRest.incr(nsKey(ns));
  } catch (e) {
    logger.warn({ e, ns }, 'Redis INCR version failed (continuing without cache)');
  }
}

function keyForListNS(ns: string, v: number | string, qs: unknown) {
  return `${ns}:v${v}:list:${hash(qs)}`;
}

function keyForIdNS(ns: string, v: number | string, id: string) {
  return `${ns}:v${v}:id:${id}`;
}

async function cacheGet<T>(key: string): Promise<T | null> {
  if (!redisRest) return null;
  try {
    // @upstash/redis will parse JSON automatically if it was stored as JSON
    return (await redisRest.get<T>(key)) ?? null;
  } catch (e) {
    logger.warn({ e, key }, 'Redis GET failed (continuing without cache)');
    return null;
  }
}

async function cacheSet(key: string, value: unknown, ttl = DEFAULT_TTL) {
  if (!redisRest) return;
  try {
    // Upstash REST accepts objects directly and stores JSON under the hood
    await redisRest.set(key, value as any, { ex: ttl });
  } catch (e) {
    logger.warn({ e, key }, 'Redis SET failed (continuing without cache)');
  }
}

export const cache = {
  DEFAULT_TTL,
  getVersionNS,
  bumpVersionNS,
  keyForListNS,
  keyForIdNS,
  get: cacheGet,
  set: cacheSet,
};
```
🔏 Access Control
- Admin-only routes gated by a token header
- Critical endpoints protected with Cloudflare Turnstile to block bots
```typescript
import process from 'process';
import { Request, Response, NextFunction } from 'express';

export function requireAdmin(req: Request, res: Response, next: NextFunction) {
  const ADMIN_TOKEN = process.env.ADMIN_TOKEN;
  const authHeader = req.headers.authorization;
  if (!authHeader?.startsWith('Bearer ')) {
    return res.status(401).json({ message: 'Unauthorized: Missing or invalid token' });
  }
  const token = authHeader.replace('Bearer ', '');
  if (token !== ADMIN_TOKEN) {
    return res.status(401).json({ message: 'Unauthorized: Invalid admin token' });
  }
  next();
}
```
```typescript
import type { Request, Response, NextFunction } from "express";

const VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify";

export async function requireCaptcha(req: Request, res: Response, next: NextFunction) {
  try {
    const token = req.body?.captchaToken;
    if (!token) return res.status(400).json({ msg: "Captcha missing" });

    const secret = process.env.TURNSTILE_SECRET_KEY;
    if (!secret) return res.status(500).json({ msg: "Captcha not configured" });

    const form = new URLSearchParams();
    form.append("secret", secret);
    form.append("response", token);
    if (req.ip) form.append("remoteip", req.ip);

    const r = await fetch(VERIFY_URL, { method: "POST", body: form });
    const data = (await r.json()) as { success: boolean; ["error-codes"]?: string[] };

    if (!data.success) {
      return res.status(403).json({ msg: "Captcha verification failed", error: data["error-codes"] });
    }
    next();
  } catch {
    res.status(500).json({ msg: "Captcha verification error" });
  }
}
```
✅ Outcome: Safer deploys, less boilerplate, faster responses, and production-ready defaults.
🚀 Deployment
A Dockerfile in the repo root installs workspace dependencies, compiles shared types, and produces a compiled `dist/` artifact for the API.
This image is reproducible and ready to deploy on Fly.io (or any container platform).
- 🧰 Corepack + pnpm with a frozen lockfile → reproducible, cacheable builds
- 🧩 Compiles shared `@fullstack-lab/types` before building the API
- 🚀 Ships a single Node runtime image ready for Fly.io deploys
```dockerfile
# ---------------------------
# 🐳 Dockerfile for apps/api
# Supports pnpm + monorepo + TypeScript build for Fly.io
# ---------------------------

# Base Node image
FROM node:18-alpine

# Enable Corepack for pnpm support
RUN corepack enable

# Set working directory at monorepo root
WORKDIR /app

# Copy entire monorepo into the container
COPY . .

# Set working dir to API app
WORKDIR /app/apps/api

# Install all dependencies in the monorepo
RUN pnpm install --frozen-lockfile

# Build shared types package first
RUN pnpm --filter @fullstack-lab/types... run build

# Build the API (this compiles TypeScript to dist/)
RUN pnpm run build

# Expose port (adjust if your API uses a different port)
EXPOSE 5050

# Start the compiled Node.js app
CMD ["node", "dist/index.js"]
```
3) 🌐 Web Front-End (apps/web)
The Next.js 15 application serves the public site and consumes the API’s typed interfaces.
⚛️ Modern React Stack
- React 19, Radix UI, Tailwind CSS for a clean, accessible UI
- Dynamic imports with fallbacks keep critical paths fast
🔍 SEO & Metadata
- `layout.tsx` centralizes meta tags, Open Graph, and JSON-LD schema
- Executed at build time for optimal performance and crawlability
🌎 Edge-Friendly Deployment
- Cloudflare Pages configured via `wrangler.toml`
- Attaches R2 object storage
- Exposes public environment variables for client usage
```toml
name = {value}
compatibility_date = "2024-10-01"
compatibility_flags = ["nodejs_compat", "global_fetch_strictly_public"]
pages_build_output_dir = ".vercel/output/static"

[[r2_buckets]]
binding = {value}
bucket_name = {value}
preview_bucket_name = {value}

[vars]
NEXT_PUBLIC_API_URL = {value}
```
4) 🗂️ Admin Dashboard (apps/admin)
The admin app uses React Admin with Material UI, providing a declarative interface for content management.
🔌 Data Provider
- `ra-data-simple-rest` plugs straight into the API.
- Auth headers are attached via a shared `httpClient`, so every request carries `Authorization: Bearer <token>`.
🧩 Resource Modules
- Blog, Profile, Project, Contact, and Slot resources reuse shared types and are mounted as `<Resource>` components, making the admin UI easy to extend.
```tsx
import { Admin, Resource, CustomRoutes } from 'react-admin';
import simpleRestProvider from 'ra-data-simple-rest';
import { Route } from 'react-router-dom';
import { httpClient } from './httpClient'; // attaches Authorization header
import AssetUploadSection from './components/AssetUploadSection';
import { BlogList, BlogCreate, BlogEdit, BlogShow } from './pages/blog';
import { ProfileList, ProfileCreate, ProfileEdit, ProfileShow } from './pages/profile';
import { ProjectList, ProjectCreate, ProjectEdit, ProjectShow } from './pages/project';
import { ContactList } from './pages/contact';
import { SlotList, SlotEdit, SlotShow } from './pages/slot';

const apiUrl = process.env.REACT_APP_API_URL || 'https://your-api-host';
const dataProvider = simpleRestProvider(`${apiUrl}/api`, httpClient);

export default function App() {
  return (
    <Admin dataProvider={dataProvider}>
      <CustomRoutes>
        <Route path="/assets" element={<AssetUploadSection apiUrl={apiUrl} />} />
      </CustomRoutes>
      <Resource name="blogs" list={BlogList} create={BlogCreate} edit={BlogEdit} show={BlogShow} />
      <Resource name="profiles" list={ProfileList} create={ProfileCreate} edit={ProfileEdit} show={ProfileShow} />
      <Resource name="contact" list={ContactList} />
      <Resource name="projects" list={ProjectList} create={ProjectCreate} edit={ProjectEdit} show={ProjectShow} />
      <Resource name="slots" list={SlotList} edit={SlotEdit} show={SlotShow} />
    </Admin>
  );
}
```
🤝 Workspace Coordination
- Dependencies like MUI and react-admin live alongside the shared types package (`@fullstack-lab/types`), so the dashboard evolves in lockstep with API and Web.
- Shared TypeScript interfaces ensure forms, tables, and detail views stay in sync with backend contracts.
"dependencies": {
"@emotion/react": "^11.14.0",
"@emotion/styled": "^11.14.1",
"@fullstack-lab/types": "workspace:*",
"@mui/icons-material": "^7.3.1",
"@mui/material": "^7.3.1",
"@testing-library/dom": "^10.4.1",
"@testing-library/jest-dom": "^6.6.4",
"@testing-library/react": "^16.3.0",
"@testing-library/user-event": "^13.5.0",
"@types/jest": "^27.5.2",
"@types/node": "^16.18.126",
"@types/react": "^19.1.8",
"@types/react-dom": "^19.1.6",
"ra-data-simple-rest": "^5.10.0",
"react": "^19.1.1",
"react-admin": "^5.10.0",
"react-dom": "^19.1.1",
"react-router-dom": "7.8.0",
"react-scripts": "5.0.1",
"typescript": "^4.9.5",
"web-vitals": "^2.1.4"
},
💡 Tip: Add feature flags per resource (e.g., `ENABLE_CONTACT`) to toggle modules across environments without code forks.
5) 📈 Scalability, 🛠️ Maintainability, and 🎯 Value
| Dimension | Practices & Benefits |
|---|---|
| Scalability | Caching and rate limiting in the API, global CDN via Cloudflare Pages, and containerized deployment enable horizontal scaling with minimal friction. |
| Maintainability | Centralized types, consistent validation, and generic controller patterns reduce code duplication and prevent drift between services. |
| Observability & Safety | Pino logging, strict environment validation, and defensive middleware (Helmet, captcha) help catch issues early and protect resources. |
| Developer Velocity | Shared build scripts, pnpm’s workspace install, and automated deploy commands shorten feedback loops and simplify onboarding. |
This architecture balances flexibility and discipline, making it easy to add new applications or packages without fragmenting the codebase.
By leveraging a monorepo, the project keeps type definitions, utilities, and deployment tooling in one place—promoting a uniform developer experience and ensuring that every layer of the stack evolves together.
Whether you’re starting a personal portfolio or scaling a production-grade platform, this approach demonstrates how a well-structured monorepo can accelerate full-stack development while preserving clarity, security, and maintainability.
🔑 Key Takeaways / TL;DR
- 🧱 Monorepo with pnpm — keeps dependencies in sync, enables atomic installs, parallel builds, and a single command center for the stack.
- 🧩 Shared `@fullstack-lab/types` package — centralizes domain models, ensuring type safety across API, Web, and Admin with zero drift.
- 🛠️ API (Express + Mongoose) — secure by default (Helmet, CORS, rate limiting), validated environments (Joi), generic controller patterns, and Redis caching.
- 🔏 Access control & bot protection — admin token + Cloudflare Turnstile safeguard critical endpoints.
- 🚀 Deployment-ready — Docker builds for Fly.io (API) and Cloudflare Pages (Web) with R2 storage.
- ⚛️ Next.js 15 frontend — edge-friendly, SEO-first, with streaming and dynamic imports for speed.
- 🗂️ Admin Dashboard (React Admin + MUI) — declarative resources tied to API, powered by shared types for consistent forms & views.
- 📈 Production-grade practices — caching, structured logging, strict env validation, and observability built in.
- 🧩 Scalable & maintainable — easy to add new services or packages without fragmentation, while preserving clarity, security, and velocity.
👉 Bottom line: This setup shows how a disciplined monorepo + shared types + modern tooling can deliver fast iteration, safe refactors, and production-grade deployment across a full-stack platform.
Links
- GitHub Repository



