The Architecture Nobody Warns You About
TL;DR for the impatient:
- Monorepo with Nx or Turborepo for caching, not just code colocation.
- pnpm for strict dependency isolation.
- Feature-sliced directories, not file-type grouping.
- Zod at every network boundary because TypeScript alone won't save you at runtime.
We've been building frontend systems for enterprise teams since around 2016. The kind where 30 developers across three time zones push code into the same codebase, and a single bad merge can cost real money.
This series is everything I wish someone had handed me when I started leading frontend architecture. Not theory. Not conference-talk material. Just the stuff that actually works when you're shipping to hundreds of thousands of users.
Let's start with the foundation — because every disaster I've seen traces back to decisions made in the first two weeks of a project.
1. The Monorepo: We Didn't Want One Either
I need to be honest. When someone first suggested we move to a monorepo, I pushed back. Hard. Our setup at the time was a collection of about 8 separate repositories — a design system, a customer-facing app, an internal admin panel, a shared utilities package, a couple of micro-frontends, and so on. Each repo had its own CI pipeline, its own versioning, its own maintainers. It felt organized.
It wasn't.
The Incident That Changed Our Minds
Late 2023. A designer flagged that the padding on our primary `<Button />` component looked off on mobile. Straightforward fix, maybe 4 lines of CSS. A junior dev picked it up.
Here's what the fix actually required:
1. Clone the `ui-kit` repo (which they hadn't touched in weeks, so they had to resolve merge conflicts first)
2. Make the CSS change
3. Wait 15 minutes for CI to pass
4. Publish `ui-kit@2.4.1` to our private npm registry
5. Open the `checkout` repo, bump the dependency in `package.json`
6. Open the `auth-modal` repo (which checkout depended on), realize it was pinned to `ui-kit@2.3.0`
7. Bump auth-modal too, publish that
8. Go back to checkout, update both dependencies
They missed step 6. The auth modal was still pulling in the old button. React ended up loading two different versions of our UI context provider simultaneously — one from `ui-kit@2.3.0` inside auth-modal, one from `ui-kit@2.4.1` in checkout. The context values didn't match. The checkout flow rendered a blank screen.
A 10-minute CSS fix turned into a six-hour incident involving three senior engineers, a rollback, and a very uncomfortable standup the next morning.
That was the week we started planning the monorepo migration.
What We Actually Gained (And What Hurt)
The migration took us about six weeks. Not because the tooling was hard — Nx has solid migration guides — but because of the human side. People were attached to their repos. One team lead literally said, "I don't want other teams' broken code in my git log." Fair concern, honestly.
Here's what we did to address that:
.github/CODEOWNERS — this was non-negotiable before we merged anything. Each team owns its slice; PRs touching these paths require that team's approval.
/apps/checkout/ @frontend-checkout-team
/apps/admin-panel/ @frontend-admin-team
/libs/ui-kit/ @design-systems-team
/libs/shared-utils/ @platform-team
We also set up path-based filtering in CI so the checkout team's pipeline only ran when files under `/apps/checkout` or its dependencies changed. This was critical — without it, every PR would trigger every pipeline.
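The core of that filtering is easy to sketch. Everything below is a hypothetical illustration (the function name and project roots are made up): in practice Nx computes affected projects from its dependency graph, including transitive dependencies, not from raw path prefixes.

```typescript
// Hypothetical sketch: map changed file paths to the projects they touch.
// Nx does this via its project graph; this shows only the path-prefix idea.
export function affectedProjects(
  changedFiles: string[],
  projectRoots: Record<string, string> // project name -> root directory
): string[] {
  return Object.entries(projectRoots)
    .filter(([, root]) =>
      changedFiles.some((f) => f === root || f.startsWith(root + '/'))
    )
    .map(([name]) => name);
}

// A PR touching only checkout files should trigger only the checkout pipeline:
const projects = affectedProjects(
  ['apps/checkout/src/cart.ts'],
  { checkout: 'apps/checkout', 'admin-panel': 'apps/admin-panel' }
);
```

With remote caching on top of this, most pipelines become no-ops for unrelated changes.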
The actual numbers after migration:
| Metric | Before (Polyrepo) | After (Monorepo + Nx) |
|---|---|---|
| Time to ship a shared component change | 2-4 hours | 8-15 minutes |
| Average CI time per PR | 22 minutes | 6 minutes (with remote caching) |
| "Works on my machine" incidents per sprint | 3-5 | Nearly zero |
| Cross-team refactoring PRs per quarter | 1-2 (because nobody wanted to) | 10-15 |
Why Nx Over Turborepo (For Us)
We evaluated both seriously. Turborepo is excellent and simpler to adopt if you're starting fresh. But we went with Nx for a few specific reasons:
- Affected graph visualization
When you run `nx affected --graph`, it shows you exactly which projects are impacted by your changes. For a team of 30 people, being able to visually confirm "my change only affects these 3 libraries" before pushing was a game-changer for confidence.
- Generators
Nx lets you write custom code generators. We built one that scaffolds a new feature module with the correct folder structure, barrel file, ESLint config, and a skeleton test file. New devs don't have to guess how to set things up.
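Real Nx generators are written against `@nx/devkit`'s `Tree` API; the sketch below uses plain Node `fs` instead so it stands alone. Every file name and template string in it is illustrative, not our actual generator:

```typescript
import { mkdirSync, writeFileSync, mkdtempSync, existsSync } from 'node:fs';
import { dirname, join } from 'node:path';
import { tmpdir } from 'node:os';

// Hypothetical sketch of what a feature-module generator produces:
// a barrel file, a skeleton test, a lint config, and boundary tags.
export function scaffoldFeature(workspaceRoot: string, name: string): string[] {
  const base = join(workspaceRoot, 'libs', name);
  const files: Array<[string, string]> = [
    [join(base, 'src', 'index.ts'), `// public API of ${name}\n`],
    [join(base, 'src', 'lib', `${name}.spec.ts`), `// skeleton test for ${name}\n`],
    [join(base, '.eslintrc.json'), '{ "extends": ["../../.eslintrc.json"] }\n'],
    [
      join(base, 'project.json'),
      JSON.stringify({ name, tags: [`scope:${name}`, 'type:feature'] }, null, 2),
    ],
  ];
  for (const [path, contents] of files) {
    mkdirSync(dirname(path), { recursive: true });
    writeFileSync(path, contents);
  }
  return files.map(([path]) => path);
}

// Demo run against a throwaway directory:
const created = scaffoldFeature(mkdtempSync(join(tmpdir(), 'gen-')), 'billing');
```

The point is consistency: every feature library starts with the same layout and the tags the boundary rules depend on, so nobody has to remember the conventions.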
- Module boundary enforcement
Nx has a built-in `@nx/enforce-module-boundaries` ESLint rule. You tag projects (e.g., `scope:checkout`, `type:ui-lib`) and define which tags can depend on which. This is how we prevent the admin panel from accidentally importing checkout-specific code.
.eslintrc.json (root level) — module boundary rules
{
"overrides": [
{
"files": [
"*.ts",
"*.tsx"
],
"rules": {
"@nx/enforce-module-boundaries": [
"error",
{
"depConstraints": [
{
"sourceTag": "scope:checkout",
"onlyDependOnLibsWithTags": [
"scope:checkout",
"scope:shared"
]
},
{
"sourceTag": "scope:admin",
"onlyDependOnLibsWithTags": [
"scope:admin",
"scope:shared"
]
},
{
"sourceTag": "type:feature",
"onlyDependOnLibsWithTags": [
"type:ui",
"type:util",
"type:data-access"
]
},
{
"sourceTag": "type:ui",
"onlyDependOnLibsWithTags": [
"type:ui",
"type:util"
]
},
{
"sourceTag": "type:util",
"onlyDependOnLibsWithTags": [
"type:util"
]
}
]
}
]
}
}
]
}
// libs/checkout/feature-cart/project.json — tagging a library
{
"name": "checkout-feature-cart",
"tags": [
"scope:checkout",
"type:feature"
]
]
}
Turborepo doesn't have this out of the box. You'd need to wire up `eslint-plugin-boundaries` yourself, which is doable but more manual.
That said — if your team is under 10 people and you have fewer than 8 packages, Turborepo's simplicity is genuinely appealing. We just weren't in that situation.
What We'd Do Differently
We should have migrated sooner. We spent a year "planning" the migration while continuing to suffer through polyrepo pain. The actual migration was less scary than the anticipation. Also, we initially tried to preserve every repo's full git history using `git subtree`. Don't do this. It made the initial clone enormous and the history was nearly unusable anyway. We ended up doing a clean start with a single "initial migration" commit per project and archiving the old repos as read-only. Nobody ever went back to look at the old history.
2. Package Management: The Invisible Landmine
This one is less glamorous but has caused us more production incidents than any framework bug.
The Phantom Dependency
Early 2024. A new developer joined the team, cloned the repo, ran `npm install`, and immediately hit a build error: `Cannot find module 'date-fns'`. But the app was running fine in production. It was running fine on every other developer's machine. After about an hour of debugging, we figured it out. One of our developers had written a utility that imported `date-fns` directly:
// libs/shared/utils/src/format-date.ts
import { format } from 'date-fns'; // <-- never added to package.json

export function formatDate(date: Date): string {
  return format(date, 'yyyy-MM-dd');
}
This worked because another package in our dependency tree — I think it was `react-datepicker` — depended on `date-fns`. npm's hoisting algorithm had placed `date-fns` in the root `node_modules`, making it importable from anywhere. The developer who wrote `format-date.ts` never realized they were importing a package they hadn't declared. When the new hire installed, a slightly different resolution order meant `date-fns` didn't get hoisted to the root. Build failed. This is called a phantom dependency, and it's one of the most insidious bugs in the Node ecosystem because it works silently for months until it doesn't.
The Version Doppelgänger
A second, nastier variant hit us a few months later. We had two libraries that both depended on `lodash`, but at slightly different semver ranges. npm's deduplication decided to install two copies — one at the root, one nested inside one of the libraries. The result? Our bundle contained two full copies of lodash. We didn't notice until a performance audit revealed our vendor chunk had ballooned by 70KB. The bundle analyzer output looked like this:
vendor.js (parsed)
├── lodash@4.17.21 72.5 KB ← from root node_modules
├── lodash@4.17.19 71.8 KB ← nested in node_modules/some-lib/node_modules
├── react-dom 128.3 KB
└── ...

Why We Moved to pnpm
After these incidents, we evaluated our options:
| Feature | npm | yarn | yarn berry (PnP) | pnpm |
|---|---|---|---|---|
| Phantom dependency protection | No | No | Yes (strict) | Yes (strict) |
| Disk efficiency | Poor | Poor | Good | Excellent |
| Monorepo workspace support | Basic | Basic | Good | Excellent |
| Migration effort from npm | - | Low | High (PnP breaks many packages) | Low-Medium |
| Ecosystem compatibility | Baseline | High | Painful | High |
We tried Yarn Berry with Plug'n'Play first, actually. It's technically the most correct solution — it eliminates `node_modules` entirely. But the migration was brutal. About 30% of our dependencies didn't work with PnP out of the box. We spent two weeks patching things with `.yarnrc.yml` `packageExtensions` before giving up.
pnpm was the pragmatic choice. The migration was straightforward:
# Step 1: Install pnpm
corepack enable
corepack prepare pnpm@latest --activate
# Step 2: Delete existing node_modules and lockfile
rm -rf node_modules package-lock.json
# Step 3: Install — pnpm reads your existing package.json as-is
pnpm install
The first `pnpm install` immediately caught three phantom dependencies we didn't know about. We added them to the correct `package.json` files, and that was essentially the migration.
Our `.npmrc` for the monorepo:
# .npmrc
shamefully-hoist=false # Keep it strict. Don't hoist.
strict-peer-dependencies=false # We relaxed this — too many libs have sloppy peer deps
auto-install-peers=true # Let pnpm handle peer dep installation
link-workspace-packages=true # Workspace packages resolve to local code, not registry
# Private registry config (enterprise reality)
@our-company:registry=https://npm.internal.company.com/
//npm.internal.company.com/:_authToken=${NPM_TOKEN}
One gotcha we hit: some build scripts assumed a flat `node_modules` structure and used paths like `./node_modules/.bin/webpack`. With pnpm's symlinked structure, those paths don't always resolve the same way. We had to update about a dozen scripts to use `pnpm exec webpack` instead. Minor, but it tripped up a few people during the first week.
What We'd Do Differently
Start with pnpm from day one. There's no good reason to use npm in a monorepo in 2026. The migration cost is low, and the protection you get from strict dependency resolution pays for itself the first time it catches a phantom dependency before it reaches production.
3. Directory Architecture: The Feature-Sliced Approach
Here's a directory structure I've seen in probably 80% of the React and Angular codebases I've reviewed:
src/
├── components/
│ ├── Button.tsx
│ ├── Modal.tsx
│ ├── CreditCardForm.tsx
│ ├── UserAvatar.tsx
│ ├── InvoiceTable.tsx
│ └── ... (147 more files)
├── hooks/
│ ├── useAuth.ts
│ ├── useBilling.ts
│ ├── useUserProfile.ts
│ └── ...
├── services/
│ ├── authApi.ts
│ ├── billingApi.ts
│ └── ...
├── utils/
│ └── ... (the junk drawer)
└── views/
├── Dashboard.tsx
├── BillingPage.tsx
└── ...
This structure works fine when you have 10-15 files. It becomes a navigation nightmare around 50 files. By 200 files, developers spend more time finding code than writing it.
The Real Problem
A product manager walks over and says, "We need to completely rework the billing dashboard." In the structure above, here's what you touch:
- `views/BillingPage.tsx`
- `components/CreditCardForm.tsx`
- `components/InvoiceTable.tsx`
- `hooks/useBilling.ts`
- `services/billingApi.ts`
- `utils/formatCurrency.ts`
Six files across five different directories. Meanwhile, another developer working on user profiles is also editing files in `components/`, `hooks/`, and `services/`. You're stepping on each other's toes constantly. Git merge conflicts spike. Code review becomes harder because a PR touching `components/` could be about anything.
What We Moved To
We adopted a feature-sliced structure. The core idea: code that changes together lives together.
src/
├── app/ # App shell, routing, global providers
│ ├── app.tsx
│ ├── routes.tsx
│ └── providers.tsx
│
├── features/ # Business domains
│ ├── billing/
│ │ ├── api/
│ │ │ ├── billing.api.ts # API calls for billing
│ │ │ └── billing.schemas.ts # Zod schemas for billing responses
│ │ ├── ui/
│ │ │ ├── BillingDashboard.tsx
│ │ │ ├── CreditCardForm.tsx
│ │ │ └── InvoiceTable.tsx
│ │ ├── hooks/
│ │ │ ├── useBillingData.ts
│ │ │ └── usePaymentMethod.ts
│ │ ├── types.ts # Billing-specific types
│ │ └── index.ts # Public API — the boundary
│ │
│ ├── user-profile/
│ │ ├── api/
│ │ ├── ui/
│ │ ├── hooks/
│ │ └── index.ts
│ │
│ └── auth/
│ ├── api/
│ ├── ui/
│ ├── hooks/
│ └── index.ts
│
├── shared/ # Truly shared, domain-agnostic code
│ ├── ui/ # Design system primitives (Button, Modal, Input)
│ ├── lib/ # Pure utility functions
│ └── config/ # Environment config, constants
│
└── types/ # Global type definitions
The `index.ts` in each feature is the key. It's the public API:
// features/billing/index.ts — the public contract
// Only these exports are available to other features.
// Everything else is internal implementation detail.
export { BillingDashboard } from './ui/BillingDashboard';
export { useBillingData } from './hooks/useBillingData';
export type { Invoice, PaymentMethod } from './types';
// Notice: CreditCardForm is NOT exported.
// It's an internal component used only within the billing feature.
// No other feature should ever import it directly.
We enforce this with ESLint:
// .eslintrc.json — boundary enforcement
{
"plugins": ["boundaries"],
"settings": {
"boundaries/elements": [
{ "type": "app", "pattern": "src/app/*" },
{ "type": "feature", "pattern": "src/features/*", "capture": ["featureName"] },
{ "type": "shared", "pattern": "src/shared/*" }
]
},
"rules": {
"boundaries/element-types": [
"error",
{
"default": "disallow",
"rules": [
{
"from": "app",
"allow": ["feature", "shared"]
},
{
"from": "feature",
"allow": ["shared", ["feature", { "featureName": "${from.featureName}" }]]
},
{
"from": "shared",
"allow": ["shared"]
}
]
}
],
"boundaries/no-private": [
"error",
{
"allowUncles": false
}
]
}
}
With this config, if someone on the user-profile team tries to do `import { CreditCardForm } from '../../billing/ui/CreditCardForm'`, ESLint throws a hard error. They have to go through the barrel file, and if it's not exported, they need to talk to the billing team first.
The Friction We Hit
It wasn't all smooth. Two real problems came up:
Problem 1: Shared logic between features.
We had a `formatCurrency` function that both billing and the admin reporting feature needed. Where does it live? We initially put it in `features/billing/` since billing "owned" currency formatting. But then the reporting team was importing from billing, which felt wrong — reporting shouldn't depend on billing.
The answer: if two or more features need something, it moves to `shared/lib/`. We added a rule to our PR review checklist: "If this utility is used by more than one feature, it belongs in shared." Simple, but it needed to be explicit.
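For the record, the utility itself was trivial. Something like the following is all `shared/lib/format-currency.ts` needs; this is a hypothetical reconstruction (the exact signature is ours to invent), leaning on `Intl.NumberFormat` rather than hand-rolled formatting:

```typescript
// shared/lib/format-currency.ts — hypothetical sketch of the shared utility.
// Intl.NumberFormat handles grouping, symbols, and locales for free.
export function formatCurrency(
  amount: number,
  currency = 'USD',
  locale = 'en-US'
): string {
  return new Intl.NumberFormat(locale, { style: 'currency', currency }).format(amount);
}

// formatCurrency(1234.5) // → "$1,234.50"
```

The interesting part was never the code; it was agreeing on where code like this lives.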
Problem 2: Barrel file performance. This one surprised us. In development mode, our bundler was importing every export from a feature's `index.ts` even if we only used one. For large features with heavy dependencies, this slowed down hot module replacement noticeably.
The fix was configuring our bundler to treat barrel files correctly for tree-shaking, and being disciplined about not re-exporting heavy components unnecessarily:
// BAD — re-exports everything, including heavy chart libraries
export * from './ui/BillingDashboard';
export * from './ui/BillingAnalyticsChart'; // pulls in recharts (200KB)
export * from './ui/CreditCardForm';
// GOOD — explicit, minimal exports
export { BillingDashboard } from './ui/BillingDashboard';
export { useBillingData } from './hooks/useBillingData';
// BillingAnalyticsChart is lazy-loaded within BillingDashboard,
// so it doesn't need to be in the public API.
What We'd Do Differently
We should have written an Architecture Decision Record (ADR) explaining the feature-sliced structure before we started. For the first few months, every new developer asked the same questions: "Where do I put this?" "Can I import from another feature?" "What goes in shared vs. features?" We ended up writing the ADR retroactively, but it would have saved dozens of Slack conversations if we'd done it upfront.
4. TypeScript Governance: Types Are Not Enough
I'm not going to argue for TypeScript over JavaScript. If you're building anything with more than two developers, TypeScript is table stakes. But I want to talk about the false confidence that comes with it.
The Friday Deploy
We had a user profile page that displayed avatars. The TypeScript interface was clean:
interface User {
id: string;
name: string;
email: string;
avatarUrl: string; // <-- looks fine, right?
}
We'd been using this interface for months. Everything compiled. Tests passed. We deployed on a Friday afternoon (first mistake).
Over the weekend, the backend team shipped a database migration. New users who hadn't uploaded a profile photo would now return `null` for `avatarUrl` instead of a default placeholder URL. They updated their API docs. They did not tell us. New user registrations were resulting in a white screen. The component rendering the avatar was doing this:
// This compiled perfectly. TypeScript saw avatarUrl as `string`.
// At runtime, it was null for new users.
<img src={user.avatarUrl} alt={user.name} />
// Deeper in the component, we had:
const isGravatar = user.avatarUrl.includes('gravatar.com');
// TypeError: Cannot read properties of null (reading 'includes')
// React error boundary catches this, entire profile section unmounts.
TypeScript couldn't catch this because TypeScript doesn't exist at runtime. It's stripped out during compilation. The type said `string`, the network sent `null`, and our code trusted the type.
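The whole failure fits in a few runnable lines. Nothing below is from our codebase; it just demonstrates that a type assertion on parsed JSON is completely unchecked:

```typescript
interface User {
  avatarUrl: string; // the compiler believes this unconditionally
}

// JSON.parse returns `any`, so this cast is exactly the unchecked trust
// a typed fetch wrapper performs on every response body.
const user = JSON.parse('{"avatarUrl": null}') as User;

function isGravatar(u: User): boolean {
  // Compiles cleanly; throws a TypeError when avatarUrl is actually null.
  return u.avatarUrl.includes('gravatar.com');
}
```

The types vanish at build time; the only thing standing between the network and your components is whatever validation you run yourself.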
The Fix: Runtime Validation at Every Boundary
We now have a hard rule: never trust data that crosses a network boundary. Every API response gets validated at runtime before it enters our application state.
We use Zod for this. Here's what our actual API layer looks like:
// features/user-profile/api/user.schemas.ts
import { z } from 'zod';
export const UserSchema = z.object({
id: z.string().uuid(),
name: z.string().min(1),
email: z.string().email(),
avatarUrl: z.string().url().nullable(), // Now we explicitly handle null
role: z.enum(['admin', 'member', 'viewer']),
createdAt: z.string().datetime(),
preferences: z
.object({
theme: z.enum(['light', 'dark', 'system']).default('system'),
language: z.string().default('en')
})
.optional()
});
// The TypeScript type is inferred from the schema — single source of truth.
// No more maintaining a separate interface that can drift from reality.
export type User = z.infer<typeof UserSchema>;
// For list endpoints, we validate the entire response shape
export const UserListResponseSchema = z.object({
data: z.array(UserSchema),
pagination: z.object({
page: z.number().int().positive(),
pageSize: z.number().int().positive(),
total: z.number().int().nonnegative()
})
});
export type UserListResponse = z.infer<typeof UserListResponseSchema>;
// features/user-profile/api/user.api.ts
import { UserSchema, UserListResponseSchema } from './user.schemas';
import type { User, UserListResponse } from './user.schemas';
import { apiClient, ApiContractError } from '@/shared/lib/api-client';
import { reportContractViolation } from '@/shared/lib/monitoring'; // path assumed — wherever your reporting helper lives
export async function fetchUser(id: string): Promise<User> {
const response = await apiClient.get(`/api/users/${id}`);
// This is where the magic happens.
// If the backend sends { avatarUrl: null }, Zod accepts it (we declared nullable).
// If the backend sends { avatarUrl: 42 }, Zod throws a ZodError with a clear message:
// "Expected string, received number at path: avatarUrl"
// This error gets caught by our error boundary BEFORE bad data
// reaches any UI component.
const result = UserSchema.safeParse(response.data);
if (!result.success) {
// Log the validation failure — this is an early warning that
// the backend contract has changed
console.error('[API Contract Violation] fetchUser:', result.error.format());
// Report to monitoring (Datadog, Sentry, etc.)
reportContractViolation('fetchUser', result.error);
throw new ApiContractError('User response validation failed', result.error);
}
return result.data;
}
export async function fetchUsers(page: number): Promise<UserListResponse> {
const response = await apiClient.get(`/api/users`, { params: { page } });
const result = UserListResponseSchema.safeParse(response.data);
if (!result.success) {
reportContractViolation('fetchUsers', result.error);
throw new ApiContractError('UserList response validation failed', result.error);
}
return result.data;
}
// shared/lib/api-client.ts — centralized error handling
import axios from 'axios';
import { z } from 'zod'; // needed for the z.ZodError type below
export const apiClient = axios.create({
baseURL: import.meta.env.VITE_API_URL,
timeout: 10_000
});
// Custom error class for contract violations
export class ApiContractError extends Error {
constructor(
message: string,
public zodError: z.ZodError
) {
super(message);
this.name = 'ApiContractError';
}
}
// In our React error boundary, we handle this specifically:
// if (error instanceof ApiContractError) {
// show "Something went wrong, please refresh" instead of white screen
// fire alert to on-call channel
// }
The Pushback (And How We Handled It)
When we introduced this pattern, about half the team pushed back. The main complaints:
- "This is so much boilerplate for every endpoint."
- "We already have TypeScript types, why do we need schemas too?"
- "What if the schema is wrong and rejects valid data?"
The third concern was legitimate. In the first month, we had two incidents where a Zod schema was too strict. One was a field we'd marked as `z.string().email()` but the backend was sending some legacy accounts with malformed email addresses that didn't pass Zod's email validation. Users with those accounts couldn't load their profiles.
We learned to be pragmatic about strictness:
// Too strict — broke for legacy data
email: z.string().email(),
// Pragmatic — validate it's a string, handle email format in the UI layer
email: z.string().min(1),
For the boilerplate concern, we wrote a thin wrapper:
// shared/lib/validated-fetch.ts
import { z } from 'zod';
import { apiClient, ApiContractError } from './api-client';
import { reportContractViolation } from './monitoring'; // path assumed — wherever your reporting helper lives
export async function validatedGet<T extends z.ZodType>(
url: string,
schema: T,
params?: Record<string, unknown>
): Promise<z.infer<T>> {
const response = await apiClient.get(url, { params });
const result = schema.safeParse(response.data);
if (!result.success) {
reportContractViolation(url, result.error);
throw new ApiContractError(`Validation failed for GET ${url}`, result.error);
}
return result.data;
}
// Usage becomes much cleaner:
export const fetchUser = (id: string) => validatedGet(`/api/users/${id}`, UserSchema);
export const fetchUsers = (page: number) => validatedGet('/api/users', UserListResponseSchema, { page });
Why Zod Over Alternatives
We evaluated a few options:
| Feature | Zod | io-ts | Valibot | Superstruct |
|---|---|---|---|---|
| Bundle size | ~13KB | ~7KB (+ fp-ts ~30KB) | ~1KB | ~4KB |
| TypeScript inference | Excellent | Good (verbose) | Excellent | Good |
| Learning curve | Low | High (FP style) | Low | Low |
| Ecosystem / community | Huge | Niche | Growing | Small |
| Error messages | Clear, structured | Cryptic without custom reporters | Clear | Decent |
io-ts was technically impressive but required fp-ts, which meant teaching the team functional programming concepts just to validate API responses. Not worth it.
Valibot is genuinely interesting if bundle size is your primary concern — it's tree-shakeable down to almost nothing. For a new project in 2026, I'd seriously consider it. We stuck with Zod because the ecosystem is massive (tRPC, React Hook Form, and dozens of other libraries integrate with it natively) and the team already knew it.
What We'd Do Differently
We should have set up automated schema generation from the backend's OpenAPI spec. We spent too many hours manually writing Zod schemas that mirrored what the backend already had documented. Tools like `openapi-zod-client` exist now and would have saved us significant effort and reduced the risk of schema drift.
Tying It Together
None of this is glamorous. Nobody's going to give a conference talk about switching to pnpm or reorganizing folders. But these foundational decisions are the difference between a codebase that scales gracefully and one that slowly suffocates the team.
The pattern is the same across all four areas: be explicit, be strict, and don't trust defaults.
- Don't trust that separate repos will stay in sync. Use a monorepo with enforced boundaries.
- Don't trust that npm will resolve dependencies correctly. Use pnpm's strict isolation.
- Don't trust that developers will intuitively organize code well. Enforce feature boundaries with linting rules.
- Don't trust that TypeScript types match runtime reality. Validate at every network boundary.
Every one of these rules exists because we learned the hard way what happens without them. Hopefully you won't have to.
Next up — Part 2: The Build Pipeline & Developer Experience. We'll get into why we finally moved off Webpack, how we cut CI from 30 minutes to under 5, and the exact Git hook setup that catches problems before code ever leaves a developer's machine.
About Satish Pednekar
Technical Consultant | Blogger @ www.frontendpedia.com
Let's connect: https://www.linkedin.com/in/satishpednekar

