Ismat Samadov

Microservices Ruined My Startup (Monolith Was the Answer)

We had 4 engineers and 11 microservices. Here's how going back to a monolith cut our costs 95% and quadrupled our shipping speed.

Architecture · Microservices · Startup · Backend · DevOps




We had four engineers, two paying customers, and eleven microservices. Let that sink in. Eleven services, each with its own repo, its own deployment pipeline, its own set of environment variables, its own way of failing at 3 AM. We hadn't found product-market fit yet, but we had a Kubernetes cluster that would've made a Series B company jealous.

It took us six months to admit what should've been obvious from day one: microservices were killing us. Not slowly, either. Fast. The kind of fast where you watch your runway shrink while your engineers spend three days debugging a message that got lost between Service A and Service B.

This is the story of how we ripped it all out and went back to a monolith. And why it was the best engineering decision we ever made.


How We Got Here

Like a lot of startups in 2024, we drank the Kool-Aid. Every tech talk, every blog post, every "how we scale" article from FAANG companies pointed in one direction: microservices. Netflix does it. Uber does it. Amazon does it. If you're not doing it, you're building a "big ball of mud."

So we did it. From day one.

Our CTO at the time — smart guy, came from a large enterprise — drew the architecture diagram on a whiteboard. Auth service. User service. Notification service. Payment service. Analytics service. API gateway. Message queue. The works.

It looked beautiful on that whiteboard. Clean boxes, clean arrows, clean separation of concerns. We high-fived and started building.

That was the last time anything felt clean.

The first warning signs

Within two months, things started breaking. Not the code — the process. Our notification service needed user data, so it called the user service. The billing service needed auth context, so it called the auth service. The analytics service needed data from everything, so it called everyone.

We'd drawn clean boxes on the whiteboard, but the arrows between them were multiplying. Each arrow was a network call. Each network call was a potential failure. Each failure needed retry logic, circuit breakers, timeout handling.
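
Every one of those arrows needed the same defensive plumbing. Here's a rough sketch of the kind of retry-with-timeout wrapper that ends up around every inter-service call — the names and defaults are illustrative, not our actual code, but the shape is what any team in this situation converges on:

```typescript
// Illustrative sketch: the defensive wrapper every inter-service call needs.
// A per-attempt timeout plus exponential backoff between retries.
// In a monolith, the equivalent "call" is a plain function invocation
// and none of this code exists.

async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer!);
  }
}

async function callWithRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, timeoutMs = 2000, baseDelayMs = 100 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err;
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Multiply roughly thirty lines of this by every arrow on the whiteboard, and the "clean" diagram quietly becomes hundreds of lines of failure handling that a monolith never needs.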

I remember the exact moment I knew we were in trouble. Our payment service went down for 20 minutes because the notification service was slow. Why? Because the notification service was waiting on the user service, which was waiting on a database connection pool that was exhausted because the analytics service was running a heavy query.

A cascade failure. In production. Because we'd turned function calls into HTTP requests.


The Real Cost (In Dollars and Sanity)

Here's what nobody told us about microservices at our scale.

Infrastructure costs exploded. A small Kubernetes production cluster runs $3,500 to $11,000 per month — roughly $42,000 to $132,000 per year. For a four-person startup burning through a seed round, that's insane. Our equivalent monolith ran on a single $50/month VPS during the prototype phase. We went from $50/month to $4,200/month in infrastructure alone before we had meaningful revenue.

Platform engineering tax. Microservices typically require 2-4 platform engineers on top of your product team, adding $140,000 to $360,000 in annual salary costs. We didn't have platform engineers. We had product engineers pretending to be platform engineers. Half their week went to Kubernetes YAML files, Helm charts, and service mesh configurations instead of building features.

Observability overhead. Microservices observability, orchestration, and service mesh tooling typically consume 30-40% of total infrastructure budget, compared to 10-15% for equivalent monolith tooling. We were spending more on monitoring our services than on the services themselves.

Here's a rough breakdown of what our monthly costs looked like:

Category | Microservices | Monolith (After)
Cloud infrastructure | $4,200 | $150
CI/CD pipelines (11 repos) | $380 | $40
Monitoring/observability | $520 | $85
Message queue (managed) | $340 | $0
Platform engineering time (% of team) | 50% | 5%
Total monthly infra | $5,440 | $275

That's a 95% reduction. Not theoretical. That's our actual bill from before and after the migration.


The Complexity Tax Nobody Warned Us About

Cost was just the start. The real damage was velocity.

Onboarding took forever. When we hired our fifth engineer, it took him three weeks to understand how the eleven services talked to each other. Three weeks before he could make a meaningful contribution. With a monolith? He would've had a PR up on day two.

Debugging became archaeology. A bug that would take 20 minutes to find in a monolith took us half a day across services. Distributed tracing helped, but it didn't solve the fundamental problem: the logic was scattered across repos, and you needed to hold the entire system in your head to understand why something broke.

Deployments were a ceremony. Each service had its own deployment pipeline. Coordinating a feature that touched three services meant three PRs, three reviews, three deployments, and praying the order was right. A wrong deployment order once took our entire payment flow down for two hours.

Nearly 65% of companies moving to microservices encounter unexpected complexity and delays. We were solidly in that 65%.

The data consistency nightmare. This was the one that almost killed us. Two services writing to related data through separate databases. We chose eventual consistency because "that's what microservices do." But eventual consistency means your user can create an order and then not see it for 3 seconds. Or worse — see a partial order. Or worst of all — see two versions of the same order because a message got replayed.

We spent three weeks building a saga pattern for our checkout flow. Three weeks. For something that would've been a single database transaction in a monolith.

// The saga pattern we built (simplified)
// This replaced what would have been a 10-line
// database transaction in a monolith

async function checkoutSaga(orderId: string) {
  try {
    await reserveInventory(orderId)    // Service 1
    await processPayment(orderId)       // Service 2
    await createShipment(orderId)       // Service 3
    await sendConfirmation(orderId)     // Service 4
  } catch (error) {
    // Compensating transactions — undo everything
    await cancelShipment(orderId)
    await refundPayment(orderId)
    await releaseInventory(orderId)
    // Hope nothing fails during compensation...
  }
}

What happens when a compensating transaction fails? You get orphaned state scattered across four services. We had a Slack channel called #data-inconsistencies. It was the most active channel in our workspace.
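
For contrast, here's a sketch of what that same checkout looks like when everything lives in one process against one database: a single transaction, with rollback handled for you instead of by hand-written compensations. The in-memory `db.transaction` below is a stand-in that just simulates the all-or-nothing semantics — in a real monolith this would be your ORM's transaction API (Drizzle, Prisma, or raw SQL):

```typescript
// Sketch: a tiny in-memory stand-in for a database transaction, to show
// the semantics the saga above was re-implementing by hand. Mutations
// happen on a copy; the copy is committed only if the callback succeeds.

type State = { inventory: number; payments: string[]; shipments: string[] };

const db = {
  state: { inventory: 10, payments: [], shipments: [] } as State,
  async transaction<T>(fn: (tx: State) => Promise<T>): Promise<T> {
    const tx: State = structuredClone(this.state);
    const result = await fn(tx); // any throw aborts: committed state untouched
    this.state = tx;             // commit
    return result;
  },
};

async function checkout(orderId: string, shouldFail = false) {
  return db.transaction(async (tx) => {
    tx.inventory -= 1;          // reserve inventory
    tx.payments.push(orderId);  // process payment
    if (shouldFail) throw new Error("payment provider rejected card");
    tx.shipments.push(orderId); // create shipment
    return "confirmed";
  });
}
```

If anything throws, nothing commits. No compensating transactions, no orphaned state, no #data-inconsistencies channel.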


We Weren't Alone

This isn't just an "us" problem. The industry is going through a collective reckoning.

Amazon Prime Video famously moved their video quality monitoring from microservices to a monolith and cut their AWS bill by 90%. Let me repeat that: Amazon — the company that sells cloud services — admitted that microservices were costing them too much for that use case.

Segment went through exactly what we did. They broke up their monolith into microservices, and after three years, the costs were too high and velocity had plummeted. Their defect rate exploded. They migrated back to a monolith called Centrifuge that now handles billions of messages per day. Their engineers could finally build new products again instead of babysitting infrastructure.

Shopify runs one of the world's largest e-commerce platforms on a modular monolith. Over 2.8 million lines of Ruby code, 500,000 commits, and they handle 30TB of data per minute. Not microservices. A monolith. They enforce boundaries with an internal tool called Packwerk — clean modules, single deployment.

And according to a 2025 CNCF survey, 42% of organizations that initially adopted microservices have consolidated at least some services back into larger deployable units. The primary drivers? Debugging complexity, operational overhead, and network latency issues.

The monolith backlash isn't fringe. It's mainstream.


The Migration: How We Went Back

Deciding to go back was the hard part. Doing it was surprisingly straightforward.

Step 1: Map the dependency graph

We diagrammed every service-to-service call. Turns out, eight of our eleven services were tightly coupled anyway. They shared the same database (through APIs, but still the same data), deployed together, and couldn't function independently. They were a distributed monolith pretending to be microservices.

That's the dirty secret of most startup microservices architectures. You don't actually have independent services. You have a monolith with network calls instead of function calls.
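
If you want to run the same exercise on your own architecture, the mapping itself is a trivial script: list every service-to-service call (from gateway logs or service configs), then look at how coupled each service is to the rest. A rough sketch — the service names and the threshold here are illustrative:

```typescript
// Sketch: spot services too entangled to live apart. Each edge is an
// "A calls B" pair. A service coupled to most of the system can't be
// deployed independently — a strong distributed-monolith smell.

const calls: Array<[string, string]> = [
  ["notifications", "users"],
  ["billing", "auth"],
  ["billing", "users"],
  ["analytics", "users"],
  ["analytics", "billing"],
  ["users", "auth"],
];

// Count inbound + outbound edges per service.
function couplingDegree(edges: Array<[string, string]>): Map<string, number> {
  const degree = new Map<string, number>();
  for (const [from, to] of edges) {
    degree.set(from, (degree.get(from) ?? 0) + 1);
    degree.set(to, (degree.get(to) ?? 0) + 1);
  }
  return degree;
}

const degree = couplingDegree(calls);
// Anything touching three or more other edges in a system this small
// is a merge candidate.
const entangled = [...degree.entries()]
  .filter(([, d]) => d >= 3)
  .map(([service]) => service);
```

In our case, eight of eleven services landed in the entangled set. That made the merge order obvious.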

Step 2: Pick the target

We chose a single Next.js + PostgreSQL app. One repo. One deployment. One database. We kept Drizzle ORM because it was already working well in some of our services.

Step 3: Migrate service by service

We moved one service at a time into the monolith, starting with the lowest-risk ones. Each migration followed the same pattern:

  1. Copy the business logic into the monolith as a module
  2. Write integration tests covering the same behavior
  3. Route traffic to the monolith version
  4. Verify for a week
  5. Kill the old service

// Before: HTTP call between services
const user = await fetch('http://user-service:3001/api/users/123')
const userData = await user.json()

// After: direct function call in monolith
import { getUserById } from '@/modules/users'
const userData = await getUserById(123)

That's it. A network call became a function call. Milliseconds of latency became microseconds. A potential network failure point became a guaranteed in-process call.

Step 4: Delete the infrastructure

This was the most satisfying day of my career. We tore down the Kubernetes cluster, deleted eleven CI/CD pipelines, removed the API gateway, shut down the message queue, and cancelled five monitoring subscriptions.

Our deployment went from "coordinate eleven services" to:

git push origin main
# Vercel auto-deploys in ~45 seconds

Done. One push. One deployment. No coordination needed.


The Results (Six Months Later)

The numbers speak for themselves.

Metric | Before (Microservices) | After (Monolith)
Deploy frequency | 2-3 per week (per service) | 5-10 per day
Time to deploy | 15-25 minutes | 45 seconds
Mean time to recovery | 2-4 hours | 15 minutes
Onboarding time (new dev) | 3 weeks | 3 days
Infrastructure cost | $5,440/mo | $275/mo
Features shipped per sprint | 2-3 | 8-12
Production incidents (monthly) | 6-8 | 1-2

The features-per-sprint number is what matters most. We went from shipping 2-3 features per sprint to 8-12. Not because we hired more people. Not because we worked harder. We just stopped fighting our own architecture.


When Microservices Actually Make Sense

I'm not saying microservices are always wrong. They're wrong for most startups, most of the time. But there are real use cases.

You need microservices when:

  • Your team has 50 or more engineers and coordination overhead becomes the bottleneck
  • Different parts of your system have genuinely different scaling requirements (one service handles 100x the traffic of another)
  • You need different tech stacks for different domains (ML pipeline in Python, real-time service in Go, frontend in TypeScript)
  • Teams need to deploy independently because merge conflicts and release coordination are actually slowing you down
  • You have dedicated platform/SRE engineers to manage the infrastructure

You don't need microservices when:

  • Your team is under 20 engineers
  • You haven't found product-market fit yet
  • You're optimizing for developer velocity, not organizational scaling
  • Your services share a database
  • You can't name your services without saying "it depends on what the other service does"

Here's a framework I wish someone had given me:

Team Size | Revenue Stage | Architecture
1-5 engineers | Pre-revenue to seed | Simple monolith
5-15 engineers | Seed to Series A | Modular monolith
15-50 engineers | Series A to B | Modular monolith with 1-2 extracted services
50+ engineers | Series B+ | Consider microservices for specific domains

The Modular Monolith: The Real Answer

The false choice is "monolith vs. microservices." The real answer for most teams is a modular monolith.

Think of it like an apartment building. A traditional monolith is a studio apartment — everything in one room. Microservices are separate houses across town — independent but expensive and hard to coordinate. A modular monolith is a well-designed apartment building — separate units with clear walls, but shared plumbing, one foundation, one address.

Shopify does this with Packwerk. It enforces module boundaries at the code level. Module A can't reach into Module B's internals. You get the organizational clarity of microservices with the operational simplicity of a monolith.

Here's what that looks like in practice:

src/
  modules/
    auth/
      routes.ts
      service.ts
      repository.ts
      types.ts
    billing/
      routes.ts
      service.ts
      repository.ts
      types.ts
    notifications/
      routes.ts
      service.ts
      repository.ts
      types.ts
  shared/
    database.ts
    middleware.ts
    logger.ts

Each module owns its domain. Modules communicate through well-defined interfaces, not HTTP calls. You deploy the whole thing as one unit. And when (if) you eventually need to extract a module into its own service, the boundaries are already clean.

// Module boundary: billing talks to auth through a defined interface
// NOT by importing auth's internal repository

// Good: defined interface
import { validateUserAccess } from '@/modules/auth/api'

// Bad: reaching into auth's internals
import { authRepository } from '@/modules/auth/repository'
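
Packwerk is Ruby-specific, but the same boundary can be enforced in a TypeScript monolith with lint rules. One option — not the only one — is eslint-plugin-import's `no-restricted-paths` rule, which fails CI when one module imports another's internals. A sketch matching the layout above (the `except` carve-out is the module's public API file):

```javascript
// .eslintrc.cjs (sketch) — assumes eslint-plugin-import is installed.
// Each zone says: code in `target` may not import from `from`,
// except through the listed public entry point.
module.exports = {
  plugins: ["import"],
  rules: {
    "import/no-restricted-paths": [
      "error",
      {
        zones: [
          // billing may only touch auth through its public api.ts
          {
            target: "./src/modules/billing",
            from: "./src/modules/auth",
            except: ["./api.ts"],
          },
        ],
      },
    ],
  },
};
```

With one zone per module pair, the "bad" import above becomes a build failure instead of a code-review argument.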

DHH from Basecamp has been preaching this for years. Basecamp launched in 2004 on a shared server and ran with a single box until they had thousands of paying users. One "majestic monolith," as he calls it.

Kelsey Hightower — one of the most respected voices in cloud infrastructure — put it even more directly: "Monoliths are the future because the problem people are trying to solve with microservices doesn't really line up with reality."


How to Decide: A Practical Checklist

Before you adopt microservices, answer these questions honestly:

  1. Do you have more than 50 engineers? If no, you probably don't need them.
  2. Are merge conflicts and deployment coordination actually slowing you down? Not theoretically — actually?
  3. Do you have dedicated DevOps/platform engineers? At least 1 per 10 services?
  4. Can each service be deployed, tested, and rolled back independently? If services always deploy together, you have a distributed monolith.
  5. Is your team spending more time on infrastructure than product? If the answer is yes and you already have microservices, that's your sign.

If you answered "no" to three or more, microservices will slow you down.


The Startup Graveyard of Over-Engineering

I've talked to dozens of startup founders since our migration. The pattern is eerily consistent.

Founders with enterprise backgrounds bring microservices because "that's how you build real software." Junior developers push for microservices because it looks impressive on their resume. CTOs choose microservices because they're afraid of being judged for building a "simple" monolith.

Nobody chooses microservices because their startup actually needs them. They choose them because of ego, fear, or cargo-culting what Netflix does.

Here's the thing about Netflix: they have over 2,000 engineers. They process 700 million hours of streaming per week. They operate in 190 countries. Their problems are not your problems.

For roughly 95% of startups, microservices are not a necessity at the beginning. Most can and should start with a monolith. The ones that don't learn this lesson end up burning runway on infrastructure instead of building the product that will save them.

I've started asking a simple question in every architecture discussion: "What problem are we solving that a modular monolith can't?" If the answer involves the words "scale" or "best practice" without concrete numbers, you're cargo-culting.


The Distributed Monolith Trap

There's something worse than a monolith. There's something worse than microservices. It's the distributed monolith — and it's what most startups actually build when they think they're doing microservices.

A distributed monolith has all the operational complexity of microservices with none of the benefits. You can spot it by these symptoms:

  • Shared database: Multiple services reading from or writing to the same tables
  • Synchronized deployments: You can't deploy Service A without also deploying Service B
  • Shared data models: Services pass around the same DTOs or have coupled schemas
  • Cross-service transactions: Business logic that requires coordinating multiple services to complete

If any of these sound familiar, congratulations — you don't have microservices. You have a monolith that's harder to debug.

We had all four. Our eleven "microservices" were really one application distributed across eleven network boundaries for no reason. Every feature required changes in at least three services. Every deployment was coordinated. We had the worst of both worlds.

The honest test is simple: can you deploy and roll back each service completely independently? Can one service be down without affecting others? If no, you have a distributed monolith. And you should collapse it back into an actual monolith before the complexity buries you.


What I Actually Think

Microservices are a scaling solution masquerading as an architecture best practice. They solve organizational problems — too many engineers stepping on each other's toes — not technical problems. And if you don't have that organizational problem yet, adopting microservices creates a dozen technical problems to solve a problem you don't have.

We lost four months of product development and burned through $65,000 in unnecessary infrastructure costs before we admitted the mistake. That's four months we should've spent talking to customers, iterating on our product, and finding product-market fit.

The monolith we built in two weeks (yes, two weeks to migrate everything back) has served us through 10x traffic growth, three pivots, and two new product lines. It deploys in 45 seconds. New engineers contribute on day three. Our AWS bill is smaller than our Slack bill.

If you're a startup founder reading this: build a monolith. Make it modular. Ship fast. Find your customers. Worry about scaling problems when you actually have scaling problems.

The graveyard of startups that died from "not enough microservices" is empty. The graveyard of startups that died from shipping too slowly is overflowing.

Build the boring thing. It works.


Sources

  1. The Hidden Costs of Kubernetes for Small Teams
  2. Microservices vs Monoliths in 2026 — Java Code Geeks
  3. The True Cost of Microservices — SoftwareSeni
  4. Why Microservices Could Be Your First Big Startup Misstep — Kitrum
  5. Prime Video Service Dumps Microservices, Cuts AWS Bill 90% — The Stack
  6. To Microservices and Back Again: Why Segment Went Back to a Monolith — InfoQ
  7. Under Deconstruction: The State of Shopify's Monolith — Shopify Engineering
  8. DHH on Basecamp's Shared Server Launch — X/Twitter
  9. When Should You Split Services — AKF Partners
  10. Microservices Are a Tax Your Startup Probably Can't Afford — Nexo
  11. The Architecture Decision That Saved Us $2M — Fullscale.io