I run two production apps on PostgreSQL. One of them is a job aggregator with 45 tables and 50 indexes. The other is this blog — the one you're reading right now. Both cost me $5/month on Neon. I haven't touched MongoDB, Redis, or Elasticsearch in over a year. I don't miss them.
That's not some minimalist flex. It's just what happened when I stopped adding databases "just in case" and started asking: can PostgreSQL already do this? The answer, almost every time, was yes.
And I'm not alone. The numbers back this up in a way that's hard to argue with.
The Triple Crown: Three Years Running
The Stack Overflow 2025 Developer Survey gave PostgreSQL something no other database has ever achieved: the triple crown for the third consecutive year. Most popular. Most loved. Most wanted.
Let's put real numbers on that.
55.6% of all developers now use PostgreSQL. That's up from 48.7% in 2024 — a roughly 7-point jump, the largest annual increase any database has ever seen in the survey's history. Among professional developers specifically, it's even higher: 58.2%, a 17.7-point lead over MySQL at 40.5%.
The "admiration" score (what Stack Overflow now calls the old "loved" metric) sits at 65.5%. The "desired" score (wanting to use it next year) is 46.5%. No other database comes close on any of these three metrics simultaneously. PostgreSQL didn't just win — it ran away with it.
Now, the DB-Engines ranking tells a slightly different story. As of March 2026, Oracle leads with a score of 1182, followed by MySQL (858), SQL Server (711), PostgreSQL (680), and MongoDB (379). PostgreSQL sits at #4. But here's the thing everyone ignores about DB-Engines: it measures total market presence, not momentum. Oracle's score includes decades of enterprise lock-in. PostgreSQL is the fastest-growing database on that list. Give it two more years.
So the question isn't "is PostgreSQL popular?" anymore. The question is: why are teams still running five different databases when one could probably handle it?
PostgreSQL as Six Databases in One
Here's the thing that changed how I think about database architecture. PostgreSQL isn't just a relational database anymore. Through its extension ecosystem — over 300 extensions and counting — it transforms into whatever you need.
Let me walk through each one.
1. The Relational Database (The One Everyone Knows)
This barely needs explaining. PostgreSQL has been a top-tier relational database for decades. ACID compliance, advanced indexing (B-tree, GiST, GIN, BRIN, hash), window functions, CTEs, materialized views, partitioning.
If you're building a standard web app — users table, orders table, products table, a bunch of JOINs — PostgreSQL has been the right choice for years. MySQL was the default in the PHP/WordPress era. That era is over. For new projects in 2026, PostgreSQL beats MySQL on standards compliance, JSON handling, extension support, and advanced indexing.
MySQL still works great for read-heavy CRUD apps and the WordPress ecosystem. But if you're starting fresh, there's no reason to pick MySQL over PostgreSQL. The nearly 18-point gap in the Stack Overflow survey reflects this.
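For flavor, here's the kind of query where those relational features earn their keep: a CTE feeding a window function. The `orders` table and its columns are hypothetical, purely to illustrate the shape.

```sql
-- Monthly revenue per customer with a running total.
-- The orders table (customer_id, amount, created_at) is hypothetical.
WITH monthly AS (
    SELECT customer_id,
           date_trunc('month', created_at) AS month,
           SUM(amount) AS revenue
    FROM orders
    GROUP BY customer_id, date_trunc('month', created_at)
)
SELECT customer_id,
       month,
       revenue,
       -- Window function: cumulative sum per customer, ordered by month
       SUM(revenue) OVER (PARTITION BY customer_id ORDER BY month) AS running_total
FROM monthly
ORDER BY customer_id, month;
```

Try expressing that cleanly in a document store and you'll see why "boring relational database" is a compliment.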
2. The Document Store (JSONB vs MongoDB)
This is the one that surprises people the most.
"But I need flexible schemas! I need to store JSON documents! That's why I use MongoDB!"
Sure. But PostgreSQL's JSONB type does document storage inside a relational model. You get the flexibility of schemaless documents AND the power of SQL. Here's what that looks like:
```sql
-- Create a table with a JSONB column
CREATE TABLE events (
    id SERIAL PRIMARY KEY,
    event_type TEXT NOT NULL,
    payload JSONB NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Insert a document
INSERT INTO events (event_type, payload)
VALUES ('page_view', '{"url": "/blog", "user_id": 42, "meta": {"browser": "Chrome", "os": "macOS"}}');

-- Query nested fields
SELECT payload->>'url' AS url,
       payload->'meta'->>'browser' AS browser
FROM events
WHERE payload->>'user_id' = '42';

-- Create a GIN index for fast document queries
CREATE INDEX idx_events_payload ON events USING GIN (payload);

-- Query with containment operator — uses the GIN index
SELECT * FROM events
WHERE payload @> '{"user_id": 42, "meta": {"browser": "Chrome"}}';
```
That last query — the containment operator with a GIN index — is where PostgreSQL shines. You get document-style queries with index support, inside a database that also does JOINs, transactions, and foreign keys.
The benchmarks? MongoDB handles 20-35K simple document writes per second. PostgreSQL handles 15-25K mixed OLTP transactions per second. For raw document inserts, MongoDB is faster. But the moment you need to JOIN documents with relational data, run aggregations, or enforce consistency — PostgreSQL wins.
For this blog, all my post content lives in a text column, and my tags are a text[] array. I could've used JSONB for metadata. I didn't need to because Drizzle ORM handles arrays natively. But the point is: I never once considered MongoDB.
3. The Vector Database (pgvector vs Pinecone)
This is the hot one in 2026. Every AI startup thinks they need a dedicated vector database. Pinecone, Weaviate, Qdrant — there's a new one every month.
But pgvector exists. And with pgvectorscale (by Timescale), the performance numbers are genuinely impressive.
```sql
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create a table with vector embeddings
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    content TEXT NOT NULL,
    embedding vector(1536)  -- OpenAI ada-002 dimensions
);

-- Create an index for approximate nearest neighbor search
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);

-- Find the 5 most similar documents
SELECT title, content,
       1 - (embedding <=> '[0.1, 0.2, ...]'::vector) AS similarity
FROM documents
ORDER BY embedding <=> '[0.1, 0.2, ...]'::vector
LIMIT 5;
```
The benchmarks from Timescale: pgvector + pgvectorscale achieved 471 queries per second at 99% recall on a dataset of 50 million vectors. That's 28x lower p95 latency than Pinecone, 16x higher throughput, and 75% lower cost.
Read that again. 28x lower latency. 16x more throughput. 75% cheaper. And it runs inside your existing PostgreSQL instance. No separate service, no separate billing, no separate ops overhead.
For datasets under 10 million vectors, pgvector is a no-brainer. You get vector similarity search in the same database where your application data lives. You can JOIN embeddings with your users table. You can filter by metadata using regular SQL WHERE clauses. Try doing that with Pinecone.
If I were building a RAG application today — and I've built a few — I'd start with pgvector inside my existing Neon database. Zero additional infrastructure. The embedding goes in a column right next to the content it represents.
4. The Search Engine (Full-Text Search vs Elasticsearch)
PostgreSQL has had full-text search for years. Most people don't know this because Elasticsearch has better marketing.
```sql
-- Add a tsvector column
ALTER TABLE posts ADD COLUMN search_vector tsvector;

-- Populate it, weighting title matches above body matches
UPDATE posts SET search_vector =
    setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
    setweight(to_tsvector('english', coalesce(content, '')), 'B');

-- Create a GIN index
CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);

-- Search with ranking
SELECT title, ts_rank(search_vector, query) AS rank
FROM posts, to_tsquery('english', 'postgresql & database') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;
```
This gives you: tokenization, stemming, ranking, phrase search, prefix matching, and index-backed performance. Zero additional infrastructure, ACID consistent with your data, and no synchronization headaches between your primary database and a search index.
For this blog? I search across maybe 20 articles. PostgreSQL full-text search handles that without breaking a sweat. A SaaS with 100K documents? Still fine.
Teams have successfully replaced Elasticsearch with PostgreSQL full-text search for datasets under 1 million records with straightforward keyword search needs. The operational simplicity alone is worth it — one fewer service to monitor, scale, and pay for.
5. The Time-Series Database (TimescaleDB)
Got IoT data? Server metrics? Financial ticks? You might think you need InfluxDB or a dedicated time-series database.
TimescaleDB is a PostgreSQL extension that turns your regular tables into hypertables — automatically partitioned by time, with specialized compression that achieves 90% storage reduction and query performance that's 1,000x faster than vanilla PostgreSQL for time-series workloads.
And it's still just PostgreSQL underneath. You can JOIN your time-series data with your regular application tables. You can use the same connection string, the same ORM, the same backup strategy.
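Here's roughly what that looks like in practice. This is a sketch that assumes TimescaleDB is installed on the server; the `metrics` table is hypothetical.

```sql
-- Assumes the TimescaleDB extension is available on this server
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- A regular table for device metrics (hypothetical schema)
CREATE TABLE metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id INT NOT NULL,
    cpu_usage DOUBLE PRECISION
);

-- Convert it into a hypertable, automatically partitioned by time
SELECT create_hypertable('metrics', 'time');

-- Typical time-series query: 5-minute averages per device over the last day
SELECT time_bucket('5 minutes', time) AS bucket,
       device_id,
       avg(cpu_usage) AS avg_cpu
FROM metrics
WHERE time > NOW() - INTERVAL '1 day'
GROUP BY bucket, device_id
ORDER BY bucket;
```

After the `create_hypertable` call, inserts and queries look exactly like plain PostgreSQL; the partitioning happens behind the scenes.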
6. The Geospatial Database (PostGIS)
PostGIS has been around for over 20 years. It's the most capable open-source geospatial extension for any database. Period.
If you're building anything with maps, location data, geometry, routing — PostGIS handles it. Most people reach for MongoDB's geospatial features or a specialized GIS database when PostGIS has been doing this longer and better.
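To make that concrete, here's a minimal radius-search sketch. The `places` table is hypothetical, but the functions (`ST_DWithin`, `ST_MakePoint`) are standard PostGIS.

```sql
CREATE EXTENSION IF NOT EXISTS postgis;

-- Hypothetical table of named locations, stored as WGS 84 lon/lat points
CREATE TABLE places (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    geom GEOMETRY(Point, 4326)
);

-- Spatial index for fast proximity queries
CREATE INDEX idx_places_geom ON places USING GIST (geom);

-- Find everything within 5 km of a point
-- (casting to geography makes the distance argument meters)
SELECT name
FROM places
WHERE ST_DWithin(
    geom::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,
    5000
);
```

And because it's a regular table, you can JOIN those results against your users or orders in the same query.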
The Big Picture
Here's a table that puts it all together:
| Use Case | Dedicated Tool | PostgreSQL Extension | When to Use PostgreSQL | When to Use the Dedicated Tool |
|---|---|---|---|---|
| Document store | MongoDB | JSONB + GIN indexes | Most apps with mixed relational + document data | Truly schema-less, massive horizontal sharding |
| Vector search | Pinecone / Weaviate | pgvector + pgvectorscale | Under 10M vectors, when data lives in PostgreSQL | 100M+ vectors, extreme low-latency requirements |
| Full-text search | Elasticsearch | tsvector + GIN indexes | Under 1M records, keyword search | 5M+ docs, fuzzy matching, faceted search |
| Time-series | InfluxDB / TimescaleDB (standalone) | TimescaleDB extension | When time-series is part of a larger app | Dedicated massive-scale time-series-only workload |
| Geospatial | Specialized GIS | PostGIS | All but the most extreme geospatial workloads | Almost never — PostGIS is best-in-class |
| Simple caching | Redis | UNLOGGED tables + LISTEN/NOTIFY | Cache that doesn't need sub-ms latency | Sub-millisecond access, pub/sub at massive scale |
The power move here — the thing most teams miss — is that you can run pgvector AND TimescaleDB AND PostGIS in the same database. Same instance. Same connection. Same backup. That's not a theoretical claim. People do this in production.
When PostgreSQL Isn't Enough
I'd be lying if I said PostgreSQL replaces everything. It doesn't. Here's when you genuinely need something else.
MongoDB wins when: you have truly schema-less data that changes shape constantly, you need massive horizontal sharding across dozens of nodes, or your write pattern is predominantly simple document inserts at extreme volume (30K+ writes/sec sustained). Most apps don't fit this description, but some do. Large-scale IoT ingestion. Rapidly evolving product catalogs with wildly different attributes per item.
Elasticsearch wins when: you're searching across 5 million+ documents with complex fuzzy matching, faceted navigation, and relevance tuning. If you're building a product search for an e-commerce site with millions of SKUs and users expect Amazon-quality search with typo tolerance, synonyms, and faceted filters — yes, you need Elasticsearch. PostgreSQL's full-text search tops out in the low millions for complex queries.
Dedicated vector databases win when: you're working with 100 million+ vectors and need specialized features like built-in hybrid search, automatic sharding, or real-time index updates at massive scale. For the 95% of teams working with under 10 million vectors, pgvector is more than sufficient.
Redis wins when: you need sub-millisecond response times for caching hot data. PostgreSQL can do caching with UNLOGGED tables, but it can't match Redis's raw speed for simple key-value lookups. If you're caching session data or rate-limiting at thousands of requests per second, Redis earns its keep.
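For reference, here's a minimal sketch of the UNLOGGED-table approach mentioned above. The `cache` table, key format, and TTL are all hypothetical choices, not a prescribed pattern.

```sql
-- UNLOGGED skips write-ahead logging: much faster writes, but the table
-- is truncated after a crash. That's exactly the trade-off a cache wants.
CREATE UNLOGGED TABLE cache (
    key        TEXT PRIMARY KEY,
    value      JSONB NOT NULL,
    expires_at TIMESTAMPTZ NOT NULL
);

-- Upsert an entry with a 5-minute TTL
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', '{"name": "Ada"}', NOW() + INTERVAL '5 minutes')
ON CONFLICT (key) DO UPDATE
    SET value = EXCLUDED.value,
        expires_at = EXCLUDED.expires_at;

-- Read, skipping expired entries
SELECT value
FROM cache
WHERE key = 'user:42:profile'
  AND expires_at > NOW();

-- Run periodically (cron or pg_cron) to evict stale entries
DELETE FROM cache WHERE expires_at < NOW();
```

It won't hit sub-millisecond latency like Redis, but for many apps a few milliseconds from a table you already operate beats a whole extra service.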
The pattern is clear though. Each of these tools wins at the extreme end of a specific spectrum. Most applications never reach that extreme. Most applications are running 5 databases because someone read a blog post about microservices, not because they actually need them.
Hosting in 2026: You're Spoiled for Choice
One of the biggest changes in the past two years is how easy it's become to run PostgreSQL in the cloud. The hosting options are excellent and most have free tiers.
| Provider | Free Tier | Standout Feature | Best For |
|---|---|---|---|
| Neon | 0.5 GB storage, scale-to-zero | Database branching, instant provisioning | Startups, side projects, dev/test |
| Supabase | 500 MB storage | Auth, realtime, edge functions bundled | Full-stack apps that want a BaaS |
| AWS RDS | 12 months free tier | Battle-tested, mature | Enterprise, existing AWS shops |
| Google Cloud SQL | $300 credit | Tight GCP integration | GCP-native workloads |
| Azure Database | $200 credit | Enterprise compliance | .NET / Microsoft shops |
| Railway | $5 credit/month | Dead-simple deployment | Hobby projects, quick deploys |
| Render | 256 MB (90-day limit) | Managed + free web hosting | Small projects with web frontend |
I use Neon. The scale-to-zero feature means my blog's database costs literally nothing when nobody's reading at 3 AM. Neon was acquired by Databricks in May 2025 for roughly $1 billion, which tells you how seriously the industry takes serverless PostgreSQL.
Database branching is Neon's killer feature. I can create a full copy of my production database in seconds for testing. It uses copy-on-write storage, so it barely costs anything. It's like Git branches but for your database.
Supabase is the other strong option, especially if you want auth, realtime subscriptions, and edge functions bundled in. It's more opinionated than Neon — more of a Firebase replacement than a pure database host. Depends on what you want.
For enterprise? AWS RDS is still the safe choice. It's boring in the best way. Rock-solid managed PostgreSQL with automated backups, read replicas, and multi-AZ failover.
What I Actually Think
Here's my actual opinion, stripped of nuance.
PostgreSQL should be your default database. Not your only option. Your default. The one you start with until you prove — with data, not vibes — that you need something else.
I've seen teams spin up MongoDB "for the flexible schema" when their schema was known from day one. I've seen startups pay $500/month for Elasticsearch to search across 10,000 documents. I've seen companies run Redis alongside PostgreSQL to cache data that PostgreSQL could've served with a materialized view.
Each additional database adds:
- Another connection string to manage
- Another backup strategy to maintain
- Another monitoring dashboard to watch
- Another set of credentials to rotate
- Another point of failure at 2 AM
- Another skill your team needs to hire for
The operational complexity of running multiple databases is wildly underestimated. It's not just the database itself — it's the synchronization between them. Keeping your Elasticsearch index in sync with your PostgreSQL data. Making sure your Redis cache invalidates when your primary database updates. Handling the inevitable moment when they disagree.
This blog — ismatsamadov.com — runs on Next.js with Drizzle ORM and a single Neon PostgreSQL database. Posts, tags, subscribers, experiences, certifications — it's all in one database. I considered adding Redis for caching. Then I added proper database indexes and the query time dropped to 3ms. Problem solved without adding infrastructure.
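The fix was nothing exotic: a composite index matched to the hottest query. The index and query below are an illustrative sketch, not my literal schema.

```sql
-- Composite index matching the common access pattern:
-- filter on published, sort by newest first
CREATE INDEX idx_posts_published_created
    ON posts (published, created_at DESC);

-- Confirm the planner actually uses it
EXPLAIN ANALYZE
SELECT id, title, created_at
FROM posts
WHERE published = true
ORDER BY created_at DESC
LIMIT 10;
```

If `EXPLAIN ANALYZE` shows an index scan instead of a sequential scan and the runtime drops accordingly, you just saved yourself a caching layer.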
My job aggregator is more complex — 45 tables, 50 indexes, full-text search on job listings, JSONB columns for scraped metadata. Still one PostgreSQL database. Still on Neon. Still $5/month.
The "polyglot persistence" philosophy that was trendy in the 2010s — "use the right database for each job!" — sounds smart in a conference talk. In practice, it means your team of three is now operating five different database engines. That's not engineering. That's masochism.
Start with PostgreSQL. Push it hard. Add JSONB for your document needs. Add pgvector for your embeddings. Add full-text search for your search feature. Add TimescaleDB if you get time-series data. Push it until it genuinely can't handle your workload, with benchmarks to prove it.
Then — and only then — add a specialized database for the specific bottleneck you've identified.
Most teams never reach that point.
Sources
- Stack Overflow 2025 Developer Survey — Technology Section
- PostgreSQL in Stack Overflow 2025 Survey — Detailed Analysis
- DB-Engines Database Ranking — March 2026
- MongoDB vs PostgreSQL 2026 Comparison
- JSONB Benchmark: Postgres vs Mongo
- PostgreSQL as a Vector Database: pgvector vs Pinecone vs Weaviate
- pgvector vs Pinecone — Performance Benchmarks
- Postgres Full-Text Search vs Elasticsearch
- Why We Replaced Elasticsearch with Postgres Full-Text Search
- 7 PostgreSQL Extensions That Will Supercharge Your Database in 2026
- Top PostgreSQL Database Free Tiers in 2026
- Neon Pricing
- PostgreSQL vs MySQL 2026