
PostgreSQL Performance Tuning for Node.js Applications

Dat Nguyen · August 2025 · 9 min read · PostgreSQL

Practical techniques for optimizing PostgreSQL queries, connection pooling, and indexing strategies when building Node.js APIs.

PostgreSQL · Node.js · Performance · Database · Backend

The Usual Suspects

Most PostgreSQL performance issues in Node.js applications fall into three categories:

  1. Missing indexes on frequently queried columns
  2. Connection pool exhaustion under load
  3. N+1 queries in ORM-heavy code

Let's tackle each one.

Connection Pooling Done Right

Every new connection to PostgreSQL involves a TCP handshake, TLS negotiation (if enabled), and authentication. Opening a fresh connection per request adds 20-50ms of latency. Use a connection pool instead:

import { Pool } from "pg";

const pool = new Pool({
  host: process.env.DB_HOST,
  port: 5432,
  database: "myapp",
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 20,                    // Maximum pool size
  idleTimeoutMillis: 30000,   // Close idle connections after 30s
  connectionTimeoutMillis: 5000,
});

// Use pool.query() for single queries
const result = await pool.query("SELECT * FROM users WHERE id = $1", [userId]);

// Use pool.connect() for transactions
const client = await pool.connect();
try {
  await client.query("BEGIN");
  await client.query("UPDATE accounts SET balance = balance - $1 WHERE id = $2", [amount, fromId]);
  await client.query("UPDATE accounts SET balance = balance + $1 WHERE id = $2", [amount, toId]);
  await client.query("COMMIT");
} catch (e) {
  await client.query("ROLLBACK");
  throw e;
} finally {
  client.release();
}

For serverless (Lambda): set max: 1 per container and use RDS Proxy to pool connections across Lambda instances. Without RDS Proxy, 100 concurrent Lambda invocations mean 100 database connections.
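The warm-container reuse pattern looks roughly like this — a minimal sketch where createPool is a hypothetical stand-in for new Pool({ max: 1, ... }) from pg, so the example runs without a database:

```javascript
// A pool created at module scope survives across warm Lambda invocations,
// so each container holds at most one connection instead of one per request.
function createPool(config) {
  // Stand-in for `new Pool(config)` from "pg" — records config, does nothing else.
  return { config };
}

let pool; // module scope: reused while the container stays warm

function getPool() {
  if (!pool) {
    // One connection per container; RDS Proxy multiplexes beyond that.
    pool = createPool({ max: 1 });
  }
  return pool;
}

// Inside the real handler, always go through getPool():
// export const handler = async (event) => {
//   const { rows } = await getPool().query("SELECT 1");
//   ...
// };
```

The key point is that the pool lives outside the handler function, so a warm container never pays the connection setup cost twice.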

Indexing Strategy

Find Slow Queries First

Enable pg_stat_statements:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 slowest queries by total time
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
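Sorting by mean_exec_time instead surfaces queries that are slow per call rather than merely frequent (these column names are from pg_stat_statements on PostgreSQL 13+; older versions call the column mean_time):

```sql
-- Top 10 slowest queries by average time per call
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```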

Use EXPLAIN ANALYZE

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT b.*, u.name as author_name
FROM blog_posts b
JOIN users u ON u.id = b.author_id
WHERE b.status = 'published'
  AND b.category_id = 5
ORDER BY b.published_at DESC
LIMIT 20;

Look for:

  • Seq Scan on large tables → needs an index
  • Nested Loop with high row counts → consider a different join strategy
  • Sort with high memory usage → add an index that matches the ORDER BY

Composite Indexes

For a query that filters on status and category_id then sorts by published_at:

CREATE INDEX idx_posts_status_category_published
ON blog_posts (status, category_id, published_at DESC);

The index column order must match your query pattern: equality filters first, range/sort columns last.
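To see what that ordering buys, compare two queries against the index above: the first can walk the index in order, while the second leaves category_id unpinned, so the published_at values it needs are scattered across the index and a sort is required:

```sql
-- Uses the index fully: equality on the leading columns, sort on the trailing one.
SELECT * FROM blog_posts
WHERE status = 'published' AND category_id = 5
ORDER BY published_at DESC LIMIT 20;

-- Cannot use the index for the sort: category_id is not pinned, so the
-- published_at values are no longer contiguous within the index.
SELECT * FROM blog_posts
WHERE status = 'published'
ORDER BY published_at DESC LIMIT 20;
```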

Partial Indexes

If 90% of your queries filter WHERE status = 'published':

CREATE INDEX idx_posts_published
ON blog_posts (category_id, published_at DESC)
WHERE status = 'published';

Smaller index, faster scans, less storage.
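One caveat: the planner only considers a partial index when it can prove the query's WHERE clause implies the index predicate, so the literal filter must appear in the query:

```sql
-- Matches the index predicate, so the partial index is eligible:
SELECT * FROM blog_posts
WHERE status = 'published' AND category_id = 5
ORDER BY published_at DESC LIMIT 20;

-- A parameterized filter (WHERE status = $1) may not match under a generic
-- plan, since $1 could be anything — the planner then falls back to other plans.
```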

Avoiding N+1 Queries

The classic ORM trap:

// BAD: N+1 — one query for posts, then one per author
const posts = await Post.findAll({ where: { status: "published" } });
for (const post of posts) {
  post.author = await User.findByPk(post.authorId); // N queries!
}

// GOOD: JOIN in a single query
const posts = await pool.query(`
  SELECT p.*, u.name as author_name, u.avatar_url
  FROM posts p
  JOIN users u ON u.id = p.author_id
  WHERE p.status = $1
  ORDER BY p.published_at DESC
  LIMIT $2
`, ["published", 20]);

If you use an ORM, always eager-load relations:

// Sequelize
const posts = await Post.findAll({
  where: { status: "published" },
  include: [{ model: User, as: "author" }],
});
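When a single JOIN isn't an option (say the authors live behind another service), the N+1 loop can still be collapsed into two round trips by batching the IDs. A minimal sketch with plain objects standing in for query rows — with pg, the batched fetch would be one WHERE id = ANY($1) query:

```javascript
// Collapse N author lookups into one batched fetch.
function collectAuthorIds(posts) {
  // Deduplicate so each author is fetched exactly once.
  return [...new Set(posts.map((p) => p.authorId))];
}

function attachAuthors(posts, users) {
  // Index users by id, then join in memory.
  const byId = new Map(users.map((u) => [u.id, u]));
  return posts.map((p) => ({ ...p, author: byId.get(p.authorId) }));
}

// With pg, the batched fetch is a single query:
// const { rows: users } = await pool.query(
//   "SELECT id, name FROM users WHERE id = ANY($1)",
//   [collectAuthorIds(posts)]
// );
```

This is the same idea libraries like DataLoader generalize: N lookups become one query plus an in-memory join.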

Monitoring in Production

Key metrics to track:

  • Active connections vs pool max — approaching the limit means you need RDS Proxy or a bigger pool
  • Query duration p95/p99 — catch regressions before users notice
  • Dead tuples — high counts mean VACUUM isn't keeping up
  • Cache hit ratio — should be >99%. Below that, you need more shared_buffers

-- Cache hit ratio
SELECT
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;
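The dead-tuple metric from the list above can be pulled the same way (what counts as "high" depends on table size and your autovacuum settings):

```sql
-- Tables with the most dead tuples; high counts relative to n_live_tup
-- suggest autovacuum is falling behind.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```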

Conclusion

PostgreSQL performance tuning isn't about exotic tricks. It's about fundamentals: pool your connections, index your queries, avoid N+1 patterns, and monitor what matters. These practices alone solve 90% of performance issues in Node.js applications.