Bounded Concurrency Beats Clever Concurrency


Go makes concurrency feel effortless.

go handleRequest(req)

One line. One goroutine. Infinite scalability… right?

In reality, this is one of the fastest ways to take down a production system.

Unbounded concurrency is not scalability. It’s deferred failure.


The hidden cost of “just spawn a goroutine”

Every goroutine is cheap.

But not free.

Each goroutine:

  • consumes memory (stack + heap references)
  • competes for CPU scheduling
  • holds connections, locks, or file descriptors
  • increases pressure on GC

One goroutine is nothing.

Ten thousand is a system design decision.


The real problem: resource amplification

The real danger is not goroutines.

It’s what they touch:

  • database connections
  • HTTP clients
  • Redis pools
  • file handles
  • CPU-heavy work

Unbounded goroutines turn small load spikes into:

  • connection storms
  • thundering herds
  • cascading timeouts
  • self-inflicted denial of service

Your concurrency model amplifies load — for better or worse.


Clever concurrency vs controlled concurrency

Clever concurrency looks like:

for _, item := range items {
    go process(item)
}

It’s short. It’s elegant. It’s wrong for production.

Controlled concurrency looks boring:

sem := make(chan struct{}, 10)

for _, item := range items {
    sem <- struct{}{}
    go func(item Item) {
        defer func() { <-sem }()
        process(item)
    }(item)
}

Boring. Explicit. Predictable.

Production systems reward boring code.


Worker pools: the simplest bound

The most robust pattern is a worker pool:

jobs := make(chan Job)
wg := sync.WaitGroup{}

for i := 0; i < 10; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for job := range jobs {
            handle(job)
        }
    }()
}

for _, job := range incoming {
    jobs <- job
}

close(jobs)
wg.Wait()

This gives you:

  • a fixed upper bound
  • backpressure
  • natural draining on shutdown
  • predictable resource usage

Bounded concurrency creates backpressure


Backpressure is not a bug.

It’s a feature.

When capacity is reached:

  • queues fill
  • senders block
  • load slows down naturally

This protects:

  • your database
  • your cache
  • your downstream systems
  • your own memory

If your system never pushes back, it will eventually fall over.


Context + bounds = correct cancellation

Bounds alone are not enough.

Combine them with context:

select {
case sem <- struct{}{}: // acquire a slot, or...
    go func() {
        defer func() { <-sem }() // ...release it when done
        work()
    }()
case <-ctx.Done():
    return
}

This ensures:

  • shutdown stops new work
  • blocked producers can exit
  • the system drains cleanly

Bounded + cancellable is the sweet spot.


A Python contrast (only to clarify the model)

In Python async systems:

  • concurrency is implicitly bounded by the event loop
  • but external resources are still unbounded unless limited

You still need:

  • connection pool limits
  • semaphore limits
  • task group limits

Different runtime. Same physics.

The bottleneck is never the coroutine. It’s what the coroutine touches.


When unbounded concurrency is acceptable

Rarely.

Maybe for:

  • fire-and-forget metrics
  • best-effort logging
  • isolated CPU work with strict limits elsewhere

Even then, you usually still want:

  • rate limiting
  • sampling
  • drop strategies

The production rule of thumb


If a piece of code can:

  • accept user input
  • talk to external systems
  • allocate memory
  • run for non-trivial time

Then it must be bounded.

If you can’t say what the max concurrency is, you don’t have a concurrency design.


The real scalability skill

Scalability is not:

  • spawning faster
  • spawning more

Scalability is:

  • knowing when to say “not now”
  • shaping load
  • protecting dependencies
  • staying predictable under stress

The takeaway


Unbounded goroutines feel powerful.

Bounded concurrency is what keeps your service alive at 3am.

Clever concurrency makes demos look good. Bounded concurrency makes systems survive.
