Lab 14: Goroutine Patterns

Time: 30 minutes | Level: Practitioner | Docker: docker run -it --rm golang:1.22-alpine sh

Overview

Master production goroutine patterns: worker pools, pipelines, bounded concurrency with semaphores, parallel error collection with errgroup, and graceful shutdown with context + signals.


Step 1: Worker Pool Pattern

A worker pool limits the number of goroutines processing jobs concurrently — essential for bounding CPU/memory/DB connection usage.

package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    for j := range jobs {
        // Simulate work
        time.Sleep(time.Millisecond)
        results <- j * j // return square of job number
    }
}

func main() {
    const numJobs    = 20
    const numWorkers = 5

    jobs    := make(chan int, numJobs)
    results := make(chan int, numJobs)
    var wg sync.WaitGroup

    // Spawn fixed pool of workers
    for w := 1; w <= numWorkers; w++ {
        wg.Add(1)
        go worker(w, jobs, results, &wg)
    }

    // Send all jobs, then close to signal no more work
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs)

    // Wait for all workers, then close results
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results
    sum := 0
    for r := range results {
        sum += r
    }
    fmt.Printf("Worker pool: processed %d jobs with %d workers\n", numJobs, numWorkers)
    fmt.Printf("Sum of squares 1..%d = %d\n", numJobs, sum)
}

💡 Always close(jobs) after sending all work — workers exit their range loop when the channel is closed. Without this, workers block forever waiting for more jobs.

Verify:

📸 Verified Output:


Step 2: Pipeline Pattern

Pipelines chain channels through processing stages. Each stage transforms data and passes it downstream.

Verify:

📸 Verified Output:

💡 1²+10=11, 2²+10=14, 3²+10=19, 4²+10=26, 5²+10=35


Step 3: Bounded Concurrency with Semaphore

Use a buffered channel as a semaphore to limit how many goroutines run simultaneously:

💡 sem <- struct{}{} blocks when all slots are taken, creating natural backpressure. <-sem in defer ensures the slot is always released, even on panic.


Step 4: errgroup for Parallel Error Collection

errgroup.Group runs goroutines in parallel and returns the first error encountered:

💡 When errgroup.WithContext is used, the context is cancelled as soon as any goroutine returns an error — downstream goroutines can check ctx.Done() to bail out early.


Step 5: Graceful Shutdown with Context + WaitGroup


Step 6: Fan-Out / Fan-In

💡 Fan-out parallelises a single stream; fan-in merges parallel streams. Together they implement the scatter-gather pattern.


Step 7: Common Mistakes to Avoid


Step 8 (Capstone): Complete Worker Pool with 100 Jobs

📸 Verified Output:


Summary

Pattern | Implementation | Use Case
--- | --- | ---
Worker Pool | Fixed goroutines + job channel | Bound CPU/IO concurrency
Pipeline | Chained <-chan stages | Stream data transformation
Semaphore | Buffered channel as token pool | Limit goroutine count
errgroup | golang.org/x/sync/errgroup | Parallel ops with first-error
Graceful Shutdown | Context cancellation + WaitGroup | Clean process termination
Fan-Out/Fan-In | Multiple goroutines + merge | Scatter-gather parallelism
