Go’s concurrency model is one of its defining strengths. Goroutines and channels provide a structured way to write concurrent programs that is simpler than thread-based models in most languages. But simplicity does not mean you can skip understanding the patterns — misused concurrency is worse than no concurrency at all.
Fundamentals
A goroutine is a lightweight function execution, multiplexed onto OS threads by the Go runtime scheduler. Starting one costs roughly 2 KB of stack (which grows as needed), compared to the 1-8 MB default for OS threads, so you can run millions of goroutines on a single machine.
A channel is a typed conduit for communication between goroutines. Channels enforce synchronization: sending blocks until a receiver is ready (for unbuffered channels), making data races structurally harder to introduce.
```go
func main() {
	ch := make(chan string)
	go func() { ch <- "hello from goroutine" }()
	msg := <-ch
	fmt.Println(msg)
}
```

The select statement lets a goroutine wait on multiple channel operations simultaneously, enabling timeouts, cancellation, and multiplexing.
Pattern 1: Fan-Out / Fan-In
Distribute work across multiple goroutines (fan-out), then collect results into a single channel (fan-in).
```go
// Result carries either a status code or an error for one URL.
type Result struct {
	URL    string
	Status int
	Err    error
}

func fanOut(urls []string) <-chan Result {
	results := make(chan Result, len(urls))
	var wg sync.WaitGroup

	for _, url := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				results <- Result{URL: u, Err: err}
				return
			}
			defer resp.Body.Close()
			results <- Result{URL: u, Status: resp.StatusCode}
		}(url)
	}

	go func() {
		wg.Wait()
		close(results)
	}()

	return results
}
```

The sync.WaitGroup tracks completion so the results channel is closed only after all goroutines finish. The caller ranges over the channel until it closes.
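The caller side can be sketched with a network-free variant of the same pattern (squaring stands in for the HTTP call, and this `Result` holds ints rather than a URL and status code, so the example runs anywhere):

```go
package main

import (
	"fmt"
	"sync"
)

// Result mirrors the shape used above; the "work" is just squaring.
type Result struct {
	Input, Output int
}

func fanOut(inputs []int) <-chan Result {
	results := make(chan Result, len(inputs))
	var wg sync.WaitGroup
	for _, n := range inputs {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- Result{Input: n, Output: n * n}
		}(n)
	}
	// Close results only after every worker has sent.
	go func() {
		wg.Wait()
		close(results)
	}()
	return results
}

func main() {
	sum := 0
	// Fan-in: range until the channel is closed.
	for r := range fanOut([]int{1, 2, 3, 4}) {
		sum += r.Output
	}
	fmt.Println("sum of squares:", sum) // 30
}
```

Results arrive in whatever order the goroutines finish; only the aggregate is deterministic here.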
Pattern 2: Worker Pool
Limit concurrency with a fixed number of workers consuming from a shared job channel.
```go
func workerPool(jobs <-chan Job, numWorkers int) <-chan Result {
	results := make(chan Result, numWorkers)
	var wg sync.WaitGroup

	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for job := range jobs {
				result := process(job)
				results <- result
			}
		}(i)
	}

	go func() {
		wg.Wait()
		close(results)
	}()

	return results
}
```

This is the most common pattern for rate-limiting external API calls, database operations, or CPU-intensive processing. The jobs channel acts as a work queue, and workers pull from it until it closes.
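A usage sketch, instantiating `Job` and `Result` as plain ints and `process` as doubling so the pool is self-contained (those concrete types are assumptions for the example, not part of the pattern):

```go
package main

import (
	"fmt"
	"sync"
)

// workerPool with int jobs and results; "process" just doubles the value.
func workerPool(jobs <-chan int, numWorkers int) <-chan int {
	results := make(chan int, numWorkers)
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for job := range jobs {
				results <- job * 2
			}
		}()
	}
	go func() {
		wg.Wait()
		close(results)
	}()
	return results
}

func main() {
	jobs := make(chan int)
	// The producer closes jobs when done, which lets each worker's
	// range loop exit.
	go func() {
		for i := 1; i <= 5; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	total := 0
	for r := range workerPool(jobs, 3) {
		total += r
	}
	fmt.Println("total:", total) // 30
}
```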
Pattern 3: Pipeline
Chain stages where each stage is a goroutine reading from an input channel and writing to an output channel.
```go
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func main() {
	// Pipeline: generate -> square -> print
	for result := range square(generate(2, 3, 4)) {
		fmt.Println(result) // 4, 9, 16
	}
}
```

Pipelines compose naturally. Each stage runs concurrently, and back-pressure is automatic — a slow consumer slows down the producer via channel blocking.
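To make the composition claim concrete, here is a sketch that inserts an extra `double` stage (a hypothetical stage added for illustration) between the two stages above — no stage needs to change:

```go
package main

import "fmt"

func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

// double is a new stage with the same shape as square: read, transform, write.
func double(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * 2
		}
		close(out)
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func main() {
	// generate -> double -> square: (2*2)^2 = 16, (3*2)^2 = 36
	for n := range square(double(generate(2, 3))) {
		fmt.Println(n) // 16, 36
	}
}
```

Because each stage forwards values in order, a single-channel pipeline preserves ordering, unlike fan-out.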
Pattern 4: Context-Based Cancellation
Use context.Context for timeouts and cancellation across goroutine trees.
```go
func fetchWithTimeout(ctx context.Context, url string) ([]byte, error) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return nil, err
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	return io.ReadAll(resp.Body)
}
```

Always pass context.Context as the first parameter. Always call cancel() to release resources, even if the operation completes successfully.
Common Mistakes
Goroutine leaks. A goroutine that blocks forever on a channel send or receive is a memory leak. Always ensure channels are eventually closed or that goroutines have a cancellation path.
```go
// BAD: goroutine leaks if nobody reads from ch
go func() {
	ch <- expensiveComputation()
}()

// GOOD: use select with context cancellation
go func() {
	select {
	case ch <- expensiveComputation():
	case <-ctx.Done():
		return
	}
}()
```

Race conditions. Sharing state between goroutines without synchronization is undefined behavior. Use the race detector during development:
```shell
go test -race ./...
go run -race main.go
```

Closing channels from the wrong side. Only the sender should close a channel. Sending on a closed channel panics, so closing a channel that another goroutine is still sending to will crash the program. If multiple goroutines send to a channel, use a WaitGroup to close it after all senders finish.
Unbounded goroutine creation. Launching a goroutine per incoming request without limits will exhaust memory under load. Use worker pools or semaphore patterns to cap concurrency.
Select for Multiplexing
The select statement is how you combine multiple concurrent operations:
```go
select {
case msg := <-messageCh:
	handleMessage(msg)
case <-ticker.C:
	sendHeartbeat()
case <-ctx.Done():
	log.Println("shutting down:", ctx.Err())
	return
}
```

When multiple cases are ready, Go picks one at random — this prevents starvation. A default case makes select non-blocking, useful for try-send patterns.
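A minimal sketch of the try-send pattern (the `trySend` helper is invented for this example): with a default case, the send either succeeds immediately or is skipped, never blocking the caller.

```go
package main

import "fmt"

// trySend attempts a non-blocking send: it returns false instead of
// blocking when the channel's buffer is full and no receiver is ready.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(trySend(ch, 1)) // true: buffer has room
	fmt.Println(trySend(ch, 2)) // false: buffer full, nobody receiving
}
```

This is useful for best-effort notifications, such as signaling a metrics collector without ever stalling the hot path.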
Key Takeaways
Go’s concurrency primitives are simple but require discipline. Use channels for communication, sync.WaitGroup for tracking completion, context.Context for cancellation, and the race detector for verification. Start with these patterns, and you will handle the vast majority of concurrent workloads cleanly.