Concurrency: Goroutines and Channels
JVM engineers come to Go with strong opinions about concurrency: thread pools, synchronized blocks, volatile, CompletableFuture, ExecutorService. All of that machinery exists because OS threads are expensive and sharing mutable state is error-prone. Go takes a different starting point — goroutines are cheap enough to create by the thousand, and channels are the preferred way to coordinate between them.
The mantra from the Go team: "Do not communicate by sharing memory; instead, share memory by communicating."
Goroutines Are Not Threads
A goroutine is a lightweight coroutine managed by the Go runtime, not by the OS. The runtime multiplexes goroutines onto a small pool of OS threads; the GOMAXPROCS setting controls how many threads can execute Go code simultaneously, defaulting to the number of logical CPUs.
| Property | Java Thread | Go Goroutine |
|---|---|---|
| Stack size | 512 KB–1 MB (fixed) | 2–8 KB (grows/shrinks) |
| Startup cost | ~1 ms (OS syscall) | a few µs (runtime call) |
| Practical limit | Thousands | Millions |
| Scheduling | OS preemptive | Go runtime cooperative + preemptive |
```java
// Java — starting a thread
Thread t = new Thread(() -> doWork());
t.start();
t.join();
```

```go
// Go — starting a goroutine
go doWork() // fire and forget
```

Fire-and-forget goroutines are dangerous: if the main program exits before they finish, they are silently killed. Use sync.WaitGroup or channels to wait.
```go
var wg sync.WaitGroup
for _, item := range items {
	wg.Add(1)
	go func(i Item) {
		defer wg.Done()
		process(i)
	}(item)
}
wg.Wait()
```

Channels — Typed Pipes Between Goroutines
A channel is a typed FIFO queue with optional buffering. Sending to an unbuffered channel blocks until a receiver is ready; receiving blocks until a sender sends.
```go
ch := make(chan int)       // unbuffered
bch := make(chan int, 100) // buffered, capacity 100

// producer
go func() { ch <- 42 }()

// consumer
v := <-ch
fmt.Println(v) // 42
```

Channels replace most of the patterns JVM engineers use BlockingQueue for.
```java
// Java producer-consumer
BlockingQueue<Task> queue = new ArrayBlockingQueue<>(100);
ExecutorService pool = Executors.newFixedThreadPool(4);
// Callable form so the checked InterruptedException from put() may propagate
pool.submit(() -> { queue.put(task); return null; });
Task t = queue.take();
```

```go
// Go producer-consumer
tasks := make(chan Task, 100)
go func() { tasks <- task }()

// worker
for t := range tasks {
	process(t)
}
```

Close a channel with close(ch) when no more values will be sent. Ranging over a closed channel drains the remaining values, then exits.
Select — Non-Blocking Multi-Channel Coordination
select is Go's way to wait on multiple channels simultaneously — loosely analogous to CompletableFuture.anyOf, but built into the language.
```go
select {
case msg := <-inbound:
	handle(msg)
case <-timeout:
	log.Println("timed out")
case <-ctx.Done():
	return ctx.Err()
}
```

Context — Cancellation Without Global State
Java's Thread.interrupt() is awkward: blocking calls must handle InterruptedException, and compute-bound loops must poll isInterrupted(). Go's context.Context is threaded explicitly through function signatures and provides a clean cancellation and deadline mechanism.
```go
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
result, err := fetchData(ctx, url)
```

Inside fetchData, every blocking operation — HTTP call, DB query, channel receive — accepts the context and will return early when the deadline expires or cancel() is called.
```go
func fetchData(ctx context.Context, url string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, fmt.Errorf("fetchData: %w", err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, fmt.Errorf("fetchData: %w", err)
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```

The rule: every function that does I/O or blocks should accept a context.Context as its first parameter.
Common Patterns
Worker Pool
```go
func workerPool(ctx context.Context, jobs <-chan Job, results chan<- Result, n int) {
	var wg sync.WaitGroup
	for range n { // integer range requires Go 1.22+
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				select {
				case <-ctx.Done():
					return
				case results <- process(j): // cancellable send: never blocks past cancellation
				}
			}
		}()
	}
	go func() {
		wg.Wait()
		close(results)
	}()
}
```

Fan-Out / Fan-In
```go
func merge(cs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, c := range cs {
		wg.Add(1)
		go func(ch <-chan int) {
			defer wg.Done()
			for v := range ch {
				out <- v
			}
		}(c)
	}
	go func() { wg.Wait(); close(out) }()
	return out
}
```

What sync.Mutex Is Still Good For
Channels are not the only primitive. When you have a shared data structure that is too fine-grained for channel coordination — a cache, a counter — sync.Mutex and sync.RWMutex are the right tools.
```go
type SafeMap struct {
	mu sync.RWMutex
	m  map[string]int
}

func (s *SafeMap) Get(key string) (int, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.m[key]
	return v, ok
}

func (s *SafeMap) Set(key string, val int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = val
}
```

Use channels for ownership transfer and pipeline coordination; use mutexes for protecting in-place shared state.
Key Takeaways
- Goroutines are cheap — creating thousands is normal; the runtime scheduler handles the OS thread mapping transparently.
- Channels are typed, blocking pipes that replace most `BlockingQueue` and `ExecutorService` patterns; closing a channel signals completion to all consumers.
- `select` provides multi-channel coordination, loosely equivalent to `CompletableFuture.anyOf` but built into the language syntax.
- Thread every I/O-bound function call with `context.Context` as the first parameter — it is the idiomatic replacement for Java's `Thread.interrupt()` and deadline propagation.
- Use `sync.WaitGroup` to wait for a known set of goroutines; use channels to wait on dynamic work completion.
- Reach for `sync.Mutex` when coordinating access to shared in-place data structures; reach for channels when transferring ownership of data between goroutines.