Go Design Patterns
A curated collection of 56 idiomatic design & application patterns for the Go programming language — actively maintained and 100% implemented.
Abstract Factory Pattern Medium
The abstract factory pattern provides an interface for creating families of related objects without specifying their concrete types. The client code works with factories and products through abstract interfaces, making it independent of the actual types being created.
Implementation
package factory
// Button and Checkbox are the abstract product interfaces.
type Button interface {
Paint() string
}
type Checkbox interface {
Check() string
}
// GUIFactory is the abstract factory interface.
type GUIFactory interface {
CreateButton() Button
CreateCheckbox() Checkbox
}
// --- Windows family ---
type WindowsButton struct{}
func (b *WindowsButton) Paint() string { return "Windows button" }
type WindowsCheckbox struct{}
func (c *WindowsCheckbox) Check() string { return "Windows checkbox" }
type WindowsFactory struct{}
func (f *WindowsFactory) CreateButton() Button { return &WindowsButton{} }
func (f *WindowsFactory) CreateCheckbox() Checkbox { return &WindowsCheckbox{} }
// --- Mac family ---
type MacButton struct{}
func (b *MacButton) Paint() string { return "Mac button" }
type MacCheckbox struct{}
func (c *MacCheckbox) Check() string { return "Mac checkbox" }
type MacFactory struct{}
func (f *MacFactory) CreateButton() Button { return &MacButton{} }
func (f *MacFactory) CreateCheckbox() Checkbox { return &MacCheckbox{} }
Usage
func BuildUI(f factory.GUIFactory) {
button := f.CreateButton()
checkbox := f.CreateCheckbox()
fmt.Println(button.Paint())
fmt.Println(checkbox.Check())
}
// The client code is decoupled from concrete product types.
BuildUI(&factory.WindowsFactory{})
// Windows button
// Windows checkbox
BuildUI(&factory.MacFactory{})
// Mac button
// Mac checkbox
Rules of Thumb
- Use abstract factory when the system needs to be independent of how its products are created and composed.
- Abstract factory is often implemented using factory methods under the hood.
- If you only have one product family, you likely only need the simpler factory method pattern.
Builder Pattern Easy
Builder pattern separates the construction of a complex object from its representation so that the same construction process can create different representations.
In Go, a configuration struct is normally used to achieve the same behavior;
however, passing a struct to the builder method fills the code with boilerplate
if cfg.Field != nil {...} checks.
Implementation
package car
type Speed float64
const (
	MPH Speed = 1
	KPH Speed = 1.60934
)
type Color string
const (
	BlueColor  Color = "blue"
	GreenColor Color = "green"
	RedColor   Color = "red"
)
type Wheels string
const (
	SportsWheels Wheels = "sports"
	SteelWheels  Wheels = "steel"
)
type Builder interface {
Color(Color) Builder
Wheels(Wheels) Builder
TopSpeed(Speed) Builder
Build() Interface
}
type Interface interface {
Drive() error
Stop() error
}
Usage
assembly := car.NewBuilder().Color(car.RedColor)
familyCar := assembly.Wheels(car.SteelWheels).TopSpeed(50 * car.MPH).Build()
familyCar.Drive()
sportsCar := assembly.Wheels(car.SportsWheels).TopSpeed(150 * car.MPH).Build()
sportsCar.Drive()
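The section above defines only the Builder and Interface contracts. Below is a self-contained sketch of one possible concrete builder; the car and builder types and the NewBuilder constructor are assumptions of this sketch, and the section's types are restated so the block compiles on its own.

```go
package main

import "fmt"

// Types restated from the section above so the sketch compiles on its own.
type Speed float64

const MPH Speed = 1

type Color string
type Wheels string

type Interface interface {
	Drive() error
	Stop() error
}

type Builder interface {
	Color(Color) Builder
	Wheels(Wheels) Builder
	TopSpeed(Speed) Builder
	Build() Interface
}

// car is a hypothetical concrete product.
type car struct {
	color    Color
	wheels   Wheels
	topSpeed Speed
}

func (c *car) Drive() error {
	fmt.Println("driving at", float64(c.topSpeed), "mph")
	return nil
}
func (c *car) Stop() error { return nil }

// builder accumulates configuration and returns itself for chaining.
type builder struct{ c car }

func NewBuilder() Builder { return &builder{} }

func (b *builder) Color(c Color) Builder    { b.c.color = c; return b }
func (b *builder) Wheels(w Wheels) Builder  { b.c.wheels = w; return b }
func (b *builder) TopSpeed(s Speed) Builder { b.c.topSpeed = s; return b }

// Build copies the accumulated state, so the builder can be reused
// without later calls mutating already-built cars.
func (b *builder) Build() Interface {
	built := b.c
	return &built
}

func main() {
	v := NewBuilder().Color("red").Wheels("sports").TopSpeed(50 * MPH).Build()
	_ = v.Drive() // prints: driving at 50 mph
}
```

Because Build returns a copy, reusing the same assembly for several cars (as in the usage above) does not share state between the built products.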
Factory Method Pattern Easy
Factory method creational design pattern allows creating objects without having to specify the exact type of the object that will be created.
Implementation
The example implementation shows how to provide a data store with different backends, such as in-memory and disk storage.
Types
package data
import "io"
type Store interface {
Open(string) (io.ReadWriteCloser, error)
}
Different Implementations
package data
type StorageType int
const (
DiskStorage StorageType = 1 << iota
TempStorage
MemoryStorage
)
func NewStore(t StorageType) Store {
switch t {
case MemoryStorage:
return newMemoryStorage( /*...*/ )
case DiskStorage:
return newDiskStorage( /*...*/ )
default:
return newTempStorage( /*...*/ )
}
}
Usage
With the factory method, the user can specify the type of storage they want.
s, _ := data.NewStore(data.MemoryStorage)
f, _ := s.Open("file")
defer f.Close()
f.Write([]byte("data"))
Map / Filter / Reduce Easy
Go generics (1.18+) make it possible to write type-safe functional
transformation functions — Map, Filter, and Reduce — that work with any
slice type. These composable building blocks eliminate repetitive for loops
for common data transformations.
Implementation
package fn
// Map applies a function to every element of a slice, returning a new slice.
func Map[T any, R any](items []T, fn func(T) R) []R {
result := make([]R, len(items))
for i, item := range items {
result[i] = fn(item)
}
return result
}
// Filter returns a new slice containing only elements that satisfy the predicate.
func Filter[T any](items []T, pred func(T) bool) []T {
var result []T
for _, item := range items {
if pred(item) {
result = append(result, item)
}
}
return result
}
// Reduce collapses a slice into a single value using an accumulator function.
func Reduce[T any, R any](items []T, initial R, fn func(R, T) R) R {
acc := initial
for _, item := range items {
acc = fn(acc, item)
}
return acc
}
Usage
numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
// Double every number.
doubled := fn.Map(numbers, func(n int) int { return n * 2 })
// [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
// Keep only even numbers.
evens := fn.Filter(numbers, func(n int) bool { return n%2 == 0 })
// [2, 4, 6, 8, 10]
// Sum all numbers.
sum := fn.Reduce(numbers, 0, func(acc, n int) int { return acc + n })
// 55
// Compose: sum of squares of even numbers.
result := fn.Reduce(
fn.Map(
fn.Filter(numbers, func(n int) bool { return n%2 == 0 }),
func(n int) int { return n * n },
),
0,
func(acc, n int) int { return acc + n },
)
// 220 (4 + 16 + 36 + 64 + 100)
Rules of Thumb
- These functions create new slices — they don’t mutate the input. This is safe but allocates.
- For performance-critical hot loops, a plain for loop is still faster (no function call overhead per element).
- Map can change types (e.g. []User → []string of names), making it more versatile than a simple loop.
- Prefer readability: if the chain gets deeply nested, break it into intermediate variables.
Object Pool Pattern Medium
The object pool creational design pattern is used to prepare and keep multiple instances according to the demand expectation.
Implementation
package pool
type Pool chan *Object
func New(total int) Pool {
	p := make(Pool, total)
	for i := 0; i < total; i++ {
		p <- new(Object)
	}
	return p
}
Usage
Given below is a simple lifecycle example of an object pool.
p := pool.New(2)
select {
case obj := <-p:
obj.Do( /*...*/ )
p <- obj
default:
// No more objects left — retry later or fail
return
}
Rules of Thumb
- Object pool pattern is useful in cases where object initialization is more expensive than the object maintenance.
- If there are spikes in demand as opposed to a steady demand, the maintenance overhead might outweigh the benefits of an object pool.
- It has positive effects on performance due to objects being initialized beforehand.
Singleton Pattern Easy
Singleton creational design pattern restricts the instantiation of a type to a single object.
Implementation
package singleton
import "sync"
type singleton map[string]string
var (
once sync.Once
instance singleton
)
func New() singleton {
once.Do(func() {
instance = make(singleton)
})
return instance
}
Usage
s := singleton.New()
s["this"] = "that"
s2 := singleton.New()
fmt.Println("This is", s2["this"])
// This is that
Rules of Thumb
- Singleton pattern represents a global state and most of the time reduces testability.
Type-Safe Container Easy
Go generics (1.18+) allow creating type-safe container data structures that
work with any type — no interface{} casts, no code generation. The compiler
enforces type safety at compile time while keeping the implementation reusable.
Implementation
package container
// Stack is a generic LIFO container.
type Stack[T any] struct {
items []T
}
func (s *Stack[T]) Push(item T) {
s.items = append(s.items, item)
}
func (s *Stack[T]) Pop() (T, bool) {
if len(s.items) == 0 {
var zero T
return zero, false
}
item := s.items[len(s.items)-1]
s.items = s.items[:len(s.items)-1]
return item, true
}
func (s *Stack[T]) Peek() (T, bool) {
if len(s.items) == 0 {
var zero T
return zero, false
}
return s.items[len(s.items)-1], true
}
func (s *Stack[T]) Len() int {
return len(s.items)
}
// Queue is a generic FIFO container.
type Queue[T any] struct {
items []T
}
func (q *Queue[T]) Enqueue(item T) {
q.items = append(q.items, item)
}
func (q *Queue[T]) Dequeue() (T, bool) {
if len(q.items) == 0 {
var zero T
return zero, false
}
item := q.items[0]
q.items = q.items[1:]
return item, true
}
func (q *Queue[T]) Len() int {
return len(q.items)
}
Usage
// Type-safe stack of integers — no casting needed.
s := &container.Stack[int]{}
s.Push(1)
s.Push(2)
s.Push(3)
val, _ := s.Pop() // val is int (not interface{}), val == 3
// Type-safe queue of strings.
q := &container.Queue[string]{}
q.Enqueue("first")
q.Enqueue("second")
msg, _ := q.Dequeue() // msg is string, msg == "first"
Rules of Thumb
- Use generics when the logic is truly type-independent (containers, algorithms). Don’t use generics just to avoid writing two similar functions.
- Prefer the any constraint for containers; use specific constraints (comparable, constraints.Ordered) when the logic requires equality or ordering.
- The var zero T pattern returns the zero value of a generic type — this is idiomatic for “not found” returns.
Bridge Pattern Medium
The bridge pattern decouples an abstraction from its implementation so that the two can vary independently. Instead of combining every abstraction variant with every implementation variant (leading to a class explosion), the bridge composes them at runtime through an interface.
Implementation
package bridge
import "fmt"
// Renderer is the implementation interface.
type Renderer interface {
RenderCircle(radius float64) string
}
// VectorRenderer draws shapes as vector graphics.
type VectorRenderer struct{}
func (v *VectorRenderer) RenderCircle(radius float64) string {
return fmt.Sprintf("Drawing circle with radius %.1f as vector", radius)
}
// RasterRenderer draws shapes as pixels.
type RasterRenderer struct{}
func (r *RasterRenderer) RenderCircle(radius float64) string {
return fmt.Sprintf("Drawing circle with radius %.1f as pixels", radius)
}
// Shape is the abstraction that delegates to a Renderer.
type Shape struct {
renderer Renderer
}
// Circle extends Shape with circle-specific data.
type Circle struct {
Shape
radius float64
}
func NewCircle(renderer Renderer, radius float64) *Circle {
return &Circle{
Shape: Shape{renderer: renderer},
radius: radius,
}
}
func (c *Circle) Draw() string {
return c.renderer.RenderCircle(c.radius)
}
func (c *Circle) Resize(factor float64) {
c.radius *= factor
}
Usage
vector := &bridge.VectorRenderer{}
raster := &bridge.RasterRenderer{}
circle := bridge.NewCircle(vector, 5)
fmt.Println(circle.Draw())
// Drawing circle with radius 5.0 as vector
circle = bridge.NewCircle(raster, 5)
fmt.Println(circle.Draw())
// Drawing circle with radius 5.0 as pixels
Rules of Thumb
- Use bridge when you want to avoid a permanent binding between an abstraction and its implementation.
- Bridge is designed up-front to let the abstraction and implementation vary independently; adapter is applied after the fact to make unrelated classes work together.
- The abstraction side and the implementation side can be extended independently through composition.
Composite Pattern Medium
The composite pattern composes objects into tree structures to represent part-whole hierarchies. It allows clients to treat individual objects and compositions of objects uniformly through a common interface.
Implementation
package composite
import "fmt"
// Component is the common interface for leaf and composite nodes.
type Component interface {
Search(keyword string)
}
// File is a leaf node.
type File struct {
Name string
}
func (f *File) Search(keyword string) {
fmt.Printf("Searching for '%s' in file: %s\n", keyword, f.Name)
}
// Folder is a composite node that can contain other components.
type Folder struct {
Name string
Components []Component
}
func (f *Folder) Search(keyword string) {
fmt.Printf("Searching for '%s' in folder: %s\n", keyword, f.Name)
for _, c := range f.Components {
c.Search(keyword)
}
}
func (f *Folder) Add(c Component) {
f.Components = append(f.Components, c)
}
Usage
file1 := &composite.File{Name: "main.go"}
file2 := &composite.File{Name: "utils.go"}
file3 := &composite.File{Name: "readme.md"}
src := &composite.Folder{Name: "src"}
src.Add(file1)
src.Add(file2)
root := &composite.Folder{Name: "project"}
root.Add(src)
root.Add(file3)
// Treats files and folders uniformly.
root.Search("pattern")
// Searching for 'pattern' in folder: project
// Searching for 'pattern' in folder: src
// Searching for 'pattern' in file: main.go
// Searching for 'pattern' in file: utils.go
// Searching for 'pattern' in file: readme.md
Rules of Thumb
- Use composite when you want clients to ignore the difference between compositions of objects and individual objects.
- The composite pattern trades the type-safety of individual leaf operations for uniformity — you can call any operation on any node.
- Composite and decorator have similar structure diagrams but different intents: composite groups objects, decorator adds behavior.
Decorator Pattern Easy
The decorator structural pattern allows extending the functionality of an existing object dynamically without altering its internals.
Decorators provide a flexible way to extend the functionality of objects.
Implementation
LogDecorate decorates a function with the signature func(int) int that
manipulates integers and adds input/output logging capabilities.
type Object func(int) int
func LogDecorate(fn Object) Object {
return func(n int) int {
log.Println("Starting the execution with the integer", n)
result := fn(n)
log.Println("Execution is completed with the result", result)
return result
}
}
Usage
func Double(n int) int {
return n * 2
}
f := LogDecorate(Double)
f(5)
// Starting the execution with the integer 5
// Execution is completed with the result 10
Rules of Thumb
- Unlike Adapter pattern, the object to be decorated is obtained by injection.
- Decorators should not alter the interface of an object.
Facade Pattern Easy
The facade pattern provides a simplified interface to a complex subsystem. It wraps multiple components behind a single, easy-to-use API so that clients don’t need to interact with each subsystem directly.
Implementation
package computer
import "fmt"
// --- Subsystem components ---
type CPU struct{}
func (c *CPU) Freeze() { fmt.Println("CPU: freeze") }
func (c *CPU) Jump(position int64) { fmt.Printf("CPU: jump to 0x%x\n", position) }
func (c *CPU) Execute() { fmt.Println("CPU: executing") }
type Memory struct{}
func (m *Memory) Load(position int64, data []byte) {
fmt.Printf("Memory: loading %d bytes at 0x%x\n", len(data), position)
}
type HardDrive struct{}
func (h *HardDrive) Read(lba int64, size int) []byte {
fmt.Printf("HardDrive: reading %d bytes from sector %d\n", size, lba)
return make([]byte, size)
}
// --- Facade ---
const bootAddress int64 = 0x7C00
const bootSector int64 = 0
const sectorSize int = 512
type Computer struct {
cpu CPU
memory Memory
hardDrive HardDrive
}
func New() *Computer {
return &Computer{}
}
// Start hides the complex boot sequence behind a single method.
func (c *Computer) Start() {
c.cpu.Freeze()
c.memory.Load(bootAddress, c.hardDrive.Read(bootSector, sectorSize))
c.cpu.Jump(bootAddress)
c.cpu.Execute()
}
Usage
// The client interacts with one simple method instead of
// coordinating CPU, Memory, and HardDrive directly.
pc := computer.New()
pc.Start()
// CPU: freeze
// HardDrive: reading 512 bytes from sector 0
// Memory: loading 512 bytes at 0x7c00
// CPU: jump to 0x7c00
// CPU: executing
Rules of Thumb
- Facade does not prevent clients from accessing subsystems directly if they need to — it simply provides a convenient default path.
- Use facade when you want to layer a subsystem: the facade defines the entry point for each level.
- Facade and mediator are similar in that they abstract existing classes. Facade defines a simpler interface, while mediator introduces new behavior by coordinating between components.
Flyweight Pattern Medium
The flyweight pattern minimizes memory usage by sharing as much data as possible with similar objects. It separates object state into intrinsic (shared, immutable) and extrinsic (unique, context-dependent) parts. Shared intrinsic state is stored once and referenced by many objects.
Implementation
package flyweight
import "fmt"
// TreeType holds the intrinsic (shared) state.
type TreeType struct {
Name string
Color string
Texture string
}
func (t *TreeType) Draw(x, y int) string {
return fmt.Sprintf("Drawing '%s' tree (%s) at (%d, %d)", t.Name, t.Color, x, y)
}
// TreeFactory caches and reuses TreeType instances.
type TreeFactory struct {
types map[string]*TreeType
}
func NewTreeFactory() *TreeFactory {
return &TreeFactory{types: make(map[string]*TreeType)}
}
func (f *TreeFactory) GetTreeType(name, color, texture string) *TreeType {
key := name + "_" + color + "_" + texture
if t, ok := f.types[key]; ok {
return t
}
t := &TreeType{Name: name, Color: color, Texture: texture}
f.types[key] = t
return t
}
// Tree holds the extrinsic (unique) state plus a reference to the shared type.
type Tree struct {
X, Y int
TreeType *TreeType
}
func (t *Tree) Draw() string {
return t.TreeType.Draw(t.X, t.Y)
}
Usage
factory := flyweight.NewTreeFactory()
// Thousands of trees share only a few TreeType instances.
trees := []flyweight.Tree{
{X: 1, Y: 2, TreeType: factory.GetTreeType("Oak", "green", "rough")},
{X: 5, Y: 3, TreeType: factory.GetTreeType("Oak", "green", "rough")}, // reuses same TreeType
{X: 8, Y: 1, TreeType: factory.GetTreeType("Pine", "dark green", "smooth")},
{X: 3, Y: 7, TreeType: factory.GetTreeType("Oak", "green", "rough")}, // reuses same TreeType
}
for _, t := range trees {
fmt.Println(t.Draw())
}
// Drawing 'Oak' tree (green) at (1, 2)
// Drawing 'Oak' tree (green) at (5, 3)
// Drawing 'Pine' tree (dark green) at (8, 1)
// Drawing 'Oak' tree (green) at (3, 7)
// Only 2 TreeType objects created despite 4 Tree instances.
Rules of Thumb
- Use flyweight when the application creates a large number of objects that share most of their state.
- The shared (intrinsic) state must be immutable — if it changes, all references see the change.
- Go’s string type is already a flyweight of sorts: identical string literals typically share the same underlying memory.
Proxy Pattern Easy
The proxy pattern provides an object that controls access to another object, intercepting all calls.
Implementation
The proxy could interface to anything: a network connection, a large object in memory, a file, or some other resource that is expensive or impossible to duplicate.
A sketch of the implementation:
// The proxy and the real object must implement the same interface.
type IObject interface {
ObjDo(action string)
}
// Object is the real object the proxy delegates to.
type Object struct {
action string
}
// ObjDo implements IObject and handles all the logic.
func (obj *Object) ObjDo(action string) {
// Action behavior
fmt.Printf("I can, %s", action)
}
// ProxyObject is a proxy that intercepts actions.
type ProxyObject struct {
object *Object
}
// ObjDo implements IObject and intercepts the action before sending it to the real Object.
func (p *ProxyObject) ObjDo(action string) {
if p.object == nil {
p.object = new(Object)
}
if action == "Run" {
p.object.ObjDo(action) // Prints: I can, Run
}
}
Usage
A more complex example: the user creates a “Terminal”, authorizes, and the proxy sends the execution command to the real Terminal object. See proxy/main.go or view it in the Playground.
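Independent of the linked example, the sketch above can also be exercised directly. Here is a self-contained, runnable version showing the lazy initialization and action filtering:

```go
package main

import "fmt"

// IObject is the interface shared by the real object and its proxy.
type IObject interface {
	ObjDo(action string)
}

// Object is the real subject the proxy delegates to.
type Object struct{}

func (o *Object) ObjDo(action string) {
	fmt.Printf("I can, %s\n", action)
}

// ProxyObject creates the real object lazily and filters actions.
type ProxyObject struct {
	object *Object
}

func (p *ProxyObject) ObjDo(action string) {
	if p.object == nil {
		p.object = new(Object) // lazy initialization on first use
	}
	if action == "Run" {
		p.object.ObjDo(action) // only "Run" reaches the real object
	}
}

func main() {
	var obj IObject = &ProxyObject{}
	obj.ObjDo("Run") // prints: I can, Run
	obj.ObjDo("Fly") // intercepted; the real object never sees it
}
```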
Chain of Responsibility Pattern Medium
The chain of responsibility pattern avoids coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Handlers are chained together, and the request is passed along the chain until a handler processes it or the chain ends.
Implementation
package chain
// Request represents the data flowing through the chain.
type Request struct {
Amount float64
}
// Handler defines the interface for a link in the chain.
type Handler interface {
SetNext(Handler) Handler
Handle(Request) string
}
// BaseHandler provides default chaining behavior.
type BaseHandler struct {
next Handler
}
func (b *BaseHandler) SetNext(h Handler) Handler {
b.next = h
return h
}
func (b *BaseHandler) HandleNext(r Request) string {
if b.next != nil {
return b.next.Handle(r)
}
return "no handler approved the request"
}
// --- Concrete handlers ---
type Manager struct{ BaseHandler }
func (m *Manager) Handle(r Request) string {
if r.Amount < 1000 {
return "Manager approved"
}
return m.HandleNext(r)
}
type Director struct{ BaseHandler }
func (d *Director) Handle(r Request) string {
if r.Amount < 5000 {
return "Director approved"
}
return d.HandleNext(r)
}
type VP struct{ BaseHandler }
func (v *VP) Handle(r Request) string {
if r.Amount < 10000 {
return "VP approved"
}
return v.HandleNext(r)
}
Usage
manager := &chain.Manager{}
director := &chain.Director{}
vp := &chain.VP{}
manager.SetNext(director).SetNext(vp)
fmt.Println(manager.Handle(chain.Request{Amount: 500}))
// Manager approved
fmt.Println(manager.Handle(chain.Request{Amount: 3000}))
// Director approved
fmt.Println(manager.Handle(chain.Request{Amount: 8000}))
// VP approved
fmt.Println(manager.Handle(chain.Request{Amount: 50000}))
// no handler approved the request
Rules of Thumb
- Use chain of responsibility when more than one object may handle a request, and the handler is determined at runtime.
- The chain can be composed dynamically at runtime, making it easy to add, remove, or reorder handlers.
- HTTP middleware stacks (e.g. in net/http) are a common real-world example of this pattern.
Command Pattern Easy
The command pattern encapsulates a request as an object, allowing you to parameterize clients with different requests, queue or log requests, and support undoable operations. It decouples the invoker (who triggers the action) from the receiver (who performs it).
Implementation
package command
// Command is the interface all commands implement.
type Command interface {
Execute() string
Undo() string
}
// Receiver is the object that performs the actual work.
type Light struct {
IsOn bool
}
// --- Concrete commands ---
type TurnOnCommand struct {
Light *Light
}
func (c *TurnOnCommand) Execute() string {
c.Light.IsOn = true
return "light turned on"
}
func (c *TurnOnCommand) Undo() string {
c.Light.IsOn = false
return "light turned off (undo)"
}
type TurnOffCommand struct {
Light *Light
}
func (c *TurnOffCommand) Execute() string {
c.Light.IsOn = false
return "light turned off"
}
func (c *TurnOffCommand) Undo() string {
c.Light.IsOn = true
return "light turned on (undo)"
}
// Invoker stores and executes commands.
type RemoteControl struct {
history []Command
}
func (r *RemoteControl) Press(cmd Command) string {
r.history = append(r.history, cmd)
return cmd.Execute()
}
func (r *RemoteControl) UndoLast() string {
if len(r.history) == 0 {
return "nothing to undo"
}
last := r.history[len(r.history)-1]
r.history = r.history[:len(r.history)-1]
return last.Undo()
}
Usage
light := &command.Light{}
remote := &command.RemoteControl{}
on := &command.TurnOnCommand{Light: light}
off := &command.TurnOffCommand{Light: light}
fmt.Println(remote.Press(on)) // light turned on
fmt.Println(remote.Press(off)) // light turned off
fmt.Println(remote.UndoLast()) // light turned on (undo)
Rules of Thumb
- Use command when you need undo/redo, request queuing, or transaction logging.
- Commands can be serialized and sent over the network for distributed execution.
- In Go, simple commands can be represented as func() closures rather than full interface implementations.
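The closure-based variant mentioned in the last point can be as small as this sketch: the invoker holds plain func() values and knows nothing about receivers.

```go
package main

import "fmt"

// run is the invoker: it executes queued commands in order and
// knows nothing about what each closure does.
func run(queue []func()) {
	for _, cmd := range queue {
		cmd()
	}
}

func main() {
	count := 0
	increment := func() { count++ } // the closure captures its receiver
	run([]func(){increment, increment})
	fmt.Println("count is", count) // prints: count is 2
}
```

The trade-off: closures cannot easily support Undo or serialization, so the full interface version remains the better fit for those cases.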
Mediator Pattern Medium
The mediator pattern defines an object that encapsulates how a set of objects interact. It promotes loose coupling by preventing objects from referring to each other directly, forcing them to communicate through the mediator instead.
Implementation
package mediator
import "fmt"
// Mediator defines the communication interface.
type Mediator interface {
	Notify(sender any, event string)
}
// Component is the base for all participants.
type Component struct {
	mediator Mediator
}
func (c *Component) SetMediator(m Mediator) {
	c.mediator = m
}
// --- Concrete components ---
type AuthService struct{ Component }
func (a *AuthService) Login(user string) string {
	msg := fmt.Sprintf("%s logged in", user)
	a.mediator.Notify(a, "login")
	return msg
}
type Logger struct{ Component }
func (l *Logger) Log(message string) string {
	return fmt.Sprintf("LOG: %s", message)
}
type Notifier struct{ Component }
func (n *Notifier) Send(message string) string {
	return fmt.Sprintf("NOTIFY: %s", message)
}
// --- Concrete mediator ---
type AppMediator struct {
	Auth     *AuthService
	logger   *Logger
	notifier *Notifier
}
func NewAppMediator() *AppMediator {
	m := &AppMediator{
		Auth:     &AuthService{},
		logger:   &Logger{},
		notifier: &Notifier{},
	}
	m.Auth.SetMediator(m)
	m.logger.SetMediator(m)
	m.notifier.SetMediator(m)
	return m
}
func (m *AppMediator) Notify(sender any, event string) {
	switch event {
	case "login":
		m.logger.Log("user login event")
		m.notifier.Send("welcome back!")
	}
}
Usage
app := mediator.NewAppMediator()
fmt.Println(app.Auth.Login("alice"))
// alice logged in
// (the mediator also triggers the logger and notifier behind the scenes)
Rules of Thumb
- Use mediator when the communication logic between objects is complex and you want to centralize it in one place.
- Mediator trades complexity in individual components for complexity in the mediator itself — avoid creating a “god object.”
- In Go, channels can serve as a lightweight mediator between goroutines.
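The channel idea in the last point can be sketched as follows: the goroutines never reference one another, they only know the mediator channel.

```go
package main

import "fmt"

func main() {
	// The channel is the mediator: senders and the receiver only
	// know the channel, never each other.
	events := make(chan string)
	done := make(chan struct{})

	go func() { events <- "login" }()
	go func() { events <- "logout" }()

	go func() {
		for i := 0; i < 2; i++ {
			fmt.Println("handled:", <-events)
		}
		close(done)
	}()
	<-done
}
```

The event order is nondeterministic here; what matters is that adding a new sender requires no change to any existing goroutine.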
Memento Pattern Medium
The memento pattern captures and externalizes an object’s internal state so that the object can be restored to this state later, without violating encapsulation. It is commonly used for implementing undo functionality.
Implementation
package memento
// Memento stores a snapshot of the editor's state.
type Memento struct {
content string
}
// Editor is the originator whose state we want to save and restore.
type Editor struct {
content string
}
func (e *Editor) Type(text string) {
e.content += text
}
func (e *Editor) Content() string {
return e.content
}
func (e *Editor) Save() *Memento {
return &Memento{content: e.content}
}
func (e *Editor) Restore(m *Memento) {
e.content = m.content
}
// History is the caretaker that stores mementos.
type History struct {
snapshots []*Memento
}
func (h *History) Push(m *Memento) {
h.snapshots = append(h.snapshots, m)
}
func (h *History) Pop() *Memento {
if len(h.snapshots) == 0 {
return nil
}
last := h.snapshots[len(h.snapshots)-1]
h.snapshots = h.snapshots[:len(h.snapshots)-1]
return last
}
Usage
editor := &memento.Editor{}
history := &memento.History{}
editor.Type("Hello, ")
history.Push(editor.Save())
editor.Type("World!")
history.Push(editor.Save())
editor.Type(" Extra text.")
fmt.Println(editor.Content())
// Hello, World! Extra text.
// Undo last change
editor.Restore(history.Pop())
fmt.Println(editor.Content())
// Hello, World!
// Undo again
editor.Restore(history.Pop())
fmt.Println(editor.Content())
// Hello,
Rules of Thumb
- The caretaker (history) should never inspect or modify the memento’s contents — it is an opaque token.
- Use memento when a direct interface to obtain the object’s state would expose implementation details.
- Be mindful of memory usage: storing frequent snapshots of large objects can be costly. Consider diffing or compressing snapshots if needed.
Observer Pattern Easy
The observer pattern allows a type instance to “publish” events to other type instances (“observers”) who wish to be updated when a particular event occurs.
Implementation
In long-running applications, such as web servers, instances can keep a collection of observers that will receive notifications of triggered events.
Implementations vary, but interfaces can be used to make standard observers and notifiers:
type (
// Event defines an indication of a point-in-time occurrence.
Event struct {
// Data in this case is a simple int, but the actual
// implementation would depend on the application.
Data int64
}
// Observer defines a standard interface for instances that wish to listen
// for the occurrence of a specific event.
Observer interface {
// OnNotify allows an event to be "published" to interface implementations.
// In the "real world", error handling would likely be implemented.
OnNotify(Event)
}
// Notifier is the instance being observed. Publisher is perhaps another decent
// name, but naming things is hard.
Notifier interface {
// Register allows an instance to register itself to listen/observe
// events.
Register(Observer)
// Deregister allows an instance to remove itself from the collection
// of observers/listeners.
Deregister(Observer)
// Notify publishes new events to listeners. The method is not
// absolutely necessary, as each implementation could define this itself
// without losing functionality.
Notify(Event)
}
)
Usage
For usage, see observer/main.go or view in the Playground.
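For readers without access to the linked file, here is one minimal, self-contained implementation of the interfaces above (eventNotifier and printObserver are names invented for this sketch):

```go
package main

import "fmt"

type Event struct{ Data int64 }

type Observer interface{ OnNotify(Event) }

// eventNotifier stores observers in a set keyed by identity.
type eventNotifier struct {
	observers map[Observer]struct{}
}

func newEventNotifier() *eventNotifier {
	return &eventNotifier{observers: make(map[Observer]struct{})}
}

func (n *eventNotifier) Register(o Observer)   { n.observers[o] = struct{}{} }
func (n *eventNotifier) Deregister(o Observer) { delete(n.observers, o) }

// Notify publishes the event to every registered observer.
func (n *eventNotifier) Notify(e Event) {
	for o := range n.observers {
		o.OnNotify(e)
	}
}

// printObserver just prints what it receives.
type printObserver struct{ id int }

func (p *printObserver) OnNotify(e Event) {
	fmt.Printf("observer %d received event %d\n", p.id, e.Data)
}

func main() {
	n := newEventNotifier()
	n.Register(&printObserver{id: 1})
	n.Register(&printObserver{id: 2})
	n.Notify(Event{Data: 42}) // both observers print the event
}
```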
Registry Pattern Easy
The registry pattern provides a well-known object that other objects can use to find common objects and services. It acts as a central lookup table where implementations are registered by name or type and retrieved when needed.
Implementation
package registry
import (
"fmt"
"sync"
)
// Service is the common interface for registered services.
type Service interface {
Name() string
Execute() string
}
// Registry is a thread-safe service locator.
type Registry struct {
mu sync.RWMutex
services map[string]Service
}
func New() *Registry {
return &Registry{
services: make(map[string]Service),
}
}
func (r *Registry) Register(svc Service) {
r.mu.Lock()
defer r.mu.Unlock()
r.services[svc.Name()] = svc
}
func (r *Registry) Lookup(name string) (Service, error) {
r.mu.RLock()
defer r.mu.RUnlock()
svc, ok := r.services[name]
if !ok {
return nil, fmt.Errorf("service %q not found", name)
}
return svc, nil
}
func (r *Registry) Deregister(name string) {
r.mu.Lock()
defer r.mu.Unlock()
delete(r.services, name)
}
Usage
type EmailService struct{}
func (e *EmailService) Name() string { return "email" }
func (e *EmailService) Execute() string { return "sending email" }
type SMSService struct{}
func (s *SMSService) Name() string { return "sms" }
func (s *SMSService) Execute() string { return "sending sms" }
r := registry.New()
r.Register(&EmailService{})
r.Register(&SMSService{})
svc, err := r.Lookup("email")
if err == nil {
fmt.Println(svc.Execute()) // sending email
}
svc, err = r.Lookup("sms")
if err == nil {
fmt.Println(svc.Execute()) // sending sms
}
Rules of Thumb
- Registry provides a global access point — use it sparingly to avoid hidden dependencies that make testing difficult.
- Always make the registry thread-safe if it will be accessed from multiple goroutines.
- Consider using Go’s init() functions to self-register implementations at startup.
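The init()-based self-registration mentioned above usually pairs a package-level registry with a blank import. A compressed single-file sketch (defaultServices and register are assumptions; in a real project they would live in the registry package):

```go
package main

import "fmt"

// defaultServices stands in for a package-level default registry.
var defaultServices = map[string]func() string{}

// register would normally be an exported function of the registry package.
func register(name string, fn func() string) {
	defaultServices[name] = fn
}

// In a real project this init would live in the email package, and the
// main package would pull it in with a blank import:
//
//	import _ "example.com/app/email"
func init() {
	register("email", func() string { return "sending email" })
}

func main() {
	if fn, ok := defaultServices["email"]; ok {
		fmt.Println(fn()) // prints: sending email
	}
}
```

This is the same mechanism database/sql drivers use to register themselves.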
State Pattern Medium
The state pattern allows an object to alter its behavior when its internal state changes. The object appears to change its type. Each state is represented as a separate type implementing a common interface, and the context delegates behavior to the current state object.
Implementation
package vending
import "fmt"
// State defines behavior for a particular state of the vending machine.
type State interface {
InsertCoin(v *Machine)
SelectItem(v *Machine)
Dispense(v *Machine)
}
// Machine is the context that holds the current state.
type Machine struct {
current State
idle State
coined State
sold State
}
func New() *Machine {
m := &Machine{}
m.idle = &IdleState{}
m.coined = &CoinedState{}
m.sold = &SoldState{}
m.current = m.idle
return m
}
func (m *Machine) SetState(s State) { m.current = s }
func (m *Machine) InsertCoin() { m.current.InsertCoin(m) }
func (m *Machine) SelectItem() { m.current.SelectItem(m) }
func (m *Machine) Dispense() { m.current.Dispense(m) }
// --- Concrete states ---
type IdleState struct{}
func (s *IdleState) InsertCoin(v *Machine) {
fmt.Println("Coin inserted")
v.SetState(v.coined)
}
func (s *IdleState) SelectItem(v *Machine) { fmt.Println("Insert coin first") }
func (s *IdleState) Dispense(v *Machine) { fmt.Println("Insert coin first") }
type CoinedState struct{}
func (s *CoinedState) InsertCoin(v *Machine) { fmt.Println("Coin already inserted") }
func (s *CoinedState) SelectItem(v *Machine) {
fmt.Println("Item selected")
v.SetState(v.sold)
}
func (s *CoinedState) Dispense(v *Machine) { fmt.Println("Select an item first") }
type SoldState struct{}
func (s *SoldState) InsertCoin(v *Machine) { fmt.Println("Wait, dispensing item") }
func (s *SoldState) SelectItem(v *Machine) { fmt.Println("Wait, dispensing item") }
func (s *SoldState) Dispense(v *Machine) {
fmt.Println("Item dispensed")
v.SetState(v.idle)
}
Usage
m := vending.New()
m.SelectItem() // Insert coin first
m.InsertCoin() // Coin inserted
m.InsertCoin() // Coin already inserted
m.SelectItem() // Item selected
m.Dispense() // Item dispensed
m.Dispense() // Insert coin first
Rules of Thumb
- Use state when an object’s behavior depends on its state and it must change behavior at runtime based on that state.
- State eliminates large conditional statements (switch/if-else chains) that select behavior based on the current state.
- State objects can be shared across contexts if they carry no instance-specific data (they become flyweights).
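Go also admits a lighter-weight variant: states as function values, where each state handles an event and returns the next state. The sketch below re-expresses the vending machine this way; it is a standalone alternative, not part of the vending package above.

```go
package main

import "fmt"

// event is an input to the machine.
type event int

const (
	insertCoin event = iota
	selectItem
	dispense
)

// stateFn handles one event and returns the next state.
type stateFn func(e event) stateFn

func idle(e event) stateFn {
	if e == insertCoin {
		fmt.Println("Coin inserted")
		return coined
	}
	fmt.Println("Insert coin first")
	return idle
}

func coined(e event) stateFn {
	switch e {
	case insertCoin:
		fmt.Println("Coin already inserted")
		return coined
	case selectItem:
		fmt.Println("Item selected")
		return sold
	}
	fmt.Println("Select an item first")
	return coined
}

func sold(e event) stateFn {
	if e == dispense {
		fmt.Println("Item dispensed")
		return idle
	}
	fmt.Println("Wait, dispensing item")
	return sold
}

func main() {
	state := stateFn(idle)
	for _, e := range []event{selectItem, insertCoin, selectItem, dispense} {
		state = state(e)
	}
}
```

This trades the explicit state types for plain functions; it suits small machines, while the interface form above scales better when states carry data.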
Strategy Pattern Easy
The strategy behavioral design pattern enables an algorithm’s behavior to be selected at runtime.
It defines a family of algorithms, encapsulates each one, and makes them interchangeable.
Implementation
Implementation of an interchangeable operator object that operates on integers.
type Operator interface {
Apply(int, int) int
}
type Operation struct {
Operator Operator
}
func (o *Operation) Operate(leftValue, rightValue int) int {
return o.Operator.Apply(leftValue, rightValue)
}
Usage
Addition Operator
type Addition struct{}
func (Addition) Apply(lval, rval int) int {
return lval + rval
}
add := Operation{Addition{}}
add.Operate(3, 5) // 8
Multiplication Operator
type Multiplication struct{}
func (Multiplication) Apply(lval, rval int) int {
return lval * rval
}
mult := Operation{Multiplication{}}
mult.Operate(3, 5) // 15
Rules of Thumb
- Strategy pattern is similar to Template pattern except in its granularity.
- Strategy pattern lets you change the guts of an object. Decorator pattern lets you change the skin.
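Because Go functions are first-class values, a strategy often needs no struct at all: the strategy is just a function type. A minimal sketch of this variant (the Operator and Operate names mirror the example above but this is standalone code):

```go
package main

import "fmt"

// Operator is the strategy signature; any func(int, int) int qualifies.
type Operator func(int, int) int

// Operate applies whichever strategy it is given.
func Operate(op Operator, a, b int) int {
	return op(a, b)
}

func main() {
	add := Operator(func(a, b int) int { return a + b })
	mul := Operator(func(a, b int) int { return a * b })
	fmt.Println(Operate(add, 3, 5)) // 8
	fmt.Println(Operate(mul, 3, 5)) // 15
}
```

Prefer the function form for stateless strategies; keep the interface form when a strategy needs configuration or state of its own.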
Template Pattern Easy
The template pattern defines the skeleton of an algorithm in a base operation, deferring some steps to subclasses. It lets subclasses redefine certain steps of an algorithm without changing the algorithm’s structure.
In Go, since there is no classical inheritance, we use an interface for the varying steps and a function for the fixed skeleton.
Implementation
package template
import "fmt"
// DataMiner defines the steps that vary between implementations.
type DataMiner interface {
Open(path string) error
Extract() ([]string, error)
Parse(raw []string) []Record
Close()
}
// Record represents a parsed data entry.
type Record struct {
Fields map[string]string
}
// Mine is the template method. It defines the fixed algorithm skeleton and
// delegates the varying steps to the DataMiner interface.
func Mine(miner DataMiner, path string) ([]Record, error) {
if err := miner.Open(path); err != nil {
return nil, fmt.Errorf("open: %w", err)
}
defer miner.Close()
raw, err := miner.Extract()
if err != nil {
return nil, fmt.Errorf("extract: %w", err)
}
records := miner.Parse(raw)
return records, nil
}
A concrete implementation for CSV files:
package template
import (
"encoding/csv"
"os"
"strings"
)
type CSVMiner struct {
file *os.File
}
func (c *CSVMiner) Open(path string) error {
f, err := os.Open(path)
if err != nil {
return err
}
c.file = f
return nil
}
func (c *CSVMiner) Extract() ([]string, error) {
reader := csv.NewReader(c.file)
records, err := reader.ReadAll()
if err != nil {
return nil, err
}
var lines []string
for _, row := range records {
lines = append(lines, strings.Join(row, ","))
}
return lines, nil
}
func (c *CSVMiner) Parse(raw []string) []Record {
var records []Record
for _, line := range raw {
fields := strings.Split(line, ",")
r := Record{Fields: map[string]string{"raw": strings.Join(fields, " | ")}}
records = append(records, r)
}
return records
}
func (c *CSVMiner) Close() {
if c.file != nil {
c.file.Close()
}
}
Usage
miner := &template.CSVMiner{}
records, err := template.Mine(miner, "data.csv")
if err != nil {
log.Fatal(err)
}
for _, r := range records {
fmt.Println(r.Fields["raw"])
}
Rules of Thumb
- Use template when you have several classes that contain nearly identical algorithms with some minor differences.
- In Go, the template function accepts an interface rather than relying on inheritance — this is idiomatic composition over inheritance.
- Hook methods (optional steps with default behavior) can be expressed by embedding a struct that provides default implementations and overriding only the hooks you need.
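The embedding-based hook technique can be sketched like this. The Hooks, NoopHooks, and Run names are illustrative, not part of the template package above: a no-op base type supplies defaults, and concrete types override only the steps they care about.

```go
package main

import "fmt"

// Hooks lists the optional steps of an algorithm.
type Hooks interface {
	BeforeStep()
	AfterStep()
}

// NoopHooks provides default (empty) implementations. Embedding it means a
// type only has to override the hooks it cares about.
type NoopHooks struct{}

func (NoopHooks) BeforeStep() {}
func (NoopHooks) AfterStep()  {}

// Run is the template skeleton: fixed structure, optional hooks.
func Run(h Hooks) {
	h.BeforeStep()
	fmt.Println("main step")
	h.AfterStep()
}

// LoggingHooks overrides only BeforeStep; AfterStep falls through to the
// embedded default.
type LoggingHooks struct {
	NoopHooks
}

func (LoggingHooks) BeforeStep() { fmt.Println("logging: about to run") }

func main() {
	Run(LoggingHooks{})
}
```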
Visitor Pattern Hard
The visitor pattern separates an algorithm from the object structure on which it operates. It allows you to add new operations to existing object structures without modifying the structures themselves.
Implementation
package visitor
import "fmt"
// Shape is the element interface that accepts a visitor.
type Shape interface {
Accept(Visitor) string
}
// Visitor defines operations for each concrete element type.
type Visitor interface {
VisitCircle(*Circle) string
VisitRectangle(*Rectangle) string
}
// --- Concrete elements ---
type Circle struct {
Radius float64
}
func (c *Circle) Accept(v Visitor) string {
return v.VisitCircle(c)
}
type Rectangle struct {
Width, Height float64
}
func (r *Rectangle) Accept(v Visitor) string {
return v.VisitRectangle(r)
}
// --- Concrete visitors ---
type AreaCalculator struct{}
func (a *AreaCalculator) VisitCircle(c *Circle) string {
area := 3.14159 * c.Radius * c.Radius
return fmt.Sprintf("Circle area: %.2f", area)
}
func (a *AreaCalculator) VisitRectangle(r *Rectangle) string {
area := r.Width * r.Height
return fmt.Sprintf("Rectangle area: %.2f", area)
}
type PerimeterCalculator struct{}
func (p *PerimeterCalculator) VisitCircle(c *Circle) string {
perim := 2 * 3.14159 * c.Radius
return fmt.Sprintf("Circle perimeter: %.2f", perim)
}
func (p *PerimeterCalculator) VisitRectangle(r *Rectangle) string {
perim := 2 * (r.Width + r.Height)
return fmt.Sprintf("Rectangle perimeter: %.2f", perim)
}
Usage
shapes := []visitor.Shape{
&visitor.Circle{Radius: 5},
&visitor.Rectangle{Width: 3, Height: 4},
}
area := &visitor.AreaCalculator{}
perim := &visitor.PerimeterCalculator{}
for _, s := range shapes {
fmt.Println(s.Accept(area))
fmt.Println(s.Accept(perim))
}
// Circle area: 78.54
// Circle perimeter: 31.42
// Rectangle area: 12.00
// Rectangle perimeter: 14.00
Rules of Thumb
- Use visitor when you need to perform many unrelated operations on an object structure and you don’t want to pollute the element classes with these operations.
- Adding a new element type requires updating every visitor — the pattern works best when the element hierarchy is stable.
- Visitor can accumulate state as it traverses a structure, making it useful for compilers, serializers, and report generators.
Condition Variable Pattern Medium
A condition variable allows goroutines to wait for a specific condition to become true. Rather than busy-looping and repeatedly checking, a goroutine suspends itself on a condition variable and is woken up when another goroutine signals that the condition may have changed.
Go provides sync.Cond in the standard library, built on top of a sync.Mutex
or sync.RWMutex.
Implementation
package queue
import "sync"
// BlockingQueue is a thread-safe queue where consumers wait until an item is
// available. It uses a condition variable to avoid busy-waiting.
type BlockingQueue struct {
mu sync.Mutex
cond *sync.Cond
items []interface{}
}
func New() *BlockingQueue {
q := &BlockingQueue{}
q.cond = sync.NewCond(&q.mu)
return q
}
// Enqueue adds an item and signals one waiting consumer.
func (q *BlockingQueue) Enqueue(item interface{}) {
q.mu.Lock()
defer q.mu.Unlock()
q.items = append(q.items, item)
q.cond.Signal()
}
// Dequeue blocks until an item is available, then removes and returns it.
func (q *BlockingQueue) Dequeue() interface{} {
q.mu.Lock()
defer q.mu.Unlock()
// Wait must be called inside a loop because spurious wakeups can occur.
for len(q.items) == 0 {
q.cond.Wait()
}
item := q.items[0]
q.items = q.items[1:]
return item
}
Usage
q := queue.New()
// Producer
go func() {
for i := 0; i < 5; i++ {
q.Enqueue(i)
fmt.Printf("produced: %d\n", i)
}
}()
// Consumer — blocks until items arrive
for i := 0; i < 5; i++ {
item := q.Dequeue()
fmt.Printf("consumed: %v\n", item)
}
Rules of Thumb
- Always call cond.Wait() inside a for loop that checks the actual condition, not an if — spurious wakeups are possible.
- The goroutine must hold the associated lock before calling Wait(). Wait atomically releases the lock and suspends the goroutine; upon waking it re-acquires the lock.
- Use Signal() to wake one waiting goroutine, Broadcast() to wake all of them.
- In most Go code, channels are the preferred synchronization mechanism. Use sync.Cond when you need to wake multiple waiters on the same condition or when channels would be awkward (e.g. waiting on a size threshold).
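The size-threshold case can be sketched with Broadcast: every waiter re-checks its own condition in a loop after waking. The Gauge type here is illustrative, not part of the queue package above.

```go
package main

import (
	"fmt"
	"sync"
)

// Gauge lets goroutines wait until a counter reaches a threshold.
type Gauge struct {
	mu    sync.Mutex
	cond  *sync.Cond
	value int
}

func NewGauge() *Gauge {
	g := &Gauge{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// Add increments the value and wakes all waiters so each can re-check
// its own threshold.
func (g *Gauge) Add(n int) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.value += n
	g.cond.Broadcast()
}

// WaitFor blocks until the value reaches at least want.
func (g *Gauge) WaitFor(want int) int {
	g.mu.Lock()
	defer g.mu.Unlock()
	for g.value < want { // loop: re-check after every wakeup
		g.cond.Wait()
	}
	return g.value
}

func main() {
	g := NewGauge()
	var wg sync.WaitGroup
	for _, want := range []int{3, 5} {
		wg.Add(1)
		go func(want int) {
			defer wg.Done()
			fmt.Printf("reached %d (value=%d)\n", want, g.WaitFor(want))
		}(want)
	}
	for i := 0; i < 5; i++ {
		g.Add(1)
	}
	wg.Wait()
}
```

A channel cannot express "wake everyone whose threshold is now met" as directly, which is why Broadcast fits here.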
Lock/Mutex Pattern Easy
A mutex (mutual exclusion) enforces exclusive access to a shared resource. Only one goroutine can hold the lock at any time — all others block until the lock is released. This prevents data races when multiple goroutines read and write shared state concurrently.
Go provides sync.Mutex in the standard library.
Implementation
package counter
import "sync"
// Counter is a thread-safe counter protected by a mutex.
type Counter struct {
mu sync.Mutex
value int
}
func (c *Counter) Increment() {
c.mu.Lock()
defer c.mu.Unlock()
c.value++
}
func (c *Counter) Decrement() {
c.mu.Lock()
defer c.mu.Unlock()
c.value--
}
func (c *Counter) Value() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.value
}
Usage
c := &counter.Counter{}
var wg sync.WaitGroup
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
c.Increment()
}()
}
wg.Wait()
fmt.Println(c.Value()) // 1000
Rules of Thumb
- Always use defer mu.Unlock() immediately after Lock() to guarantee the lock is released even if the function panics.
- Keep the critical section (code between Lock and Unlock) as short as possible to minimize contention.
- Never copy a sync.Mutex after first use — embed it in a struct and pass the struct by pointer.
- If reads vastly outnumber writes, consider sync.RWMutex instead (see Read-Write Lock).
Monitor Pattern Medium
A monitor combines a mutex with one or more condition variables to protect shared state while allowing goroutines to wait for specific conditions. The mutex guarantees exclusive access, while the condition variables coordinate goroutines that need to wait for or signal state changes.
In Go, a monitor is composed from sync.Mutex (or sync.RWMutex) and
sync.Cond.
Implementation
package monitor
import "sync"
// BoundedBuffer is a classic monitor example: a fixed-size buffer where
// producers block when full and consumers block when empty.
type BoundedBuffer struct {
mu sync.Mutex
notFull *sync.Cond
notEmpty *sync.Cond
buf []interface{}
capacity int
}
func New(capacity int) *BoundedBuffer {
b := &BoundedBuffer{
buf: make([]interface{}, 0, capacity),
capacity: capacity,
}
b.notFull = sync.NewCond(&b.mu)
b.notEmpty = sync.NewCond(&b.mu)
return b
}
// Put adds an item, blocking if the buffer is full.
func (b *BoundedBuffer) Put(item interface{}) {
b.mu.Lock()
defer b.mu.Unlock()
for len(b.buf) == b.capacity {
b.notFull.Wait()
}
b.buf = append(b.buf, item)
b.notEmpty.Signal()
}
// Get removes and returns an item, blocking if the buffer is empty.
func (b *BoundedBuffer) Get() interface{} {
b.mu.Lock()
defer b.mu.Unlock()
for len(b.buf) == 0 {
b.notEmpty.Wait()
}
item := b.buf[0]
b.buf = b.buf[1:]
b.notFull.Signal()
return item
}
Usage
buf := monitor.New(5)
// Producer goroutines
for i := 0; i < 3; i++ {
go func(id int) {
for j := 0; j < 10; j++ {
buf.Put(fmt.Sprintf("producer-%d: item-%d", id, j))
}
}(i)
}
// Consumer goroutines
for i := 0; i < 3; i++ {
go func(id int) {
for j := 0; j < 10; j++ {
item := buf.Get()
fmt.Printf("consumer-%d got %v\n", id, item)
}
}(i)
}
Rules of Thumb
- A monitor is a higher-level abstraction than a raw mutex + condition variable — prefer it when you have multiple conditions on the same shared state (e.g. “not full” and “not empty”).
- Always check conditions in a for loop, not an if, because of spurious wakeups.
- In idiomatic Go, a buffered channel (make(chan T, N)) already implements a bounded-buffer monitor. Use explicit monitors when you need more complex waiting conditions that channels cannot express.
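For comparison, the channel form of the same bounded buffer. For this simple put/get case it behaves equivalently: sends block when the buffer is full, receives block when it is empty.

```go
package main

import "fmt"

func main() {
	// A buffered channel of capacity 5 is already a bounded buffer.
	buf := make(chan string, 5)

	go func() {
		for j := 0; j < 10; j++ {
			buf <- fmt.Sprintf("item-%d", j) // blocks while the buffer is full
		}
		close(buf)
	}()

	for item := range buf { // blocks while the buffer is empty
		fmt.Println("got", item)
	}
}
```

The explicit monitor earns its keep only when the waiting condition is richer than "space available" or "item available".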
Read-Write Lock Pattern Medium
A read-write lock allows multiple goroutines to hold the lock simultaneously for read operations, but only one goroutine can hold it for a write operation. This improves throughput when reads are far more frequent than writes.
Go provides sync.RWMutex in the standard library.
Implementation
package cache
import "sync"
// Cache is a thread-safe key-value store that uses a read-write lock to allow
// concurrent reads while serializing writes.
type Cache struct {
mu sync.RWMutex
store map[string]string
}
func New() *Cache {
return &Cache{
store: make(map[string]string),
}
}
// Get reads a value. Multiple goroutines can call Get concurrently.
func (c *Cache) Get(key string) (string, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
val, ok := c.store[key]
return val, ok
}
// Set writes a value. Only one goroutine can call Set at a time, and it
// blocks all readers until the write completes.
func (c *Cache) Set(key, value string) {
c.mu.Lock()
defer c.mu.Unlock()
c.store[key] = value
}
// Delete removes a key.
func (c *Cache) Delete(key string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.store, key)
}
// Len returns the number of items in the cache.
func (c *Cache) Len() int {
c.mu.RLock()
defer c.mu.RUnlock()
return len(c.store)
}
Usage
c := cache.New()
// Writers
go c.Set("language", "Go")
go c.Set("pattern", "RWMutex")
// Concurrent readers — these do not block each other
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func() {
defer wg.Done()
if val, ok := c.Get("language"); ok {
fmt.Println(val)
}
}()
}
wg.Wait()
Rules of Thumb
- Use RWMutex only when reads significantly outnumber writes. If the ratio is roughly equal, a plain Mutex has less overhead.
- Never call Lock() (write) while already holding RLock() (read) in the same goroutine — this causes a deadlock.
- For simple key-value caches, sync.Map may be more convenient, but RWMutex gives you more control and is generally faster for known access patterns.
Semaphore Pattern Medium
A semaphore is a synchronization primitive that limits concurrent access to a resource to a fixed number of holders (“tickets”) at a time.
Implementation
package semaphore
import (
"errors"
"time"
)
var (
ErrNoTickets = errors.New("semaphore: could not acquire semaphore")
ErrIllegalRelease = errors.New("semaphore: can't release the semaphore without acquiring it first")
)
// Interface contains the behavior of a semaphore that can be acquired and/or released.
type Interface interface {
Acquire() error
Release() error
}
type implementation struct {
sem chan struct{}
timeout time.Duration
}
func (s *implementation) Acquire() error {
select {
case s.sem <- struct{}{}:
return nil
case <-time.After(s.timeout):
return ErrNoTickets
}
}
func (s *implementation) Release() error {
select {
case <-s.sem:
return nil
case <-time.After(s.timeout):
return ErrIllegalRelease
}
}
func New(tickets int, timeout time.Duration) Interface {
return &implementation{
sem: make(chan struct{}, tickets),
timeout: timeout,
}
}
Usage
Semaphore with Timeouts
tickets, timeout := 1, 3*time.Second
s := semaphore.New(tickets, timeout)
if err := s.Acquire(); err != nil {
panic(err)
}
// Do important work
if err := s.Release(); err != nil {
panic(err)
}
Semaphore without Timeouts (Non-Blocking)
tickets, timeout := 0, time.Duration(0)
s := semaphore.New(tickets, timeout)
if err := s.Acquire(); err != nil {
if err != semaphore.ErrNoTickets {
panic(err)
}
// No tickets left, can't work :(
os.Exit(1)
}
N-Barrier Pattern Medium
The barrier pattern prevents a group of N goroutines from proceeding until all of them have reached the barrier point. Once the last goroutine arrives, all are released simultaneously. This is useful for phased computations where each phase must complete before the next begins.
Implementation
package barrier
import "sync"
// Barrier blocks until N goroutines have called Wait.
type Barrier struct {
n int
count int
mu sync.Mutex
cond *sync.Cond
}
func New(n int) *Barrier {
b := &Barrier{n: n}
b.cond = sync.NewCond(&b.mu)
return b
}
// Wait blocks the calling goroutine until all N goroutines have called Wait.
// Once all have arrived, every goroutine is released and the barrier resets
// for reuse.
func (b *Barrier) Wait() {
b.mu.Lock()
defer b.mu.Unlock()
b.count++
if b.count == b.n {
// Last goroutine arrived — release everyone and reset.
b.count = 0
b.cond.Broadcast()
return
}
// Wait until the barrier is released.
b.cond.Wait()
}
Usage
const workers = 5
b := barrier.New(workers)
var wg sync.WaitGroup
for i := 0; i < workers; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
fmt.Printf("worker %d: phase 1 done\n", id)
b.Wait() // all workers sync here
fmt.Printf("worker %d: phase 2 done\n", id)
b.Wait() // sync again before phase 3
fmt.Printf("worker %d: phase 3 done\n", id)
}(i)
}
wg.Wait()
Rules of Thumb
- The barrier is reusable — after all goroutines pass through, it resets automatically.
- The number of goroutines calling Wait must exactly match N; otherwise the barrier will deadlock.
- Go’s sync.WaitGroup covers the simpler case of waiting for goroutines to finish. Use a barrier when goroutines need to synchronize at intermediate points, not just at completion.
Bounded Parallelism Pattern Medium
Bounded parallelism runs tasks concurrently like the parallelism pattern, but places an upper bound on how many run at once, limiting resource usage (goroutines, memory, open connections).
Implementation and Example
An example showing implementation and usage can be found in bounded_parallelism.go.
Broadcast Pattern Medium
The broadcast pattern transfers a message to all recipients simultaneously. A single producer sends a value, and every registered consumer receives a copy. Unlike fan-out (which distributes different items to workers), broadcast delivers the same item to all subscribers.
Implementation
package broadcast
import "sync"
// Broadcaster sends messages to all registered listeners.
type Broadcaster[T any] struct {
mu sync.RWMutex
listeners []chan T
}
func New[T any]() *Broadcaster[T] {
return &Broadcaster[T]{}
}
// Register adds a new listener and returns the channel it will receive on.
func (b *Broadcaster[T]) Register() <-chan T {
b.mu.Lock()
defer b.mu.Unlock()
ch := make(chan T, 1)
b.listeners = append(b.listeners, ch)
return ch
}
// Send delivers the message to every registered listener.
func (b *Broadcaster[T]) Send(msg T) {
b.mu.RLock()
defer b.mu.RUnlock()
for _, ch := range b.listeners {
ch <- msg
}
}
// Close shuts down all listener channels.
func (b *Broadcaster[T]) Close() {
b.mu.Lock()
defer b.mu.Unlock()
for _, ch := range b.listeners {
close(ch)
}
b.listeners = nil
}
Usage
bc := broadcast.New[string]()
ch1 := bc.Register()
ch2 := bc.Register()
ch3 := bc.Register()
bc.Send("hello everyone")
fmt.Println(<-ch1) // hello everyone
fmt.Println(<-ch2) // hello everyone
fmt.Println(<-ch3) // hello everyone
bc.Close()
Rules of Thumb
- Use buffered channels for listeners to prevent a slow consumer from blocking the broadcaster.
- If consumers have different speeds, consider adding a timeout or dropping messages for slow consumers.
- For a more robust implementation, combine with context cancellation so listeners can unsubscribe.
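The drop-on-slow-consumer variant mentioned above can be sketched by replacing the blocking send with a select/default. This is a standalone sketch; the TrySend name and the delivered counter are illustrative additions, not part of the broadcast package above.

```go
package main

import (
	"fmt"
	"sync"
)

type Broadcaster[T any] struct {
	mu        sync.RWMutex
	listeners []chan T
}

func (b *Broadcaster[T]) Register() <-chan T {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan T, 1)
	b.listeners = append(b.listeners, ch)
	return ch
}

// TrySend is the drop-on-full variant: a listener whose buffer is full
// misses the message instead of stalling the broadcaster.
func (b *Broadcaster[T]) TrySend(msg T) (delivered int) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, ch := range b.listeners {
		select {
		case ch <- msg:
			delivered++
		default: // buffer full: drop for this listener
		}
	}
	return delivered
}

func main() {
	bc := &Broadcaster[string]{}
	ch := bc.Register()
	fmt.Println(bc.TrySend("first"))  // 1: fits in the listener's buffer
	fmt.Println(bc.TrySend("second")) // 0: buffer full, message dropped
	fmt.Println(<-ch)                 // first
}
```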
Coroutines Pattern Hard
Coroutines are subroutines that allow suspending and resuming execution at certain locations. Unlike regular functions that run to completion, coroutines can yield intermediate values and be resumed later. In Go, goroutines combined with channels naturally model coroutine-style cooperative multitasking.
Implementation
package coroutine
// Coroutine represents a resumable computation that yields values of type T.
type Coroutine[T any] struct {
ch chan T
}
// New creates a coroutine from a function. The function receives a yield
// callback to suspend execution and emit a value.
func New[T any](fn func(yield func(T))) *Coroutine[T] {
c := &Coroutine[T]{
ch: make(chan T),
}
go func() {
defer close(c.ch)
fn(func(val T) {
c.ch <- val
})
}()
return c
}
// Next returns the next yielded value and a boolean indicating if the
// coroutine is still active.
func (c *Coroutine[T]) Next() (T, bool) {
val, ok := <-c.ch
return val, ok
}
// All returns a channel for range-based iteration over yielded values.
func (c *Coroutine[T]) All() <-chan T {
return c.ch
}
Usage
// A coroutine that yields Fibonacci numbers.
fib := coroutine.New(func(yield func(int)) {
a, b := 0, 1
for i := 0; i < 10; i++ {
yield(a)
a, b = b, a+b
}
})
for val := range fib.All() {
fmt.Println(val)
}
// 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
// Or consume one at a time:
counter := coroutine.New(func(yield func(string)) {
yield("first")
yield("second")
yield("third")
})
val, ok := counter.Next()
fmt.Println(val, ok) // first true
val, ok = counter.Next()
fmt.Println(val, ok) // second true
Rules of Thumb
- Go’s goroutines are not true coroutines (they are preemptively scheduled), but goroutine + channel pairs can model cooperative coroutine semantics.
- The channel-based approach provides natural backpressure: the producer blocks on yield until the consumer reads.
- For simple sequences, a generator function returning <-chan T is often sufficient (see Generators).
Generator Pattern Medium
A generator yields a sequence of values one at a time.
Implementation
func Count(start, end int) <-chan int {
ch := make(chan int)
go func() {
defer close(ch)
for i := start; i <= end; i++ {
// Each send blocks until the consumer receives.
ch <- i
}
}()
return ch
}
Usage
fmt.Println("No bottles of beer on the wall")
for i := range Count(1, 99) {
fmt.Println("Pass it around, put one up,", i, "bottles of beer on the wall")
// Pass it around, put one up, 1 bottles of beer on the wall
// Pass it around, put one up, 2 bottles of beer on the wall
// ...
// Pass it around, put one up, 99 bottles of beer on the wall
}
fmt.Println(100, "bottles of beer on the wall")
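One caveat with the generator above: if the consumer stops ranging early, the producing goroutine blocks forever on its send and leaks. A done channel avoids this; the sketch below is a standalone variant of Count with an extra parameter.

```go
package main

import "fmt"

// Count yields start..end but abandons production when done is closed,
// so an early-exiting consumer does not leak the goroutine.
func Count(done <-chan struct{}, start, end int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		for i := start; i <= end; i++ {
			select {
			case ch <- i:
			case <-done:
				return
			}
		}
	}()
	return ch
}

func main() {
	done := make(chan struct{})
	defer close(done) // unblocks the generator if we stop early
	for i := range Count(done, 1, 99) {
		if i > 3 {
			break // safe: close(done) lets the goroutine exit
		}
		fmt.Println(i)
	}
}
```

The same role is played by context.Context in larger programs.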
Reactor Pattern Hard
The reactor pattern demultiplexes service requests delivered concurrently to a service handler and dispatches them synchronously to the associated request handlers. It uses a single-threaded event loop to monitor multiple event sources and dispatches events to registered handlers as they arrive.
Implementation
package reactor
import "fmt"
// EventType identifies the kind of event.
type EventType string
// Handler processes a specific event type.
type Handler func(data interface{})
// Reactor is the event dispatcher. It registers handlers and dispatches
// incoming events to the appropriate handler synchronously.
type Reactor struct {
handlers map[EventType]Handler
events chan Event
quit chan struct{}
}
// Event represents something that happened.
type Event struct {
Type EventType
Data interface{}
}
func New(bufferSize int) *Reactor {
return &Reactor{
handlers: make(map[EventType]Handler),
events: make(chan Event, bufferSize),
quit: make(chan struct{}),
}
}
// Register associates a handler with an event type.
func (r *Reactor) Register(eventType EventType, handler Handler) {
r.handlers[eventType] = handler
}
// Dispatch submits an event to the reactor's event queue.
func (r *Reactor) Dispatch(e Event) {
r.events <- e
}
// Run starts the single-threaded event loop. It processes events sequentially
// until Stop is called.
func (r *Reactor) Run() {
for {
select {
case e := <-r.events:
if handler, ok := r.handlers[e.Type]; ok {
handler(e.Data)
} else {
fmt.Printf("no handler for event type: %s\n", e.Type)
}
case <-r.quit:
return
}
}
}
// Stop terminates the event loop.
func (r *Reactor) Stop() {
close(r.quit)
}
Usage
r := reactor.New(100)
r.Register("connect", func(data interface{}) {
fmt.Printf("client connected: %v\n", data)
})
r.Register("message", func(data interface{}) {
fmt.Printf("received message: %v\n", data)
})
r.Register("disconnect", func(data interface{}) {
fmt.Printf("client disconnected: %v\n", data)
})
go r.Run()
r.Dispatch(reactor.Event{Type: "connect", Data: "client-1"})
r.Dispatch(reactor.Event{Type: "message", Data: "hello"})
r.Dispatch(reactor.Event{Type: "disconnect", Data: "client-1"})
time.Sleep(100 * time.Millisecond)
r.Stop()
// client connected: client-1
// received message: hello
// client disconnected: client-1
Rules of Thumb
- The reactor processes events synchronously in a single goroutine. If a handler blocks, it delays all subsequent events. Keep handlers fast.
- For CPU-intensive handlers, offload work to a goroutine pool and return immediately.
- Go’s net/http server uses a variation of this pattern internally: it accepts connections on a single listener and dispatches each to a handler goroutine.
Parallelism Pattern Easy
Parallelism allows multiple “jobs” or tasks to be run concurrently and asynchronously.
Implementation and Example
An example showing implementation and usage can be found in parallelism.go.
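Since the full example lives out-of-line, here is a minimal sketch: launch one goroutine per job and wait for all of them with a sync.WaitGroup. The runAll helper and job bodies are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// runAll executes every job concurrently and returns when all finish.
func runAll(jobs []func() string) []string {
	results := make([]string, len(jobs))
	var wg sync.WaitGroup
	for i, job := range jobs {
		wg.Add(1)
		go func(i int, job func() string) {
			defer wg.Done()
			results[i] = job() // each goroutine writes its own slot: no lock needed
		}(i, job)
	}
	wg.Wait()
	return results
}

func main() {
	jobs := []func() string{
		func() string { return "resized image" },
		func() string { return "fetched feed" },
		func() string { return "rendered report" },
	}
	for _, r := range runAll(jobs) {
		fmt.Println(r)
	}
}
```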
Producer Consumer Pattern Medium
The producer-consumer pattern separates task generation from task execution. Producers push work items into a shared buffer, and consumers pull items from the buffer to process them. This decouples the rate of production from the rate of consumption and allows both sides to operate independently.
In Go, a buffered channel is the natural shared buffer.
Implementation
package prodcon
import (
"fmt"
"sync"
)
// Item represents a unit of work.
type Item struct {
ID int
Data string
}
// Producer generates items and sends them to the work channel.
func Producer(id int, count int, ch chan<- Item) {
for i := 0; i < count; i++ {
item := Item{
ID: id*1000 + i,
Data: fmt.Sprintf("item-%d from producer-%d", i, id),
}
ch <- item
}
}
// Consumer reads items from the work channel and processes them.
func Consumer(id int, ch <-chan Item, fn func(Item), wg *sync.WaitGroup) {
defer wg.Done()
for item := range ch {
fn(item)
}
}
Usage
ch := make(chan prodcon.Item, 10)
// Start 3 producers
var producerWg sync.WaitGroup
for i := 0; i < 3; i++ {
producerWg.Add(1)
go func(id int) {
defer producerWg.Done()
prodcon.Producer(id, 5, ch)
}(i)
}
// Start 2 consumers
var consumerWg sync.WaitGroup
for i := 0; i < 2; i++ {
consumerWg.Add(1)
id := i // capture the loop variable for the closure (needed before Go 1.22)
go prodcon.Consumer(id, ch, func(item prodcon.Item) {
fmt.Printf("consumer-%d processed: %s\n", id, item.Data)
}, &consumerWg)
}
// Wait for all producers to finish, then close the channel.
producerWg.Wait()
close(ch)
// Wait for all consumers to drain the remaining items.
consumerWg.Wait()
Rules of Thumb
- The buffer size controls backpressure: a full buffer blocks producers, giving consumers time to catch up.
- Always close the channel from the producer side (or a coordinator) — never from the consumer.
- Tune the number of producers, consumers, and buffer size based on observed throughput and latency.
- For graceful shutdown, use context.Context to signal cancellation to both producers and consumers.
Errgroup (Structured Concurrency) Medium
The errgroup pattern provides structured concurrency by running a group of goroutines and waiting for all of them to complete. If any goroutine returns an error, the group’s context is cancelled and the first error is returned. This prevents fire-and-forget goroutine leaks.
Go provides golang.org/x/sync/errgroup, but the core idea is simple enough
to implement with standard library primitives.
Implementation
package errgroup
import (
"context"
"sync"
)
// Group manages a set of goroutines that share a cancellable context.
type Group struct {
ctx context.Context
cancel context.CancelFunc
wg sync.WaitGroup
once sync.Once
err error
}
func WithContext(ctx context.Context) (*Group, context.Context) {
ctx, cancel := context.WithCancel(ctx)
return &Group{ctx: ctx, cancel: cancel}, ctx
}
// Go launches fn in a new goroutine. The first non-nil error cancels the group.
func (g *Group) Go(fn func(ctx context.Context) error) {
g.wg.Add(1)
go func() {
defer g.wg.Done()
if err := fn(g.ctx); err != nil {
g.once.Do(func() {
g.err = err
g.cancel()
})
}
}()
}
// Wait blocks until all goroutines finish and returns the first error.
func (g *Group) Wait() error {
g.wg.Wait()
g.cancel()
return g.err
}
Usage
g, _ := errgroup.WithContext(context.Background())
g.Go(func(ctx context.Context) error {
return fetchUserProfile(ctx, userID)
})
g.Go(func(ctx context.Context) error {
return fetchUserOrders(ctx, userID)
})
g.Go(func(ctx context.Context) error {
return fetchUserPreferences(ctx, userID)
})
// If any fetch fails, the others are cancelled via ctx.
if err := g.Wait(); err != nil {
log.Fatal(err)
}
Rules of Thumb
- Always check ctx.Done() inside goroutines so they actually respond to cancellation.
- Errgroup replaces the common WaitGroup + error channel + sync.Once boilerplate.
- For production use, prefer golang.org/x/sync/errgroup which also supports concurrency limits via SetLimit.
Worker Pool Medium
A worker pool maintains a fixed number of goroutines that process tasks from a shared channel. This bounds resource usage while maximizing throughput — new tasks queue up instead of spawning unbounded goroutines.
Implementation
package pool
import "sync"
// Task represents a unit of work.
type Task[T any, R any] struct {
Input T
Result R
Err error
}
// Pool runs a fixed number of workers that process tasks from an input channel.
type Pool[T any, R any] struct {
workers int
fn func(T) (R, error)
}
func New[T any, R any](workers int, fn func(T) (R, error)) *Pool[T, R] {
return &Pool[T, R]{workers: workers, fn: fn}
}
// Run processes all inputs and returns results in completion order.
func (p *Pool[T, R]) Run(inputs []T) []Task[T, R] {
in := make(chan T, len(inputs))
out := make(chan Task[T, R], len(inputs))
// Start workers
var wg sync.WaitGroup
for i := 0; i < p.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for input := range in {
result, err := p.fn(input)
out <- Task[T, R]{Input: input, Result: result, Err: err}
}
}()
}
// Feed inputs
for _, input := range inputs {
in <- input
}
close(in)
// Wait and close output
go func() {
wg.Wait()
close(out)
}()
// Collect results
var results []Task[T, R]
for task := range out {
results = append(results, task)
}
return results
}
Usage
pool := pool.New(4, func(url string) (int, error) {
resp, err := http.Get(url)
if err != nil {
return 0, err
}
defer resp.Body.Close()
return resp.StatusCode, nil
})
urls := []string{
"https://golang.org",
"https://pkg.go.dev",
"https://go.dev",
}
for _, task := range pool.Run(urls) {
if task.Err != nil {
fmt.Printf("%s -> error: %v\n", task.Input, task.Err)
} else {
fmt.Printf("%s -> %d\n", task.Input, task.Result)
}
}
Rules of Thumb
- Size the pool based on the bottleneck: CPU-bound work → runtime.NumCPU(), I/O-bound → higher.
- The input channel’s buffer controls backpressure. A full buffer blocks the producer.
- For cancellation support, pass a context.Context through the task and check it in the worker function.
Pipeline Medium
A pipeline is a series of stages connected by channels, where each stage is a group of goroutines that receives values from upstream, performs a function on that data, and sends the results downstream. Pipelines allow you to compose complex data processing from simple, reusable stages.
Implementation
package pipeline
// Stage transforms input values into output values.
type Stage[In any, Out any] func(in <-chan In) <-chan Out
// Generate creates the initial stage by feeding a slice into a channel.
func Generate[T any](items ...T) <-chan T {
out := make(chan T)
go func() {
defer close(out)
for _, item := range items {
out <- item
}
}()
return out
}
// Map applies a function to each value in the input channel.
func Map[In any, Out any](in <-chan In, fn func(In) Out) <-chan Out {
out := make(chan Out)
go func() {
defer close(out)
for v := range in {
out <- fn(v)
}
}()
return out
}
// Filter passes through only values that satisfy the predicate.
func Filter[T any](in <-chan T, pred func(T) bool) <-chan T {
out := make(chan T)
go func() {
defer close(out)
for v := range in {
if pred(v) {
out <- v
}
}
}()
return out
}
// Collect drains a channel into a slice.
func Collect[T any](in <-chan T) []T {
var result []T
for v := range in {
result = append(result, v)
}
return result
}
Usage
// Pipeline: generate numbers → square them → keep only even results
numbers := pipeline.Generate(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
squared := pipeline.Map(numbers, func(n int) int { return n * n })
even := pipeline.Filter(squared, func(n int) bool { return n%2 == 0 })
results := pipeline.Collect(even)
fmt.Println(results) // [4 16 36 64 100]
Rules of Thumb
- Each stage owns its output channel — the stage that creates it is responsible for closing it.
- Pipelines provide natural backpressure: a slow stage causes upstream stages to block.
- For cancellation, pass a done channel or context.Context and select on it alongside channel operations.
- Fan out CPU-bound stages across multiple goroutines; merge with fan-in at the end.
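The cancellation bullet above can be illustrated with a done-channel variant of the Map stage. MapDone is a hypothetical name; the essential point is selecting on done alongside every send so that closing done unblocks the goroutine:

```go
package main

import "fmt"

// MapDone is a cancellable Map stage: it selects on a done channel
// alongside every send, so closing done never leaks the goroutine.
func MapDone(done <-chan struct{}, in <-chan int, fn func(int) int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			select {
			case out <- fn(v):
			case <-done:
				return
			}
		}
	}()
	return out
}

func main() {
	done := make(chan struct{})
	defer close(done)

	in := make(chan int)
	go func() {
		defer close(in)
		for i := 1; i <= 3; i++ {
			in <- i
		}
	}()

	for v := range MapDone(done, in, func(n int) int { return n * 10 }) {
		fmt.Println(v)
	}
}
```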
Rate Limiter Medium
A rate limiter controls how frequently an operation can be performed. It
prevents overloading a service, API, or resource by throttling requests to a
maximum rate. Go’s time.Ticker provides a simple token-bucket style
implementation.
Implementation
package ratelimit
import "time"
// Limiter allows at most `rate` operations per second.
type Limiter struct {
tokens chan struct{}
stop chan struct{}
}
func New(rate int) *Limiter {
l := &Limiter{
tokens: make(chan struct{}, rate),
stop: make(chan struct{}),
}
// Pre-fill the bucket.
for i := 0; i < rate; i++ {
l.tokens <- struct{}{}
}
// Refill tokens at the specified rate.
go func() {
interval := time.Second / time.Duration(rate)
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
select {
case l.tokens <- struct{}{}:
default: // bucket is full
}
case <-l.stop:
return
}
}
}()
return l
}
// Wait blocks until a token is available.
func (l *Limiter) Wait() {
<-l.tokens
}
// Stop shuts down the refill goroutine.
func (l *Limiter) Stop() {
close(l.stop)
}
Usage
limiter := ratelimit.New(5) // 5 requests per second
defer limiter.Stop()
for i := 0; i < 10; i++ {
limiter.Wait()
fmt.Printf("Request %d at %s\n", i, time.Now().Format("15:04:05.000"))
}
// The first 5 requests fire immediately (the bucket is pre-filled);
// subsequent requests are spaced ~200ms apart (5 per second).
Rules of Thumb
- Use a token bucket for bursty traffic (allows short bursts up to bucket size) or a fixed ticker for smooth rate limiting.
- For production use,
golang.org/x/time/rateprovides a more robust limiter with reservation and cancellation support. - Rate limiting should happen at system boundaries — close to where requests enter your service.
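As a sketch of a non-blocking alternative to Wait, here is the same token-bucket limiter extended with an Allow method (a common addition, not part of the implementation above), which sheds load instead of queueing it:

```go
package main

import (
	"fmt"
	"time"
)

// Limiter is the token-bucket sketch from above, repeated here so the
// example compiles standalone.
type Limiter struct {
	tokens chan struct{}
	stop   chan struct{}
}

func New(rate int) *Limiter {
	l := &Limiter{
		tokens: make(chan struct{}, rate),
		stop:   make(chan struct{}),
	}
	// Pre-fill the bucket.
	for i := 0; i < rate; i++ {
		l.tokens <- struct{}{}
	}
	go func() {
		ticker := time.NewTicker(time.Second / time.Duration(rate))
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				select {
				case l.tokens <- struct{}{}:
				default: // bucket is full
				}
			case <-l.stop:
				return
			}
		}
	}()
	return l
}

// Allow reports whether a token is available without blocking,
// useful for rejecting excess requests instead of queueing them.
func (l *Limiter) Allow() bool {
	select {
	case <-l.tokens:
		return true
	default:
		return false
	}
}

func (l *Limiter) Stop() { close(l.stop) }

func main() {
	l := New(2)
	defer l.Stop()
	fmt.Println(l.Allow()) // true  (bucket pre-filled)
	fmt.Println(l.Allow()) // true
	fmt.Println(l.Allow()) // false (bucket drained; refill takes ~500ms)
}
```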
Fan-In Messaging Patterns Easy
Fan-In is a messaging pattern used to create a funnel for work amongst workers (clients: source, server: destination).
We can model fan-in using Go channels.
// Merge different channels in one channel
func Merge(cs ...<-chan int) <-chan int {
var wg sync.WaitGroup
out := make(chan int)
// Start a send goroutine for each input channel in cs. send
// copies values from c to out until c is closed, then calls wg.Done.
send := func(c <-chan int) {
for n := range c {
out <- n
}
wg.Done()
}
wg.Add(len(cs))
for _, c := range cs {
go send(c)
}
// Start a goroutine to close out once all the send goroutines are
// done. This must start after the wg.Add call.
go func() {
wg.Wait()
close(out)
}()
return out
}
The Merge function converts a list of channels to a single channel by starting a goroutine for each inbound channel that copies the values to the sole outbound channel.
Once all the send goroutines have been started, Merge starts one more goroutine that waits for them to finish and then closes the outbound channel.
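A short, runnable usage sketch of Merge (the gen helper is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// Merge is the fan-in function from above, reproduced so the example
// runs standalone.
func Merge(cs ...<-chan int) <-chan int {
	var wg sync.WaitGroup
	out := make(chan int)
	send := func(c <-chan int) {
		defer wg.Done()
		for n := range c {
			out <- n
		}
	}
	wg.Add(len(cs))
	for _, c := range cs {
		go send(c)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

// gen emits the given numbers on a fresh channel and closes it.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

func main() {
	// Two producers funnel into one consumer. The interleaving across
	// channels is nondeterministic, so we just sum the values.
	sum := 0
	for v := range Merge(gen(1, 2), gen(3, 4)) {
		sum += v
	}
	fmt.Println(sum) // 10
}
```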
Fan-Out Messaging Pattern Easy
Fan-Out is a messaging pattern used for distributing work amongst workers (producer: source, consumers: destination).
We can model fan-out using Go channels.
// Split a channel into n channels that receive messages in a round-robin fashion.
func Split(ch <-chan int, n int) []<-chan int {
	cs := make([]chan int, n)
	for i := range cs {
		cs[i] = make(chan int)
	}
	// Distributes the work in a round-robin fashion among the stated number
	// of channels until the main channel has been closed. In that case, close
	// all channels and return.
	distributeToChannels := func(ch <-chan int, cs []chan int) {
		// Close every channel when the execution ends.
		defer func() {
			for _, c := range cs {
				close(c)
			}
		}()
		for {
			for _, c := range cs {
				val, ok := <-ch
				if !ok {
					return
				}
				c <- val
			}
		}
	}
	go distributeToChannels(ch, cs)
	// Convert to receive-only channels for the callers.
	out := make([]<-chan int, n)
	for i, c := range cs {
		out[i] = c
	}
	return out
}
The Split function converts a single channel into a list of channels by using
a goroutine to copy received values to channels in the list in a round-robin fashion.
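A runnable usage sketch of Split, reproduced here in a form that compiles standalone (the final conversion loop is one way to satisfy the []<-chan int return type):

```go
package main

import "fmt"

// Split fans a channel out into n channels in a round-robin fashion.
func Split(ch <-chan int, n int) []<-chan int {
	cs := make([]chan int, n)
	for i := range cs {
		cs[i] = make(chan int)
	}
	go func() {
		defer func() {
			for _, c := range cs {
				close(c)
			}
		}()
		for {
			for _, c := range cs {
				val, ok := <-ch
				if !ok {
					return
				}
				c <- val
			}
		}
	}()
	out := make([]<-chan int, n)
	for i, c := range cs {
		out[i] = c
	}
	return out
}

func main() {
	in := make(chan int)
	go func() {
		defer close(in)
		for i := 1; i <= 4; i++ {
			in <- i
		}
	}()

	outs := Split(in, 2)
	done := make(chan struct{})
	// Each consumer drains its own channel concurrently.
	go func() {
		for v := range outs[1] {
			fmt.Println("worker 2:", v)
		}
		close(done)
	}()
	for v := range outs[0] {
		fmt.Println("worker 1:", v)
	}
	<-done
}
```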
Futures & Promises Pattern Medium
A future acts as a placeholder for a result that is initially unknown because the computation has not yet completed. It provides a way to access the result of an asynchronous operation synchronously when the value is needed.
In Go, a future is naturally modeled with a goroutine that computes the result and a channel (or struct) that delivers it.
Implementation
package future
// Future represents an asynchronous computation that will produce a value.
type Future[T any] struct {
ch chan result[T]
}
type result[T any] struct {
value T
err error
}
// New starts an asynchronous computation and returns a Future.
func New[T any](fn func() (T, error)) *Future[T] {
f := &Future[T]{
ch: make(chan result[T], 1),
}
go func() {
val, err := fn()
f.ch <- result[T]{value: val, err: err}
}()
return f
}
// Get blocks until the result is available and returns it.
func (f *Future[T]) Get() (T, error) {
r := <-f.ch
// Put it back so subsequent calls to Get return the same result.
f.ch <- r
return r.value, r.err
}
Usage
// Start two expensive operations concurrently.
priceFuture := future.New(func() (float64, error) {
// simulate API call
time.Sleep(2 * time.Second)
return 99.95, nil
})
stockFuture := future.New(func() (int, error) {
// simulate DB query
time.Sleep(1 * time.Second)
return 42, nil
})
// Both are running in parallel. Block only when we need the values.
price, err := priceFuture.Get()
if err != nil {
log.Fatal(err)
}
stock, err := stockFuture.Get()
if err != nil {
log.Fatal(err)
}
fmt.Printf("Price: $%.2f, Stock: %d\n", price, stock)
// Price: $99.95, Stock: 42
// Total wall-clock time: ~2s (not 3s), since both ran concurrently.
Rules of Thumb
- Futures are ideal when you need to kick off multiple independent operations and collect results later.
- The Get method is idempotent — calling it multiple times returns the same cached result.
- For timeout support, combine with context.WithTimeout or use a select with time.After on the channel.
- Go channels are already one-shot futures. For simple cases, a plain chan T is sufficient.
Publish & Subscribe Messaging Pattern Medium
Publish-Subscribe is a messaging pattern used to communicate messages between different components without these components knowing anything about each other’s identity.
It is similar to the Observer behavioral design pattern.
The fundamental design principle of both Observer and Publish-Subscribe is the decoupling of those interested in being informed about event messages from the informer (observers or publishers). This means you don’t have to program the messages to be sent directly to specific receivers.
To accomplish this, an intermediary, called a “message broker” or “event bus”, receives published messages, and then routes them on to subscribers.
There are three components: messages, topics, and users.
type Message struct {
// Contents
}
type Subscription struct {
	ch     chan<- Message
	Inbox  chan Message
	closed bool
}
func (s *Subscription) Publish(msg Message) error {
	if s.closed {
		return errors.New("topic has been closed")
	}
	s.ch <- msg
	return nil
}
type Topic struct {
Subscribers []Session
MessageHistory []Message
}
func (t *Topic) Subscribe(uid uint64) (Subscription, error) {
// Get session and create one if it's the first
// Add session to the Topic & MessageHistory
// Create a subscription
}
func (t *Topic) Unsubscribe(Subscription) error {
// Implementation
}
func (t *Topic) Delete() error {
// Implementation
}
type User struct {
ID uint64
Name string
}
type Session struct {
User User
Timestamp time.Time
}
Improvements
Events can be published concurrently by utilizing lightweight goroutines.
Performance can be improved by handling straggler subscribers with a buffered inbox: stop sending events to a subscriber once its inbox is full.
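The sketch above leaves the broker wiring abstract. Below is a minimal, self-contained event bus applying the stated improvements (buffered inboxes, messages dropped for full inboxes); the string topics and all names are illustrative simplifications, omitting history and sessions:

```go
package main

import (
	"fmt"
	"sync"
)

// Broker is a minimal publish-subscribe event bus: publishers and
// subscribers only know the broker, never each other.
type Broker struct {
	mu     sync.RWMutex
	topics map[string][]chan string
}

func NewBroker() *Broker {
	return &Broker{topics: make(map[string][]chan string)}
}

// Subscribe returns a buffered inbox for the given topic.
func (b *Broker) Subscribe(topic string) <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 16)
	b.topics[topic] = append(b.topics[topic], ch)
	return ch
}

// Publish delivers msg to every subscriber of topic. Full inboxes are
// skipped so one straggler cannot block the publisher.
func (b *Broker) Publish(topic, msg string) {
	b.mu.RLock()
	defer b.mu.RUnlock()
	for _, ch := range b.topics[topic] {
		select {
		case ch <- msg:
		default: // inbox full, drop for this subscriber
		}
	}
}

func main() {
	b := NewBroker()
	inbox := b.Subscribe("orders")
	b.Publish("orders", "order-42 created")
	fmt.Println(<-inbox) // order-42 created
}
```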
Push & Pull Pattern Medium
The push-pull pattern distributes messages to multiple workers arranged in a pipeline. A pusher sends work items downstream, workers (pullers) process them, and the results are collected by a sink. This creates a multi-stage pipeline where each stage can scale independently.
Implementation
package pushpull
import "sync"
// Pusher sends work items to a pool of pullers.
func Pusher(items []string) <-chan string {
out := make(chan string)
go func() {
defer close(out)
for _, item := range items {
out <- item
}
}()
return out
}
// Puller processes items from the input channel and sends results downstream.
func Puller(in <-chan string, process func(string) string) <-chan string {
out := make(chan string)
go func() {
defer close(out)
for item := range in {
out <- process(item)
}
}()
return out
}
// Sink collects results from multiple puller channels into a single channel.
func Sink(channels ...<-chan string) <-chan string {
out := make(chan string)
var wg sync.WaitGroup
for _, ch := range channels {
wg.Add(1)
go func(c <-chan string) {
defer wg.Done()
for val := range c {
out <- val
}
}(ch)
}
go func() {
wg.Wait()
close(out)
}()
return out
}
Usage
// Push work items
work := pushpull.Pusher([]string{"task-1", "task-2", "task-3", "task-4"})
// Create 2 parallel pullers that process items
process := func(s string) string {
time.Sleep(100 * time.Millisecond) // simulate work
return "done: " + s
}
puller1 := pushpull.Puller(work, process)
puller2 := pushpull.Puller(work, process)
// Collect all results into a single stream
for result := range pushpull.Sink(puller1, puller2) {
fmt.Println(result)
}
// done: task-1
// done: task-2
// done: task-3
// done: task-4
// (completion order may vary between runs)
Rules of Thumb
- Each stage of the pipeline communicates through channels, making it easy to add, remove, or scale stages independently.
- Push-pull naturally provides load balancing: faster workers pull more items from the shared channel.
- For ordered results, add sequence numbers to work items and sort at the sink.
- Combine with context.Context for graceful cancellation of the entire pipeline.
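The ordered-results bullet above can be sketched as follows. ProcessOrdered is a hypothetical helper that tags each item with a sequence number on the way in and sorts at the sink:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// Seq pairs a work item with its position so the sink can restore order.
type Seq struct {
	N   int
	Val string
}

// ProcessOrdered fans items out to `workers` parallel pullers and returns
// the results sorted back into input order via sequence numbers.
func ProcessOrdered(items []string, workers int, process func(string) string) []string {
	work := make(chan Seq)
	go func() {
		defer close(work)
		for i, s := range items {
			work <- Seq{N: i, Val: s}
		}
	}()

	out := make(chan Seq)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for item := range work {
				out <- Seq{N: item.N, Val: process(item.Val)}
			}
		}()
	}
	go func() { wg.Wait(); close(out) }()

	// Sink: collect everything, then sort by sequence number.
	var results []Seq
	for r := range out {
		results = append(results, r)
	}
	sort.Slice(results, func(i, j int) bool { return results[i].N < results[j].N })

	vals := make([]string, len(results))
	for i, r := range results {
		vals[i] = r.Val
	}
	return vals
}

func main() {
	done := ProcessOrdered([]string{"a", "b", "c", "d"}, 2, func(s string) string {
		return "done: " + s
	})
	fmt.Println(done) // [done: a done: b done: c done: d]
}
```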
Bulkhead Pattern Medium
The bulkhead pattern is inspired by the sectioned partitions (bulkheads) of a ship’s hull. If one section is breached, only that section floods — the rest of the ship stays afloat. In software, the pattern isolates elements of an application into pools so that if one fails, the others continue to function.
By partitioning resource access (e.g. connection pools, goroutine pools, or semaphores), the bulkhead pattern prevents a single failing component from consuming all resources and cascading into a system-wide outage.
Implementation
Below is a Bulkhead that limits concurrent access to a downstream service
using a buffered channel as a semaphore.
package bulkhead
import (
"errors"
"time"
)
var (
ErrBulkheadFull = errors.New("bulkhead capacity full")
)
// Bulkhead limits the number of concurrent calls to a function.
type Bulkhead struct {
sem chan struct{}
timeout time.Duration
}
// New creates a Bulkhead with the given maximum concurrent capacity and a
// timeout for acquiring a slot.
func New(capacity int, timeout time.Duration) *Bulkhead {
return &Bulkhead{
sem: make(chan struct{}, capacity),
timeout: timeout,
}
}
// Execute runs fn if a slot is available within the configured timeout.
// If the bulkhead is full it returns ErrBulkheadFull without executing fn.
func (b *Bulkhead) Execute(fn func() error) error {
select {
case b.sem <- struct{}{}:
defer func() { <-b.sem }()
return fn()
case <-time.After(b.timeout):
return ErrBulkheadFull
}
}
Usage
orderBulkhead := bulkhead.New(10, 1*time.Second)
paymentBulkhead := bulkhead.New(5, 1*time.Second)
// The order service is isolated from the payment service.
// If payments exhaust their 5 slots, orders can still proceed
// with their independent pool of 10.
err := orderBulkhead.Execute(func() error {
return orderService.Place(order)
})
err = paymentBulkhead.Execute(func() error {
return paymentService.Charge(order)
})
if errors.Is(err, bulkhead.ErrBulkheadFull) {
log.Println("service is at capacity, try again later")
}
Rules of Thumb
- Size each bulkhead based on the downstream service’s capacity and expected latency.
- Combine with the circuit breaker pattern: a bulkhead limits concurrency while a circuit breaker stops calls to an already-failing service.
- Monitor bulkhead rejection rates — a consistently full bulkhead indicates the pool is undersized or the downstream is too slow.
Circuit Breaker Pattern Medium
Just as an electrical fuse prevents a fire when a circuit connected to the grid draws more power than its wires can safely carry, the circuit breaker design pattern is a fail-fast mechanism that shuts down the circuit (in software, a request/response relationship or a service) to prevent bigger failures.
Note: The words “circuit” and “service” are used synonymously throughout this document.
Implementation
Below is the implementation of a very simple circuit breaker to illustrate the purpose of the circuit breaker design pattern.
Operation Counter
circuit.Counter is a simple counter that records success and failure states of
a circuit along with a timestamp and calculates the consecutive number of
failures.
package circuit
import (
"time"
)
type State int
const (
UnknownState State = iota
FailureState
SuccessState
)
type Counter interface {
Count(State)
ConsecutiveFailures() uint32
LastActivity() time.Time
Reset()
}
Circuit Breaker
Circuit is wrapped using the circuit.Breaker closure that keeps an internal operation counter.
It returns a fast error if the circuit has failed consecutively more than the specified threshold.
After a while it retries the request and records it.
Note: The Context type is used here to carry deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes.
package circuit
import (
	"context"
	"errors"
	"time"
)
var ErrServiceUnavailable = errors.New("service unavailable")
type Circuit func(context.Context) error
func Breaker(c Circuit, failureThreshold uint32) Circuit {
	cnt := NewCounter()
	return func(ctx context.Context) error {
		if cnt.ConsecutiveFailures() >= failureThreshold {
			canRetry := func(cnt Counter) bool {
				backoffLevel := cnt.ConsecutiveFailures() - failureThreshold
				// Calculates when the circuit breaker should resume propagating
				// requests to the service
				shouldRetryAt := cnt.LastActivity().Add(time.Second * 2 << backoffLevel)
				return time.Now().After(shouldRetryAt)
			}
			if !canRetry(cnt) {
				// Fail fast instead of propagating requests to the circuit since
				// not enough time has passed since the last failure to retry
				return ErrServiceUnavailable
			}
		}
		// Unless the failure threshold is exceeded, the wrapped service mimics
		// the old behavior; the difference appears after consecutive failures
		if err := c(ctx); err != nil {
			cnt.Count(FailureState)
			return err
		}
		cnt.Count(SuccessState)
		return nil
	}
}
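The Counter interface is left abstract above, and NewCounter is referenced but never shown. A minimal in-memory implementation might look like this (the field names and mutex-based approach are assumptions):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type State int

const (
	UnknownState State = iota
	FailureState
	SuccessState
)

type Counter interface {
	Count(State)
	ConsecutiveFailures() uint32
	LastActivity() time.Time
	Reset()
}

// counter is one possible in-memory Counter implementation.
type counter struct {
	mu           sync.Mutex
	failures     uint32
	lastActivity time.Time
}

func NewCounter() Counter { return &counter{} }

// Count records an operation outcome; a success resets the failure streak.
func (c *counter) Count(s State) {
	c.mu.Lock()
	defer c.mu.Unlock()
	switch s {
	case SuccessState:
		c.failures = 0
	case FailureState:
		c.failures++
	}
	c.lastActivity = time.Now()
}

func (c *counter) ConsecutiveFailures() uint32 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.failures
}

func (c *counter) LastActivity() time.Time {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.lastActivity
}

func (c *counter) Reset() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.failures = 0
}

func main() {
	cnt := NewCounter()
	cnt.Count(FailureState)
	cnt.Count(FailureState)
	fmt.Println(cnt.ConsecutiveFailures()) // 2
	cnt.Count(SuccessState)
	fmt.Println(cnt.ConsecutiveFailures()) // 0
}
```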
Related Works
- sony/gobreaker is a well-tested and intuitive circuit breaker implementation for real-world use cases.
Deadline Pattern Easy
The deadline pattern allows a client to stop waiting for a response once a specified amount of time has passed, at which point the probability of a successful response becomes too low to be useful. This avoids tying up resources indefinitely on slow or unresponsive operations.
In Go, the context package provides first-class support for deadlines and
timeouts, making it the idiomatic way to implement this pattern.
Implementation
package deadline
import (
"context"
"time"
)
// Work represents a unit of work that respects context cancellation.
type Work func(ctx context.Context) error
// WithDeadline wraps a unit of work with a deadline. If the work does not
// complete before the deadline, the context is cancelled and an error is
// returned.
func WithDeadline(timeout time.Duration, work Work) Work {
return func(ctx context.Context) error {
ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
done := make(chan error, 1)
go func() {
done <- work(ctx)
}()
select {
case err := <-done:
return err
case <-ctx.Done():
return ctx.Err()
}
}
}
Usage
slowOperation := func(ctx context.Context) error {
select {
case <-time.After(5 * time.Second):
fmt.Println("operation completed")
return nil
case <-ctx.Done():
return ctx.Err()
}
}
// Wrap the slow operation with a 2-second deadline.
wrapped := deadline.WithDeadline(2*time.Second, slowOperation)
err := wrapped(context.Background())
if err != nil {
fmt.Println(err) // context deadline exceeded
}
Rules of Thumb
- Always propagate the context.Context into downstream calls so that cancellation reaches every layer.
- Choose deadline values based on observed latency percentiles (e.g. p99) rather than arbitrary round numbers.
- Prefer context.WithTimeout for relative durations and context.WithDeadline for absolute wall-clock deadlines.
Fail-Fast Pattern Easy
The fail-fast pattern checks the availability of required resources at the start of a request and fails immediately if the requirements are not satisfied. Rather than performing expensive work only to discover a missing dependency halfway through, the request is rejected early with a clear error.
This reduces wasted computation, frees resources faster, and gives callers immediate feedback so they can retry or fall back.
Implementation
package failfast
// Checker validates whether a required precondition is met.
type Checker func() error
// Handler represents the actual business logic to execute.
type Handler func() error
// FailFast verifies all preconditions before invoking the handler.
// If any check fails, it returns immediately without calling the handler.
func FailFast(handler Handler, checks ...Checker) error {
for _, check := range checks {
if err := check(); err != nil {
return err
}
}
return handler()
}
A concrete example — a request handler that validates a database connection and cache availability before processing:
package failfast
import (
	"database/sql"
	"errors"
)
var (
ErrDBUnavailable = errors.New("database is unavailable")
ErrCacheUnavailable = errors.New("cache is unavailable")
)
func CheckDB(db *sql.DB) Checker {
return func() error {
if err := db.Ping(); err != nil {
return ErrDBUnavailable
}
return nil
}
}
func CheckCache(cache CacheClient) Checker {
return func() error {
if !cache.IsAlive() {
return ErrCacheUnavailable
}
return nil
}
}
Usage
err := failfast.FailFast(
func() error {
// Expensive business logic that requires both DB and cache.
return processOrder(order)
},
failfast.CheckDB(db),
failfast.CheckCache(cache),
)
if err != nil {
log.Printf("request rejected early: %v", err)
}
Rules of Thumb
- Only check preconditions that are cheap to verify (pings, health flags). The goal is to fail fast, not to add latency.
- Combine with the circuit breaker pattern: a circuit breaker remembers previous failures, while fail-fast verifies current readiness.
- Return specific error types so callers can distinguish between “not ready” and “processing failed.”
Handshaking Pattern Medium
The handshaking pattern allows a component to ask another component whether it can accept more load before sending actual work. If the target component signals that it is at capacity, the request is declined without even attempting the operation. This protects both the caller and the callee from being overwhelmed.
Unlike the circuit breaker, which reacts to failures after they happen, handshaking is a proactive, cooperative mechanism — the service itself advertises its readiness.
Implementation
package handshaking
import "errors"
var (
ErrServiceAtCapacity = errors.New("service is at capacity")
)
// Service represents a downstream component that supports health negotiation.
type Service interface {
// IsReady reports whether the service can accept new work.
IsReady() bool
// Do performs the actual work.
Do(req Request) (Response, error)
}
// Request and Response are domain-specific types.
type Request struct {
Payload interface{}
}
type Response struct {
Result interface{}
}
// Call performs a handshake with the target service before sending the request.
// If the service is not ready, it returns ErrServiceAtCapacity immediately.
func Call(svc Service, req Request) (Response, error) {
if !svc.IsReady() {
return Response{}, ErrServiceAtCapacity
}
return svc.Do(req)
}
A concrete service implementation using active connection tracking:
package handshaking
import "sync/atomic"
type TrackedService struct {
active int64
capacity int64
handler func(Request) (Response, error)
}
func NewTrackedService(capacity int64, handler func(Request) (Response, error)) *TrackedService {
return &TrackedService{
capacity: capacity,
handler: handler,
}
}
func (s *TrackedService) IsReady() bool {
return atomic.LoadInt64(&s.active) < s.capacity
}
func (s *TrackedService) Do(req Request) (Response, error) {
atomic.AddInt64(&s.active, 1)
defer atomic.AddInt64(&s.active, -1)
return s.handler(req)
}
Usage
svc := handshaking.NewTrackedService(100, func(req handshaking.Request) (handshaking.Response, error) {
result, err := processWork(req.Payload)
return handshaking.Response{Result: result}, err
})
resp, err := handshaking.Call(svc, handshaking.Request{Payload: data})
if errors.Is(err, handshaking.ErrServiceAtCapacity) {
log.Println("service busy, back off and retry later")
}
Rules of Thumb
- The IsReady check must be cheap — it should read a counter or flag, not run diagnostics.
- Handshaking works best for in-process or sidecar communication. For remote services, consider a health-check endpoint that returns HTTP 503 when at capacity.
- Combine with retry and backoff logic on the caller side to handle transient capacity limits gracefully.
Steady-State Pattern Easy
The steady-state pattern states that for every service that accumulates a resource, some other mechanism must recycle that resource. Without active cleanup, unbounded growth of logs, caches, temporary files, or connections will eventually exhaust the system and cause failures.
The goal is to keep the system in a stable, predictable operating range without human intervention.
Implementation
Below is a generic Purger that periodically cleans up accumulated resources to
maintain a steady state.
package steadystate
import (
"log"
"time"
)
// Resource represents an accumulating resource that can report its size and
// be purged.
type Resource interface {
// Size returns the current amount of accumulated resources.
Size() int64
// Purge removes resources that are older than the given threshold.
Purge(olderThan time.Duration) (purged int64, err error)
}
// Purger periodically checks a resource and purges entries that exceed the
// maximum age, keeping the system in a steady state.
type Purger struct {
resource Resource
maxAge time.Duration
interval time.Duration
stop chan struct{}
}
// NewPurger creates a purger that checks the resource at the given interval
// and removes entries older than maxAge.
func NewPurger(r Resource, maxAge, interval time.Duration) *Purger {
return &Purger{
resource: r,
maxAge: maxAge,
interval: interval,
stop: make(chan struct{}),
}
}
// Start begins the periodic purge loop in a background goroutine.
func (p *Purger) Start() {
ticker := time.NewTicker(p.interval)
go func() {
for {
select {
case <-ticker.C:
before := p.resource.Size()
purged, err := p.resource.Purge(p.maxAge)
if err != nil {
log.Printf("purge error: %v", err)
continue
}
log.Printf("purged %d items (before: %d, after: %d)",
purged, before, before-purged)
case <-p.stop:
ticker.Stop()
return
}
}
}()
}
// Stop terminates the purge loop.
func (p *Purger) Stop() {
close(p.stop)
}
Usage
// LogDir implements the steadystate.Resource interface for a log directory.
type LogDir struct {
path string
}
func (d *LogDir) Size() int64 {
entries, _ := os.ReadDir(d.path)
return int64(len(entries))
}
func (d *LogDir) Purge(olderThan time.Duration) (int64, error) {
entries, err := os.ReadDir(d.path)
if err != nil {
return 0, err
}
var purged int64
cutoff := time.Now().Add(-olderThan)
for _, e := range entries {
info, err := e.Info()
if err != nil {
continue
}
if info.ModTime().Before(cutoff) {
os.Remove(filepath.Join(d.path, e.Name()))
purged++
}
}
return purged, nil
}
// Purge log files older than 7 days, checking every hour.
purger := steadystate.NewPurger(&LogDir{path: "/var/log/myapp"}, 7*24*time.Hour, 1*time.Hour)
purger.Start()
defer purger.Stop()
Rules of Thumb
- Every accumulating resource (logs, temp files, cache entries, sessions) must have a corresponding cleanup mechanism.
- Prefer time-based purging over size-based purging — it is simpler and more predictable.
- Monitor the resource size over time. If it trends upward despite purging, the purge interval or threshold needs adjustment.
- Run purgers as background goroutines with graceful shutdown support to avoid data loss.
Timing Functions Easy
When optimizing code, sometimes a quick and dirty time measurement is required as opposed to utilizing profiler tools/frameworks to validate assumptions.
Time measurements can be performed by utilizing the time package and defer statements.
Implementation
package profile
import (
	"log"
	"time"
)
func Duration(invocation time.Time, name string) {
elapsed := time.Since(invocation)
log.Printf("%s lasted %s", name, elapsed)
}
Usage
func BigIntFactorial(x big.Int) *big.Int {
	// Arguments to a defer statement are evaluated immediately and stored.
	// The deferred function receives the pre-evaluated values when it is invoked.
	defer profile.Duration(time.Now(), "BigIntFactorial")
	y := big.NewInt(1)
	for one := big.NewInt(1); x.Sign() > 0; x.Sub(&x, one) {
		y.Mul(y, &x)
	}
	return y
}
Functional Options Easy
Functional options are a method of implementing clean/eloquent APIs in Go. Options implemented as a function set the state of that option.
Implementation
Options
package file
import "os"
type Options struct {
UID int
GID int
Flags int
Contents string
Permissions os.FileMode
}
type Option func(*Options)
func UID(userID int) Option {
return func(args *Options) {
args.UID = userID
}
}
func GID(groupID int) Option {
return func(args *Options) {
args.GID = groupID
}
}
func Contents(c string) Option {
return func(args *Options) {
args.Contents = c
}
}
func Permissions(perms os.FileMode) Option {
return func(args *Options) {
args.Permissions = perms
}
}
Constructor
package file
import "os"
func New(filepath string, setters ...Option) error {
// Default Options
args := &Options{
UID: os.Getuid(),
GID: os.Getgid(),
Contents: "",
Permissions: 0666,
Flags: os.O_CREATE | os.O_EXCL | os.O_WRONLY,
}
for _, setter := range setters {
setter(args)
}
f, err := os.OpenFile(filepath, args.Flags, args.Permissions)
if err != nil {
	return err
}
defer f.Close()
if _, err := f.WriteString(args.Contents); err != nil {
return err
}
return f.Chown(args.UID, args.GID)
}
Usage
if err := file.New("/tmp/empty.txt"); err != nil {
panic(err)
}
if err := file.New("/tmp/file.txt", file.UID(1000), file.Contents("Lorem Ipsum Dolor Amet")); err != nil {
panic(err)
}
Context Propagation Easy
Context propagation passes request-scoped values, deadlines, and cancellation
signals through the call chain. In Go, context.Context is the standard
mechanism — it flows as the first parameter of every function in the chain,
ensuring the entire operation can be cancelled or timed out as a unit.
Implementation
package middleware
import (
"context"
"net/http"
)
type contextKey string
const RequestIDKey contextKey = "request_id"
// WithRequestID injects a request ID into the context.
func WithRequestID(ctx context.Context, id string) context.Context {
return context.WithValue(ctx, RequestIDKey, id)
}
// GetRequestID retrieves the request ID from the context.
func GetRequestID(ctx context.Context) string {
if id, ok := ctx.Value(RequestIDKey).(string); ok {
return id
}
return "unknown"
}
// RequestIDMiddleware extracts or generates a request ID and adds it to the context.
func RequestIDMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
id := r.Header.Get("X-Request-ID")
if id == "" {
id = generateID()
}
ctx := WithRequestID(r.Context(), id)
next.ServeHTTP(w, r.WithContext(ctx))
})
}
Usage
func handleOrder(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
reqID := middleware.GetRequestID(ctx)
// The context flows through every layer, carrying the request ID
// and honouring any deadline set by the client or middleware.
order, err := fetchOrder(ctx, orderID)
if err != nil {
log.Printf("[%s] fetchOrder failed: %v", reqID, err)
http.Error(w, "internal error", 500)
return
}
details, err := enrichOrder(ctx, order)
if err != nil {
log.Printf("[%s] enrichOrder failed: %v", reqID, err)
}
// ...
}
Rules of Thumb
- Pass context.Context as the first parameter of every function that does I/O or may block.
- Never store a context in a struct — pass it explicitly through function calls.
- Use typed keys (not bare strings) for context.WithValue to avoid collisions across packages.
- Keep context values limited to request-scoped data (request IDs, auth tokens). Do not use context as a general-purpose bag of state.
Error Wrapping & Sentinel Errors Easy
Go 1.13 introduced error wrapping with fmt.Errorf and %w, along with
errors.Is and errors.As for inspecting wrapped error chains. This replaces
ad-hoc string matching with structured, composable error handling.
Sentinel errors are package-level variables that represent specific failure conditions. Combined with wrapping, they let callers check what went wrong while preserving where it went wrong.
Implementation
package store
import (
"errors"
"fmt"
)
// Sentinel errors — callers check these with errors.Is.
var (
ErrNotFound = errors.New("not found")
ErrUnauthorized = errors.New("unauthorized")
ErrConflict = errors.New("conflict")
)
// Custom error with structured context.
type ValidationError struct {
Field string
Message string
}
func (e *ValidationError) Error() string {
return fmt.Sprintf("validation: %s — %s", e.Field, e.Message)
}
// GetUser wraps sentinel errors with context using %w.
func GetUser(id string) (*User, error) {
	var u User
	err := db.QueryRow("SELECT ...", id).Scan(&u.ID, &u.Name)
	if errors.Is(err, sql.ErrNoRows) {
		return nil, fmt.Errorf("GetUser(%s): %w", id, ErrNotFound)
	}
	if err != nil {
		return nil, fmt.Errorf("GetUser(%s): %w", id, err)
	}
	return &u, nil
}
Usage
user, err := store.GetUser("abc-123")
if err != nil {
	// Check sentinel error — works through any number of wrapping layers.
	if errors.Is(err, store.ErrNotFound) {
		http.Error(w, "user not found", http.StatusNotFound)
		return
	}

	// Check for a specific error type.
	var valErr *store.ValidationError
	if errors.As(err, &valErr) {
		http.Error(w, valErr.Message, http.StatusBadRequest)
		return
	}

	// Unknown error.
	log.Printf("unexpected: %v", err)
	http.Error(w, "internal error", http.StatusInternalServerError)
}
Rules of Thumb
- Use %w (not %v) in fmt.Errorf to preserve the error chain for errors.Is and errors.As.
- Define sentinel errors at the package level with errors.New. Keep them stable — callers depend on them.
- Use errors.Is for value comparison (sentinels), errors.As for type assertion (custom error types).
- Add context when wrapping (fmt.Errorf("GetUser(%s): %w", id, err)) so the error message describes the path.
- Never compare errors with == if they might be wrapped — always use errors.Is.
Table-Driven Tests Easy
Table-driven testing is Go’s idiomatic approach to writing test cases. Instead of separate test functions per scenario, you define a table (slice of structs) where each entry describes an input/expected-output pair, then loop over the table. This reduces duplication and makes it trivial to add new test cases.
Implementation
package mathutil
func Abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

func Clamp(val, min, max int) int {
	if val < min {
		return min
	}
	if val > max {
		return max
	}
	return val
}
Usage
package mathutil

import "testing"
func TestAbs(t *testing.T) {
	tests := []struct {
		name  string
		input int
		want  int
	}{
		{"positive", 5, 5},
		{"negative", -3, 3},
		{"zero", 0, 0},
		{"negative one", -1, 1},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Abs(tt.input)
			if got != tt.want {
				t.Errorf("Abs(%d) = %d, want %d", tt.input, got, tt.want)
			}
		})
	}
}

func TestClamp(t *testing.T) {
	tests := []struct {
		name          string
		val, min, max int
		want          int
	}{
		{"within range", 5, 0, 10, 5},
		{"below min", -3, 0, 10, 0},
		{"above max", 15, 0, 10, 10},
		{"at min", 0, 0, 10, 0},
		{"at max", 10, 0, 10, 10},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := Clamp(tt.val, tt.min, tt.max)
			if got != tt.want {
				t.Errorf("Clamp(%d, %d, %d) = %d, want %d",
					tt.val, tt.min, tt.max, got, tt.want)
			}
		})
	}
}
Rules of Thumb
- Always use t.Run(tt.name, ...) to create subtests — this gives each case its own name in test output and allows running individual cases with -run.
- Name the test struct variable tt (or tc) and the slice tests — this is the community convention.
- Include both typical and edge-case inputs in the table.
- For error-returning functions, add a wantErr bool or wantErr error field to the struct.
- Table-driven tests work well with t.Parallel() — add it inside t.Run for concurrent test execution.
Dependency Injection Easy
Dependency injection (DI) in Go means passing dependencies (usually as interfaces) into a struct or function rather than having them create their own. This makes code testable, composable, and decoupled from concrete implementations. No framework needed — Go interfaces and constructors are sufficient.
Implementation
package order
import "fmt"

// Repository is the dependency interface — any storage backend can satisfy it.
type Repository interface {
	Save(o Order) error
	FindByID(id string) (Order, error)
}

// Notifier is another dependency.
type Notifier interface {
	Notify(userID, message string) error
}

// Service is the business logic layer. Dependencies are injected via the constructor.
type Service struct {
	repo     Repository
	notifier Notifier
}

func NewService(repo Repository, notifier Notifier) *Service {
	return &Service{repo: repo, notifier: notifier}
}

func (s *Service) PlaceOrder(o Order) error {
	if err := s.repo.Save(o); err != nil {
		return fmt.Errorf("save order: %w", err)
	}
	return s.notifier.Notify(o.UserID, "Your order has been placed")
}
Usage
// Production wiring — real implementations.
db := postgres.NewOrderRepo(connStr)
email := smtp.NewNotifier(smtpHost)
svc := order.NewService(db, email)

// Test wiring — mock implementations.
func TestPlaceOrder(t *testing.T) {
	mockRepo := &MockRepo{SaveFn: func(o Order) error { return nil }}
	mockNotify := &MockNotifier{NotifyFn: func(uid, msg string) error { return nil }}
	svc := order.NewService(mockRepo, mockNotify)

	err := svc.PlaceOrder(Order{ID: "1", UserID: "alice"})
	if err != nil {
		t.Fatal(err)
	}
}
Rules of Thumb
- Accept interfaces, return structs. Define the interface where it’s consumed (not where it’s implemented).
- Inject dependencies through constructors (NewService(...)) — avoid global state and init-time wiring.
- Keep interfaces small. One or two methods is ideal — Go’s implicit interface satisfaction makes this natural.
- Don’t reach for a DI framework. Constructor injection + interfaces covers 99% of Go use cases.
Cascading Failures (Anti-Pattern) Medium
A cascading failure occurs when a failure in one component of an interconnected system triggers failures in dependent components, creating a domino effect that can bring down the entire system. This is an anti-pattern — something to recognize and prevent.
How It Happens
Service A (overloaded)
→ times out responding to Service B
→ Service B's thread pool fills up waiting on A
→ Service C can't reach B
→ System-wide outage
Example: The Problem
package main

import (
	"fmt"
	"net/http"
)

// BAD: No timeout, no circuit breaker, no bulkhead.
// If service A is slow, this handler holds a goroutine and connection
// indefinitely, eventually exhausting server resources.
func handleRequest(w http.ResponseWriter, r *http.Request) {
	resp, err := http.Get("http://service-a/api/data")
	if err != nil {
		// Service A is down — but we've already waited a long time.
		// Meanwhile, hundreds of requests piled up behind us.
		http.Error(w, "service unavailable", http.StatusServiceUnavailable)
		return
	}
	defer resp.Body.Close()
	fmt.Fprintf(w, "got data from service A")
}
Prevention Strategies
1. Timeouts
Always set deadlines on outbound calls.
client := &http.Client{
	Timeout: 2 * time.Second,
}
resp, err := client.Get("http://service-a/api/data")
2. Circuit Breaker
Stop calling a failing service to give it time to recover (see Circuit-Breaker).
3. Bulkheads
Isolate resource pools per dependency so one slow service doesn’t consume all resources (see Bulkheads).
4. Fail-Fast
Check dependency health before attempting expensive work (see Fail-Fast).
5. Graceful Degradation
Return cached or default responses when a dependency is unavailable.
func getData(client *http.Client, cache *Cache) (string, error) {
	resp, err := client.Get("http://service-a/api/data")
	if err != nil {
		// Fall back to cached data instead of failing entirely.
		if cached, ok := cache.Get("data"); ok {
			return cached, nil
		}
		return "", err
	}
	defer resp.Body.Close()

	// Read and return the fresh response body.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}
Rules of Thumb
- Every network call needs a timeout. No exceptions.
- Design for failure: assume every dependency will fail and plan what happens when it does.
- Monitor inter-service latency and error rates. Cascading failures often start with a subtle latency increase long before a hard failure.
- Test failure scenarios with chaos engineering tools to verify that your safeguards actually work.
- Combine multiple stability patterns (timeouts + circuit breaker + bulkhead) for defense in depth.
Contributing to Go Design Patterns
Thanks for your interest in contributing! This is an actively maintained collection of Go design and application patterns. Whether you’re fixing a typo, improving an existing pattern, or proposing a new one, your help is appreciated.
Ways to Contribute
- Improve existing patterns — better examples, clearer explanations, bug fixes in code snippets
- Add new patterns — propose patterns not yet covered (open an issue first to discuss)
- Fix issues — check the open issues for things to work on
- Review — read through patterns and report anything unclear or incorrect
Getting Started
- Fork this repository
- Create a feature branch: git checkout -b <category>/<pattern-name>
- Make your changes
- Commit following the message guidelines below
- Push to your fork and open a pull request
Pull Request Guidelines
- Make an individual pull request for each suggestion.
- Choose the corresponding patterns section for your suggestion.
- After your addition, the list should still be in lexicographical order.
- Ensure Go code snippets compile and are idiomatic — use gofmt style.
- Keep examples minimal and focused. Avoid unnecessary boilerplate.
Commit Message Guidelines
- The message should be in imperative form and uncapitalized.
- If possible, please include an explanation in the commit message body.
- Use the form <pattern-section>/<pattern-name>: <message>
  - e.g. creational/singleton: refactor singleton constructor
  - e.g. behavioral/visitor: fix interface example
Pattern Template
Each pattern should have a single markdown file containing the important part of the implementation, the usage and the explanations for it. This is to ensure that the reader doesn’t have to read a bunch of boilerplate to understand what’s going on and the code is as simple as possible and not simpler.
Please use the following template for adding new patterns:
# <Pattern-Name>

<Pattern description>

## Implementation

```go
// Go implementation here
```

## Usage

```go
// Usage example here
```

## Rules of Thumb

- Bullet points with practical advice
## Code Style
- All Go code should follow standard `gofmt` formatting
- Use meaningful variable and function names
- Include comments only where the logic isn't self-evident
- Prefer standard library packages over external dependencies
- Use Go generics where they improve clarity (Go 1.18+)
## Questions?
Open an issue if you're unsure about anything. We're happy to help!