Golang concurrency

From wikinotes

This page is about the methods of concurrency provided by Go.
If you're looking for synchronization primitives (ex. mutexes, semaphores, ...), see golang synchronization.


Chan https://pkg.go.dev/go/types@go1.18.3#Chan

Not Present

Go does not:

  • expose OS threads, you only have access to its green threads
  • abstract multiprocessing, but you could roll your own with a subprocess and IPC if you wanted to
  • provide a message-queue implementation, use channels instead



Goroutines use green threads rather than OS threads.
An OS thread is relatively expensive to set up, and each thread reserves memory for its own stack.
Go abstracts threads/threadpools with goroutines, multiplexing many goroutines onto a small pool of OS threads, which makes spawning concurrent work relatively cheap.

func sayHello() {
    fmt.Println("hello")
}

func main() {
    go sayHello()  // <-- run in a goroutine (green thread)
}

Go passes function arguments by value rather than by reference.
Depending on your datastructure, this makes goroutines fairly concurrency-safe, since each goroutine operates on a copy of the data rather than the shared original.

func printThing(a string) {
    fmt.Println(a)
}

go printThing("abc")  // <-- the goroutine gets its own copy of "abc"


go run -race foo.go  # run, checking for race conditions

Goroutines cannot be stopped externally; you must use the poison-pill pattern for event loops.
If you spin up event loops while testing, make sure to explicitly stop the goroutine after each test.


Threads are a finite resource. You only have so many CPU cores, and a CPU core can only execute one thread at a time. Go defaults to allowing one thread per core, but for workloads that block frequently you can sometimes gain performance by adjusting this.

import "runtime"

runtime.GOMAXPROCS(-1)  // show configured max-number of threads (n < 1 leaves the setting unchanged)
runtime.GOMAXPROCS(2)   // set max-number of threads

runtime.NumCPU()        // cpu core count



Channels serve as message queues between goroutines.
Channels are typed, and you may optionally restrict one to a single direction (ex. read-only or write-only).


Create channel

ch := make(chan int)       // unbuffered channel (sends/recvs ints; a send blocks until a receiver is ready)
ch := make(chan int, 100)  // buffered channel (sends/recvs ints, enqueues a max of 100 ints at a time)

Read/write channel

num := <- ch      // read next item from channel
num, ok := <- ch  // read next item from channel (ok false when channel is closed)

ch <- 123         // send an item to the channel

Channel functions

ch := make(chan int)
close(ch)  // close it so no more items can be written (reads still drain any buffered items)
len(ch)    // number of unread elements enqueued

Channel direction in a method signature

go func(ch <-chan int) { ... } // read-only channel
go func(ch chan<- int) { ... } // write-only channel

Deadlock Protection

Go tries to protect you from deadlocks.
You can iterate over enqueued messages in the channel,
but if it is impossible for your application to enqueue any more items,
and you are still looping through enqueued messages, the Go runtime will panic
("all goroutines are asleep - deadlock!").

In order to avoid this, you must close the channel when you know you are done sending messages.

import (
    "fmt"
    "sync"
)

var wg sync.WaitGroup

func recv(ch <-chan int) {
    defer wg.Done()
    for i := range ch {  // <-- iterate over enqueued messages, as they become ready
        fmt.Println(i)
    }
}

func send(ch chan<- int) {
    for i := 0; i < 10; i++ {
        ch <- i
    }
    close(ch)  // <-- indicate that no more items will be sent
}

func main() {
    ch := make(chan int, 50)
    wg.Add(1)
    go send(ch)
    go recv(ch)
    wg.Wait()  // <-- wait for recv to drain the channel
}

Select on Read

Go's implementation of select is fairly unique: it blocks until one of several channel operations is ready.

// blocking select statement
for {
    select {
    case signal := <-signals_chan:
        // ...
    case command := <-commands_chan:
        // ...
    }
}

// optionally, if you add a 'default' section,
// the select statement becomes non-blocking.
// (your default code is run whenever no channels have data)
for {
    select {
    case signal := <-signals_chan:
        // ...
    case command := <-commands_chan:
        // ...
    default:
        // ...
    }
}

Channel Buffer on Writes

You can enqueue as much as you'd like to a buffered channel; once the buffer is full,
the sending goroutine automatically pauses until a receiver makes room for more items.