# Concurrency — threads and channels
## Why concurrency?
In Chapter 17, you built a TCP server that handles one client at a time. While it is talking to client A, clients B, C, and D have to wait. For a chat app or a web server, that is unacceptable.
Concurrency means doing multiple things at once — or at least appearing to. With concurrency, your server can handle hundreds of clients simultaneously.
Nyx provides three tools for concurrency: threads, channels, and mutexes.
## What is a thread?
A thread is like a second worker inside your program. Your main program is one thread. When you spawn a new thread, you get a second worker that runs at the same time as the first.
Think of a restaurant kitchen: one chef can only cook one dish at a time. Two chefs can cook two dishes simultaneously. Threads are your extra chefs.
## Spawning a thread

```
fn compute() -> int {
    var total: int = 0
    var i: int = 0
    while i < 1000000 {
        total += 1
        i += 1
    }
    return total
}

fn main() {
    print("Starting thread...")
    let handle: int = thread_spawn(compute)
    print("Main thread continues while worker computes...")
    let result: int = thread_join(handle)
    print("Thread finished with: " + int_to_string(result))
}
```

- `thread_spawn(function)` starts a new thread running the given function and returns a handle.
- `thread_join(handle)` waits for the thread to finish and returns its result.
Between spawn and join, both threads run simultaneously.
## Multiple threads

```
fn work_a() -> int {
    print("Worker A started")
    var i: int = 0
    while i < 500000 { i += 1 }
    print("Worker A done")
    return 1
}

fn work_b() -> int {
    print("Worker B started")
    var i: int = 0
    while i < 500000 { i += 1 }
    print("Worker B done")
    return 2
}

fn work_c() -> int {
    print("Worker C started")
    var i: int = 0
    while i < 500000 { i += 1 }
    print("Worker C done")
    return 3
}

fn main() {
    let ha: int = thread_spawn(work_a)
    let hb: int = thread_spawn(work_b)
    let hc: int = thread_spawn(work_c)
    let ra: int = thread_join(ha)
    let rb: int = thread_join(hb)
    let rc: int = thread_join(hc)
    print("Results: " + int_to_string(ra) + ", " + int_to_string(rb) + ", " + int_to_string(rc))
}
```
All three workers run concurrently. The output order of "started"/"done" messages may vary between runs — that is concurrency in action.
## The problem with shared data
What happens when two threads try to modify the same variable?
```
Thread A: reads counter  (0)
Thread B: reads counter  (0)
Thread A: writes counter (1)
Thread B: writes counter (1)   ← should be 2, but it's 1!
```
This is called a race condition — both threads "race" to use the data, and the result depends on who gets there first. Race conditions cause subtle, hard-to-reproduce bugs.
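This interleaving is easy to reproduce. The sketch below uses Python (chosen here only because it is widely runnable; the Nyx behaviour is analogous) and inserts a deliberate pause between the read and the write, so the bad schedule happens on every run:

```python
import threading
import time

counter = 0

def unsafe_increment():
    # Read-modify-write with a pause in the middle. The pause guarantees
    # both threads read the old value before either writes.
    global counter
    local = counter        # read (both threads see 0)
    time.sleep(0.1)        # pretend to do some work
    counter = local + 1    # write (the second write clobbers the first)

a = threading.Thread(target=unsafe_increment)
b = threading.Thread(target=unsafe_increment)
a.start()
b.start()
a.join()
b.join()

print(counter)  # 1, not 2: one increment was lost
```

In real code the pause is just unlucky scheduling, which is why race conditions show up intermittently rather than on every run.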
## Mutexes: protecting shared data
A mutex (mutual exclusion) is a lock. Only one thread can hold the lock at a time. If thread B tries to lock a mutex that thread A already holds, thread B waits until thread A unlocks it.
```
fn main() {
    let m: Map = mutex_new()
    var counter: int = 0

    fn worker_a() -> int {
        var i: int = 0
        while i < 100 {
            mutex_lock(m)
            counter = counter + 1
            mutex_unlock(m)
            i += 1
        }
        return 0
    }

    fn worker_b() -> int {
        var i: int = 0
        while i < 100 {
            mutex_lock(m)
            counter = counter + 1
            mutex_unlock(m)
            i += 1
        }
        return 0
    }

    let ha: int = thread_spawn(worker_a)
    let hb: int = thread_spawn(worker_b)
    thread_join(ha)
    thread_join(hb)
    print("Counter: " + int_to_string(counter))  // 200
}
```
Without the mutex, the counter might end up less than 200 due to race conditions. With it, exactly one thread modifies counter at a time.
Key rules:
- `mutex_new()` creates a new mutex.
- `mutex_lock(m)` acquires the lock (waits if another thread holds it).
- `mutex_unlock(m)` releases the lock.
- Always unlock what you lock. Forgetting to unlock causes deadlocks: threads waiting forever.
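Most languages give the "always unlock" rule a structural safety net: release the lock in a `finally` block (or a scoped lock), so the unlock happens even if the critical section fails. Here is the same counter example sketched in Python, with `threading.Lock` standing in for the Nyx mutex:

```python
import threading

lock = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(100):
        lock.acquire()
        try:
            counter = counter + 1   # critical section: one thread at a time
        finally:
            lock.release()          # runs even if the critical section raises

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200, every run
```

In Python, `with lock:` is the idiomatic shorthand for this acquire/try/finally pattern.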
## Channels: safe communication between threads
A channel is a pipe between threads. One thread sends values in, another thread takes values out. Channels are the safest way for threads to communicate.
```
fn main() {
    let ch: Map = channel_new(8)

    fn producer() -> int {
        var i: int = 0
        while i < 5 {
            channel_send(ch, i * 10)
            i += 1
        }
        return 0
    }

    let handle: int = thread_spawn(producer)

    var i: int = 0
    while i < 5 {
        let value: int = channel_recv(ch)
        print("Received: " + int_to_string(value))
        i += 1
    }
    thread_join(handle)
}
```
Output:
```
Received: 0
Received: 10
Received: 20
Received: 30
Received: 40
```
- `channel_new(capacity)` creates a channel that can buffer up to `capacity` items.
- `channel_send(ch, value)` sends a value. Blocks if the channel is full.
- `channel_recv(ch)` receives a value. Blocks if the channel is empty.
- `channel_try_recv(ch)` tries to receive without blocking. Returns -1 if the channel is empty.
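For comparison, Python's standard-library `queue.Queue` provides the same four operations; the main difference is that the non-blocking receive signals "empty" with an exception rather than a -1 sentinel. A small sketch:

```python
import queue

ch = queue.Queue(maxsize=8)   # roughly channel_new(8)

ch.put(10)                    # like channel_send: blocks only when the buffer is full
ch.put(20)

print(ch.get())               # like channel_recv: prints 10 (FIFO order)
print(ch.get())               # prints 20

# Like channel_try_recv: attempt a receive without blocking.
try:
    ch.get_nowait()
except queue.Empty:
    print("channel empty")
```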
## Pattern: worker pool
A common pattern is to have multiple worker threads reading from the same channel:
```
fn main() {
    let jobs: Map = channel_new(16)
    let results: Map = channel_new(16)

    fn worker() -> int {
        while 1 > 0 {
            let job: int = channel_recv(jobs)
            if job < 0 { return 0 }
            // "Process" the job: square the number
            channel_send(results, job * job)
        }
        return 0
    }

    // Start 4 workers
    let w1: int = thread_spawn(worker)
    let w2: int = thread_spawn(worker)
    let w3: int = thread_spawn(worker)
    let w4: int = thread_spawn(worker)

    // Send 8 jobs
    var i: int = 1
    while i <= 8 {
        channel_send(jobs, i)
        i += 1
    }

    // Collect 8 results
    var total: int = 0
    i = 0
    while i < 8 {
        let r: int = channel_recv(results)
        print("Result: " + int_to_string(r))
        total += r
        i += 1
    }
    print("Total: " + int_to_string(total))

    // Signal workers to stop (one sentinel per worker)
    channel_send(jobs, -1)
    channel_send(jobs, -1)
    channel_send(jobs, -1)
    channel_send(jobs, -1)
    thread_join(w1)
    thread_join(w2)
    thread_join(w3)
    thread_join(w4)
}
```
The workers run in parallel, each grabbing the next available job from the channel. This is how real-world servers distribute work across CPU cores.
## Multi-threaded HTTP server
Nyx's standard library includes http_serve_mt — a multi-threaded HTTP server that uses a thread pool internally:
```
import { http_serve_mt, http_response } from "std/http"

fn on_request(request: Array) -> String {
    let path: String = request[2]
    if path == "/" {
        return http_response(200, "Hello from a multi-threaded server!")
    }
    return http_response(404, "Not Found")
}

fn main() {
    print("Multi-threaded server on http://localhost:8080")
    http_serve_mt(8080, 4, on_request)
}
```
The second parameter (4) is the number of worker threads. Each thread handles requests independently, so multiple clients are served simultaneously. This is how Nyx achieves 73,000+ requests per second.
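The same serve-requests-on-worker-threads idea exists in Python's standard library as `ThreadingHTTPServer` (with the caveat that it spawns a thread per connection rather than using a fixed pool of four workers). A minimal sketch, offered only as a point of comparison:

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import threading
import urllib.request

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            status, body = 200, b"Hello from a multi-threaded server!"
        else:
            status, body = 404, b"Not Found"
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port.
server = ThreadingHTTPServer(("localhost", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

print(urllib.request.urlopen(f"http://localhost:{port}/").read().decode())
server.shutdown()
```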
## Practical example: parallel computation
Split a large task across multiple threads:
```
fn main() {
    let ch: Map = channel_new(4)

    fn sum_range_1() -> int {
        var total: int = 0
        var i: int = 1
        while i <= 250000 {
            total += i
            i += 1
        }
        channel_send(ch, total)
        return 0
    }

    fn sum_range_2() -> int {
        var total: int = 0
        var i: int = 250001
        while i <= 500000 {
            total += i
            i += 1
        }
        channel_send(ch, total)
        return 0
    }

    fn sum_range_3() -> int {
        var total: int = 0
        var i: int = 500001
        while i <= 750000 {
            total += i
            i += 1
        }
        channel_send(ch, total)
        return 0
    }

    fn sum_range_4() -> int {
        var total: int = 0
        var i: int = 750001
        while i <= 1000000 {
            total += i
            i += 1
        }
        channel_send(ch, total)
        return 0
    }

    let h1: int = thread_spawn(sum_range_1)
    let h2: int = thread_spawn(sum_range_2)
    let h3: int = thread_spawn(sum_range_3)
    let h4: int = thread_spawn(sum_range_4)

    var grand_total: int = 0
    var i: int = 0
    while i < 4 {
        grand_total += channel_recv(ch)
        i += 1
    }
    thread_join(h1)
    thread_join(h2)
    thread_join(h3)
    thread_join(h4)

    print("Sum of 1 to 1,000,000 = " + int_to_string(grand_total))  // 500000500000
}
```
Four threads each compute a quarter of the sum, then the main thread adds the partial results.
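This split-then-combine structure is exactly what thread-pool APIs package up in other languages. As a point of comparison, here is a Python sketch using `concurrent.futures`, where the pool's result collection replaces the hand-rolled channel:

```python
from concurrent.futures import ThreadPoolExecutor

def sum_range(lo, hi):
    # Partial sum of the inclusive range [lo, hi].
    return sum(range(lo, hi + 1))

# Split 1..1,000,000 into four quarters, one per thread.
quarters = [(1, 250000), (250001, 500000), (500001, 750000), (750001, 1000000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = pool.map(lambda bounds: sum_range(*bounds), quarters)
    grand_total = sum(partials)

print(grand_total)  # 500000500000
```

One caveat: in CPython the global interpreter lock means these threads interleave rather than run truly in parallel for pure-Python arithmetic, so the point of this sketch is the structure, not a speedup.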
## Exercises

- Write a program that spawns 3 threads, each printing its own message, and waits for all to finish.
- Use a mutex to safely increment a shared counter from 5 threads, each adding 1000. The final count should be 5000.
- Build a producer-consumer system: one thread generates the numbers 1-20 and sends them through a channel. Another thread receives and prints them.
- Write a parallel map: split an array into chunks, process each chunk in a separate thread, and collect the results via a channel.
- Build a multi-threaded echo server using `tcp_listen`, `tcp_accept`, and `thread_spawn`: spawn a new thread for each client connection.
## Summary

- `thread_spawn(fn)` starts a new thread; `thread_join(handle)` waits for it.
- Race conditions happen when threads access shared data without coordination.
- `mutex_new()`, `mutex_lock(m)`, and `mutex_unlock(m)` protect shared data.
- `channel_new(cap)`, `channel_send(ch, val)`, and `channel_recv(ch)` let threads communicate safely.
- Worker pools use channels to distribute work across threads.
- `http_serve_mt(port, workers, handler)` runs a multi-threaded HTTP server.
- Rule of thumb: prefer channels over mutexes. "Don't communicate by sharing memory; share memory by communicating."
Next chapter: Your second project — A web server →