# Async and the event loop

## The problem with blocking
In Chapter 17, you learned that `tcp_accept` blocks: the program stops and waits until a client connects. In Chapter 18, you solved this with threads, spawning a new thread for each client.
But threads have costs. Each thread uses memory for its stack (~8KB minimum). With 10,000 concurrent connections, that is 80MB just for stacks. And switching between threads has overhead.
Async I/O is a different approach: instead of blocking on each operation, you tell the operating system "notify me when data is ready" and continue doing other work. One thread can handle thousands of connections.
## Goroutines in Nyx

Nyx provides lightweight concurrency with `spawn`, similar to goroutines in Go:
```
fn main() {
    spawn {
        print("Hello from goroutine 1")
    }
    spawn {
        print("Hello from goroutine 2")
    }
    sleep(50)  // wait for goroutines to finish
    print("Main done")
}
```
Unlike `thread_spawn` (which creates an OS thread), `spawn` creates a lightweight task managed by Nyx's M:N scheduler. Many goroutines map onto fewer OS threads.
## The M:N scheduler

Nyx's scheduler maps N goroutines onto M OS threads:

```
Goroutines:  [g1]  [g2]  [g3]  [g4]  [g5]  [g6]  [g7]  [g8]
               ↓     ↓     ↓     ↓     ↓     ↓     ↓     ↓
OS Threads:  [Thread 1]  [Thread 2]  [Thread 3]  [Thread 4]
```
The scheduler uses work-stealing: if one thread runs out of goroutines, it steals work from another thread's queue. This keeps all CPU cores busy.
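Because goroutines are so cheap, spawning a thousand of them is unremarkable. A sketch, using only the `spawn` and channel primitives from this chapter (it assumes, as in the earlier examples, that a `spawn` block can capture outer variables like the channel):

```
fn main() {
    let done: Map = channel_new(64)

    // Spawn 1000 goroutines; the M:N scheduler spreads them
    // across a small pool of OS threads.
    var i: int = 0
    while i < 1000 {
        spawn {
            channel_send(done, 1)
        }
        i += 1
    }

    // Wait for all of them to report back. Sends beyond the
    // channel's capacity block the goroutines, not main.
    var count: int = 0
    while count < 1000 {
        channel_recv(done)
        count += 1
    }
    print("All 1000 goroutines finished")
}
```

The same program written with `thread_spawn` would allocate a thousand OS thread stacks; here the runtime multiplexes the work onto a handful of threads.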
## Select on channels

When you have multiple channels, `select` lets you wait for whichever one has data first:
```
fn main() {
    let ch1: Map = channel_new(4)
    let ch2: Map = channel_new(4)

    spawn {
        sleep(100)
        channel_send(ch1, 42)
    }
    spawn {
        sleep(50)
        channel_send(ch2, 99)
    }

    select {
        case ch1 => {
            let v: int = channel_recv(ch1)
            print("ch1: " + int_to_string(v))
        }
        case ch2 => {
            let v: int = channel_recv(ch2)
            print("ch2: " + int_to_string(v))
        }
    }
}
```
`select` is like `match` for channels: it picks the first channel that has data. If none are ready, it blocks until one is.
## The event loop
Under the hood, Nyx uses an epoll-based event loop (on Linux) for async I/O. When a goroutine needs to wait for network data, instead of blocking the entire OS thread, it:
- Registers interest with epoll ("tell me when this socket has data").
- Yields the goroutine, allowing others to run.
- Gets woken up when data arrives.
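The steps above can be sketched as pseudocode. This is not the real runtime API; the `epoll_wait`, `wake_goroutine`, and `run_scheduler` names below are hypothetical, for illustration only, and the actual loop lives inside the scheduler:

```
// Simplified event-loop sketch (hypothetical bindings).
fn event_loop() {
    while true {
        // Block until one or more registered sockets are ready
        let ready: Map = epoll_wait()

        // Wake every goroutine that was parked on a ready socket
        var i: int = 0
        while i < len(ready) {
            wake_goroutine(ready[i])
            i += 1
        }

        // Run runnable goroutines until they all block again
        run_scheduler()
    }
}
```

The key property is that one OS thread alternates between "which sockets are ready?" and "run the goroutines that were waiting on them", instead of dedicating a blocked thread to each socket.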
This is why `http_serve_mt` can handle 73,000+ requests per second: it combines thread pools with event-driven I/O.
## Async/await syntax

Nyx also provides `async`/`await` syntax:
```
async fn fetch_data() -> String {
    return "data loaded"
}

fn main() {
    let result: String = await fetch_data()
    print(result)  // data loaded
}
```
Important note: in the current version, `async`/`await` is syntactic sugar; it does not provide additional parallelism beyond what `spawn` and threads offer. An `async fn` creates a closure, and `await` calls it. For real parallelism, use `spawn` or `thread_spawn`.
## When to use what

| Tool | Best for | Overhead |
|---|---|---|
| `thread_spawn` | CPU-intensive work | ~8KB per thread + OS overhead |
| `spawn` | I/O-bound goroutines | ~1KB per goroutine |
| `channel_new` + workers | Task distribution | Minimal |
| `http_serve_mt` | HTTP servers | Combines threads + channels |
| `select` | Waiting on multiple channels | Zero overhead |
Rule of thumb: use `spawn` for I/O-bound work (network, files), `thread_spawn` for CPU-bound work (computation, compression), and `http_serve_mt` for HTTP servers.
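The `channel_new` + workers row deserves a sketch of its own: a small worker pool that fans jobs out over a channel and fans results back in. It uses only functions already introduced; the worker count of 4 is arbitrary, and it assumes (as in Go) that goroutines still blocked on `channel_recv` simply die when `main` returns:

```
fn main() {
    let jobs: Map = channel_new(16)
    let results: Map = channel_new(16)

    // Fan out: 4 workers pull jobs and push doubled results
    var w: int = 0
    while w < 4 {
        spawn {
            while true {
                let n: int = channel_recv(jobs)
                channel_send(results, n * 2)
            }
        }
        w += 1
    }

    // Producer: send 8 jobs
    var i: int = 1
    while i <= 8 {
        channel_send(jobs, i)
        i += 1
    }

    // Fan in: collect 8 results
    var c: int = 0
    while c < 8 {
        let r: int = channel_recv(results)
        print("Result: " + int_to_string(r))
        c += 1
    }
}
```

Because every worker reads from the same `jobs` channel, the channel itself does the load balancing: whichever worker is free takes the next job.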
## Practical example: concurrent fetcher
```
fn main() {
    let results: Map = channel_new(8)

    // Simulate fetching data from 4 sources concurrently
    spawn {
        sleep(100)  // simulate network delay
        channel_send(results, 1)
    }
    spawn {
        sleep(200)
        channel_send(results, 2)
    }
    spawn {
        sleep(50)
        channel_send(results, 3)
    }
    spawn {
        sleep(150)
        channel_send(results, 4)
    }

    // Collect all results
    var i: int = 0
    while i < 4 {
        let r: int = channel_recv(results)
        print("Got result: " + int_to_string(r))
        i += 1
    }
    print("All done!")
}
```
Results arrive in order of completion (fastest first), not in order of creation.
## Practical example: timeout pattern
```
fn main() {
    let result: Map = channel_new(1)
    let timeout: Map = channel_new(1)

    spawn {
        sleep(500)  // simulate slow operation
        channel_send(result, 42)
    }
    spawn {
        sleep(200)  // timeout after 200ms
        channel_send(timeout, 0)
    }

    select {
        case result => {
            let v: int = channel_recv(result)
            print("Got result: " + int_to_string(v))
        }
        case timeout => {
            channel_recv(timeout)
            print("Timed out!")
        }
    }
}
```
If the operation takes longer than 200ms, the timeout fires first.
## Exercises
- Write a program that spawns 10 goroutines, each sending its ID through a channel. Collect and print all IDs.
- Implement a fan-out/fan-in pattern: one producer sends numbers 1-100 to a channel, 4 workers square each number, and a collector sums the results.
- Use `select` to implement a simple priority system: high-priority and low-priority channels, where high-priority is always checked first.
- Write a concurrent countdown: spawn 5 goroutines that each count down from 5 to 1 with different delays, all printing their progress.
- Implement a timeout wrapper: a function that runs a computation in a goroutine and returns a default value if it takes too long.
## Summary

- `spawn { }` creates lightweight goroutines managed by the M:N scheduler.
- The scheduler uses work-stealing to keep all CPU cores busy.
- `select { case ch => { } }` waits on multiple channels.
- Nyx uses an epoll-based event loop for efficient async I/O.
- `async`/`await` is syntactic sugar (not additional parallelism).
- Use `spawn` for I/O-bound work, `thread_spawn` for CPU-bound work.
- Channels + goroutines enable patterns: fan-out/fan-in, timeouts, priority queues.
Next chapter: Systems — inline assembly, volatile, atomic →