Case study — How nyx-kv was built
What is nyx-kv?
nyx-kv is a Redis-compatible key-value database written entirely in Nyx. It speaks the Redis protocol (RESP), so any Redis client — redis-cli, Python's redis-py, Node's ioredis — can connect to it without modification.
It handles 6.76 million SET operations per second and 21.57 million GET operations per second in benchmarks. This chapter walks through how it was designed and built, decision by decision.
The architecture at a glance
```
                     ┌─────────────┐
redis-cli ──────────►│ TCP Accept  │
python    ──────────►│    Loop     │
node.js   ──────────►│   (main)    │
                     └──────┬──────┘
                            │ channel_send(fd)
                     ┌──────▼──────┐
                     │   Channel   │
                     │  (capacity: │
                     │    512)     │
                     └──────┬──────┘
                            │ channel_recv(fd)
             ┌──────────────┼──────────────┐
             ▼              ▼              ▼
        ┌──────────┐   ┌──────────┐   ┌──────────┐
        │ Worker 1 │   │ Worker 2 │   │Worker 128│
        │  (RESP   │   │  (RESP   │   │  (RESP   │
        │  parser  │   │  parser  │   │  parser  │
        │  + cmds) │   │  + cmds) │   │  + cmds) │
        └────┬─────┘   └────┬─────┘   └────┬─────┘
             │              │              │
             └──────────────┼──────────────┘
                            ▼
                    ┌───────────────┐
                    │ Global Store  │
                    │  (Map + TTL)  │
                    └───────────────┘
```
Three components:
- Accept loop — the main thread accepts TCP connections and distributes file descriptors via a channel.
- Worker pool — 128 threads read commands from clients and execute them.
- Global store — a Map for data and a Map for TTL (time-to-live) expiration.
Decision 1: Why a channel-based architecture?
The simplest server model is "one thread per connection." But creating a thread for every connection is expensive. Redis itself uses a single-threaded event loop.
nyx-kv takes a middle path: a fixed pool of 128 workers connected to the accept loop via a channel. This gives:
- Bounded resource usage — at most 128 threads, regardless of connection count.
- Simple load distribution — the channel acts as a queue; idle workers pick up the next client.
- No thread creation overhead — workers are pre-spawned.
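To make the shape of this concrete, here is a minimal Python sketch of the same channel-plus-worker-pool pattern. `queue.Queue` stands in for the Nyx channel, and doubling an integer stands in for "serve this client" — all names here are illustrative, not part of nyx-kv:

```python
import queue
import threading

# A bounded queue plays the role of the Nyx channel (capacity 512 in nyx-kv).
jobs: "queue.Queue[int]" = queue.Queue(maxsize=512)
results = []
lock = threading.Lock()

def worker() -> None:
    # Each worker blocks on the channel and handles one client at a time.
    while True:
        fd = jobs.get()
        if fd < 0:                   # sentinel: shut down
            return
        with lock:
            results.append(fd * 2)   # stand-in for serving the connection

# Pre-spawn a small pool (nyx-kv uses 128 workers).
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# "Accept loop": push fake file descriptors into the channel.
for fd in range(10):
    jobs.put(fd)
for _ in threads:
    jobs.put(-1)                     # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results))               # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Because the queue is bounded, a flood of new connections blocks the accept loop rather than exhausting memory — the same back-pressure the 512-slot Nyx channel provides.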
```
fn main() {
    g_ch = channel_new(512)
    let server: int = tcp_listen("0.0.0.0", port)

    // Pre-spawn all workers
    var i: int = 0
    while i < 128 {
        thread_spawn(kv_worker)
        i = i + 1
    }

    // Accept loop
    while 1 > 0 {
        let client: int = tcp_accept(server)
        if client >= 0 {
            channel_send(g_ch, client)
        }
    }
}
```
Decision 2: The RESP protocol
Redis uses a protocol called RESP (REdis Serialization Protocol). It is text-based and simple:
```
*3\r\n       ← array of 3 elements
$3\r\n       ← bulk string of 3 bytes
SET\r\n      ← the string "SET"
$4\r\n       ← bulk string of 4 bytes
name\r\n     ← the string "name"
$5\r\n       ← bulk string of 5 bytes
Alice\r\n    ← the string "Alice"
```
Responses:
- `+OK\r\n` — simple string
- `-ERR message\r\n` — error
- `:42\r\n` — integer
- `$5\r\nAlice\r\n` — bulk string
- `$-1\r\n` — null
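The framing is simple enough to implement in a few lines. As an illustration (in Python, not the Nyx parser itself), here is a sketch of a RESP command encoder and a simplified decoder that assumes the whole command is already buffered:

```python
def resp_encode(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = [f"*{len(args)}\r\n".encode()]
    for a in args:
        data = a.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

def resp_decode(buf: bytes) -> list:
    """Decode one RESP array of bulk strings.
    Simplified: assumes the full command is in `buf` and values
    contain no embedded CRLF."""
    lines = buf.split(b"\r\n")
    n = int(lines[0][1:])            # b"*3" -> 3 elements
    args = []
    i = 1
    for _ in range(n):
        # lines[i] is the "$len" header, lines[i+1] is the payload
        args.append(lines[i + 1].decode())
        i += 2
    return args

wire = resp_encode("SET", "name", "Alice")
print(wire)               # b'*3\r\n$3\r\nSET\r\n$4\r\nname\r\n$5\r\nAlice\r\n'
print(resp_decode(wire))  # ['SET', 'name', 'Alice']
```

A production parser reads incrementally from the socket and honors the byte lengths instead of splitting on CRLF; this version only shows the framing.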
The parser (resp.nx) reads this format from the socket:
```
fn resp_read_command(fd: int) -> Array {
    // Read *N to get argument count
    // For each argument, read $len then the data
    // Return array of strings: ["SET", "name", "Alice"]
}
```
A useful extra: the parser also supports inline commands (space-separated text terminated by CRLF), so a command like `SET name Alice` typed over a raw connection — e.g. via `telnet` or `nc` — works directly.
Decision 3: The storage layer
The simplest possible storage: two global Maps.
```
var g_store: Map = Map.new()   // key → value
var g_ttl: Map = Map.new()     // key → expiration (microseconds)
```
Why Maps and not a custom data structure?
- Nyx Maps use Robin Hood hashing — open addressing with good worst-case performance.
- Zero external dependencies — no need for FFI to a C hash table.
- Good enough performance — 21M+ GET ops/sec proves the point.
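For intuition, here is a toy Python sketch of the Robin Hood idea — not the Nyx runtime's actual implementation. On a collision, the entry that is furthest from its home slot keeps the slot ("steal from the rich"), which keeps worst-case probe lengths short:

```python
class RobinHoodMap:
    """Toy open-addressing map with Robin Hood insertion."""

    def __init__(self, capacity: int = 16):
        self.slots = [None] * capacity   # each slot: (key, value, distance)

    def _home(self, key: str) -> int:
        return hash(key) % len(self.slots)

    def set(self, key: str, value: str) -> None:
        idx = self._home(key)
        entry = (key, value, 0)
        while True:
            cur = self.slots[idx]
            if cur is None:
                self.slots[idx] = entry
                return
            if cur[0] == entry[0]:           # overwrite existing key
                self.slots[idx] = (entry[0], entry[1], cur[2])
                return
            if cur[2] < entry[2]:            # current entry is "richer": swap
                self.slots[idx], entry = entry, cur
            idx = (idx + 1) % len(self.slots)
            entry = (entry[0], entry[1], entry[2] + 1)

    def get(self, key: str):
        idx, dist = self._home(key), 0
        while True:
            cur = self.slots[idx]
            if cur is None or cur[2] < dist:
                return None                  # probing further is futile
            if cur[0] == key:
                return cur[1]
            idx = (idx + 1) % len(self.slots)
            dist += 1

m = RobinHoodMap()
m.set("name", "Alice")
m.set("plan", "pro")
print(m.get("name"))   # Alice
```

The lookup's early exit (`cur[2] < dist`) is the payoff: a miss is detected as soon as we pass an entry closer to its home than we are, instead of scanning to the next empty slot. (A real implementation also resizes; this toy assumes the table never fills.)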
Lazy expiration
nyx-kv does not run a background thread to expire keys. Instead, it checks expiration lazily — every time a key is accessed:
```
fn kv_check_expired(key: String) -> bool {
    if g_ttl.contains(key) {
        let expires: int = string_to_int(g_ttl.get(key))
        if expires > 0 and time_us() >= expires {
            g_store.remove(key)
            g_ttl.remove(key)
            return true
        }
    }
    return false
}
```
This is the same strategy Redis uses. It avoids the overhead of a background scanner and keeps the implementation simple.
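The same lazy-expiration strategy is easy to sketch in Python — dictionaries stand in for the two Maps, and expiry is tracked in seconds rather than the microseconds nyx-kv uses; names are illustrative:

```python
import time

store = {}   # key -> value
ttl = {}     # key -> absolute expiry time (monotonic seconds)

def check_expired(key: str) -> bool:
    """Evict the key if its TTL has passed. Called on every access."""
    expires = ttl.get(key)
    if expires is not None and time.monotonic() >= expires:
        store.pop(key, None)
        ttl.pop(key, None)
        return True
    return False

def kv_get(key: str):
    if check_expired(key):
        return None
    return store.get(key)

store["session"] = "abc"
ttl["session"] = time.monotonic() + 0.05   # expire 50 ms from now
print(kv_get("session"))   # abc
time.sleep(0.06)
print(kv_get("session"))   # None
```

The key never disappears "on time" — it disappears on the next access after its deadline. For a cache that is exactly the right trade: expired-but-untouched keys cost memory, not correctness.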
Decision 4: Command dispatch optimization
Benchmarks showed that SET and GET account for ~95% of traffic. So the command dispatcher uses first-character matching for the fast path:
```
fn dispatch_command(cmd: Array, fd: int) -> String {
    let cmd_name: String = cmd[0]
    let first: int = cmd_name.charAt(0)

    // Fast path: SET (S=83, s=115)
    if first == 83 or first == 115 {
        if cmd_name == "SET" or cmd_name == "set" {
            kv_set(cmd[1], cmd[2])
            return RESP_OK          // cached "+OK\r\n"
        }
    }

    // Fast path: GET (G=71, g=103)
    if first == 71 or first == 103 {
        if cmd_name == "GET" or cmd_name == "get" {
            // Write directly to socket — zero allocation
            resp_write_bulk(fd, kv_get(cmd[1]))
            return ""               // response already sent
        }
    }

    // Slow path: normalize and match other commands
    // ...
}
```
Key optimizations:
- Cached constants: `RESP_OK = "+OK\r\n"` is allocated once, not per request.
- Zero-allocation GET: instead of building a response string, `resp_write_bulk` writes directly to the socket.
- First-char dispatch: avoids full string comparison for the common case.
Decision 5: Zero-allocation GET responses
The biggest performance win was eliminating allocations from GET responses. Instead of:
```
// Allocates a new string every time
let response: String = resp_bulk_string(value)
tcp_write(client, response)
```
nyx-kv writes the RESP framing directly to the socket:
```
fn resp_write_bulk(fd: int, value: String) {
    // Write "$<len>\r\n<value>\r\n" directly to fd
    tcp_write(fd, "$")
    tcp_write(fd, int_to_string(value.length()))
    tcp_write(fd, "\r\n")
    tcp_write(fd, value)
    tcp_write(fd, "\r\n")
}
```
This avoids creating an intermediate string, reducing GC pressure in the hot path.
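The framing logic translates directly. Here is a Python sketch using an in-memory buffer in place of the socket — the `write_bulk` name mirrors the Nyx function, but everything else is illustrative:

```python
import io

def write_bulk(out, value: bytes) -> None:
    # Emit "$<len>\r\n<value>\r\n" piece by piece — no intermediate
    # response string is ever assembled.
    out.write(b"$")
    out.write(str(len(value)).encode())
    out.write(b"\r\n")
    out.write(value)
    out.write(b"\r\n")

buf = io.BytesIO()
write_bulk(buf, b"Alice")
print(buf.getvalue())   # b'$5\r\nAlice\r\n'
```

Note the trade-off: several small writes replace one concatenated string. Against a raw socket, many tiny writes can cost extra syscalls, so a userspace buffer (as here) or write batching keeps both allocation and syscall counts low.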
AUTH and multi-tenancy
nyx-kv supports multi-tenant isolation via authentication tokens. Each connection can authenticate with AUTH <token>, which assigns the connection to a user namespace.
```
$ redis-cli -p 6380
127.0.0.1:6380> AUTH abc123def456
OK
127.0.0.1:6380> WHOAMI
"alice:pro"
127.0.0.1:6380> SET name Alice      ← stored as alice::name internally
OK
```
Tokens are created by an admin from localhost:
```
127.0.0.1:6380> TOKEN_CREATE alice pro "abc123def456..."
```
Three plans control resource limits:
| Plan | Rate | Max keys | Max value | Forced TTL |
|---|---|---|---|---|
| free | 100 req/s | 1,000 | 100 KB | 72 hours |
| pro | 10,000 req/s | 100,000 | 1 MB | None |
| enterprise | unlimited | unlimited | unlimited | None |
Connections without AUTH get the free tier automatically, keyed by IP address. Namespace isolation is transparent — all key operations pass through auth_prefix_key, which prepends the user's ID.
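The prefixing itself is nearly a one-liner. A hypothetical sketch — the chapter does not show the real `auth_prefix_key` signature, so the parameter names and the IP-fallback format here are assumptions:

```python
from typing import Optional

def auth_prefix_key(user_id: Optional[str], key: str) -> str:
    """Namespace every key under its owner: "<user>::<key>".
    Unauthenticated connections fall back to an IP-derived
    free-tier namespace (format assumed for illustration)."""
    ns = user_id if user_id is not None else "ip:127.0.0.1"
    return f"{ns}::{key}"

print(auth_prefix_key("alice", "name"))   # alice::name
print(auth_prefix_key(None, "name"))      # ip:127.0.0.1::name
```

Because every read and write passes through this one function, tenants can never see each other's keys — isolation is a property of the key encoding, not of per-command checks.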
Pub/Sub
nyx-kv supports the publish/subscribe messaging pattern. Subscribers register interest in channels, and publishers broadcast messages to all subscribers of a channel.
```
$ redis-cli -p 6380
127.0.0.1:6380> SUBSCRIBE notifications
Reading messages...
1) "subscribe"
2) "notifications"
3) (integer) 1
```
From another terminal:
```
$ redis-cli -p 6380
127.0.0.1:6380> PUBLISH notifications "user signed up"
(integer) 1
```
The subscriber receives:
```
1) "message"
2) "notifications"
3) "user signed up"
```
This is fan-out delivery — every subscriber on a channel gets every message. Unlike message queues, there is no persistence or acknowledgment. Once a subscriber enters SUBSCRIBE mode, only SUBSCRIBE, UNSUBSCRIBE, and PING commands are accepted.
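Fan-out delivery reduces to "append the message to every subscriber's outbox." A minimal Python sketch, with plain lists standing in for client connections (names are illustrative):

```python
from collections import defaultdict

# channel name -> list of subscriber "connections"
# (here: plain lists acting as per-subscriber outboxes)
subscribers = defaultdict(list)

def subscribe(channel: str, outbox: list) -> None:
    subscribers[channel].append(outbox)

def publish(channel: str, message: str) -> int:
    """Fan-out: deliver to every subscriber of the channel and return
    the delivery count, as Redis's PUBLISH does."""
    targets = subscribers.get(channel, [])
    for outbox in targets:
        outbox.append(("message", channel, message))
    return len(targets)

a, b = [], []
subscribe("notifications", a)
subscribe("notifications", b)
n = publish("notifications", "user signed up")
print(n)      # 2
print(a[0])   # ('message', 'notifications', 'user signed up')
```

Because nothing is persisted, a subscriber that connects after a publish simply never sees that message — the queue-vs-pub/sub distinction in one line of behavior.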
Persistence (.ndb format)
nyx-kv persists data to disk using a binary .ndb format. The format starts with a NYXDB magic header, a version byte, then key-value entries, followed by a CRC32 checksum and an 0xFF end-of-file marker.
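As a sketch of what reading and writing such a snapshot could look like, here is a Python version. The `NYXDB` magic, version byte, CRC32, and `0xFF` end marker follow the description above; the length-prefixed entry layout is a guess, since the chapter does not specify how entries are encoded:

```python
import struct
import zlib

MAGIC = b"NYXDB"
VERSION = 1

def save(entries: dict) -> bytes:
    """Serialize: magic, version byte, entries, CRC32 of the body, 0xFF."""
    body = bytearray()
    for k, v in entries.items():
        kb, vb = k.encode(), v.encode()
        body += struct.pack("<I", len(kb)) + kb   # assumed entry layout
        body += struct.pack("<I", len(vb)) + vb
    crc = zlib.crc32(bytes(body))
    return MAGIC + bytes([VERSION]) + bytes(body) + struct.pack("<I", crc) + b"\xff"

def load(blob: bytes) -> dict:
    """Parse and verify a snapshot, restoring the key-value entries."""
    assert blob[:5] == MAGIC and blob[-1] == 0xFF, "bad header/footer"
    body = blob[6:-5]                              # skip magic+version, crc+marker
    stored_crc = struct.unpack("<I", blob[-5:-1])[0]
    assert zlib.crc32(body) == stored_crc, "corrupt snapshot"
    entries, i = {}, 0
    while i < len(body):
        klen = struct.unpack_from("<I", body, i)[0]; i += 4
        k = body[i:i + klen].decode(); i += klen
        vlen = struct.unpack_from("<I", body, i)[0]; i += 4
        v = body[i:i + vlen].decode(); i += vlen
        entries[k] = v
    return entries

blob = save({"name": "Alice", "plan": "pro"})
print(load(blob))   # {'name': 'Alice', 'plan': 'pro'}
```

The checksum-before-marker layout means a truncated write is detected at load time: either the `0xFF` marker is missing or the CRC does not match, and the corrupt snapshot is rejected rather than half-loaded.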
Persistence is always active:
- Background saver — a separate thread saves a snapshot every 60 seconds or after 100 changes, whichever comes first.
- SIGTERM handler — on graceful shutdown (`kill` or `systemctl stop`), a final save is triggered before the process exits.
- Manual save — `SAVE` (blocking) and `BGSAVE` (background) commands are available for pro/enterprise users. The free tier blocks these to prevent abuse.
On startup, `persist_load` reads the `.ndb` file and restores all keys, lists, sets, and hashes to memory.
Supported commands
nyx-kv implements 52+ commands across strings, lists, sets, hashes, Pub/Sub, and server management. The core subset:
| Command | Description |
|---|---|
| `PING` | Health check, returns `PONG` |
| `SET key value` | Store a value |
| `GET key` | Retrieve a value |
| `DEL key [key...]` | Delete keys |
| `EXISTS key` | Check if a key exists |
| `KEYS` | List all keys |
| `EXPIRE key seconds` | Set time-to-live |
| `TTL key` | Get remaining TTL |
| `INCR key` | Atomic increment |
| `DECR key` | Atomic decrement |
| `MSET k1 v1 k2 v2...` | Bulk set |
| `MGET k1 k2...` | Bulk get |
| `DBSIZE` | Key count |
| `FLUSHDB` | Clear all data |
| `INFO` | Server information |
| `CONFIG` / `COMMAND` | Redis compatibility stubs |
Performance results
Tested with redis-benchmark:
```
SET:  6,760,000 ops/sec (pipelined)
GET: 21,570,000 ops/sec (pipelined)
SET:    161,000 ops/sec (non-pipelined)
GET:    170,000 ops/sec (non-pipelined)
```
For context, Redis itself achieves about 100,000 ops/sec non-pipelined and 1-2 million pipelined on similar hardware.
Lessons learned
- Start simple, optimize later. The first version used string concatenation for responses. Profiling showed this was the bottleneck. Only then was zero-allocation GET added.
- The channel pattern scales. 128 workers + 1 accept loop handles thousands of concurrent connections cleanly.
- Nyx Maps are fast enough. No need for a custom hash table — Robin Hood hashing in the runtime handles 21M+ ops/sec.
- Lazy expiration works. A background timer would add complexity. Checking on access is simple and correct.
- Protocol compatibility matters. By implementing RESP, nyx-kv works with every Redis client in every language — for free.
The complete main module
```
import "products/kv/resp"
import "products/kv/store"
import "products/kv/commands"

var g_ch: Map = Map.new()

fn kv_worker() -> int {
    while 1 > 0 {
        let client: int = channel_recv(g_ch)
        if client < 0 { return 0 }
        g_connections = g_connections + 1

        var connected: bool = true
        while connected {
            let cmd: Array = resp_read_command_fast(client)
            if cmd.length() == 0 {
                connected = false
            } else {
                let response: String = dispatch_command(cmd, client)
                if response.length() > 0 {
                    tcp_write(client, response)
                }
            }
        }
        tcp_close(client)
    }
    return 0
}

fn main() {
    let port: int = 6380
    let num_workers: int = 128

    g_ch = channel_new(512)
    let server: int = tcp_listen("0.0.0.0", port)

    var i: int = 0
    while i < num_workers {
        thread_spawn(kv_worker)
        i = i + 1
    }

    while 1 > 0 {
        let client: int = tcp_accept(server)
        if client >= 0 {
            channel_send(g_ch, client)
        }
    }
    return 0
}
```
72 lines of code for the main module. The entire database is about 400 lines across 4 files.
Exercises
- Extend nyx-kv with an `APPEND key value` command that appends to an existing value.
- Add a `SETNX key value` command that only sets the key if it does not already exist.
- Implement a `RANDOMKEY` command that returns a random key from the store.
- Add persistence: save the store to a file on `FLUSHDB` and load it on startup.
- Build your own mini database: pick a protocol (HTTP, raw TCP, or custom), a data model, and implement it using the patterns from this chapter.
Summary
- nyx-kv is a Redis-compatible database written in ~400 lines of Nyx.
- Architecture: accept loop → channel → worker pool → global store.
- RESP protocol for compatibility with all Redis clients.
- Lazy expiration: check TTL on access, not with a background thread.
- Fast path optimization: first-char dispatch, cached constants, zero-allocation GET.
- 128 pre-spawned workers handle thousands of concurrent connections.
- Performance: 6.76M SET/s, 21.57M GET/s (pipelined).