Overview
Lattice uses structured concurrency. All spawned tasks are bound to a scope block and are guaranteed to complete before the scope exits. There are no orphaned tasks or background goroutines that outlive their parent.
Communication between concurrent tasks happens through channels. Values sent on a channel are deep-copied, ensuring full isolation between tasks. Only frozen (crystal-phase) values can be sent on channels, enforcing immutability at the boundary.
The runtime uses pthreads internally, so spawned tasks run as real OS threads. The three core primitives are:
// 1. Channels — typed communication pipes
fix ch = Channel::new()
// 2. Scope — structured boundary for concurrent work
scope {
// 3. Spawn — launch a concurrent task
spawn {
ch.send(freeze(42))
}
spawn {
flux val = ch.recv()
print(val) // 42
}
}
// both spawns are guaranteed complete here
Channels
Channels are the primary mechanism for communication between concurrent tasks. A channel is a first-class value that can be passed to functions, stored in data structures, and shared across spawned tasks.
Channel::new() creates a new unbuffered channel. The channel blocks the sender until a receiver is ready, and vice versa.
fix ch = Channel::new()
send(value) sends a value on the channel. The value must be frozen (crystal phase); use freeze() to convert a mutable value before sending. Blocks until a receiver is available.
flux data = [1, 2, 3]
ch.send(freeze(data))
// Literals are automatically frozen
ch.send(freeze("hello"))
ch.send(freeze(42))
recv() receives a value from the channel. Blocks until a value is available. Returns nil when the channel is closed and empty.
flux val = ch.recv()
if val != nil {
print("received: ${val}")
}
close() closes the channel. After closing, further send() calls have no effect, and recv() returns nil once all buffered values are consumed. Always close channels when production is complete.
ch.close()
Scope & Spawn
The scope block creates a structured concurrency boundary. All spawn blocks within a scope launch concurrent tasks, and the scope blocks the parent thread until every spawned task has completed. This guarantees no task outlives its scope.
fix ch = Channel::new()
scope {
spawn {
// task 1
ch.send(freeze("hello"))
}
spawn {
// task 2
ch.send(freeze("world"))
}
}
// both tasks guaranteed complete here
Spawned tasks get deep copies of any variables they capture from the enclosing scope. This means mutations inside a spawned closure are fully isolated and do not affect the parent or sibling tasks.
flux counter = 0
fix results = Channel::new()
scope {
spawn {
// this is a COPY of counter, not a reference
flux local = counter + 1
results.send(freeze(local))
}
spawn {
flux local = counter + 2
results.send(freeze(local))
}
}
print(counter) // still 0; the spawns worked on copies
When a scope block completes, all spawned work within it is finished. This eliminates an entire class of resource-leak bugs.
Select
The select statement multiplexes across multiple channels, waiting for whichever is ready first. It is the primary tool for handling multiple concurrent communication paths.
fix ch1 = Channel::new()
fix ch2 = Channel::new()
select {
msg from ch1 => {
print("got from ch1: ${msg}")
}
msg from ch2 => {
print("got from ch2: ${msg}")
}
default => {
print("no messages ready")
}
}
When multiple channels have values ready simultaneously, Lattice uses a Fisher-Yates shuffle to randomly select which arm executes. This ensures fairness: no channel is systematically starved over repeated select calls.
The default arm is optional. When present, select is non-blocking: if no channel has a value ready, the default arm runs immediately. Without a default arm, select blocks until one of the channels is ready.
// Blocking select — waits until one channel is ready
select {
val from ch1 => { print("ch1: ${val}") }
val from ch2 => { print("ch2: ${val}") }
}
// Non-blocking select — falls through to default if nothing ready
select {
val from ch1 => { print("ch1: ${val}") }
default => { print("nothing available yet") }
}name from channel to bind the received value. The binding variable is scoped to that arm's block. Use _ as the binding name if you don't need the value.
Fan-Out / Fan-In
The fan-out pattern distributes work from a single input channel across multiple workers. Each worker pulls from the shared input and pushes results to a shared output channel. This is useful for parallelizing CPU-bound or I/O-bound work.
fn process(item: Int) {
return item * item
}
fix input = Channel::new()
fix output = Channel::new()
fix data = [1, 2, 3, 4, 5, 6, 7, 8]
scope {
// fan-out: 4 workers
for i in 0..4 {
spawn {
flux running = true
while running {
flux item = input.recv()
if item == nil { running = false } else {
output.send(freeze(process(item)))
}
}
}
}
// producer
spawn {
for item in data {
input.send(freeze(item))
}
input.close()
}
// collect the 8 results in the scope body
for i in 0..8 {
flux r = output.recv()
print("result: ${r}")
}
}
Once the producer closes the input channel, each worker's recv() will eventually return nil, causing it to exit its loop. This is the standard pattern for signaling "no more work." Note the collector loop in the scope body: with unbuffered channels, someone must receive every result or the workers would block on output.send() and the scope would never exit.
Pipeline
A pipeline chains multiple processing stages together, each connected by a channel. Data flows from one stage to the next, with each stage transforming values independently and concurrently.
fn stage(name: String, input: Channel, output: Channel, transform: Fn) {
spawn {
flux running = true
while running {
flux val = input.recv()
if val == nil { running = false } else {
output.send(freeze(transform(val)))
}
}
output.close()
}
}
fix ch1 = Channel::new()
fix ch2 = Channel::new()
fix ch3 = Channel::new()
scope {
stage("double", ch1, ch2, |x: Int| { x * 2 })
stage("add-one", ch2, ch3, |x: Int| { x + 1 })
// producer: feed values into the pipeline
spawn {
for i in 1..6 { ch1.send(freeze(i)) }
ch1.close()
}
// consumer: read results from the end of the pipeline
flux running = true
while running {
flux result = ch3.recv()
if result == nil { running = false } else {
print(result) // 3, 5, 7, 9, 11
}
}
}
Each stage reads from its input channel, applies a transformation, and writes to its output channel. When the input is exhausted (returns nil), the stage closes its output, which cascades the shutdown signal through the entire pipeline.
Worker Pool
A worker pool maintains a fixed number of concurrent workers that pull jobs from a shared queue. This bounds the level of parallelism and prevents resource exhaustion when processing a large number of tasks.
fn worker_pool(jobs: Channel, results: Channel, num_workers: Int) {
for i in 0..num_workers {
spawn {
flux running = true
while running {
flux job = jobs.recv()
if job == nil { running = false } else {
flux result = job * job // compute
results.send(freeze(result))
}
}
}
}
}
fix jobs = Channel::new()
fix results = Channel::new()
scope {
// start 4 workers
worker_pool(jobs, results, 4)
// submit 10 jobs
spawn {
for i in 1..11 {
jobs.send(freeze(i))
}
jobs.close()
}
// collect 10 results
spawn {
for i in 0..10 {
flux r = results.recv()
print("result: ${r}")
}
}
}
Producer / Consumer
The classic producer-consumer pattern uses a channel as a buffer between a producing task and a consuming task. The producer writes values and closes the channel when done. The consumer reads until it receives nil.
fix buffer = Channel::new()
scope {
// producer
spawn {
for i in 1..11 {
buffer.send(freeze(i))
}
buffer.close()
}
// consumer
spawn {
flux running = true
while running {
flux item = buffer.recv()
if item == nil { running = false } else {
print("consumed: ${item}")
}
}
}
}
This pattern naturally extends to multiple producers or multiple consumers sharing the same channel. Each consumer will receive a different value from the channel, achieving automatic load distribution.
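A sketch of the multi-consumer variant, assuming recv() on a closed, empty channel returns nil to every waiting consumer (the consumer count and channel name are arbitrary):

fix queue = Channel::new()
scope {
// single producer
spawn {
for i in 1..7 { queue.send(freeze(i)) }
queue.close()
}
// two consumers share the same channel; each value goes to exactly one of them
for id in 0..2 {
spawn {
flux running = true
while running {
flux item = queue.recv()
if item == nil { running = false } else {
print("consumer ${id}: ${item}")
}
}
}
}
}

Because each spawned consumer gets a deep copy of id, the two workers can be distinguished without any shared mutable state.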
Timeout
Lattice does not have a built-in timeout primitive, but you can implement one using select with a dedicated timeout channel. Spawn a task that sleeps for the desired duration and then sends a signal.
fix result_ch = Channel::new()
fix timeout_ch = Channel::new()
scope {
spawn {
// slow computation
flux sum = 0
for i in 0..1000000 { sum = sum + i }
result_ch.send(freeze(sum))
}
spawn {
sleep(1000) // 1 second timeout
timeout_ch.send(freeze("timeout"))
}
select {
val from result_ch => {
print("result: ${val}")
timeout_ch.recv() // drain the pending timeout signal so the scope can exit
}
_ from timeout_ch => {
print("timed out!")
result_ch.recv() // drain the pending result so the scope can exit
}
}
}
The select resolves with whichever channel receives a value first: if the computation finishes before the sleep, the result arm runs; if the sleep completes first, the timeout arm runs. The select runs inside the scope, and each arm drains the other channel, because with unbuffered channels the losing task's send would otherwise block forever and the scope would never exit.
Note that both spawned tasks still run to completion before the scope block exits. The timeout pattern controls which result you act on, but does not cancel the slow task. Design long-running tasks to check a cancellation channel if you need cooperative cancellation.
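A cooperative-cancellation skeleton might look like the following sketch; has_work() and do_step() are hypothetical placeholders for the task's own logic, and the channel name is illustrative:

fix cancel_ch = Channel::new()
scope {
spawn {
flux cancelled = false
while !cancelled && has_work() { // has_work(): hypothetical placeholder
do_step() // do_step(): hypothetical placeholder
// poll for cancellation without blocking
select {
_ from cancel_ch => { cancelled = true }
default => { }
}
}
}
// elsewhere, another task requests cancellation with:
// cancel_ch.send(freeze(true))
}

The non-blocking select keeps the work loop responsive: each iteration does one unit of work, then checks the cancel channel and falls through immediately if no signal is pending.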
Broadcast
The broadcast pattern sends every message from a single source to multiple consumers. Each consumer receives its own copy of every message via its own channel.
fn broadcast(source: Channel, outputs: Array) {
spawn {
flux running = true
while running {
flux msg = source.recv()
if msg == nil {
running = false
for ch in outputs { ch.close() }
} else {
for ch in outputs { ch.send(freeze(msg)) }
}
}
}
}
fix source = Channel::new()
fix sub1 = Channel::new()
fix sub2 = Channel::new()
fix sub3 = Channel::new()
scope {
broadcast(source, [sub1, sub2, sub3])
// subscriber 1
spawn {
flux running = true
while running {
flux msg = sub1.recv()
if msg == nil { running = false } else {
print("sub1: ${msg}")
}
}
}
// subscriber 2
spawn {
flux running = true
while running {
flux msg = sub2.recv()
if msg == nil { running = false } else {
print("sub2: ${msg}")
}
}
}
// subscriber 3
spawn {
flux running = true
while running {
flux msg = sub3.recv()
if msg == nil { running = false } else {
print("sub3: ${msg}")
}
}
}
// publisher
spawn {
source.send(freeze("event-a"))
source.send(freeze("event-b"))
source.send(freeze("event-c"))
source.close()
}
}
Because send() deep-copies the value for each receiver, every subscriber gets an independent copy. Mutations by one subscriber cannot affect another.
Phase Integration
Lattice's phase system integrates directly with concurrency. Only frozen (crystal) values can be sent on channels. This is enforced at runtime: attempting to send a flux (mutable) value will produce an error.
// Freeze before sending
flux data = [1, 2, 3]
fix frozen_data = freeze(data)
ch.send(frozen_data)
// Thaw after receiving if mutation is needed
flux received = ch.recv()
flux mutable = thaw(received)
mutable.push(4)
print(mutable) // [1, 2, 3, 4]
The phase transitions for concurrency are:
freeze(value) transitions a value to the crystal (immutable) phase. Required before sending on a channel. The frozen value is a deep copy; the original mutable value is unaffected.
thaw(value) transitions a frozen value back to the flux (mutable) phase. Use it on received channel values when you need to modify them. Returns a new mutable deep copy.
borrow(value, callback) temporarily borrows a frozen value as mutable within the callback scope. The value is re-frozen when the callback returns. Useful for scoped mutation of otherwise immutable data.
fix config = freeze(["a", "b", "c"])
borrow(config, |arr| {
arr.push("d")
// arr is mutable inside this block
})
// config is frozen again here
Best Practices
Follow these guidelines to write safe, efficient, and maintainable concurrent Lattice code.
Closing a channel signals to all consumers that no more values will arrive. Without closing, consumers will block forever waiting for the next value.
spawn {
for item in work {
ch.send(freeze(item))
}
ch.close() // always close when done
}
The scope block guarantees all spawns complete before proceeding. This prevents orphaned tasks, leaked resources, and use-after-free bugs. Never spawn work outside of a scope.
scope {
spawn { /* task a */ }
spawn { /* task b */ }
}
// safe: both tasks are done
Do not rely on captured mutable variables for inter-task communication. Spawned tasks get deep copies of captured values, so mutations will not propagate. Use channels instead.
// BAD: mutations are invisible to parent
flux count = 0
scope {
spawn { count = count + 1 } // modifies a copy
}
print(count) // still 0
// GOOD: communicate via channels
fix ch = Channel::new()
scope {
spawn { ch.send(freeze(1)) }
}
flux count = ch.recv()
print(count) // 1
Always use freeze() before send(). This is enforced at runtime, but making it explicit in your code documents intent and avoids runtime errors.
Instead of polling channels in a loop with non-blocking receives, use select to efficiently wait on multiple channels simultaneously. The runtime handles the waiting without busy-spinning.
// BAD: busy-polling wastes CPU
flux done = false
while !done {
select {
v from ch => { done = true }
default => { /* spin */ }
}
}
// GOOD: blocking select, no wasted CPU
select {
v from ch1 => { handle_a(v) }
v from ch2 => { handle_b(v) }
}
Break complex processing into small, composable stages connected by channels. Each stage has a single responsibility and can be tested in isolation.
Since spawned tasks deep-copy all captured variables, large captures are expensive. Extract shared data into channels or pass only what each task needs. Keep closure bodies focused on a single task.
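For example, the following sketch sends a task only the row it needs instead of letting the closure deep-copy the whole structure; load_table() and process_row() are hypothetical placeholders:

flux big_table = load_table() // hypothetical: loads a large structure
fix row_ch = Channel::new()
scope {
spawn {
// capturing big_table here would deep-copy the entire structure;
// receiving a single row keeps the copy small
flux row = row_ch.recv()
process_row(thaw(row)) // hypothetical consumer of one row
}
row_ch.send(freeze(big_table[0]))
}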