Internals

Take a peek under the hood to see how ngn works.

Each feature below is described along four axes: whether it gives concurrency or parallelism, how it works under the hood, its blocking behavior, and practical differences / caveats.
**thread**

- Concurrency vs parallelism: concurrency (cooperative fibers); not OS-thread parallelism by itself.
- Under the hood: `thread` compiles to `OpCode::Spawn`; the VM creates a new Fiber, queues it, and returns a 1-slot completion channel (`crates/ngn/src/compiler.rs`, `crates/ngn/src/vm.rs`).
- Blocking behavior: a fiber yields when suspended, but `sleep()` in the main VM run loop uses `std::thread::sleep`, so it blocks the VM thread for the duration of the sleep.
- Caveats: good for interleaving tasks, but not true CPU parallelism; closure captures are by-value snapshots (except shared state).
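The call-and-completion-channel shape can be sketched in Rust. This is a hypothetical helper, not the VM's code: ngn queues a cooperative Fiber, while this sketch uses an OS thread purely to show the contract of "spawn returns immediately with a 1-slot completion channel".

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical helper mirroring the shape of `thread` in ngn: the call
// returns immediately with a 1-slot completion channel. ngn queues a
// cooperative Fiber in the VM; this sketch uses an OS thread instead.
fn spawn_task<T, F>(task: F) -> mpsc::Receiver<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    // Capacity 1: the task deposits its single completion value.
    let (tx, rx) = mpsc::sync_channel(1);
    thread::spawn(move || {
        let _ = tx.send(task());
    });
    rx
}

fn main() {
    let done = spawn_task(|| 2 + 2); // returns immediately
    assert_eq!(done.recv().unwrap(), 4); // caller blocks only at the receive
    println!("ok");
}
```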
**channel**

- Concurrency vs parallelism: a coordination primitive used by both the cooperative and parallel paths.
- Under the hood: a channel is an `Arc<Mutex<VecDeque>>` plus a capacity and a closed flag (`crates/ngn/src/value.rs`).
- Blocking behavior: the VM's Send/Receive suspend the fiber when the channel is full/empty; `<-?` (ReceiveMaybe) is non-blocking and returns `Maybe::Null` if empty (`crates/ngn/src/vm.rs`).
- Caveats: the default capacity is 10; receiving on a closed, empty channel panics; `.close()` marks the channel closed but does not drain it.
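A minimal Rust sketch of that layout, with illustrative field and method names (not the real ones in `crates/ngn/src/value.rs`), and a `try_send` that returns `false` where the real VM would suspend the sending fiber:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Sketch of the channel layout: a locked deque plus a capacity and a
// closed flag. Names are illustrative.
#[derive(Clone)]
struct Channel<T> {
    inner: Arc<Mutex<Inner<T>>>,
    capacity: usize,
}

struct Inner<T> {
    queue: VecDeque<T>,
    closed: bool,
}

impl<T> Channel<T> {
    fn new(capacity: usize) -> Self {
        Channel {
            inner: Arc::new(Mutex::new(Inner { queue: VecDeque::new(), closed: false })),
            capacity,
        }
    }

    // Mirrors `<-?` (ReceiveMaybe): non-blocking, None when empty.
    fn try_receive(&self) -> Option<T> {
        self.inner.lock().unwrap().queue.pop_front()
    }

    // Returns false when full or closed; the real VM suspends the
    // sending fiber rather than failing.
    fn try_send(&self, v: T) -> bool {
        let mut g = self.inner.lock().unwrap();
        if g.closed || g.queue.len() >= self.capacity {
            return false;
        }
        g.queue.push_back(v);
        true
    }

    // Marks the channel closed but does not drain queued values.
    fn close(&self) {
        self.inner.lock().unwrap().closed = true;
    }
}

fn main() {
    let ch = Channel::new(10); // default capacity in ngn is 10
    assert!(ch.try_send(1));
    ch.close();
    assert!(!ch.try_send(2)); // sends fail after close
    assert_eq!(ch.try_receive(), Some(1)); // close does not drain
    assert_eq!(ch.try_receive(), None); // empty -> None, like Maybe::Null
    println!("ok");
}
```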
**state**

- Concurrency vs parallelism: shared mutable state across fibers/threads.
- Under the hood: stored as `Value::State(Arc<Mutex>)`; `.update()` runs the closure and then writes its return value back (`crates/ngn/src/vm.rs`, `crates/ngn/src/value.rs`).
- Blocking behavior: the mutex lock is blocking.
- Caveats: the safest way to share mutable data; `.write()` can race semantically with `.update()` if ordering matters.
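The update-by-closure semantics can be sketched as a read-modify-write under one lock acquisition. The struct here is illustrative; only the method names follow the doc:

```rust
use std::sync::{Arc, Mutex};

// Sketch of state's update-by-closure: lock, apply the closure to the
// current value, store the closure's return value. Illustrative types.
#[derive(Clone)]
struct State<T>(Arc<Mutex<T>>);

impl<T: Clone> State<T> {
    fn new(v: T) -> Self {
        State(Arc::new(Mutex::new(v)))
    }

    // Read-modify-write under a single lock: atomic with respect to
    // other update() calls.
    fn update<F: FnOnce(T) -> T>(&self, f: F) {
        let mut g = self.0.lock().unwrap(); // blocking lock
        let next = f(g.clone());
        *g = next;
    }

    fn read(&self) -> T {
        self.0.lock().unwrap().clone()
    }

    // Plain overwrite: free of data races, but it can interleave between
    // another fiber's read and update if ordering matters.
    fn write(&self, v: T) {
        *self.0.lock().unwrap() = v;
    }
}

fn main() {
    let counter = State::new(0);
    counter.update(|n| n + 1);
    counter.update(|n| n + 1);
    assert_eq!(counter.read(), 2);
    counter.write(10);
    assert_eq!(counter.read(), 10);
    println!("ok");
}
```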
**spawn.cpu**

- Concurrency vs parallelism: parallelism (real OS worker threads).
- Under the hood: submits the job to a global bounded CPU pool (`num_cpus` threads) via a crossbeam channel (`crates/ngn/src/blocking_pool.rs`, `crates/ngn/src/vm.rs`).
- Blocking behavior: the caller is non-blocking (it gets a channel immediately); the worker runs the task to completion.
- Caveats: returns `channel<Result<any,string>>`; a non-Result return value is auto-wrapped as `Ok`; a full queue yields `Error("spawn.cpu() queue is full")`.
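A sketch of the bounded-pool shape, under the assumptions stated above: a fixed worker count draining a bounded job queue, with a non-blocking submit that reports a full queue instead of waiting. The real pool uses crossbeam and `num_cpus`; this uses std only, so names and error strings beyond the documented "queue is full" message are illustrative:

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};
use std::sync::{Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send>;

// Sketch of a bounded CPU pool: fixed workers, bounded job queue,
// non-blocking submit.
struct CpuPool {
    tx: SyncSender<Job>,
}

impl CpuPool {
    fn new(workers: usize, queue_cap: usize) -> Self {
        let (tx, rx) = sync_channel::<Job>(queue_cap);
        let rx: Arc<Mutex<Receiver<Job>>> = Arc::new(Mutex::new(rx));
        for _ in 0..workers {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // The lock is held only long enough to dequeue one job.
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break, // pool dropped, no more jobs
                };
                job(); // runs to completion on this worker
            });
        }
        CpuPool { tx }
    }

    // Non-blocking: the caller never waits on a full queue.
    fn submit(&self, job: Job) -> Result<(), String> {
        self.tx.try_send(job).map_err(|e| match e {
            TrySendError::Full(_) => "spawn.cpu() queue is full".to_string(),
            TrySendError::Disconnected(_) => "pool is shut down".to_string(),
        })
    }
}

fn main() {
    let pool = CpuPool::new(4, 64);
    let (tx, rx) = std::sync::mpsc::channel();
    for i in 0..8 {
        let tx = tx.clone();
        pool.submit(Box::new(move || {
            let _ = tx.send(i * i);
        }))
        .unwrap();
    }
    drop(tx);
    let mut results: Vec<i32> = rx.iter().collect();
    results.sort();
    assert_eq!(results, vec![0, 1, 4, 9, 16, 25, 36, 49]);
    println!("ok");
}
```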
**spawn.block**

- Concurrency vs parallelism: parallelism (real OS worker threads for blocking/IO-ish work).
- Under the hood: submits to the global blocking pool (`cores * 4`, capped at 64) (`crates/ngn/src/blocking_pool.rs`, `crates/ngn/src/vm.rs`).
- Blocking behavior: the same non-blocking call / async completion-channel pattern as `spawn.cpu`.
- Caveats: use it for file/process/network waits; a full queue yields `Error("spawn.block() queue is full")`.
**spawn.all**

- Concurrency vs parallelism: parallelism (bounded worker threads).
- Under the hood: the VM starts a bounded set of OS worker threads, runs each task in its own fiber, collects results via `std::sync::mpsc`, and then reorders them by input index (`crates/ngn/src/vm.rs`).
- Blocking behavior: the call blocks until all scheduled tasks finish.
- Caveats: returns an array of all results; `options.concurrency` limits how many tasks run at once.
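The collect-and-reorder step can be sketched like this: every task sends `(input_index, result)` over one mpsc channel, and the caller fills a slot per index, so output order matches input order regardless of which task finishes first. (The concurrency bound from `options.concurrency` is omitted here for brevity.)

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of spawn.all's collect-and-reorder: tag each result with its
// input index, then reorder after collection.
fn spawn_all<T, F>(tasks: Vec<F>) -> Vec<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let n = tasks.len();
    let (tx, rx) = mpsc::channel();
    for (i, task) in tasks.into_iter().enumerate() {
        let tx = tx.clone();
        thread::spawn(move || {
            let _ = tx.send((i, task()));
        });
    }
    drop(tx); // close our copy so the collection loop can end

    let mut slots: Vec<Option<T>> = (0..n).map(|_| None).collect();
    for (i, v) in rx {
        slots[i] = Some(v); // finish order may differ from input order
    }
    slots.into_iter().map(|s| s.unwrap()).collect()
}

fn main() {
    let tasks: Vec<_> = (0..4).map(|i| move || i * 10).collect();
    assert_eq!(spawn_all(tasks), vec![0, 10, 20, 30]);
    println!("ok");
}
```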
**spawn.try**

- Concurrency vs parallelism: parallelism (bounded worker threads, fail-fast launch).
- Under the hood: the same worker engine as `spawn.all`, but it sets a fail-fast signal on the first `Result::Error` to stop launching new tasks (`crates/ngn/src/vm.rs`).
- Blocking behavior: returns partial, ordered results up to the first error; already-launched tasks may still wind down.
- Caveats: in-flight cancellation is best-effort/cooperative; blocking or native work may not stop immediately.
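The fail-fast launch gate alone can be sketched as a shared flag checked before each launch. Tasks run sequentially here for clarity; the real engine runs them on the bounded worker set shared with `spawn.all`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Sketch of the fail-fast gate: the first error sets a shared flag so
// no further tasks are launched. Sequential for clarity only.
fn spawn_try<T, E, F>(tasks: Vec<F>) -> Vec<Result<T, E>>
where
    F: FnOnce() -> Result<T, E>,
{
    let failed = Arc::new(AtomicBool::new(false)); // shared in the real engine
    let mut out = Vec::new();
    for task in tasks {
        if failed.load(Ordering::SeqCst) {
            break; // fail-fast: stop launching new tasks
        }
        let result = task();
        if result.is_err() {
            failed.store(true, Ordering::SeqCst);
        }
        out.push(result); // ordered results up to and including the error
    }
    out
}

fn main() {
    let tasks: Vec<Box<dyn FnOnce() -> Result<i32, String>>> = vec![
        Box::new(|| Ok(1)),
        Box::new(|| Err("boom".to_string())),
        Box::new(|| Ok(3)), // never launched
    ];
    let results = spawn_try(tasks);
    assert_eq!(results.len(), 2);
    assert!(results[1].is_err());
    println!("ok");
}
```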
**spawn.race()**

- Concurrency vs parallelism: parallelism (currently thread-per-task).
- Under the hood: spawns all tasks and returns the first successful message observed on an mpsc channel; if no task succeeds, returns the first error seen (`crates/ngn/src/vm.rs`).
- Blocking behavior: returns as soon as a winner arrives.
- Caveats: "first success wins"; a non-Result value is treated as success (`Ok(value)`).
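The first-success-wins rule can be sketched with one thread per task sending into a single mpsc channel: return the first `Ok` observed, otherwise the first `Err` seen. This sketch assumes at least one task:

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of spawn.race(): thread per task, first Ok wins, otherwise
// the first Err seen is returned.
fn race<T, E, F>(tasks: Vec<F>) -> Result<T, E>
where
    T: Send + 'static,
    E: Send + 'static,
    F: FnOnce() -> Result<T, E> + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    for task in tasks {
        let tx = tx.clone();
        thread::spawn(move || {
            let _ = tx.send(task());
        });
    }
    drop(tx); // lets the loop end if every task errors

    let mut first_err = None;
    for msg in rx {
        match msg {
            Ok(v) => return Ok(v),                // winner: return immediately
            Err(e) => first_err.get_or_insert(e), // remember only the first error
        };
    }
    Err(first_err.expect("race() needs at least one task"))
}

fn main() {
    let tasks: Vec<Box<dyn FnOnce() -> Result<i32, String> + Send>> = vec![
        Box::new(|| Err("slow path failed".to_string())),
        Box::new(|| Ok(7)),
    ];
    assert_eq!(race(tasks), Ok(7));
    println!("ok");
}
```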
**fetch**

- Concurrency vs parallelism: concurrency at the language level; parallel blocking I/O under the hood.
- Under the hood: `fetch` creates a result channel and runs a `reqwest::blocking` request on the global blocking pool (`crates/ngn/src/vm.rs`).
- Blocking behavior: non-blocking for the caller until the `<-` receive.
- Caveats: returns a channel (not a Result); a full queue or transport failure is encoded as a `Response` with status 0.
**http**

- Concurrency vs parallelism: async network concurrency plus optional fiber/task parallelism.
- Under the hood: the HTTP server runs on a Tokio multi-thread runtime; each request is handled asynchronously, with the ngn handler fiber run in cooperative batches (`run_steps` + `yield_now`) (`crates/ngn/src/toolbox/http.rs`).
- Blocking behavior: network I/O is async; channel-backed response writers poll channels with short sleeps (10 ms) when empty.
- Caveats: scales by connection/task; streaming/SSE/WS are channel-driven; a websocket session uses async reader/writer tasks and closes its channels when the session ends.
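The poll-with-short-sleep pattern used by the channel-backed writers can be sketched in plain Rust (the real writers do this inside Tokio tasks, and the function name here is illustrative): try a non-blocking receive, sleep ~10 ms when the channel is empty, and stop once the channel is closed and drained.

```rust
use std::sync::mpsc::{self, TryRecvError};
use std::thread;
use std::time::Duration;

// Sketch of a channel-backed writer's poll loop: non-blocking receive,
// short sleep when empty, stop when the channel closes.
fn drain_with_polling<T>(rx: mpsc::Receiver<T>, mut emit: impl FnMut(T)) {
    loop {
        match rx.try_recv() {
            Ok(chunk) => emit(chunk),
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(10)),
            Err(TryRecvError::Disconnected) => break, // sender gone: end of stream
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let producer = thread::spawn(move || {
        for chunk in ["data: 1\n", "data: 2\n"] {
            tx.send(chunk).unwrap();
            thread::sleep(Duration::from_millis(5));
        }
        // tx dropped here, closing the channel.
    });
    let mut body = String::new();
    drain_with_polling(rx, |chunk| body.push_str(chunk));
    producer.join().unwrap();
    assert_eq!(body, "data: 1\ndata: 2\n");
    println!("ok");
}
```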
**tbx::process**

- Concurrency vs parallelism: parallelism via OS threads around child-process I/O.
- Under the hood: spawns a shell child plus dedicated wait/stdout/stderr threads, and communicates through ngn channels (`crates/ngn/src/toolbox/process.rs`).
- Blocking behavior: the caller gets its channel(s) immediately; the worker threads block on process I/O.
- Caveats: `run` returns a single-result channel; `stream` returns stdout/stderr/done channels; backpressure is handled via bounded channels plus short sleep-and-retry.
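The thread layout can be sketched with `std::process`: spawn the child, then one dedicated thread forwarding stdout lines into a channel and another waiting for exit and reporting it on a done channel. The stderr thread and the bounded ngn channels with sleep-and-retry backpressure are omitted, and the function name is illustrative:

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::sync::mpsc;
use std::thread;

// Sketch of tbx::process's stream shape: caller gets channels
// immediately; dedicated threads block on the child's I/O and exit.
fn stream_command(cmd: &str, args: &[&str]) -> (mpsc::Receiver<String>, mpsc::Receiver<i32>) {
    let mut child = Command::new(cmd)
        .args(args)
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn child");

    let stdout = child.stdout.take().expect("stdout was piped");
    let (out_tx, out_rx) = mpsc::channel();
    thread::spawn(move || {
        // Blocks on child output; the caller stays free.
        for line in BufReader::new(stdout).lines().flatten() {
            let _ = out_tx.send(line);
        }
    });

    let (done_tx, done_rx) = mpsc::channel();
    thread::spawn(move || {
        let status = child.wait().expect("failed to wait on child");
        let _ = done_tx.send(status.code().unwrap_or(-1));
    });

    (out_rx, done_rx) // both channels are returned immediately
}

fn main() {
    let (out, done) = stream_command("echo", &["hello"]);
    assert_eq!(out.recv().unwrap(), "hello");
    assert_eq!(done.recv().unwrap(), 0);
    println!("ok");
}
```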