SPEED TEST

The Speed Test panel is a sanity-check benchmark for post systems: it measures network throughput/latency and storage performance (sequential + random I/O) so you can spot the obvious bottlenecks before you commit to long transcodes, proxy builds, copies, or remote workflows.
What it’s for (real workflows)

- Validate whether a local SSD / RAID / shuttle drive / NAS mount can sustain the read/write rates your workflow needs.
- Catch “looks mounted, feels slow” volumes before you start ingest, proxies, or batch exports.
- Sanity-check remote editorial conditions (VPN, captive portals, blocked test services, hotel Wi-Fi reality).
- Compare multiple volumes side by side (up to three drive selections in the UI).
- Run a quick check after an OS update, new enclosure, new cable, or new network path.
Network Test (latency + throughput)

- The network test runs in the main process and returns a compact result: download Mbps, upload Mbps, and ping in ms.
- The implementation uses fast-cli (Fast.com) invoked via Electron’s bundled Node (ELECTRON_RUN_AS_NODE=1) rather than assuming a system Node install.
- Because Fast.com is browser-backed, the app checks for a bundled Chromium dependency and fails with a clear “browser missing” error if it isn’t present.
- Guardrails:
  - A single active network test per renderer (prevents overlapping tests and a stuck UI).
  - A 60-second timeout with a hard-cancel fallback so the UI reliably returns to idle.
  - Cancellation attempts graceful termination first and escalates on Unix-like platforms if needed.
- What this means in practice: if Fast.com is blocked by your network (corporate firewall, DNS filtering, VPN, captive portal), you’ll usually see a parse/unreadable-output failure rather than a fake success.
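The single-test-per-renderer and timeout guardrails above can be sketched roughly as below. This is an illustrative shape, not the app’s actual code: `RendererId`, `runNetworkTest`, and the task callback are hypothetical names, and the real implementation wraps a fast-cli child process rather than an arbitrary promise.

```typescript
// Sketch: one active network test per renderer, plus a hard timeout so the
// panel always returns to idle. Names here are illustrative, not the real API.

type RendererId = number;

const activeTests = new Map<RendererId, AbortController>();

async function runNetworkTest<T>(
  renderer: RendererId,
  task: (signal: AbortSignal) => Promise<T>,
  timeoutMs = 60_000, // 60 s ceiling, matching the documented timeout
): Promise<T> {
  if (activeTests.has(renderer)) {
    // Reject overlapping tests from the same renderer up front.
    throw new Error("A network test is already running for this renderer");
  }
  const controller = new AbortController();
  activeTests.set(renderer, controller);
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await task(controller.signal);
  } finally {
    clearTimeout(timer);
    activeTests.delete(renderer); // always clear state so the UI goes idle
  }
}
```

A real task would spawn fast-cli and kill the child when `signal` aborts (SIGTERM first, escalating if needed).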
Drive Tests (sequential + random)

The drive tests write temporary files to the selected volume and measure sustained performance.

- Sequential mode: writes and reads large contiguous data to estimate “export/copy style” throughput.
- Random mode: stresses 4K-block reads/writes across a temp file to reveal latency and metadata/cache behavior (the stuff that kills NLE responsiveness on questionable storage).

Test structure details:

- Runs 5 iterations; the first is treated as a warm-up (the UI labels this explicitly), and the reported min/max/avg reflect the remaining passes.
- A hidden per-drive folder holds test artifacts: /.lead-speedtest/ (stale test files are cleaned up).
- A progress signal is streamed back to the renderer so the panel can show a real-time progress bar without guessing.
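The warm-up-then-aggregate scheme above amounts to dropping the first pass and reporting min/max/avg over the rest. A minimal sketch, with a hypothetical `summarizeIterations` helper (the real code measures throughput itself rather than taking numbers as input):

```typescript
// Sketch: aggregate per-iteration throughput (MB/s) the way the panel
// reports it: pass 1 is a warm-up, min/max/avg cover the remaining passes.

interface IterationStats {
  min: number;
  max: number;
  avg: number;
  warmup: number; // reported separately, excluded from min/max/avg
}

function summarizeIterations(mbps: number[]): IterationStats {
  if (mbps.length < 2) {
    throw new Error("need a warm-up pass plus at least one measured pass");
  }
  const [warmup, ...measured] = mbps;
  const min = Math.min(...measured);
  const max = Math.max(...measured);
  const avg = measured.reduce((a, b) => a + b, 0) / measured.length;
  return { min, max, avg, warmup };
}
```

With 5 iterations, e.g. `[120, 400, 410, 390, 400]`, the low warm-up pass (cold caches, lazy mounts spinning up) doesn’t drag the reported numbers down.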
Disk-space and safety guardrails

Drive tests have explicit “don’t brick a nearly-full drive” behavior:

- Before testing, the main process runs a disk-space preflight that estimates peak temp usage and adds headroom:
  - a fixed minimum safety buffer (256 MiB),
  - plus ~10% extra on top of peak temp usage.
- If there isn’t enough space, the test fails before writing anything, returning structured info (required bytes vs. available bytes).
- The panel also verifies the test directory is actually writable, so read-only mounts and permission problems fail cleanly.
- If disk space can’t be verified (filesystem/tooling limitations), the panel can proceed but surfaces an explicit “proceed at your own risk” warning; nothing happens silently.
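The preflight math described above can be sketched as follows. This is my reading of the headroom rule (peak temp usage, plus ~10% of that, plus a fixed 256 MiB buffer); the function name and result shape are illustrative, not the app’s actual API.

```typescript
// Sketch of the disk-space preflight: estimated peak temp usage, ~10% extra
// headroom on top, plus a fixed 256 MiB minimum safety buffer.

const SAFETY_BUFFER_BYTES = 256 * 1024 * 1024; // 256 MiB

interface PreflightResult {
  ok: boolean;
  requiredBytes: number;  // returned so the UI can show "required vs available"
  availableBytes: number;
}

function diskSpacePreflight(
  peakTempBytes: number,
  availableBytes: number,
): PreflightResult {
  const requiredBytes = Math.ceil(peakTempBytes * 1.1) + SAFETY_BUFFER_BYTES;
  return { ok: availableBytes >= requiredBytes, requiredBytes, availableBytes };
}
```

Failing with structured `requiredBytes`/`availableBytes` (rather than a bare error string) is what lets the panel render a concrete “need X, have Y” message before any temp file is written.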
Security model

- All tests run in the main process, not the renderer. The renderer requests actions via allowlisted IPC channels (no arbitrary command-execution surface).
- Drive paths must be approved for the current session (typically via the OS folder picker), which stops the UI from probing arbitrary filesystem locations without explicit user selection.
- Cancellation state is tracked per renderer, so one window can’t affect another.
- The feature is license-gated (the “speed-test” entitlement), and the main process enforces it, not just the UI.
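The session-scoped path approval can be sketched like this. `approvePath`/`isApproved` are hypothetical names; the assumption is that approvals come from the OS folder picker and that any path at or under an approved root is allowed, while everything else is rejected.

```typescript
// Sketch: session-scoped drive-path approval. Only paths at or under a
// root the user explicitly picked are accepted by the main process.

import * as path from "node:path";

const approvedRoots = new Set<string>();

function approvePath(root: string): void {
  approvedRoots.add(path.resolve(root));
}

function isApproved(candidate: string): boolean {
  const resolved = path.resolve(candidate); // normalizes "..", ".", etc.
  for (const root of approvedRoots) {
    const rel = path.relative(root, resolved);
    // Inside the root: empty relative path, or one that doesn't escape via "..".
    if (rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel))) {
      return true;
    }
  }
  return false;
}
```

Resolving before comparing matters: a candidate like `/approved/root/../elsewhere` must not pass just because it starts with an approved prefix as a raw string.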
Interpreting results

- Small tests can look “too good” because of OS caching. Larger test sizes give a closer approximation of sustained performance for proxies and exports.
- Random I/O results matter more than people think when you’re dealing with:
  - shared storage mounts,
  - busy RAID controllers,
  - SMB/NFS oddities,
  - or “fast on paper” USB bridges that fall apart under real workloads.