# alint benchmarks

~1.1 s on a 100K-file synthetic workspace bundle, ~12 s at 1M files,
sub-second on real-world repos: a 79-rule pass over NixOS/nixpkgs
(39,101 files) completes in 273 ms wall-clock, faster than
`git status` on the same repo with a cold cache.
Hardware fingerprint: linux-x86_64 · AMD Ryzen 9 3900X (12-core) ·
62 GB RAM · ext4 · rustc 1.95. Absolute numbers are not directly
comparable across machines; see the methodology section below for
what does and doesn't transfer.
## What we publish, and how often

- 9 macro scenarios (S1–S9) driven by hyperfine, run at 4 sizes
  (1k / 10k / 100k / 1M files) in two modes (`full`, an all-files
  scan, and `changed`, via `--changed`). That's 72
  (scenario, mode, size) cells per release.
- 12 micro benches (criterion) covering pure-CPU primitives: glob
  compile, regex content scans, engine fan-out, walker, formatters.
  Floor-tested against the v0.7.0 publication on every PR; anything
  >10 % slower fails CI.
- Per release. Numbers regenerate via the `bench-record.yml` workflow
  on a self-hosted runner with a known hardware fingerprint, then land
  in `docs/benchmarks/HISTORY.md` (the canonical source of truth this
  page summarises).
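The 72-cell figure is just the cross-product of the lists above; a quick sketch (the scenario labels, sizes, and modes are the ones named on this page, nothing else is assumed):

```python
from itertools import product

scenarios = [f"S{i}" for i in range(1, 10)]  # the 9 macro scenarios S1..S9
sizes = ["1k", "10k", "100k", "1M"]          # synthetic tree sizes
modes = ["full", "changed"]                  # all-files scan vs --changed

cells = list(product(scenarios, sizes, modes))
print(len(cells))  # 9 scenarios * 4 sizes * 2 modes = 72 cells per release
```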
## Real-world: NixOS/nixpkgs

The largest non-trivial OSS monorepo on GitHub, used as a real-world
stress test beyond the synthetic 100K bench. The 79-rule pass includes
a `for_each_dir` over the 20,678-directory `pkgs/by-name/*/*/` tree:
exactly the cross-file dispatch shape the v0.9.5 + v0.9.6 + v0.9.8
engine work was designed to make linear.
| Metric | NixOS/nixpkgs |
|---|---|
| Files in tree | 39,101 (sparse-cloned) |
| `pkgs/by-name/*/*/` directories iterated | 20,678 |
| alint config rule count | 79 |
| Wall-clock for full check pass | 273 ms |
## Synthetic: per-release trajectory

1M-file headline cells across the most-stressed scenarios. S3 is the
realistic-monorepo anchor; S6 is the per-file content fan-out; S7 is
the cross-file relational cliff that v0.9.8 closed; S9 is the
nested-polyglot scenario the v0.9.6 `scope_filter:` primitive exists
for.
| Version | Date | 1M S3 full | 1M S6 full | 1M S7 full | 1M S9 full |
|---|---|---|---|---|---|
| v0.9.20 | 2026-05-10 | 12.38 s | 12.20 s | 15.30 s | 7.88 s |
| v0.9.19 | 2026-05-09 | 12.15 s | 11.79 s | 15.38 s | 7.60 s |
| v0.9.18 | 2026-05-08 | 11.70 s | 11.19 s | 15.56 s | 7.47 s |
| v0.9.17 | 2026-05-06 | 11.67 s | 11.38 s | 15.37 s | 7.37 s |
| v0.9.16 | 2026-05-06 | 11.58 s | 10.72 s | 15.35 s | 7.21 s |
| v0.9.14 | 2026-05-05 | 12.06 s | 11.19 s | 15.31 s | 7.33 s |
| v0.9.13 | 2026-05-04 | 11.46 s | 11.18 s | 15.45 s | 7.22 s |
| v0.9.12 | 2026-05-03 | 11.98 s | 11.33 s | 15.36 s | 7.46 s |
| v0.9.11 | 2026-05-03 | 11.84 s | 11.91 s | 17.29 s | 8.50 s |
| v0.9.10 | 2026-05-03 | 11.62 s | 11.22 s | 15.50 s | 7.21 s |
| v0.9.9 | 2026-05-03 | 13.23 s | 11.94 s | 17.32 s | 7.91 s |
| v0.9.8 | 2026-05-02 | 11.33 s | 10.89 s | 15.41 s | 7.32 s |
| v0.9.7 | 2026-05-02 | 11.89 s | 11.35 s | 614.4 s | 7.36 s |
| v0.9.6 | 2026-05-02 | 11.09 s | 11.40 s | 623.7 s | 7.12 s |
| v0.9.5 | 2026-05-01 | 12.59 s | 11.85 s | 652.4 s | n/a |
| v0.9.4 | 2026-04-30 | 731.9 s | n/a | n/a | n/a |
| v0.5.7 | 2026-04-26 | n/a | n/a | n/a | n/a |
| v0.5.6 | 2026-04-26 | 569.1 s | n/a | n/a | n/a |
Two cliffs to notice. v0.9.4 → v0.9.5 on S3 1M (731.9 s → 12.59 s) is
the lazy path-index + literal-path fast-paths fix. v0.9.7 → v0.9.8 on
S7 1M (614.4 s → 15.41 s) is the cross-file dispatch fast paths, round
two. Both shipped with investigation write-ups documenting the
diagnostic data, the bisect, and the fix; see `investigations/`.
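The shape of the v0.9.5 literal-path fix can be sketched in a few lines. This is a toy model for illustration only, not alint's actual code; the rule IDs and patterns below are made up. The idea: a scope pattern with no glob metacharacters names exactly one path, so those rules can live in a hash map and cost O(1) per file, leaving one-glob-test-per-rule only for the genuinely-glob rules.

```python
from fnmatch import fnmatch

# Hypothetical rules: (rule_id, scope_pattern).
rules = [("r1", "Cargo.toml"), ("r2", "src/*.rs"), ("r3", "README.md")]

def is_literal(pattern: str) -> bool:
    # No glob metacharacters => the pattern names exactly one path.
    return not any(ch in pattern for ch in "*?[")

# Build the literal-path index once; keep only real globs for the slow path.
literal_index: dict[str, list[str]] = {}
glob_rules = []
for rule_id, pattern in rules:
    if is_literal(pattern):
        literal_index.setdefault(pattern, []).append(rule_id)
    else:
        glob_rules.append((rule_id, pattern))

def matching_rules(path: str) -> list[str]:
    # O(1) hash lookup for literal-scoped rules, glob tests only for the rest.
    hits = list(literal_index.get(path, []))
    hits += [rid for rid, pat in glob_rules if fnmatch(path, pat)]
    return hits

print(matching_rules("Cargo.toml"))   # ['r1']
print(matching_rules("src/main.rs"))  # ['r2']
```

A naive dispatcher pays `len(rules)` pattern tests per file; the indexed version pays one hash lookup plus only the real globs. That kind of restructuring is what collapses a per-file cost cliff at the 1M-file scale.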
## What each scenario stresses
| Scenario | Shape | Catches regressions in… |
|---|---|---|
| S1 Filename hygiene | 8 filename-only rules | Walker + scope-match |
| S2 Existence + content | 8 existence + content rules | Per-file content fan-out |
| S3 Workspace bundle | `extends:` oss-baseline + rust + monorepo + cargo-workspace (~34 rules) | Realistic monorepo workload |
| S4 Agent-era hygiene | 5 rules from agent-hygiene@v1 | Agent-era rule shapes |
| S5 Fix-pass content edits | 4 content-edit rules under `--fix` | Fix-pipeline regressions |
| S6 Per-file content fan-out | 13 content rules over `**/*.rs` | Per-file inner-loop |
| S7 Cross-file relational | 6 cross-file kinds (`pair`, `unique_by`, `for_each_dir`, …) | Cross-file dispatch cliff |
| S8 Git overlay | S3 reshape + `git_no_denied_paths` + `git_tracked_only` | Git-aware dispatch |
| S9 Nested polyglot | `extends:` rust + node + python (~26 rules) over polyglot tree with `scope_filter:` | Polyglot scope-filter dispatch |
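The S7 cliff is a dispatch-shape problem more than a per-rule-cost problem. A minimal sketch of the idea behind the v0.9.8 fix (illustrative only, not alint's code; the paths and the check are hypothetical): group the walked paths by parent directory in one pass, then hand every cross-file rule its per-directory batches, instead of re-scanning the whole file list once per rule.

```python
from collections import defaultdict
from pathlib import PurePosixPath

paths = [
    "pkgs/by-name/fo/foo/package.nix",
    "pkgs/by-name/fo/foo/update.sh",
    "pkgs/by-name/ba/bar/package.nix",
]

# One pass over the walked tree builds the directory index that every
# cross-file rule shares, so adding rules no longer re-walks the list.
by_dir: dict[str, list[str]] = defaultdict(list)
for p in paths:
    by_dir[str(PurePosixPath(p).parent)].append(p)

def for_each_dir(check) -> list[str]:
    # Run one cross-file check per directory batch, collecting failures.
    return [d for d, files in sorted(by_dir.items()) if not check(files)]

# Hypothetical rule: every package directory must contain a package.nix.
missing = for_each_dir(lambda fs: any(f.endswith("package.nix") for f in fs))
print(missing)  # [] -- both directories pass
```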
## Methodology, short version

Two layers: `criterion` for pure-CPU micro-benchmarks (stable,
cross-platform, run on every PR), and `hyperfine` driven by
`xtask bench-scale` for end-to-end CLI wall-time (cross-platform,
reproducible, honest about variance, run before each release tag).
Three deliberate methodology choices:
- Hyperfine, not a custom Rust harness. Hyperfine measures wall-time of an external command from outside the process: exactly the cost shape a CLI user pays, including process startup, dynamic linker overhead, stdio buffering, format selection. A Rust-internal harness would skip those and overstate alint's speed.
- Deterministic synthetic monorepo, not a real-world repo, for
  cross-version comparison. Synthetic trees are byte-identical across
  machines given the same seed (`0xA11E47`): 1k = 1,001 files exactly,
  1M = 1,000,001 files exactly, with no tree-size drift contaminating
  cross-version comparisons. The nixpkgs data point above is the
  non-synthetic complement.
- Not CodSpeed / iai-callgrind. Both are Valgrind-based, and alint's
  hot path is syscall-heavy (the `ignore`-crate walk), exactly the
  part of alint we most want stable numbers for; Valgrind's
  instruction counts drift whenever the CI runner's glibc or kernel
  updates.
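The determinism property can be sketched in a few lines (the real `xtask bench-scale` generator and its exact tree shape are not reproduced here; only the seed value comes from this page, and the directory names below are invented): every byte of the tree flows from one seed, so two machines produce identical file lists.

```python
import random

def synthetic_tree(n_files: int, seed: int = 0xA11E47) -> list[str]:
    # All randomness flows from the single seed, so the output is a pure
    # function of (n_files, seed): identical on every machine.
    rng = random.Random(seed)
    dirs = ["src", "tests", "docs", "pkgs"]  # hypothetical layout
    return [f"{rng.choice(dirs)}/file_{i:07}.rs" for i in range(n_files)]

a = synthetic_tree(1000)
b = synthetic_tree(1000)
print(a == b, len(a))  # True 1000 -- same seed, same tree
```

That purity is what lets a table like the per-release trajectory above compare v0.5.6 against v0.9.20 without tree-size drift in between.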
## Honest comparisons
Mostly we don't have apples-to-apples public benches, because the other tools haven't published any.
- vs Repolinter. No public benches exist; Repolinter was archived 2026-02. Architectural shape (Node startup ~100 ms + per-rule JS execution) suggests 1–2 orders of magnitude slower at 100K files, but that's an estimate, not a measurement.
- vs ls-lint. No public benches at scale. ls-lint is a Go binary doing filename + directory matching only: narrower scope, likely faster than alint at S1's specific shape. If filename conventions are the only thing you care about, ls-lint will likely be faster.
- vs Megalinter. Shape-mismatch comparison. Megalinter is a Docker orchestrator, not a linter. Its wall-time is dominated by container startup + per-tool execution, not by any single tool's hot path. Use Megalinter alongside alint, not instead of it.
- vs custom shell scripts. Each repo's `verify-*.sh` directory is
  bespoke, so the wall-time comparison doesn't generalise. The
  kubernetes case study replaced 17 of 50 verify scripts with one
  alint config; the win shows up in CI wall-time variance more than in
  raw speed.
See /compare/ for the full feature-matrix comparison.
Every published number is reproducible end-to-end on your own hardware.