
Benchmark — vfs Saves 98.6% Tokens vs Reading Files

Self-benchmark on this repository (pattern "Extract", 4,178 lines of source):

| | Read all files | grep | vfs |
|---|---|---|---|
| Output size | 101.9 KB | 13.8 KB | 1.5 KB |
| Lines | 4,178 | 148 | 15 |
| Est. tokens | 26,079 | 3,537 | 373 |

- vfs saves 98.6% tokens vs reading all files (26,079 -> 373)
- vfs saves 89.5% tokens vs grep (3,537 -> 373)
```sh
vfs bench --self                                   # self-test on vfs source
vfs bench -f HandleLogin /path/to/go-project       # benchmark on any project
vfs bench -f Login /path/to/project --show-output  # show actual output
```

The benchmark command compares three approaches on your codebase and prints a side-by-side table of output size, line count, and estimated token count.
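The token figures are estimates derived from output size. vfs's internal estimator isn't documented here, but the common heuristic of roughly 4 bytes per token for English and source text reproduces the table's numbers closely; a minimal sketch under that assumption:

```go
package main

import "fmt"

// estTokens approximates a token count from raw output size using the
// rough ~4 bytes-per-token heuristic for English/source text.
// (Assumption: vfs's internal estimator may use a different method.)
func estTokens(byteSize int) int {
	return byteSize / 4
}

func main() {
	// Output sizes from the table above, taking 1 KB as 1,000 bytes.
	fmt.Println("read all files:", estTokens(101900)) // 25475, close to 26,079
	fmt.Println("grep:          ", estTokens(13800))  // 3450, close to 3,537
	fmt.Println("vfs:           ", estTokens(1500))   // 375, close to 373
}
```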

Reading all files returns everything — imports, comments, function bodies, blank lines. An AI agent processing this pays for every token, even though most of it is irrelevant.

Grep narrows it down to matching lines, but still includes partial function bodies, duplicate matches, and surrounding context that isn’t useful for discovering function locations.

vfs parses the source into an AST and returns only the exported signatures: one line per function, with the exact file and line number. No bodies, no noise.
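The extraction step can be sketched with Go's standard `go/parser`: parse the file, walk its declarations, keep only exported functions, and report each name with its position. This is a minimal illustration of the AST approach, not vfs's actual implementation:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// exportedFuncs parses Go source and returns one "file:line name" entry
// per exported function: signatures only, no bodies.
func exportedFuncs(filename, src string) []string {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, filename, src, 0)
	if err != nil {
		return nil
	}
	var out []string
	for _, decl := range file.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || !fn.Name.IsExported() {
			continue // skip non-function decls and unexported helpers
		}
		pos := fset.Position(fn.Pos())
		out = append(out, fmt.Sprintf("%s:%d %s", pos.Filename, pos.Line, fn.Name.Name))
	}
	return out
}

func main() {
	src := `package auth

// HandleLogin authenticates a user.
func HandleLogin(user, pass string) error { return nil }

func helper() {}
`
	for _, sig := range exportedFuncs("auth.go", src) {
		fmt.Println(sig) // auth.go:4 HandleLogin
	}
}
```

The unexported `helper` never appears in the output, and the function body is never printed; that is the entire source of the token savings.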

For an AI coding agent making 10 code searches per session, the difference compounds:

| Method | Tokens per search | 10 searches | Cost impact |
|---|---|---|---|
| Read files | ~26,000 | ~260,000 | High |
| grep | ~3,500 | ~35,000 | Medium |
| vfs | ~370 | ~3,700 | Low |

Over a typical development session, vfs can save hundreds of thousands of tokens, which translates directly into faster responses, lower costs, and more context window left over for the AI.