April 30, 2026
Methodology · Part 1
Before generating "good" fuzz harnesses, we need a checkable definition of "good". This post introduces the Four Principles, then validates them on 586 production OSS-Fuzz harnesses across 70 projects: 53 fix PRs filed, 45 verified or merged, 14 false positives caught at audit, and two real upstream bugs surfaced once broken harnesses were repaired.
Series · Part 1
April 30, 2026
Vulnerability Discovery
Point the four principles at Google Chromium and its vendor stack (libvpx, libwebp, libaom, openscreen, dawn, pdfium, sqlite, v8, …): flip the principles into a generation criterion, pair them with Logic Group as the semantic-unit slicer, and push every harness through Stage-4 adversarial probing. Two weeks later: 472 harnesses, 30 vulnerabilities filed upstream (16 acked so far: 9 confirmed, 7 fixed), and 52 candidates dropped before any could reach a maintainer.
Series · Part 2
April 30, 2026
Verification
A fuzzer can produce 24,000 crashes overnight, and almost none of them deserve to be filed. Three gates hold every candidate before it reaches a maintainer, keeping our false-positive rate under 5%.
Series · Part 3
February 6, 2026
Security Research
How our AI-assisted fuzzing system discovered and helped fix critical zero-day vulnerabilities in widely-used open source software.
Published
October 6, 2025
Architecture
How we coordinate four services across ~100 VMs to run thousands of concurrent jobs and 100K+ LLM requests with robust validation.
Published
September 29, 2025
Technical Deep-dive
A practical breakdown of 10 discovery strategies and 13 patching strategies.
Published
September 22, 2025
Live Demo
Claude Code finds a libpng overflow, builds a PoC exploit, and ships a clean patch.
Published
September 15, 2025
Research Tools
Benchmarking models on POV and patch generation using AIxCC challenges.
Published
September 8, 2025
Competition
Finalist performance highlights: 28 vulnerabilities, 14 patches, 6 zero-days.
Published