Unbeknownst to the world, Vancouver, Canada has cornered the market in the niche domain of fast JSON-based rule-matching software libraries. There’s the Java-based Ruler. Then there’s the Go-based quamina. And finally there’s my Rust-based port, quamina-rs. Each one is special in its own way, but the last one is the newest.
It’s also by far the fastest and the most memory-efficient of the trio. It has the most matching capabilities as well. Maybe. I can’t be fully certain about these last few points because it was written mostly by LLM agents, in minutes of my daily spare time over the past few months, and with minimal oversight.
For context, while quamina-rs is the most successful port of quamina that I’m aware of, it isn’t my only attempt. Over the past year or so, I’ve tried this exact Go-to-Rust conversion multiple times. Quamina, the Go version, was always a great reference because its finite-automata-based operation is unique. Having maintained Ruler for a few years, the algorithms weren’t alien to me. For performance, Quamina does some really outstanding but counter-intuitive things, like writing its own JSON parser. It has zero dependencies, a bucketload of high-quality tests, tons of documentation, a backlog of things left to do, and most importantly, good taste.
I picked Rust because I wanted to give it a go after hearing a lot of noise from friends I’d trust on these things. Before I started, I didn’t even have it installed on my laptop, so I couldn’t cheat my way to a working solution, making this a good test of how far agents could take me, or at least forcing me to learn the language for good. Sadly, while I did pick up how to read Rust over this project, I still struggle with writing it because it’s not a muscle I’ve practiced. I haven’t had to spend much of my time writing source code because talking to the agents was enough to get me to a very decent spot.
A Very Decent Spot
By day, I’ve been helping folks work through their AI adoption journey, so I’ve had exposure to a fair amount of models, agents, tools, patterns, orchestrations, and more shiny AI things. The hype, the anxiety, the frustration, and the wastefulness of it all isn’t lost on me. But compared to all that jazz, it’s important to know that quamina-rs was never optimized for AI. I had some files because I didn’t want to rewrite all the context across sessions. Other than that, most of the time I was simply running a few lines of a prompt in a loop before it was cool:
```
Continue quamina-rs port

Rust: ~/workspaces/quamina_go_rs/quamina-rs
Go: ~/workspaces/quamina_go_rs/quamina

Read Rust code, don't trust the docs as they may be stale but spec.md is often
updated (< 300 lines). For Go behavior, read Go source directly - don't trust
past interpretations.

Approach: push often and check CI, use todos to manage context window usage,
refactor as appropriate, mirror go code as needed, algorithmic parity and
performance matter. When checking Go behavior, read the actual test and source
files (eg., anything_but_test.go, shellstyle_test.go) rather than relying on notes.
```
In the early days I’d also ask it to checkpoint before ending a session:
```
check if there's any feature, algorithmic, or functional parity gaps using
subtasks / todos. if yes, let's update the spec for future sessions. Keep spec
under 300 lines.
```
This eventually became unnecessary. Not only did the agents reach “full parity”, but at some point, while I was mindlessly kicking off these sessions, the agents started picking up open issues from the Go version and implementing them on their own.
```
Looks like we’re already at parity. Let me check what else I can do … issue
#363 has a lot of discussion, let me implement this!
```
That made me curious about how much further I could take it. Over the past month I’ve been deliberately adding things that the Go version doesn’t have. I was building regex pattern matching in parallel to Tim’s, then added matchers for CIDR, suffixes, and numeric ranges. I also added niceties like a model checker, random fuzzers, memory explosion detection, annoying linters, a WASM binary, and a GitHub Pages playground where I get a kick out of seeing JS-based rule matching at microsecond speed.
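To make the CIDR matcher concrete: at its core it has to answer “does this IP fall inside this network?”. The sketch below shows just that membership test for IPv4; it’s a hedged illustration of the semantics, not quamina-rs’s actual code, which would compile the check into its automata rather than call a function per event.

```rust
use std::net::Ipv4Addr;

/// Does `ip` fall within the network `network`/`prefix_len`?
/// A prefix length of 16 means the top 16 bits must match.
fn in_cidr(ip: Ipv4Addr, network: Ipv4Addr, prefix_len: u32) -> bool {
    // Build the netmask; shifting u32 by 32 is UB-adjacent in Rust
    // (a panic in debug builds), so handle prefix_len == 0 explicitly.
    let mask: u32 = if prefix_len == 0 {
        0
    } else {
        u32::MAX << (32 - prefix_len)
    };
    (u32::from(ip) & mask) == (u32::from(network) & mask)
}

fn main() {
    let net: Ipv4Addr = "10.1.0.0".parse().unwrap();
    assert!(in_cidr("10.1.2.3".parse().unwrap(), net, 16));
    assert!(!in_cidr("10.2.0.1".parse().unwrap(), net, 16));
    println!("ok");
}
```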
Speaking of speed, through the grapevine I hear a lot of song and dance about the speed of software delivery. quamina-rs was at parity in about four weeks and ahead in another four. But I suspect much of that speed (in this project and in other side-projects I keep hearing about) isn’t just because the cost of coding has dropped. A major factor is zero coordination costs. No backward compatibility to worry about, no users to break, no review queues with other humans. You can see the same dynamic in many recent launches: they ship with a bang, then slow down on impactful features once they have users and tradeoffs. Those coordination costs for big software haven’t gone away, and neither has ye olde technical debt.
Is It Slop?
When I was hands-off, the agents were mostly on the mark in creating code that made the tests pass. Because of the sheer number of samples the Go version had, they couldn’t fake their way through the underlying logic, but they did take shortcuts until I put in guardrails. For example, while the Go version uses finite automata, the Rust version was using much slower hashmap-based matching early on. It was also pulling in dependencies that created bloat. Once I added benchmarks that had to pass before merging and gave it a minor nudge to actually read the Go code instead of trusting its own notes, it made progress by leaps and bounds towards actual finite-automata-based matching.
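To show the difference in flavor, here is a minimal sketch of what byte-level automaton matching looks like (this is my own illustration, not quamina-rs’s actual code): a transition table driven one input byte at a time, rather than a hashmap lookup per pattern. The real library merges many patterns into one automaton so a single pass over the event bytes checks all of them at once.

```rust
/// A tiny DFA: for each state, a full 256-entry table mapping the next
/// input byte to the next state. State 0 is the dead state.
struct Dfa {
    table: Vec<[usize; 256]>,
    accepting: Vec<bool>,
}

impl Dfa {
    /// Build a DFA that accepts exactly `literal` (a single chain of states).
    fn from_literal(literal: &[u8]) -> Dfa {
        let n = literal.len();
        // States: 0 = dead, 1 = start, 1 + i = "matched i bytes so far".
        let mut table = vec![[0usize; 256]; n + 2];
        let mut accepting = vec![false; n + 2];
        for (i, &b) in literal.iter().enumerate() {
            table[i + 1][b as usize] = i + 2;
        }
        accepting[n + 1] = true;
        Dfa { table, accepting }
    }

    /// One table lookup per input byte; bail out as soon as we go dead.
    fn matches(&self, input: &[u8]) -> bool {
        let mut state = 1usize;
        for &b in input {
            state = self.table[state][b as usize];
            if state == 0 {
                return false;
            }
        }
        self.accepting[state]
    }
}

fn main() {
    let dfa = Dfa::from_literal(b"abc");
    assert!(dfa.matches(b"abc"));
    assert!(!dfa.matches(b"abd"));
    assert!(!dfa.matches(b"ab"));
    println!("ok");
}
```

The hashmap shortcut the agents kept reaching for produces the same answers, which is exactly why it survives a test-pass-only feedback loop; only the benchmarks exposed it.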
Wherever the agents had a reference in the Go code and were nudged to read the right files, they did very well. The Go code is written with nuanced documentation, well-judged code organization and conventions, and simpler test harnesses. All that good taste, along with the rigor of sample-driven development, helps the agents enormously. It’s highly likely that Rust’s ecosystem helped a ton as well. There’s a wealth of performance-focused content and projects like regex-automata that I didn’t know about when I started, but I’m certain the models drew from. The tools went most haywire in uncharted territory: falling back to hashmap-based matching, implementing uber-slow matchers, or writing tests that provided no meaningful value.
I see that as my own doing, though. Quamina-rs has been my test ground for learning what works with agents. At one point I was pushing code without even looking at it. I got impressively far, just mostly in the wrong direction.
As I started paying more attention to the session transcripts and went back to reviewing code, the rate of quality and performance improvements stabilized. Not to mention I could actually understand what the Rust code was doing and help make it more idiomatic. For example, the agents were struggling to implement chained finite automata in their Go form, since cyclic references don’t play well with Rust’s borrow checker, and performance was barely on par with the Go version. Once I moved to an arena-based solution, I found 2x to 4x performance improvements, though I would rarely celebrate because the agents had originally promised me 60x. The models have learned to estimate as poorly as software engineers!
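The arena idea, sketched roughly (these are illustrative types, not quamina-rs’s actual ones): all states live in one `Vec`, and edges refer to other states by index rather than by reference. Cycles, like a self-loop for a wildcard, then cost nothing, because no state owns another and the borrow checker never sees a reference cycle.

```rust
#[derive(Default)]
struct State {
    /// Edges: input byte -> index of the next state in the arena.
    edges: Vec<(u8, usize)>,
    accepting: bool,
}

/// The arena owns every state; "pointers" are plain indices.
struct Arena {
    states: Vec<State>,
}

impl Arena {
    fn new() -> Self {
        Arena { states: Vec::new() }
    }

    fn add_state(&mut self) -> usize {
        self.states.push(State::default());
        self.states.len() - 1
    }

    fn add_edge(&mut self, from: usize, byte: u8, to: usize) {
        self.states[from].edges.push((byte, to));
    }

    fn step(&self, state: usize, byte: u8) -> Option<usize> {
        self.states[state]
            .edges
            .iter()
            .find(|&&(b, _)| b == byte)
            .map(|&(_, to)| to)
    }

    fn matches(&self, start: usize, input: &[u8]) -> bool {
        let mut state = start;
        for &b in input {
            match self.step(state, b) {
                Some(next) => state = next,
                None => return false,
            }
        }
        self.states[state].accepting
    }
}

fn main() {
    // Automaton with a cycle: 'a', then any number of 'x', then 'b'.
    let mut arena = Arena::new();
    let s0 = arena.add_state();
    let s1 = arena.add_state();
    let s2 = arena.add_state();
    arena.add_edge(s0, b'a', s1);
    arena.add_edge(s1, b'x', s1); // self-loop: a cycle, no Rc<RefCell<_>> needed
    arena.add_edge(s1, b'b', s2);
    arena.states[s2].accepting = true;

    assert!(arena.matches(s0, b"axxb"));
    assert!(!arena.matches(s0, b"axx"));
    println!("ok");
}
```

With `Rc<RefCell<State>>`, the same self-loop creates a reference cycle that leaks and makes every traversal borrow-check at runtime; indices sidestep both problems and keep states contiguous in memory, which is likely where much of the speedup came from.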
Are You Done?
The main goal was to port quamina to Rust. That’s been achieved, and I’m able to keep up as new changes come in. If my recent changes look different from the Go version, it’s because many of the recent changes in quamina have been optimizations I’d already done or was in the middle of doing. I suspect that’s because both are originating from Claude Code: same model, same tools, similar patterns. The Go optimizations do tend to be more refined, which I’d attribute to @sayrer and @timbray spending more time reviewing and being more deliberate with the changes.
And I think that’s the thing worth saying plainly. It’s human to care. Agents don’t care. Automation doesn’t care. They need to be told what to care about, and even then they’ll misbehave the moment you look away because of the unknown unknowns. It’s human to steer them on what to care about. I care about getting quamina-rs to a 1.0 GA, but that means going back and cleaning up the slop I let accumulate when I wasn’t paying enough attention. There’s also a lot more to learn from diving into the transcripts about how agents actually behave and how Rust works under the hood, but that’s for the future.
Is Software A Factory?
On paper, quamina-rs looks like an assembly line. Same prompt, same loop, same agent, running session after session. A hundred of them to burn billions of tokens. If you squint, it’s a factory floor.
But industrialization isn’t just repetition. It’s predictable repetition. Interchangeable parts. Uniform output. The whole point is that unit 500 comes out the same as unit 1. My unit 500 came out with hashmap-based matching, a hallucinated optimization that made things slower, and a dependency I didn’t ask for. Given how non-deterministic agents and models are, what I see building up looks less like factories and more like runaway loops.
What I actually feel I have is closer to working with contractors. You put in trust, set expectations, and inspect the work when it’s done. You won’t always know what you’re going to get, and that’s fine as long as your checks catch what went sideways as well as some of the waste that happens along the way.
I spent way too many sessions and tokens on this quest, and I’m definitely on the low end of the usage curve here. If I were to bet, industrialization in software will help, just not the kind where you point agents at a problem and walk away. The parts of the process that kept my project from being pure waste were well-architected Go code, CI, benchmarks, linters, and fuzzers. But those aren’t new. That’s just engineering rigor. The agents didn’t bring me true quality control, but they did make it non-negotiable.