Show HN: FizzBee – Formal model-based autonomous testing
GitHub: https://github.com/fizzbee-io/fizzbee-mbt-examples
Quick Start: https://fizzbee.io/testing/tutorials/quick-start/
Most developers agree testing is important. At the same time, most developers don’t enjoy writing tests. With AI generating code faster than ever, testing is becoming even more crucial. But even AI-generated tests need review and maintenance, which makes them another burden.
I'm introducing a different approach to autonomous testing: model-based testing. Instead of writing test cases, you describe the expected behavior in a Python-like specification language.
The FizzBee model can be:

- Verified exhaustively for design bugs (like formal methods).
- Mapped to your actual system, automatically generating the tests.
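To give a feel for the spec language, here is a minimal sketch of a toy counter in FizzBee's Python-like syntax (the names are mine; see the quick-start above for real, runnable examples):

    action Init:
        count = 0

    atomic action Increment:
        count += 1

    # Safety invariant checked in every reachable state.
    always assertion NonNegative:
        return count >= 0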
This gives you:
- No hand-crafted test cases
- Automatic testing of concurrent as well as sequential behavior
- No cascading test rewrites when behavior changes
- No cluttering the SUT (system under test) with tracing code
With FizzBee, you get both design validation (like in formal methods) and automatic test generation, saving time and effort.
Currently, only Go is supported. Java and Rust are next, and I'd love to hear which language you'd want supported after that.
I’d love your feedback!
Interesting read. I’ve tried Alloy and Dafny for verification before. Seeing how this integrates with real code would be useful. Does it handle concurrency or just sequential logic?
Thanks a lot. It does handle concurrency.
https://fizzbee.io/testing/tutorials/quick-start/#parallel-t...
Sequential logic is generally easier to test (as is concurrency testing of linearizable systems). The FizzBee specification language was created primarily to express the concurrent behavior of non-linearizable systems - eventual consistency, for example.
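To illustrate, here is a rough sketch of how an eventual-consistency property might read in a spec. The two-copy setup and the names are invented for this example, and the exact keywords may differ slightly from the current syntax:

    action Init:
        primary = []
        replica = []

    atomic action Write:
        if len(primary) < 3:  # keep the state space bounded for exhaustive checking
            primary.append(len(primary))

    atomic action Sync:
        replica = list(primary)

    # Liveness: eventually the replica converges and stays converged.
    # (A real spec would also need a fairness annotation on Sync.)
    eventually always assertion Converged:
        return replica == primary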
This is neat. I've used FizzBee and TLA+ for model checking. Being able to test the implementation would be nice. How is this different from test case generation in TLA+?
Glad you have tried FizzBee before. Do you have any feedback on it?
With TLA+, what I mostly see are papers and example projects that implement model-based trace checking.
While that works, it usually clutters the main code (the SUT) with tracing library calls - sketched below. And in some papers, you also have to maintain a separate, modified version of the spec for trace checking.
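To make the clutter point concrete, here is a hypothetical example in Python-style pseudocode. trace_event is an invented stand-in for whatever tracing library a given paper uses, not a real API:

    def transfer(src, dst, amount):
        trace_event("TransferStart", src.id, dst.id, amount)  # instrumentation, not business logic
        src.balance -= amount
        trace_event("Debited", src.id, amount)                # every state change needs a call
        dst.balance += amount
        trace_event("TransferEnd")                            # the SUT fills up with these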
MongoDB published a paper a while ago comparing model-based testing and model-based trace checking. I'll post more details soon.
Using Python syntax makes it more accessible.
Thanks. Please give it a try, and let me know if you have any issues. I'd be happy to help.