Most JSON documents being parsed are small. Does this parser have higher spin-up overhead? Also, the memory-handling requirements of working with GC'd JS data are very particular, and this sounds like it manages memory in its own way.
It's also still faster than other parsers in our tests, even on the smallest documents (https://github.com/simdjson/simdjson/issues/312#issuecomment...). The advantage is just smaller, around 1.1x to 1.2x instead of 2.5x :) It really starts to pull ahead somewhere between 512 bytes and 1 KB.