This crate is not an alternative to serde_json; it only does validation.
Currently, no other crate does the same validation work on JSON, so I had to parse the dataset with a common JSON parser (serde_json) and run the same validation on its deserialized value to get comparable results.
It would be better to compare against other crates that do the same work, but I haven't found a similar crate so far, which is also why I developed this crate.
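A rough sketch of what I mean by the baseline, in case it helps (the real checks are whatever the crate defines; `validate_value` here is just a hypothetical stand-in):

```rust
use serde_json::Value;

// Hypothetical stand-in for the validation rules; the real rules are the
// crate's own checks over the document.
fn validate_value(v: &Value) -> bool {
    match v {
        Value::Array(items) => items.iter().all(validate_value),
        Value::Object(map) => map.values().all(validate_value),
        _ => true,
    }
}

fn main() {
    let input = r#"{"name":"example","items":[1,2,3]}"#;

    // Baseline used for comparison: fully deserialize with serde_json,
    // then run the same checks over the resulting Value tree.
    match serde_json::from_str::<Value>(input) {
        Ok(value) => println!("valid: {}", validate_value(&value)),
        Err(err) => println!("parse error: {err}"),
    }
}
```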
Excellent! I think your "faster%" is calculated in a way that understates the speedup. In the last row, the document is processed in a bit less than half the time, so the speedup should be a bit more than 100%.
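To make the arithmetic concrete with made-up numbers (100 ms before, 45 ms after, roughly matching "a bit less than half the time"):

```rust
fn main() {
    // Hypothetical timings, not taken from the actual benchmark table.
    let old_ms = 100.0_f64;
    let new_ms = 45.0_f64;

    // "Time saved" as a share of the old time always stays below 100%.
    let time_saved_pct = (1.0 - new_ms / old_ms) * 100.0;

    // Speedup relative to the new time exceeds 100% once the time is
    // more than halved.
    let speedup_pct = (old_ms / new_ms - 1.0) * 100.0;

    println!("time saved: {time_saved_pct:.0}%"); // 55%
    println!("speedup:    {speedup_pct:.0}%");    // ~122%
}
```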
Great to see this article; I totally agree with the view that you should reject any invalid case by designing the right data structure.
Unfortunately, it is hard to achieve in practice, and people often don't even realize this. JSON objects are a good example: humans are inclined to expect that duplicate keys are not allowed in JSON, but they happen.
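For example, with serde_json (at least in the versions I've used; worth double-checking on yours) a duplicated key is silently accepted rather than rejected:

```rust
use serde_json::Value;

fn main() {
    // A JSON document with a duplicated key; most parsers accept it
    // without complaint rather than rejecting it.
    let input = r#"{"id": 1, "id": 2}"#;

    match serde_json::from_str::<Value>(input) {
        // In the serde_json versions I've tried, the later value quietly
        // wins, so this prints {"id":2}; either way, the duplicate is not
        // reported as an error.
        Ok(value) => println!("accepted: {value}"),
        Err(err) => println!("rejected: {err}"),
    }
}
```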
For this goal, I think Protobuf is a good way to eliminate possibly invalid data in data transport.
Perhaps checking a service's behavior in response to such JSON is high on a security researcher's list of things that are both high priority and simple to do.
Btw, YAML would be a proper superset of JSON if it weren't for the fact that YAML doesn't allow repeated keys while JSON is relaxed about that.
That's just a small detail though. You can, for all intents and purposes, put JSON objects in YAML files, and I'm still puzzled why so many people fiddle with indent in Helm templates instead of just using toJson.
Some YAML parsers support duplicate keys (IIRC, Ruby does…or at least whatever GitLab uses does). The disparate state of YAML parsers is what makes me sad about it…it seems like just a hard spec to implement.
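For what it's worth, here's the kind of small check I'd run to see what a given pair of parsers actually does with duplicate keys; serde_json and serde_yaml are just the two I happen to reach for in Rust, and the YAML result may well differ by parser and version:

```rust
fn main() {
    // The same logical document with a duplicated key, in JSON and YAML.
    let json_input = r#"{"id": 1, "id": 2}"#;
    let yaml_input = "id: 1\nid: 2\n";

    // serde_json accepts the duplicate (last value wins in the builds
    // I've tried).
    let json: Result<serde_json::Value, _> = serde_json::from_str(json_input);
    println!("serde_json: {json:?}");

    // Whether the YAML side errors out or keeps one of the values depends
    // on the parser and version; this just reports what it does.
    let yaml: Result<serde_yaml::Value, _> = serde_yaml::from_str(yaml_input);
    println!("serde_yaml: {yaml:?}");
}
```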