Well, the author cares enough about RethinkDB to test it. Even if he's a MongoDB fan, and even if his first benchmark was wrong, he was right to publish it: you all helped him when you pinpointed the problems in his tests... Thank you for that.
I don't see any marketing here, just the "do your own benchmark" best practice and the "share with the community" best practice... Does it make it a perfect benchmark? No, but at least he tried... and the author has corrected the discrepancies since then.
Now imagine the benchmark was against [your favorite DB here] with even stronger results against RethinkDB. Notice how the most upvoted comment is joking about MongoDB. The second one is a pro-MySQL comment. What's the point? Would it have been a better benchmark if it read "MySQL is 10x faster than RethinkDB" or "MongoDB is even slower than RethinkDB"?
> the author has corrected the discrepancies since then
As of the time I posted this comment, the blog post still seems to be comparing indexed MongoDB operations against non-indexed RethinkDB operations. Under those conditions I'd expect RethinkDB to be at least 1000x slower than MongoDB. The fact that he's finding that RethinkDB is only 3x slower than MongoDB makes me think that there are still other major problems with this benchmark.
> No, but at least he tried...
It's true that the author tried; but that doesn't change the fact that people are going to read this blog post and assume that the numbers are at least approximately correct. As a RethinkDB employee, it really frustrates me to see RethinkDB being judged according to benchmarks that are conducted so carelessly that they are essentially random.
I think this is the fourth time in the past year that I've seen a third party try to benchmark RethinkDB and get something wrong. Maybe we need to start a "best practice" of checking in with the maintainers of a project before publishing benchmark results about the project.
Some mistakes in his benchmark, probably among others:
- I don't see any MongoDB index creation, so MongoDB is inserting with no index while RethinkDB is inserting with one. That's probably why there's a gap between the two
- there's no MongoDB index, and the RethinkDB queries don't make use of the index either (this is probably why RethinkDB isn't 1000x slower: neither is using an index)
- the $in query should simply be last_update: random_timestamp(); there's no need for $in to match a single value (see the sketch after this list)
- his insertion code creates 100K in-memory clones of the object to insert in the MongoDB version only, not in the RethinkDB one
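For what it's worth, the fixes are only a few lines in the Python drivers. This is just a sketch, not the author's actual code; the 'events' collection/table and the 'last_update' field are my guesses:

    import pymongo
    import rethinkdb as r                     # classic Python driver import style
    from datetime import datetime

    some_timestamp = datetime(2015, 1, 1)     # stand-in for the benchmark's random_timestamp()

    mongo = pymongo.MongoClient().test.events
    conn = r.connect('localhost', 28015)

    # 1. Give MongoDB the same index RethinkDB maintains at insert time
    mongo.create_index('last_update')         # ensure_index() is the older pymongo name

    # 2. Create a secondary index on the RethinkDB side and actually use it
    r.table('events').index_create('last_update').run(conn)
    r.table('events').index_wait('last_update').run(conn)
    rows = list(r.table('events').get_all(some_timestamp, index='last_update').run(conn))

    # 3. A single-value $in is just an equality match
    doc = mongo.find_one({'last_update': some_timestamp})    # rather than {'$in': [some_timestamp]}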
I'm sad to add: what the author is benchmarking here is the likely performance of a system he could build with either DB. It's not necessarily bad (save for the bad press) that he's bad at benchmarking: the mistakes he's made in his benchmark are similar to the mistakes he'll make in his code.
In the benchmark script [1] that the author provided on his GitHub account, there's a call to ensure_index() for MongoDB. And he's reporting an average latency of 0.15ms for MongoDB read operations, so it's pretty clear that the MongoDB index is actually being used.
Yes, this kind of latency definitely says "indexed".
When I wrote this, I'd read a RethinkDB employee say "it's not using the index on RethinkDB", and performance was similar (3x) between the two. I trusted the "no index" claim and couldn't find an ensureIndex call... so I assumed there was no index on MongoDB either.
Truth is, this kind of performance can only come with indexes, on both RethinkDB and MongoDB.
RethinkDB needs to ship its own benchmark client. Also implement a YCSB driver (ugh). Provide both with the database download.
It's madness to expect someone new to database benchmarking to implement a correct fully featured benchmark client. They are going to stumble enough on database and instance configuration as it is.
Not sure why you are frustrated; it's just a blog post by someone who was inexperienced with your product. At least he owned up to the mistakes and was willing to fix them. It's an opportunity for you to work with the guy to show him how to do it properly and write a blog post of your own.
I would say that you should probably take a look at your API, because I've never used a database that required me to explicitly specify which index I want to use for a read. But I've never used RethinkDB, so maybe there is a legitimate reason.
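For context, here's roughly what the explicit-index read looks like with the RethinkDB Python driver (table and field names are hypothetical). As far as I know, filter() doesn't pick up a secondary index on its own, which is exactly the trap the benchmark fell into:

    import rethinkdb as r

    conn = r.connect('localhost', 28015)
    ts = 1420070400                       # hypothetical timestamp value from the data set

    # Full table scan: filter() does not consult secondary indexes by itself
    slow = list(r.table('events').filter({'last_update': ts}).run(conn))

    # Indexed read: the secondary index has to be named explicitly
    fast = list(r.table('events').get_all(ts, index='last_update').run(conn))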
GP is frustrated because it's not just the one blog post. Even if the author is willing to correct it, the original bad data will tend to get more exposure, because most people won't check in for corrections (unless they see a post like OP). Then, there'll be another crappy benchmark next month or next week. If I thought my product was being judged this way, it would drive me frothing mad.
So basically you're saying that because he cares and because he tried, and because the comments were dumb, nobody should criticize him? Do you really not care about getting accurate results?
Running benchmarks is an engineering practice. If you failed to get meaningful results, you failed. Yes, he cares, yes, he tried, yes, the comments are dumb, but he still failed. Sure, I'll give the guy kudos for trying, but I'm not going to pretend he didn't fail. As far as I'm concerned, telling someone they failed is a favor, because now they can change their methodology, try again, and maybe succeed. It's part of the process of achieving meaningful results. The entire point of what he's doing is to achieve meaningful results, not to get a participation medal.
Weddpros was making the point that criticism worked. The poster tried (an important first step), failed, and critical people pointed out his errors. Weddpros points out that the poster corrected those errors because he received criticism. Weddpros is clearly a fan of critical feedback.
Maybe he wanted to know where each DB shines relative to the other, to see if some workloads are better suited to one or the other.
Of course, benchmarks "should" include concurrent reads/updates/writes/deletes because it can make a huge difference depending on the DB's implementation.
Of course, the author "should" also have tested sharding / durability / partition tolerance / resource consumption in his tests... Maybe he didn't have the resources to test properly. I also do quick & dirty benchmarks like these, mostly because exhaustive benchmarks cost so much more (time, money, expertise)...
Is this a best practice? It seems like we've been delivered evidence that it is really hard to do good benchmarks unless you're already intimately familiar with what you're testing, which says something about how hard it is to make a good choice.
I don't know about other industries, but this sort of result is what things like the STAC M3 benchmark suite were designed for: typical use cases that experts can implement, so you can get realistic performance comparisons.
Maybe you didn't get the joke. The joke is not about MongoDB, but about MongoDB fanbois that care only about some very narrow definition of "performance".
The wider message is that DBs are far more complex beasts than it is meaningful to test this way.
Obviously there are cases in which MongoDB is a great choice, but equally obviously tests like this should not be a reason for the choice.
I do understand the irony, but I don't think the author is a fanboy...
At work, we're MongoDB users too. If tomorrow we benchmark against Cassandra, performance will probably be the selling point. I don't think it's absurd to compare MongoDB and RethinkDB; they're very similar DBs.
As for the benchmark, I agree it doesn't explore every facet of both databases and focuses only on performance... That's what benchmarks do.
The intent of the author may have been to challenge his existing choice (MongoDB), which is a good thing. The (corrected) results may lead to the conclusion: expect no performance gain if we migrate to RethinkDB... What's wrong with that?
> In RethinkDB, you have to create databases and tables manually and it will raise an exception if they already exist. Compared to MongoDB that could be an inconvenience for some (and me) - one of the things I find appealing in MongoDB is the fluid interaction with databases
... well at least now I don't feel so bad about having some old MySQL stuff still in production. MySQL already has too much "fluidity" in dealing with my data...
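For what it's worth, the create-if-missing dance is only a couple of lines in the Python driver (the table name here is made up):

    import rethinkdb as r

    conn = r.connect('localhost', 28015)

    # Only create the table when it is missing, so no exception is raised
    if 'events' not in r.table_list().run(conn):
        r.table_create('events').run(conn)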
Indeed, and typing out the query really doesn't take that long.
It does affect the initial experience though. The admin panel looks so slick that I just assumed I would be able to click on the table and it would jump to the data. When it didn't, I was surprised, initially blamed myself, and fired up the console to see if there were any errors showing. It just affects the polish of the panel.
As I said though it is a minor point for an excellent product.
> it's essentially saying "we have something to hide" (like bad performance...), or "we don't want competition"
This seems a totally imaginary dichotomy when the comment you're replying to literally presents an alternative option, namely: "We don't want you to publish things that are almost always going to be wrong and misleading"
I've been working with RethinkDB recently on some slightly unusual things and the Rethink team has been first class. They've got great support on IRC and GitHub and are open and friendly. I highly recommend them.
Any output like this, unless maliciously fallacious, contributes in some way to the general understanding of the software concerned and of benchmarking best practices, even through its mistakes.
It's the job of the reader to judge their sources wisely, and interrogate what they read, rather than the job of the author to conduct their explorations in private.
Understandably, it can be frustrating for people involved in the projects but that's just the nature of the beast. They can do things to help their cause by championing good examples of benchmarking, even those which don't look upon them favourably.
I wish something similar existed for databases. I think exact figures would be hard to get, but I believe there are many 2x or 10x differences that we should be aware of.
> Benchmarking is hard and a lot of reports are bogus. However they are still very useful for a lot of developers.
In this case we were presented with a benchmark setup that failed to perform the task it supposedly benchmarked. That's not hard to avoid, and it makes the benchmark completely useless and misleading.
I don't think that this problem can be generalized in the sense you do here, since the problem with benchmarks usually isn't a complete failure to perform the task to be benchmarked, but things like finding a set of tests that give a fair representation of what you'd typically use the subjects for, or performing the tasks in idiomatic and optimal ways.
Don't a lot of those benchmarks end up only measuring how fast your language can call out to GMP to do the real work? And regex-dna ends up measuring your regex implementation, which for a lot of them is again just going to be measuring how fast they can call out to PCRE.
They're neat and all and it is called the benchmarks game but I wish they'd remove the ones that end up getting gamed like that.
I always say do your own benchmarking for your own use case.
The risk of a biased benchmark is quite high when the benchmark is done by the owner of the product or by a "fan" of the product. The exception is a well explained, clear benchmark that everyone can understand and reproduce easily.
At least with a public benchmark, people can point out flaws. With something rolled out internally, you're still likely to get flaws, but no one will point out that you misconfigured Postgres or set up Mongo the wrong way, or any of the other errors you are just as likely to make by doing it yourself.
Perhaps, once you've winnowed the choices down to just a few, it might make more sense, but I think good public benchmarks can be a helpful thing for that selection process.
I agree, public benchmarks do have their uses, as I noted in my reply:
"With the exception of a well explained, clear benchmark that everyone can understand and reproduce easily"
An internal benchmark does require knowledge of the subject, but most of the time people on the mailing lists are quite helpful when you explain what you are trying to do, especially when you post a benchmark which is not in favor of their product.
Not sure what's worse here: people relying on third-party benchmarks (hint: always do your own; see how a tool performs on your data, on your hardware, for your problem set), or the fanboyish panic when a benchmark might make their chosen toy look less shiny?