
This depends on what you mean by "atomic". Prior to 5.0, MongoDB's defaults were to use a sub-majority write concern. This allowed all kinds of interesting atomicity violations, even on single documents. For instance, you could write a value, some clients might read it, and then your write would be silently lost as if it never happened. Clients could disagree on whether the write happened or not. A single client could observe, then un-observe the write. It gets weird. :-)
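If you want the safer behavior explicitly, this is roughly what asking for majority write and read concern looks like with pymongo (a sketch; the URI, database, and collection names are placeholders):

    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern
    from pymongo.write_concern import WriteConcern

    # Placeholder URI; assumes a replica set named "rs0".
    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

    # Writes wait for acknowledgment from a majority of nodes;
    # reads only return data a majority has acknowledged.
    coll = client.mydb.get_collection(
        "events",
        write_concern=WriteConcern(w="majority"),
        read_concern=ReadConcern("majority"),
    )

    coll.insert_one({"k": 1})   # durable once a majority has it
    doc = coll.find_one({"k": 1})

Without the majority read concern, a client can still observe a write that later gets rolled back, which is exactly the observe-then-un-observe case above.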


This is actually an evolved model: the system has to treat data loss as a primitive, up front.

Similar to how Rust forces you to reason about allocators.


Nah, they're super different. Rust failing at compile time means you fix the bug before you ship it, which is helpful. MongoDB failing in production means you either catch the bug in testing before you ship, or you discover it after shipping, which is at best extra work and at worst disastrous.


Are you saying data loss is a positive feature of MongoDB because it forces systems that use it to become more robust in dealing with data loss?


Yes


But this is not a design problem; all clustered DBs work like that. This is more of a configuration issue.
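E.g. you can set the cluster-wide default once instead of relying on each client (a sketch; requires MongoDB 4.4+ and a user with admin privileges, and since 5.0 "majority" is already the default):

    from pymongo import MongoClient

    # Placeholder URI; needs a user allowed to run admin commands.
    client = MongoClient("mongodb://admin@localhost:27017")

    # Make majority write concern the cluster-wide default so
    # individual clients don't have to opt in (MongoDB 4.4+).
    client.admin.command({
        "setDefaultRWConcern": 1,
        "defaultWriteConcern": {"w": "majority"},
    })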


Um, no they don't?




