
Funny, I noticed this on an internal server as well but chalked it up to an older version, hoping it was "clearly fixed in the latest code; something so glaringly obviously broken wouldn't be hanging around too long with all the hype surrounding Node these days..."

Anyway, I wouldn't stick Node out exposed to the outside world. Granted, sticking nginx in front presumably won't help with this issue: just keep feeding it a 4GB file and it will crash the back-end. [EDIT: never mind, I'm not sure anymore; someone mentions it is possible to mitigate it that way]

Yikes, this is a bad one. Glad they fixed it. But it leaves me with the same impression I had after finding out how MongoDB used to have unacknowledged writes turned on by default, and people's data was silently getting corrupted.



Granted, sticking nginx in front presumably won't help with this issue: just keep feeding it a 4GB file and it will crash the back-end.

Why does that happen? nginx can't help here?


See mathrawka's reply (I haven't tested it myself, though).

BTW, I just ran my server's memory into swap with this:

    $ dd if=/dev/zero of=2g bs=1M count=2048

    $ curl -F "2g=@2g" <myresource>
(EDIT: explanation: this creates a 2GB file, then uploads it to <myresource> as a file upload -- multipart MIME. The @ sign just inserts the named file's data into the form.)


And that is why you should never blindly use the bodyParser middleware in production...

http://andrewkelley.me/post/do-not-use-bodyparser-with-expre...
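To make the point concrete, here's a minimal sketch of the kind of setup that post argues for, assuming a modern Express (4.16+) where the json/urlencoded parsers are built in; the route and limits are just examples:

    var express = require('express');
    var app = express();

    // Enable only the parsers you actually need, with explicit size limits.
    // Requests with larger bodies are rejected with a 413 before they can
    // chew through memory or disk.
    app.use(express.json({ limit: '100kb' }));
    app.use(express.urlencoded({ extended: false, limit: '100kb' }));

    // No catch-all multipart handling: accept file uploads only on the
    // routes that expect them, via a streaming parser with its own size cap.

    app.listen(3000);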


Wouldn't client_max_body_size (which is set to 1MB by default) in the nginx config prevent the 4GB upload from even reaching the Node backend?
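For reference, that directive would sit in the server (or location) block of the proxy config, something like this sketch (the upstream address is just an example):

    server {
        listen 80;

        # 1m is also the compiled-in default; anything larger gets a
        # 413 Request Entity Too Large and never reaches the backend.
        client_max_body_size 1m;

        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }

That only caps the total request size, though; uploads under the limit are still passed through to whatever middleware the Node app runs.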



