melp's comments | Hacker News

I just added a detailed dRAID overview to my OpenZFS guide. Be sure to scroll down to play with the JS app that shows various dRAID layouts and how they get shuffled.
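If you just want a feel for the syntax before reading the guide: a dRAID vdev is declared directly in the zpool create command. A minimal sketch (drive names and widths here are made up, and I'm going from memory on the suffix order, so double-check zpoolconcepts(7) for your version):

    # draid<parity>:<data>d:<children>c:<spares>s
    # double parity, 8 data disks per redundancy group,
    # 24 drives total, 2 distributed spares:
    zpool create tank draid2:8d:24c:2s /dev/sd[a-x]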


I'll add some details around that, thanks for the feedback!


Author here, good call, I'll add that today.


Good call... I'll prop it up on something soon. This is on the second floor of my house, so it's safe from flooding, but I'd hate to have spilled water kill my stuff.


> This is on the second floor of my house, so it's safe from flooding

Not if there's a toilet on the second floor. I speak from experience. :( My neighbor's upstairs toilet tank burst while she was at church. A couple hours later it was raining inside my apartment, and her whole place was ruined. I was very lucky that my computers survived. A 50-cent piece of plastic connecting the tank to the supply line caused thousands of dollars of damage.


I have this system sitting next to me in my home office, which I share with my wife. Both of us work from home full time, and the server is quiet enough that she doesn't complain about it.


Because I've never done this before and I wanted a more manageable learning curve. I had very little *nix experience and zero BSD or ZFS experience before I started this project. My next server will probably be vanilla FreeBSD, but then again it might be FreeNAS because it just makes the whole setup process easier.


36TB backed up to CrashPlan sits on 36GB of RAM at all times... screw that. rclone never uses more than ~2GB to manage my whole 50TB dataset.
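For context, the whole job is basically a one-liner. A minimal sketch (the remote and bucket names are placeholders, and the tuning flags are just suggestions):

    # sync the pool to cloud storage; --fast-list trades a bit of
    # memory for far fewer listing API calls
    rclone sync /tank b2remote:my-bucket --transfers 8 --checkers 16 --fast-list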

edit: Also, I'm using a SuperMicro chassis, not a Norco. I've got a section where I go into why I went with SuperMicro.


Uhm, what exactly is CrashPlan doing with all that memory?


I wonder if it's related to the memory "issue" with their client: once you exceed 1TB or ~1 million files, you have to manually edit the .ini file to bump the JVM memory allocation. I remember reading somewhere that it has to do with CRC checksum calculation for all the files. I've had to change the setting multiple times (currently at 8GB for ~8TB/1 million files).

https://support.code42.com/CrashPlan/6/Troubleshooting/Adjus...
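If memory serves, the tweak is just raising the JVM max-heap argument in the client's service config (the exact file name and location vary by platform and version; see the link above). Something along these lines:

    # default heap cap, e.g. in CrashPlanService.ini on Windows:
    -Xmx1024m
    # bump it to match your file count, e.g.:
    -Xmx8192m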


Hm, maybe give Borg a shot. The initial backup isn't exactly the fastest thing, but it's much quicker on subsequent runs.
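If you want to kick the tires, the basic flow is init once, then create an archive per run (the repo path and source dir here are placeholders):

    # one-time: create an encrypted, deduplicating repo
    borg init --encryption=repokey /backup/borg-repo

    # each run: snapshot the dataset into a new archive
    borg create --stats /backup/borg-repo::'{hostname}-{now}' /tank

    # optionally thin out old archives
    borg prune --keep-daily 7 --keep-weekly 4 /backup/borg-repo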


> rclone never uses more than ~2GB to manage my whole 50TB dataset.

How do you handle file versions with rclone?


I guess that's the downside: it only keeps the latest version of every file it syncs.
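That said, you can approximate versioning with rclone's --backup-dir flag, which moves overwritten or deleted files aside on the remote instead of discarding them. A rough sketch (remote, bucket, and paths are placeholders):

    # files that would be overwritten or deleted get moved into a dated dir
    rclone sync /tank b2remote:bucket/current \
        --backup-dir b2remote:bucket/old/$(date +%Y-%m-%d)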


Backblaze B2 is probably the lowest-priced S3-like cloud provider, and for 50TB (my current dataset size) it would be ~$250/mo: https://www.backblaze.com/b2/cloud-storage-pricing.html
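The arithmetic behind that figure, assuming B2's ~$0.005/GB/mo storage rate (download and transaction fees not included):

    50 TB ≈ 50,000 GB × $0.005/GB/mo = $250/mo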

Granted, I didn't do this math before I dropped a few stacks on hardware. I really wanted to build it, configure it, play with it, etc. It's been a really fun project for me, and I've got a bunch of other stuff I want to do with it in the next few years (10GbE, X11 nodes, setting up dedupe, etc.).


Enterprise Edition, baby! It's an official variant.

By the way, if anyone is considering deploying their own LackRack, I would highly recommend reading the installation section in the OP. It's got some quirks that are worth considering before you dive in.


How the fuck does a metal box fail?

