For the longest time I’ve had FreeNAS virtualized on my Proxmox host. This is not recommended; I had to do some wonky configuration to pass the disks through to the guest VM, which meant they never really showed up well in the Proxmox UI and made it hard to figure out which disks were assigned to what. It was also a software raid6 made up of some random 3-4TB disks I had handy. I’ve been stockpiling new disks for a proper build for a while now, and recently stumbled across a good deal on an R320 to complete the build.
The R320 came with two throwaway SAS disks and an E5-2403 (4 cores, 1.8GHz). I spent an extra $20 on a pair of E5-2448Ls (16 cores total, 1.8GHz). This is pretty overkill for my NAS, but at that cost, with the performance increase and a decrease in power consumption, it was a no-brainer.
For drives, I’ve been slowly picking up WD Easystore 8TB drives for $130 apiece over the last few months. The only 8TB drive WD makes is the Red, so if you buy the cheap external drive and tear it out of its enclosure, you wind up with a $300 drive on the cheap. There are plenty of detailed writeups on the internet (like this one) on how to find and shuck the drives. After a while, WD caught on and started putting a white label on the drives in the external enclosures. It’s the same physical drive, but it lets them get out of honoring the Red warranty.
With 32TB of raw disk space, the only question left was how to split it up. I originally wanted to go with a raid 10, which would be a good balance of durability, write speed, and time to replace a failed disk. After reading up on FreeNAS, the proper way to implement this with ZFS is a Striped Mirror. I chose to stripe the like drives together (Red1 + Red2 = Stripe 1) then mirror them to the White drives. It seemed more likely that the two similar drives would fail at the same time, and I wanted to be able to recover easily from that scenario.
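For reference, ZFS builds a striped mirror out of mirror vdevs rather than mirroring two stripes, so the equivalent of the layout above is to pair each Red with a White: if both Reds from the same batch die together, every mirror still has its White half. A rough sketch of the pool creation (pool name and device names are hypothetical; the real command should use stable `/dev/disk/by-id` paths):

```shell
# Two mirror vdevs; ZFS stripes writes across them automatically.
# Each vdev pairs a Red with a White so no single batch forms a whole mirror.
zpool create tank \
  mirror ata-WDC_RED_1 ata-WDC_WHITE_1 \
  mirror ata-WDC_RED_2 ata-WDC_WHITE_2

# Verify the layout: status should list two mirror-N vdevs under "tank".
zpool status tank
```

On FreeNAS the same layout is set up through the volume manager UI rather than the CLI, but the resulting vdev structure is identical.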
For offsite backup I decided to sync to S3. Storage is fairly cheap for what I need ($0.023/GB), and FreeNAS makes it pretty easy to set up the sync task. If things were to fail, it’s also easy to set up a PULL task and reload my data locally. The total cost should run ~$11/month for my 480GB; if this grows too large I’ll look at pushing to AWS Glacier instead and cut the cost down to ~$2/month ($0.004/GB).
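The storage math behind those estimates, using the per-GB prices quoted above (storage only; request and egress charges are not included):

```python
# Monthly storage cost for offsite backup, storage charges only.
# Prices are the ones quoted in the post, in $/GB-month.
S3_STANDARD = 0.023
GLACIER = 0.004

def monthly_cost(gb: float, price_per_gb: float) -> float:
    """Return the monthly storage cost in dollars."""
    return gb * price_per_gb

backup_gb = 480
print(f"S3:      ${monthly_cost(backup_gb, S3_STANDARD):.2f}/mo")
print(f"Glacier: ${monthly_cost(backup_gb, GLACIER):.2f}/mo")
```

This prints about $11.04/month for S3 and $1.92/month for Glacier at the current 480GB, and makes it easy to see where the break-even point lands as the dataset grows.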