The NAS dilemma - a dive into the depths of storage
Hi guys
As I've said on Bean's NAS topic, I'm going to build one for myself and it has to be upgradeable and it must have the very best networking solution.
I've decided to go with ZFS and RAIDZ2 (ZFS's double-parity layout, roughly equivalent to RAID6) for multiple reasons, which you can find here:
https://calomel.org/zfs_raid_speed_capacity.html

First of all, the most important part: networking. Any NAS will be transferring data back and forth through its network interface, and ZFS provides two levels of high-speed cache (ARC in RAM, L2ARC on SSD), so I need a networking solution capable of taking advantage of them. A single Gigabit Ethernet link is nowhere near enough, and the same goes for two. I could use two quad-port GbE NICs, but that wastes a lot of ports on the switch... So I'll be going with 10GbE (in the future; it's too expensive for now and I haven't got enough $$). The NAS will mainly be serving my own computer, so a direct 10GbE link between the two is perfect (10GbE switches cost a lot). Two or three Gigabit links will serve the other computers on rare occasions. Each 10GbE card needs a PCI-E 8x slot, so let's consider one slot used for that. I could also use a dual-port 10GbE NIC for a 20GbE link, but I have time to think about that.
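To put rough numbers on those link speeds, here's a quick sketch (theoretical line rates only; real throughput is lower once TCP/IP and SMB/NFS overhead come in):

```python
def link_mib_s(gigabits):
    # Line rate in MiB/s, treating 1 Gb/s as 1024 Mb/s to match the
    # binary-prefix figures used in this post; actual throughput will
    # be lower due to protocol overhead.
    return gigabits * 1024 / 8

for label, gb in [("1x GbE", 1), ("2x GbE", 2), ("3x GbE", 3),
                  ("10GbE", 10), ("2x 10GbE", 20)]:
    print(f"{label:>9}: {link_mib_s(gb):6.0f} MiB/s")
```

Even three bonded GbE links top out below 400 MiB/s, while a single 10GbE port is in a different class entirely.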
My RAM runs at 1333 MHz, 9-9-9-27 2T (Rampage III Extreme with a 990X and ValueRAM DIMMs, shame on me for those timings), and even so, 10GbE isn't enough to fully take advantage of the RAM cache (obviously). Still, ~1 GiB/s is pretty darn fast. The SSD cache won't even use that much bandwidth, so that's fine.
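A rough back-of-the-envelope check on why even 10GbE can't keep up with the ARC (these are assumed theoretical peaks for triple-channel DDR3-1333 on X58, not measured numbers):

```python
# Peak theoretical DDR3-1333 bandwidth: 1333 MT/s * 8 bytes per transfer
# per channel; the X58 platform (990X) runs triple-channel.
channels = 3
ram_gb_s = 1333 * 8 * channels / 1000    # ~32 GB/s peak
ten_gbe_gb_s = 10 / 8                    # 1.25 GB/s line rate
print(f"RAM: ~{ram_gb_s:.0f} GB/s  vs  10GbE: {ten_gbe_gb_s:.2f} GB/s")
```

The memory subsystem has well over an order of magnitude more bandwidth than the network link, so the NIC stays the bottleneck for ARC hits.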
Now let's get down to storage. Since I'll use 2-3 GbE links for now, while also reserving a dedicated PCI-E 8x slot for the future NIC, I still need another 8x slot for the disk controller. That one will give me eight SATA3 ports, so up to 12TB usable with RAIDZ2 on 2TB drives. I believe that's enough space for quite a few years, and when 12TB stops being enough, I'll be switching to newer, larger drives anyway.
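The usable-capacity math for RAIDZ2 is simple: two drives' worth of space go to parity, the rest is usable (ignoring ZFS metadata and slop space, which eat a few percent more):

```python
def raidz2_usable_tb(n_drives, drive_tb):
    # RAIDZ2 dedicates two drives' worth of capacity to parity;
    # ZFS metadata overhead is ignored in this sketch.
    assert n_drives >= 4, "RAIDZ2 needs at least 4 drives"
    return (n_drives - 2) * drive_tb

print(raidz2_usable_tb(8, 2))   # full 8x 2TB vdev -> 12 TB usable
print(raidz2_usable_tb(4, 2))   # initial 4-drive vdev -> 4 TB usable
```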
So in this regard, I only need two PCI-E 8x slots (16 lanes). No need for a quad PCI-E 8x capable motherboard; two 8x slots are enough, and a few 4x slots are welcome. To get decent network speeds in the meantime, I'll be throwing in a PCI GbE NIC for roughly 384 MiB/s aggregate until I get the 10GbE gear.
With this in mind, I chose the following components:
PSU: Seasonic G-450 => Seasonic rocks, solid power units, and the Gold certification tops it off
Case: BitFenix Shinobi Core => 8 disk bays, two fans for HDD cooling and good cable management. What else?
CPU: i3 3220 => ECC support (mandatory) and cheapest 2C/4T (good for compression) socket-1155 processor
MOBO: ASUS P8B WS => Good PCI-E layout, PCI slot, ECC support (mandatory)
RAM: ValueRAM 4GB ECC 1333 => cheapest ECC memory module at 1333 or more
SSD: Kingston SSDNow V300 60GB => cheap SSD used only for caching, no need for $2K Intel crap
Disk controller: Highpoint RocketRAID 2720SGL => best option after LSI, decent performance and JBOD
Disks: WD RE 2TB => WD hasn’t failed on me yet, and RE drives are solid
Fans: Noiseblocker XL2 Rev.3 => silent fan with decent airflow
I think that's the best way to go. I'll start with 4 drives, so 4TB usable with 2 drives' worth of parity. I'll be using an HP 1810G-24 switch (24 ports), with the dual onboard GbE plus the PCI NIC on the server side, and three of the four ports on my Intel server NIC for my personal computer. For now that's enough, and a bit faster than my regular Caviar Black.
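As a sanity check on the "faster than my Caviar Black" claim (the ~130 MB/s figure for the drive is an assumed typical sequential rate, not a measured one):

```python
caviar_black_mb_s = 130      # assumed sequential rate of a WD Caviar Black
links = 3                    # dual onboard GbE + one PCI GbE on the server
nas_mb_s = links * 1000 / 8  # aggregate line rate in MB/s (decimal units)
print(nas_mb_s, nas_mb_s > caviar_black_mb_s)
```

Even before 10GbE, the bonded links out-run a single local disk, assuming the link aggregation actually spreads the load.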
What do you guys think?
Edited by Dreadlocky on 2013/12/12 15:37:45