I’ve been running happily with FreeNAS 8.1 for quite a while now, on my HP Microserver N40L.  It has been rock solid, and has handled all the data I’ve thrown at it, along with gracefully dealing with various HDD failures without compromising any data.

So, I was planning on running the latest FreeNAS (9.2) on the new server… The Dell PowerEdge 2950 is pretty vanilla hardware these days, and there’s nothing really that special about my system.  Imagine my surprise, then, when I booted up the latest FreeNAS 9.2 to find that it spontaneously reboots when trying to write to the disk pool.  Further questions on the support forums confirmed that this is not an isolated problem, and that other 2950 owners are experiencing the same thing.
So far, help has been non-existent; some posts go as far as to suggest “just buy yourself a new server, the 2950 was never on the FreeBSD supported hardware list” (which is ironic, since the 2950 runs the latest FreeBSD 9.2 just fine; only FreeNAS is failing).

Anyway, given that I absolutely MUST have a ZFS file system (there’s nothing out there to touch it yet: btrfs simply isn’t far enough along in development, Microsoft ReFS parity performance is woeful, and hardware RAID simply doesn’t cut it), I decided to look into alternatives.

I decided to give a few of these alternatives a go:

  • FreeNAS 9.1 (the latest, non-working version is 9.2)
  • OpenIndiana (based on Sun’s OpenSolaris)
  • FreeBSD (on which FreeNAS is based)
  • Ubuntu Linux

Out of all of these, Ubuntu was a surprise to me… I’ve used all of the others before.  Last time I built a server, Linux was never an option because the ZFS implementation was unstable and available only through the FUSE filesystem hooks, so performance was pretty atrocious.  Linux now supports ZFS at the kernel level, and it has matured a lot since then.

In each case, I configured the system with the following:

  • Dell PowerEdge 2950 Gen III
  • 32GB ECC RAM
  • 6 x 3TB Western Digital Green drives
  • 2 x Intel 80GB SSDs
  • 1 x Hitachi 2.5″ 100GB HDD
  • IBM M1015 (IT mode) driving the 6 x 3TB in the server drive cage
  • Rosewill PCIe SATA controller (Sil3132 chipset) driving the 2 x SSDs
  • Motherboard SATA driving the Hitachi 100GB boot drive
  • Intel Pro 100/1000 dual NIC (internal Broadcom NICs are disabled in the BIOS)

The drive pool in each case was configured as:

  • 6 x 3TB as storage
  • 2 x 80GB SSD (8GB partition as ZIL, mirrored… remaining 72GB as L2ARC, striped)
  • No compression, no de-duplication, sync=standard
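The pool layout above can be sketched with `zpool` commands.  This is a hedged reconstruction, not the author’s actual commands: the device names and partition numbers are placeholders (FreeBSD-style), and the RAID-Z2 layout is an assumption borrowed from the author’s earlier Microserver setup.

```shell
# Hypothetical recreation of the pool described above (device names assumed).
# 6 x 3TB drives as the data vdev (RAID-Z2 layout assumed, not stated for this pool):
zpool create ZPOOL raidz2 da0 da1 da2 da3 da4 da5
# Mirrored 8GB SLOG (ZIL) partitions on the two SSDs:
zpool add ZPOOL log mirror da6p1 da7p1
# Remaining ~72GB SSD partitions as L2ARC (cache devices stripe by default):
zpool add ZPOOL cache da6p2 da7p2
# Match the listed settings; sync=standard is already the default:
zfs set compression=off ZPOOL
zfs set dedup=off ZPOOL
zfs set sync=standard ZPOOL
```

Note that log vdevs can be mirrored, but cache vdevs cannot; ZFS simply stripes reads across however many cache devices you add.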

The read/write speeds were tested from the console using the dd command:

dd if=/dev/zero of=/ZPOOL/test.dat bs=2048k count=50k

(writing out about 100GB of test data, enough to fully flush any cache)
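The post only shows the write command; the read side was presumably the same file read back through dd.  A sketch of both halves, with sizes scaled way down so it runs anywhere (the real test used bs=2048k count=50k, roughly 100GB, to defeat the ARC):

```shell
# Write test (scaled down from the post's bs=2048k count=50k):
dd if=/dev/zero of=/tmp/zpool_test.dat bs=1M count=64 2>/dev/null
# Read test (assumed; the post does not show this command):
dd if=/tmp/zpool_test.dat of=/dev/null bs=1M 2>/dev/null && echo "read back OK"
```

dd prints its timing and throughput summary to stderr, which is where the seconds and bytes/sec figures in the results come from.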

The network write speeds were tested from a 27″ iMac (3GHz i7) with 32GB RAM and a 2TB Fusion Drive, running OS X Mavericks (10.9), using the ‘nc’ command.

server:   nc -v -l -n 1111 > test2.dat
client:   time nc -v -n <server-ip> 1111 < FreeNAS-9.1.img

(I chose the FreeNAS image file to transfer because it was sufficiently large (about 2GB) and was lying around on my drive anyway)
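nc reports raw bytes transferred per second; the megabyte figures in the results below are just that number divided by 1048576 (2^20).  For example, for the FreeBSD run (the post truncates rather than rounds, hence its 69.78):

```shell
# Convert nc's bytes/sec figure to mebibytes/sec:
awk 'BEGIN{printf "%.2f MiB/s\n", 73176978.52/1048576}'   # → 69.79 MiB/s
```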

The results actually surprised me a bit…

FreeBSD 9.2 (ashift=9)
W: 744.2 seconds (144279522 bytes/sec)
R: 314.5 seconds (341377873 bytes/sec)

FreeBSD 9.2 (ashift=12, via gnop -S 4096)
W: 437.9 seconds (245168343 bytes/sec)
R: 352.1 seconds (304951263 bytes/sec)

nc: 2000000000 transferred in 27.331 seconds (73176978.52 bytes/sec) (69.78MiB/s)
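The ashift=12 run used FreeBSD’s gnop trick to force 4K sectors at pool-creation time.  The post doesn’t show the commands; this is a sketch of the commonly documented procedure, with placeholder device names and the same assumed RAID-Z2 layout:

```shell
# Overlay one disk with a 4K-sector gnop device so zpool create
# selects ashift=12 for the whole vdev (device names are placeholders):
gnop create -S 4096 /dev/da0
zpool create ZPOOL raidz2 /dev/da0.nop /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5
# Export, remove the overlay, and re-import; the ashift value persists:
zpool export ZPOOL
gnop destroy /dev/da0.nop
zpool import ZPOOL
zdb -C ZPOOL | grep ashift   # expect ashift: 12
```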

FreeNAS 9.1.1
W: 437.4 seconds (245460370 bytes/sec)
R: 354.5 seconds (302887486 bytes/sec)

nc: 2000000000 transferred in 28.612 seconds (69900740.94 bytes/sec) (66.66MiB/s)

OpenIndiana 151a8
W: 288.79 seconds (372MB/s)
R: 495.60 seconds (217MB/s)

Linux (Ubuntu 12.04 LTS)
W: 281.277 seconds (382MB/s)
R: 359.095 seconds (299MB/s)

nc: 2000000000 transferred in 30.56 seconds (65427898.456 bytes/sec) (62.40MiB/s)

Linux (Ubuntu 13.10)
W: 296.71 seconds (362MB/s)
R: 278.72 seconds (385MB/s)

nc: 2000000000 transferred in 19.94 seconds (100300902.71 bytes/sec) (95.65MiB/s)

Damn!  Those modifications worked perfectly…  installing the 47Ω resistors in series with the 4 CPU/drive chassis fans, and with the 2 fans in the active and redundant PSUs (6 fans in all), has dropped the noise level to that of a normal desktop PC!  I can actually be in the same room as the machine when it’s on 🙂
I had some issues where the fan speeds would ‘bounce’: the resistors brought the spin speeds down so low that the firmware would read them as being below the threshold and spin them back up.  But I followed the instructions on the other blog (using an Ubuntu live boot CD), modified the latest Dell firmware with much lower speed thresholds, and flashed it.  Now, it just purrs along like a little kitten 🙂

I took the opportunity to strip out the Dell PERC SATA controller (I wasn’t using it) and add the new IBM M1015 SATA controller.  I used a CD-bootable DOS 6.22 image to boot into DOS, and flashed the M1015 controller with the latest LSI firmware in IT mode.   I plan on installing some variant of ZFS on this machine, and ZFS is a completely software-driven RAID solution; hardware RAID just gets in the way.  IT mode removes all of the RAID functionality from the M1015 controller, so that the system just sees a JBOD disk array.
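The post doesn’t detail the flashing steps.  The commonly documented DOS sequence for crossflashing an M1015 to LSI 9211-8i IT firmware looks roughly like this; the firmware file names vary by LSI package and are placeholders here, and the SAS address comes from the sticker on the card:

```shell
# From the DOS prompt (commonly documented steps; file names are placeholders):
megarec -writesbr 0 sbrempty.bin      # overwrite the IBM SBR
megarec -cleanflash 0                 # wipe the existing RAID firmware
# reboot back into DOS, then:
sas2flsh -o -f 2118it.bin             # flash the IT-mode firmware
sas2flsh -o -sasadd 500605bxxxxxxxxx  # restore the SAS address from the card label
```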

With the Dell PERC controller removed, there was enough space to install a couple of old 80GB SSDs atop the drive cage, and hook them into the unused integrated motherboard SATA ports A/B.  I’ll probably use these as a boot drive.

So, that’s the hardware side of things finished — time to move on to the software side 🙂

Modified PowerEdge 2950


Holy crap, that was unexpected.  I powered on the PowerEdge 2950, and I’m pretty sure it woke the neighbors!  That thing is LOUD!
I think if I could perch this thing vertically, so that the rear of the machine sat on the desk, it would hover above the desk with all the fans running at full tilt!!
Considering this beast has to live in the spare bedroom closet (server room), this is unacceptable…

I found a couple of blogs elsewhere addressing the noise issue – I guess I’m not the only person with one or more of these machines in a home environment.

This one explains how to install a set of resistors in series with the fans –

This one explains how to patch and burn a new BIOS with lower fan speed thresholds, so that they don’t run so hard all the time —

Time to nip down to RadioShack for some heat-shrink tubing and a handful of 47Ω resistors!
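As a back-of-envelope check on why a series resistor quiets the fans: the current draw below is an assumed figure (a typical 12V server fan pulling roughly 100mA at speed), not something from either blog post.

```shell
# Ohm's-law estimate for the series-resistor mod (fan current is assumed):
awk 'BEGIN{printf "%.1f V dropped across the 47 ohm resistor\n", 0.10*47}'
awk 'BEGIN{printf "%.1f V left to drive the fan\n", 12-0.10*47}'
```

Dropping the fan from 12V to around 7V slows it substantially, which is also why the firmware’s low-RPM thresholds start to trip and need patching.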

All the parts of my new home server have arrived!

I ordered a reconditioned Dell PowerEdge 2950 (dual quad-core Xeon 5450s, 32GB ECC RAM), an IBM M1015 SATA controller and 6 x SATA drive trays, all for $600.  That’s a beast of a machine for the price!

I also picked up 6 x Western Digital 3TB Green drives for $85 each on Black Friday from an Amazon marketplace seller.

My current home server setup is made up of:

  • HP N40L ProLiant MicroServer, 16GB ECC RAM, Intel Pro/1000 NIC, IBM M1015 SATA controller and 6 x 2TB HDDs (a mix of green and non-green, SATA I and II)
  • HP N40L ProLiant MicroServer, 16GB ECC RAM, Intel Pro/1000 NIC and the stock 250GB HDD it shipped with

One of the MicroServers is running FreeNAS 8.2, and its 6 x 2TB drive array is set up as ZFS RAID-Z2 for maximum data resiliency and redundancy.  The dataset is not partitioned in any way, and is just exported as an iSCSI target.
The other MicroServer is running Windows Home Server 2011 on the stock 250GB HDD, and its data drive is the iSCSI target from the FreeNAS MicroServer.  The Intel NICs in the two machines are hooked up via a crossover Ethernet cable, and the iSCSI interface is bound directly to those NICs, dedicating that 1000Mb/s link to sharing the ZFS-protected space with the Windows Home Server machine.

Originally, I chose this layout because I wanted the protection of ZFS RAID-Z2, but I also wanted the features of WHS (PC backups, easy SMB shares, etc.).

Under normal circumstances this would suffice for a few years, but I’ve found that I’m in need of more space, and of some additional services that the MicroServers just don’t have the CPU grunt for.