# FreeNAS, and Why I Went with OpenIndiana + Napp-it...



## swat565

Before I start this, I really badly wanted FreeNAS to work for me. Its web interface is the easiest to use by far, and it had the features I wanted with supposedly no hassle. But I do want to throw out a warning to people putting critical data on their FreeNAS boxes, as I've seen a lot of recent builds using FreeNAS.

Now, before I get flak from the FreeNAS enthusiasts on these forums: I've truly put FreeNAS through its paces and had the motivation to work out the kinks. The build I did contained 16x 3TB drives in a Norco 4220; I've stored roughly 21TB on the 37TB array, and recently expanded to a second box of the same capacity. I used FreeNAS 8.0.2-8.2.0, and here were my biggest complaints.

1. Unable to hot-swap devices - Supposedly you were able to do this as of FreeBSD 8.2, to my knowledge, but I've never once gotten it to work. I had to power down to swap out bad drives.

2. ZFS rebuilds - This one was the deal breaker for me. Under 8.0.2, at least, I was able to do ZFS rebuilds from the command line; mind you, that was a pain for something that's supposed to be done through the GUI. After trying out 8.2.0 I was unable to do it from the GUI or the CLI, pretty much making the RAIDZ worthless.

3. Setting spare drives - This feature was also completely broken, and I could never get it to work successfully.

4. Logs - I started getting errors that the logs were completely full on the machine, and after spending hours looking for a fix I couldn't really find what I could delete or how I could expand the partition they were being saved on.
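For reference, here is roughly what those rebuild and spare operations look like from the ZFS command line when the tools behave; the pool and device names here are made up:

```shell
# Illustrative only -- pool name (tank) and device names (da3, da16, da17) are made up.
# Replace a failed disk and let the RAIDZ resilver onto the new one:
zpool replace tank da3 da16

# Add a hot spare to the pool:
zpool add tank spare da17

# Watch resilver progress and pool health:
zpool status tank
```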

All this finally led me to install OpenIndiana and Napp-it. It gives me pretty much the same features I was looking for in FreeNAS (CIFS/NFS). Spares and hot-swapping drives seem to work flawlessly. I can't speak for its longevity yet (I've only just begun to move data to it), but stability-wise it seems to be doing much better than FreeNAS was.

What have your experiences been with FreeNAS or OpenIndiana?


----------



## Plan9

I use FreeBSD rather than FreeNAS (I prefer the command line to web interfaces and, if we're really honest, ZFS's command line tools are child's play anyway). Personally, FreeBSD has worked flawlessly for me.
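To illustrate the "child's play" point, standing up a pool and an NFS share is only a handful of commands; a sketch, with made-up pool, dataset, and device names:

```shell
# Illustrative only -- pool name, dataset, and devices are made up.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5   # 6-disk double-parity pool
zfs create tank/media                              # a dataset within the pool
zfs set compression=on tank/media                  # transparent compression
zfs set sharenfs=on tank/media                     # export the dataset over NFS
zpool status tank                                  # health / resilver status
```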

My past experience with OpenSolaris has put me off ever using OpenIndiana for production systems. OpenSolaris was bloated, buggy and didn't receive much love from developers. The latter part is definitely true even now. So I'm inclined to stick with FreeBSD for at least the foreseeable future.


----------



## drbaltazar

Command line? An IBM OS/2 command line, for example? Wow, that must be patience-testing in this day and age.


----------



## Bonz(TM)

I've been using OpenIndiana and Napp-it for over a year now. I've had nothing but great experiences with it. With 20 spindles you are bound to have problems down the road, and OI + Napp-it combined with ZFS has simplified everything for me. Even with limited knowledge of ZFS etc., I've yet to have a data failure despite a few erroring disks and bad backplanes.

I absolutely love the simplicity of Napp-it and the speed, redundancy, and resiliency of ZFS. I've lost data AND had data corruption with HW RAID on more than one occasion. I've been scared a few times with ZFS, but it's always pulled through for me in the end.

I've got an M1015 and an HP SAS Expander nestled in an EVGA 758 board with an i7 920 and 12GB DDR3.
I have a pool containing two RAIDZ2 vdevs: one vdev of 2TB drives and one of 3TB drives. Total raw space: 50TB (decimal), which works out to about 45TiB (binary). Total usable space: 34TiB.
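The gap between those two space figures is just decimal-vs-binary units (drive vendors count 10^12 bytes per TB; the OS counts 2^40 bytes per TiB); a quick sanity check:

```shell
# 50 TB of raw disk (decimal, 10^12 bytes per TB) expressed in TiB (binary, 2^40 bytes):
awk 'BEGIN { printf "%.1f\n", 50e12 / 2^40 }'   # -> 45.5
```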

Glad to see other OCN users utilizing such a great NAS/SAN setup.


----------



## tycoonbob

I always wanted to try FreeNAS, but never really got around to it. I'm a big Microsoft guy anyway, and ended up just doing a hardware array (LSI MegaRAID 9261-8i + HP SAS Expander in a Norco RPC-4224). I admit I love the MegaRAID management software, along with the StarWind SAN software for my iSCSI/SMI-S needs.

FWIW, OI's lead developer called it quits a few weeks back.


----------



## swat565

Quote:


> Originally Posted by *Plan9*
> 
> I use FreeBSD rather than FreeNAS (I prefer the command line to web interfaces and, if we're really honest, ZFS's command line tools are child's play anyway). Personally, FreeBSD has worked flawlessly for me.
> My past experience with OpenSolaris has put me off ever using OpenIndiana for production systems. OpenSolaris was bloated, buggy and didn't receive much love from developers. The latter part is definitely true even now. So I'm inclined to stick with FreeBSD for at least the foreseeable future.


I don't mind living in the CLI world either, but I know I might not be the only one working on it (the two NASes in question are for a client). Sadly, most sysadmins today are more comfortable with a GUI or Webmin-style interface (or at least up here they are).

Plan9, are you able to hot-swap drives in the FreeBSD build you did?

Bonz, I'm loving it so far; it's easy to use and just works. It did hard-lock once on me, which I assume was from the crazy wind and power outages we're getting here (currently ordering a rack-mountable UPS for the client), as it's been up for two days since with no issues.


----------



## jrl1357

I would just use FreeBSD, like Plan9. The tools are very intuitive and easy to use. When OI gets closer to a stable version, then we'll see (if that happens at all).


----------



## CaptainBlame

There were massive improvements to FreeBSD's ZFS implementation in version 9. Since FreeNAS is based on 7 or 8 (I forget), I'm not surprised it has its issues. I would wait for a FreeNAS version based on FreeBSD 9 before I'd recommend using FreeNAS with ZFS.


----------



## Plan9

Quote:


> Originally Posted by *swat565*
> 
> I don't mind living in the CLI world either, but I know I might not be the only one working on it (the two NASes in question are for a client). Sadly, most sysadmins today are more comfortable with a GUI or Webmin-style interface (or at least up here they are).
> 
> Plan9, are you able to hot-swap drives in the FreeBSD build you did?


I run a number of virtual machines off my ZFS array, so I shut everything down to be safe (once you've shut down 5 VMs it's not really any more effort to shut down the host as well). I only run FreeBSD 8.1 though; never got round to upgrading.
Quote:


> Originally Posted by *CaptainBlame*
> 
> Since FreeNAS is based on 7 or 8 (I forget)


8


----------



## parityboy

Couple of questions regarding ZFS:

*1)* Is there support yet for Online Capacity Expansion? If not, how far away is it?

*2)* Does anyone here have experience of Nexenta Core?

*3)* All of the Napp-It screenshots I've seen seem to be English/German language hybrids. Is there a pure English version?


----------



## Imrac

Quote:


> Originally Posted by *parityboy*
> 
> Couple of questions regarding ZFS:
> *1)* Is there support yet for Online Capacity Expansion? If not, how far away is it?
> *2)* Does anyone here have experience of Nexenta Core?
> *3)* All of the Napp-It screenshots I've seen seem to be English/German language hybrids. Is there a pure English version?


*1)* You can always add vdevs to a pool to expand the capacity, or you can upgrade each disk in a vdev to a higher-capacity one, one by one.
*2)* Cannot comment
*3)* It's mostly English with some Engrish; although really, the ZFS commands are trivial.
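The two expansion routes in *1)* are each a one-liner; a sketch with a hypothetical pool `tank` and made-up device names:

```shell
# Illustrative only -- pool and device names are made up.
# Route 1: grow the pool by adding a whole new vdev (here a 6-disk RAIDZ2):
zpool add tank raidz2 da8 da9 da10 da11 da12 da13

# Route 2: swap each disk in an existing vdev for a bigger one, one at a time;
# once every member has been replaced and resilvered, the vdev grows
# (the autoexpand property must be on for the extra space to appear):
zpool set autoexpand=on tank
zpool replace tank da0 da14   # repeat per disk, waiting for each resilver
```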

There is a great two-part video on YouTube about becoming a ZFS ninja:

http://www.youtube.com/watch?v=3-KesLwobps
http://www.youtube.com/watch?v=jDLJJ2-ZTq8

What's the performance difference between FreeBSD and OpenIndiana?


----------



## swat565

Quote:


> Originally Posted by *Imrac*
> 
> What's the performance difference between FreeBSD and OpenIndiana?


While I've never officially tested them against each other, Solaris (which OI is of course based off of) should "technically" handle ZFS I/O better than FreeBSD, not to mention ZFS is native to Solaris. On small-scale arrays of 16 drives or fewer, I doubt the performance difference is anything to lose sleep over.

Also, Nexenta Core, to my knowledge, is just an OpenSolaris kernel mixed with a Debian GNU userland.

BTW, those videos are great; I'm watching them now and hoping to learn more.


----------



## Plan9

Quote:


> Originally Posted by *parityboy*
> 
> Couple of questions regarding ZFS:
> *1)* Is there support yet for Online Capacity Expansion? If not, how far away is it?
> *2)* Does anyone here have experience of Nexenta Core?
> *3)* All of the Napp-It screenshots I've seen seem to be English/German language hybrids. Is there a pure English version?


1. Kinda. You can't expand the RAIDs themselves, but you can expand storage pools by adding RAIDs or single drives* to the pool.

2. Yes, but only version 1, and it was buggy. Personally I think you're better off with vanilla FreeBSD than some weird OpenSolaris kernel mashed with a Debian userland. But NexentaCP might have improved somewhat since.

3. Never used it. Sorry.

* if you don't want/need redundancy


----------



## Plan9

Quote:


> Originally Posted by *swat565*
> 
> While I've never officially tested them against each other, Solaris (which OI is of course based off of) should "technically" handle ZFS I/O better than FreeBSD, not to mention ZFS is native to Solaris. On small-scale arrays of 16 drives or fewer, I doubt the performance difference is anything to lose sleep over.
> Also Nexenta core to my knowledge is just OI mixed with GNU/Debian.
> BTW those videos are great, and watching now to hope and learn more.


ZFS is native in FreeBSD too. A kernel driver is a kernel driver; it shouldn't matter that ZFS was originally written for Solaris, as the source was ported to FreeBSD and the kernel drivers implemented there. Plus, OI/OpenSolaris is a different kernel to Solaris (SunOS) anyway.

Also, OpenSolaris added a great deal of bloat that affected its performance. OI might fare better, but FreeBSD definitely has a smaller footprint.


----------



## parityboy

I was looking for some FreeNAS vs OpenSolaris benches and found this from 2010. Unless there have been major changes in either camp, I think the results would still be relevant.


----------



## Plan9

Quote:


> Originally Posted by *parityboy*
> 
> Was looking for some FreeNAS vs OpenSolaris benches. Found this from 2010. Unless there have been major changes in either camp, I think they would still be relevant.


FreeNAS seems to run a lot heavier than vanilla FreeBSD, though I couldn't tell you why. Simple things like prefetch seem to require a minimum of 4GB of RAM in FreeNAS, yet I've had it running on FreeBSD with just 2GB of system RAM.
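On vanilla FreeBSD those memory behaviours are loader tunables rather than hard requirements; a sketch of what `/boot/loader.conf` might hold on a small-RAM box (whether FreeNAS honours these settings the same way is an assumption on my part):

```shell
# /boot/loader.conf -- FreeBSD ZFS tunables for a small-memory machine
vfs.zfs.prefetch_disable="1"   # turn off file-level prefetch explicitly
vfs.zfs.arc_max="1G"           # cap the ARC so the rest of the system keeps some RAM
```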


----------



## ramibb

I run FreeBSD as well, on a 24/7 server with many jails on ZFS storage, and it's rock solid, no issues at all. Solaris x86 has serious problems with recent hardware support.


----------



## ramibb

I have read this article many times. There was no tuned FreeBSD build to compare benchmarks against, so I would take the article's results with some care; it also looks like there was more knowledge there with Nexenta than with FreeBSD.


----------



## CaptainBlame

The article isn't relevant at all; it covers an early implementation of ZFS in FreeBSD. If the article was written in 2010 and they used FreeNAS, then you can expect it to represent ZFS in FreeBSD circa 2008-2009.

I hear FreeBSD 10 will be the first OS with SSD TRIM support in ZFS.

On the topic of GUI-based systems: you will always pay some kind of penalty, whether it's performance or stability. The time it takes to become proficient on the CLI pays for itself tenfold in the long run.


----------



## shodan

Does anybody know when, and if, vdev expansion will be implemented in ZFS?
It is a pity that such an advanced and robust file system offers so much but cannot offer online expansion of vdevs (I know you can add vdevs).

Let's say I have 4 HDDs in RAIDZ1 and I want to add another one. I cannot; I must buy 3 more and add another RAIDZ1 vdev, which is a pity.


----------



## Plan9

Quote:


> Originally Posted by *shodan*
> 
> Does anybody know when, and if, vdev expansion will be implemented in ZFS?
> It is a pity that such an advanced and robust file system offers so much but cannot offer online expansion of vdevs (I know you can add vdevs).
> 
> Let's say I have 4 HDDs in RAIDZ1 and I want to add another one. I cannot; I must buy 3 more and add another RAIDZ1 vdev, which is a pity.


Due to the way data is spread across the drives, what you're asking for isn't possible without first destroying the existing RAID and then rebuilding it. The reason unRAID and SnapRAID can do this is that they don't spread the data across drives evenly; they do the storage equivalent of "load balancing" the data (i.e. they write each file to whichever drive has the most free space at the time). What this means is that unRAID/SnapRAID disks are effectively running in isolation outside of a RAID, but the file system controller collates the contents of all of them so it appears to be one large storage pool. It also means that such a "RAID" doesn't offer any redundancy without additional parity disks.
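A toy model of that "most free space" placement rule (illustrative only, nothing to do with any real unRAID/SnapRAID code): each file lands whole on a single disk, so no stripe ever spans drives:

```shell
#!/usr/bin/env bash
# Toy model of "most free space wins" placement (not real unRAID/SnapRAID code).
free=(4000 2500 1000)          # free GB on drives 0..2

place_file() {                 # place_file <size_gb>: pick the drive with most free space
  local size=$1 best=0 i
  for i in "${!free[@]}"; do
    (( free[i] > free[best] )) && best=$i
  done
  (( free[best] -= size ))
  echo "${size}GB file -> drive $best"
}

place_file 800                 # -> drive 0 (4000GB free, the most)
place_file 800                 # -> drive 0 again (3200GB free, still the most)
```

Because each file lives whole on exactly one disk, a new disk can be added at any time; it simply becomes a candidate for future writes, which is exactly what ZFS's even striping cannot allow within a vdev.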


----------

