|Jeremy Chadwick||Oct 30, 2008 8:31 pm|
|Danny Carroll||Oct 30, 2008 9:07 pm|
|Jeremy Chadwick||Oct 30, 2008 9:33 pm|
|Andrew Snow||Oct 30, 2008 9:43 pm|
|Danny Carroll||Oct 30, 2008 9:47 pm|
|Danny Carroll||Oct 30, 2008 9:49 pm|
|Danny Carroll||Oct 30, 2008 9:54 pm|
|Simun Mikecin||Oct 31, 2008 2:20 am|
|Simun Mikecin||Oct 31, 2008 4:56 am|
|Peter Schuller||Nov 2, 2008 7:08 am|
|Simun Mikecin||Nov 3, 2008 12:31 am|
|Dieter||Nov 12, 2008 2:57 pm|
|Danny Carroll||Nov 12, 2008 9:46 pm|
|Jeremy Chadwick||Nov 12, 2008 11:42 pm|
|Willem Jan Withagen||Nov 13, 2008 12:32 am|
|Danny Carroll||Nov 13, 2008 3:09 am|
|Danny Carroll||Nov 13, 2008 5:58 am|
|Nikolay Denev||Nov 13, 2008 7:05 am|
|Scott Long||Nov 13, 2008 8:49 am|
|Danny Carroll||Nov 13, 2008 12:46 pm|
|Danny Carroll||Nov 13, 2008 12:59 pm|
|Eirik Øverby||Nov 16, 2008 12:26 pm|
|Danny Carroll||Nov 16, 2008 7:15 pm|
|Matt Simerson||Nov 16, 2008 10:06 pm|
|Jeremy Chadwick||Nov 16, 2008 11:07 pm|
|Wes Morgan||Nov 17, 2008 3:26 am|
|Danny Carroll||Nov 17, 2008 3:42 am|
|Matt Simerson||Nov 17, 2008 1:04 pm|
|Matt Simerson||Nov 17, 2008 2:07 pm|
|Danny Carroll||Nov 17, 2008 3:45 pm|
|Jan Mikkelsen||Dec 2, 2008 2:38 am|
|Wes Morgan||Dec 2, 2008 4:04 am|
|Danny Carroll||Jan 7, 2009 4:33 pm|
|Zaphod Beeblebrox||Jan 7, 2009 11:40 pm|
|Koen Smits||Jan 7, 2009 11:48 pm|
|Nikolay Denev||Jan 8, 2009 1:19 am|
|Danny Carroll||Jan 8, 2009 6:29 pm|
|Koen Smits||Jan 9, 2009 12:46 am|
|Danny Carroll||Jan 9, 2009 1:02 am|
|Koen Smits||Jan 9, 2009 7:57 am|
|Andrew Snow||Jan 9, 2009 6:38 pm|
|Danny Carroll||Jan 9, 2009 8:58 pm|
|Danny Carroll||Jan 20, 2009 10:40 pm|
|Koen Smits||Jan 21, 2009 1:15 am|
|Danny Carroll||Jan 21, 2009 5:14 am|
|Subject:||Re: Areca vs. ZFS performance testing.|
|From:||Jeremy Chadwick (koi...@FreeBSD.org)|
|Date:||Oct 30, 2008 9:33:49 pm|
On Fri, Oct 31, 2008 at 02:07:56PM +1000, Danny Carroll wrote:
> Jeremy Chadwick wrote:
>> I think these sets of tests are good. There are some others I'd like to see, but they'd only be applicable if the 1231-ML has hardware cache. I can mention what those are if the card does have hardware caching.
> The card comes standard with 256MB of cache.
I'd like to see the performance difference between these scenarios:
- Memory cache enabled on Areca, write caching enabled on disks
- Memory cache enabled on Areca, write caching disabled on disks
- Memory cache disabled on Areca, write caching enabled on disks
- Memory cache disabled on Areca, write caching disabled on disks
I don't know if the controller will let you disable use of memory cache, but I'm hoping it does. I'm pretty sure it lets you disable disk write caching in its BIOS or via the CLI utility.
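For the disks hanging off the ICH (driven by ata(4) on RELENG_7), the drive write cache can be flipped with a stock loader tunable, so both halves of that matrix are scriptable on the onboard side; the Areca-attached disks are toggled from the card's BIOS or CLI as noted above. A minimal loader.conf sketch (the comments are mine, not from the thread):

```
# /boot/loader.conf
# hw.ata.wc is the stock ata(4) tunable for the drive write cache:
# 1 = write caching enabled (the default), 0 = disabled.
hw.ata.wc="0"
```

Remember to reboot between runs so the setting actually takes effect before benchmarking.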
> I do have some concern about the size of the eventual array and ZFS' use of system memory. Are there guidelines available that give advice on how much memory a box should have with large ZFS arrays?
The general concept is: "the more RAM the better". However, if you're using RELENG_7, then there's not much point (speaking solely about ZFS) to getting more than maybe 3 or 4GB; you're still limited to a 2GB kmap maximum.
Regarding size of the array vs. memory usage: as long as you tune kmem and ZFS ARC, you shouldn't have much trouble. There have been some key people reporting lately that they run very large ZFS arrays without issue, with proper tuning.
> I followed the recommendations here: http://wiki.freebsd.org/ZFSTuningGuide
>
> vm.kmem_size="1024M"
> vm.kmem_size_max="1024M"
> vfs.zfs.debug=1
>
> And: kern.maxvnodes=400000
>
> I have not added the following because they were listed in the i386 section (these values were quoted for a machine with 768MB of RAM):
>
> vfs.zfs.arc_max="40M"
> vfs.zfs.vdev.cache.size="5M"
>
> Am I right in assuming these do not apply to amd64? The article was not specific.
All of the tuning variables apply to i386 and amd64.
You do not need the vfs.zfs.debug variable; I'm not sure why you enabled that. I imagine it will have some impact on performance.
I do not know anything about kern.maxvnodes or vfs.zfs.vdev.cache.size.
The tuning variables I advocate for a system with 2GB of RAM or more, on RELENG_7, are:
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_min="16M"
vfs.zfs.arc_max="64M"
vfs.zfs.prefetch_disable="1"
You can gradually increase arc_min and arc_max by ~16MB increments as you see fit; you should see general performance improvements as they get larger (more data being kept in the ARC), but don't get too crazy. I've tuned arc_max up to 128MB before with success, but I don't want to try anything larger without decreasing kmem_size_*.
Also, just a reminder: do not pick a value of 2048M for kmem_size or kmem_size_max; the machine won't boot/work. You shouldn't go above something like 1536M, although some have tuned slightly above that with success. (You need to remember that there is more to kernel memory allocation than just this, so you don't want to exhaust it all assigning it to kmap. Hope that makes sense...)
> It makes sense. I'm using 1024 at the moment, but I've never really looked into what memory is actually being used.
>
> Tuning advice here would be well received :-)
The only reason you need to adjust kmem_size and kmem_size_max is to increase the amount of available kmap memory which ZFS relies heavily on. If the values are too low, under heavy I/O, the kernel will panic with kmem exhaustion messages (see the ZFS Wiki for what some look like, or my Wiki).
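One way to keep an eye on that headroom while the benchmarks run, assuming a stock RELENG_7 install (sysctl OID and malloc-type names as shipped; the exact output format varies by version):

```
# Show the ceilings configured in loader.conf:
sysctl vm.kmem_size vm.kmem_size_max
# Per-type kernel malloc usage; the "solaris" bucket is where the
# ZFS allocations show up under heavy I/O:
vmstat -m | grep -i solaris
```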
I would recommend you stick with a consistent set of loader.conf tuning variables, and focus entirely on comparing the performance of ZFS on the Areca controller vs. the ICH controller.
You can perform a "ZFS tuning comparison" later. One step at a time; don't over-exert yourself quite yet. :-)
You can add raidz2 to this comparison list too if you feel it's worthwhile, but I think most people will be using raidz1.
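For what it's worth, the pools for such a comparison could be created along these lines; the pool and device names here are placeholders, not taken from the thread:

```
# Single-parity raidz1 across four disks (capacity of three):
zpool create tank raidz1 da0 da1 da2 da3
# Tear down and rebuild as double-parity raidz2 (capacity of two)
# between runs, so each test starts from a fresh pool:
zpool destroy tank
zpool create tank raidz2 da0 da1 da2 da3
```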
--
| Jeremy Chadwick                                 jdc at parodius.com |
| Parodius Networking                        http://www.parodius.com/ |
| UNIX Systems Administrator                   Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
_______________________________________________
free...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "free...@freebsd.org"